AI coding assistant for VS Code and JetBrains powered by Codestral and Devstral
Mistral Code is an AI coding assistant from Paris-based Mistral AI, offering inline code completion, multi-file context editing, and agentic chat inside VS Code and JetBrains IDEs. It is powered by Codestral for completion tasks and Devstral for complex reasoning and agent workflows — both EU-hosted models with no customer data used for training.
Headquarters: Paris, France
Founded: 2023
EU Data Hosting: Yes
Employees: 501-1000
Pricing: Free / Pay-as-you-go / Contact Sales
Billing: monthly, pay-as-you-go
Every European developer using GitHub Copilot faces the same uncomfortable fact: their code snippets travel to Microsoft's Azure infrastructure, processed under US cloud terms, stored in data centres that may fall outside EU jurisdiction. For individual developers building side projects, the risk is theoretical. For developers at regulated EU companies — financial services, healthcare, legal tech — it creates real compliance friction with procurement, legal, and data protection officers.
Mistral Code is the answer that Paris-based Mistral AI built. Launched in 2025 and expanded in 2026, it brings AI coding assistance to VS Code and JetBrains IDEs with a structural guarantee that competitors cannot match: all inference runs on Mistral's EU-hosted infrastructure, under French law, with a contractual commitment that customer code is never used for model training.
The product runs on two underlying models. Codestral handles fast inline completions — fill-in-the-middle tasks, function autocomplete, and single-line suggestions where latency matters. Devstral is the heavier reasoning model behind the agentic chat interface: multi-step refactoring, repository-level debugging chains, and autonomous task execution across multiple files. The split mirrors how GitHub Copilot uses different model tiers internally, but here the distinction is visible to the user.
Mistral AI itself is Europe's most prominent AI lab, having raised over €1 billion and built models that compete head-to-head with GPT-4 on standard benchmarks. Mistral Code is not a side project — it's a product line from a company with genuine model research depth.
Codestral is Mistral's code-specialised model, trained specifically for fill-in-the-middle completion tasks. On the HumanEval and MultiPL-E benchmarks, it scores competitively against GitHub Copilot's backend models for Python, TypeScript, Go, and Rust. In practice, this means fast single-keystroke completions that require minimal manual correction on common patterns.
The inline suggestion experience in both VS Code and JetBrains is close to Copilot's — grey ghost text appearing as you type, accepted with Tab. Latency on the EU servers is low enough that completions arrive within 200-400ms for most file sizes, comparable to Copilot for users in Western Europe where round-trip times to EU servers are short.
Where Codestral shows visible improvement over Copilot is in European-language codebases that use non-English comments and variable names — French, German, Spanish. Mistral's multilingual training means the model understands context that English-optimised models sometimes misread.
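Under the hood, the inline completions described above are fill-in-the-middle requests: the extension sends the code before and after the cursor, and the model completes the gap. The sketch below shows what such a request looks like against La Plateforme's FIM endpoint. The endpoint path and field names follow Mistral's public API documentation, but treat the exact model alias (`codestral-latest`) and response shape as assumptions to verify against the current docs.

```python
import os
import requests

# La Plateforme's fill-in-the-middle endpoint (per Mistral's API docs).
FIM_URL = "https://api.mistral.ai/v1/fim/completions"

def build_fim_payload(prefix: str, suffix: str, max_tokens: int = 64) -> dict:
    """Build the JSON body: the model completes the gap between prefix and suffix."""
    return {
        "model": "codestral-latest",  # assumed alias; check current model list
        "prompt": prefix,             # code before the cursor
        "suffix": suffix,             # code after the cursor
        "max_tokens": max_tokens,
        "temperature": 0.0,           # deterministic output for editor use
    }

def complete(prefix: str, suffix: str) -> str:
    """Send one FIM request and return the suggested completion text."""
    resp = requests.post(
        FIM_URL,
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json=build_fim_payload(prefix, suffix),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The low temperature mirrors what completion plugins typically want: the same prefix should yield the same ghost text on every keystroke.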
Devstral powers the chat panel where you can describe multi-step tasks in natural language. Unlike simple question-answer chat, Devstral can read multiple files, understand module dependencies, propose changes across different parts of a codebase, and execute those changes as a diff for review.
A concrete example: given a legacy REST controller in Java and an instruction to refactor it to use Spring WebFlux reactive patterns, Devstral reads the file, identifies all blocking calls, generates replacements across the controller and its test class, and presents both as a unified diff. This saves 20-40 minutes of mechanical work that would otherwise fall to the developer.
Devstral is slower than the agentic modes in Cursor (which uses Claude 3.7 Sonnet or GPT-4o as its reasoning backbone). For tasks involving 10+ files, the latency difference is noticeable. For typical three-to-five file refactors, the gap is small enough that EU compliance requirements justify it.
Mistral Code installs as a VS Code extension (via the marketplace) or a JetBrains plugin (via the plugin marketplace). Both offer feature parity: inline completion, diff review, agentic chat panel, and La Plateforme API key configuration. Developers who switch between VS Code for web work and IntelliJ for JVM projects get a consistent experience without maintaining separate tools or subscriptions.
Setup involves entering a La Plateforme API key, which you can obtain on a free tier. The extension then routes all requests through Mistral's EU servers — there is no option to route through a US endpoint even if you wanted to. This is by design.
Because Mistral Code uses standard La Plateforme API keys, developers and organisations already using Mistral's API for other purposes (LLM inference, embeddings, agents) can consolidate billing and usage monitoring. Token consumption across Mistral Code and any other La Plateforme usage appears in a single dashboard. For finance teams, this is a meaningful operational simplification.
Mistral Code does not have a separate subscription — it runs on La Plateforme API tokens. The free tier provides a monthly token allowance that covers realistic individual developer usage: roughly 500 completions per day before hitting limits, depending on completion length.
For paid usage, Codestral API pricing is $0.20 per million input tokens and $0.60 per million output tokens. A developer making 2,000 completion requests per working day, each averaging 500 input and 200 output tokens, consumes about 1 million input and 400,000 output tokens daily ($0.44), or roughly $9 over a 20-working-day month — below GitHub Copilot's $10/month individual plan and well below Cursor's $20/month Pro plan.
Devstral agentic usage costs more per session because reasoning tasks generate longer completion chains. A complex multi-file refactor might consume 50,000-100,000 tokens, costing $0.05-0.10 per task. Teams running agentic workflows at scale should monitor token consumption through the La Plateforme dashboard.
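The per-token arithmetic above generalises to a small estimator. The rates default to the Codestral figures quoted in this article; Devstral's rates may differ, so plug in the correct numbers before relying on the output.

```python
def monthly_cost(requests_per_day: int, in_tokens: int, out_tokens: int,
                 working_days: int = 20,
                 in_rate: float = 0.20, out_rate: float = 0.60) -> float:
    """Estimated monthly USD spend, given per-request token averages.

    Rates are USD per million tokens (defaults: Codestral as quoted above).
    """
    daily = requests_per_day * (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000
    return round(daily * working_days, 2)

# 2,000 completions/day at 500 input + 200 output tokens each:
print(monthly_cost(2000, 500, 200))  # → 8.8
```

Running the same function with agentic-scale token counts (tens of thousands per task) makes it easy to see why teams using Devstral heavily are advised to watch the La Plateforme dashboard.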
Enterprise pricing covers dedicated deployment options — VPC hosting, on-premise inference, custom SLAs — for organisations whose data classification requirements prohibit even EU-hosted third-party APIs.
Mistral Code's compliance profile is straightforward for EU organisations because Mistral AI SAS is French by incorporation. There is no US parent company, no cross-Atlantic data transfer, and no CLOUD Act exposure for data processed through their API.
All Codestral and Devstral inference runs in EU data centres. Mistral's Data Processing Agreement (available from their legal team) includes a contractual no-training clause: code submitted for completion or chat is not used to improve Mistral's models. This addresses the concern that killed GitHub Copilot adoption at many European enterprises — the fear that proprietary code becomes training data for models serving competitors.
Mistral AI is actively working on EU AI Act compliance, as Codestral and Devstral qualify as general-purpose AI models under the Act's definitions. French regulators are familiar with the company, reducing the jurisdictional uncertainty that applies to US-based AI providers.
ISO 27001 certification is not yet listed for the Mistral Code product specifically, but Mistral's general infrastructure certifications cover the same servers. Organisations requiring specific certifications should request the latest compliance documentation from Mistral's enterprise team.
EU regulated enterprises where DPOs have blocked or restricted GitHub Copilot or Cursor due to US data transfer concerns. Mistral Code's French infrastructure and contractual no-training guarantee address the two most common objections raised in compliance reviews.
Teams already using Mistral's API for LLM or agent workloads. Adding Mistral Code consolidates AI tooling under one vendor, one DPA, one billing account, and one set of procurement docs.
Python, TypeScript, and Rust developers where Codestral's benchmark performance is strongest. Developers primarily working in legacy languages like COBOL or Fortran will see less benefit.
Budget-conscious individual developers who want AI coding assistance below GitHub Copilot pricing, with the free tier covering light daily usage entirely.
Mistral Code is not yet the complete package that GitHub Copilot represents after five years of development. The single-region EU data centres, the absence of Vim/Neovim support, and the slower agentic mode are real gaps. But for developers and organisations who prioritise EU data sovereignty, Mistral Code is the only EU-native coding assistant with frontier-class underlying models. If your work involves code that cannot leave EU infrastructure, Mistral Code goes from "one option among many" to "the obvious choice."
Yes. All inference runs on EU-hosted Mistral infrastructure. Mistral AI SAS is incorporated in France, subject to GDPR by default, and offers a Data Processing Agreement covering no-training commitments. Enterprise customers can arrange dedicated VPC or on-premise deployments for maximum isolation.
VS Code and the full JetBrains suite — IntelliJ IDEA, WebStorm, PyCharm, GoLand, Rider. Both offer feature parity including inline completion and agentic chat. Vim, Neovim, and Emacs are not supported as of April 2026.
Codestral is competitive on fill-in-the-middle benchmarks, and Mistral Code's EU compliance story is significantly stronger — French infrastructure, no US data transfer, contractual no-training guarantee. GitHub Copilot has a larger integration ecosystem, more mature enterprise rollout tooling, and broader IDE support. Copilot is better for teams where compliance is not the priority; Mistral Code is better for EU-regulated organisations.
Yes. Developers can obtain a La Plateforme API key on the free tier and use it with Mistral Code extensions. The free tier applies rate limits that cover typical individual developer usage of several hundred completions daily. Paid usage is billed per token through La Plateforme.
Code snippets are processed on Mistral's EU-based servers in France. Mistral contractually commits to not using code for model training. No data is routed through US infrastructure. Enterprise deployments can use dedicated VPC or on-premise Mistral inference for complete infrastructure isolation.