Mistral Large 2
Single-node inference model with a 128k context window, supporting dozens of natural languages and 80+ programming languages.
Mistral Large 2 is a text generation model developed by Mistral, a French AI company. It has 123 billion parameters and a 128,000-token context window, making it suited for long-document processing and extended conversations within a single inference session. The model supports dozens of natural languages, including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean.
One of the defining characteristics of Mistral Large 2 is that it is designed to run on a single node despite its large parameter count, enabling high-throughput deployment without multi-node infrastructure. It also supports over 80 programming languages, including Python, Java, C, C++, JavaScript, and Bash, making it applicable to code generation and analysis tasks. These properties make it a practical choice for multilingual applications, long-context document workflows, and coding assistants.
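As a sketch of what interacting with the model looks like, the snippet below builds a chat request for Mistral's chat completions API. The endpoint URL and the model identifier `mistral-large-2407` are assumptions based on Mistral's publicly documented API; consult Mistral's official documentation for current values.

```python
import os

# Assumed endpoint for Mistral's chat completions API.
API_URL = "https://api.mistral.ai/v1/chat/completions"

# Request body: a system prompt plus a user turn. The model name
# "mistral-large-2407" is an assumed identifier for Mistral Large 2.
payload = {
    "model": "mistral-large-2407",
    "messages": [
        {"role": "system", "content": "You are a concise multilingual assistant."},
        {"role": "user", "content": "Summarize this contract clause in French: ..."},
    ],
    "max_tokens": 512,
}

headers = {
    "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
    "Content-Type": "application/json",
}

# Sending the request requires a valid API key, e.g.:
#   import requests
#   resp = requests.post(API_URL, headers=headers, json=payload)
#   print(resp.json()["choices"][0]["message"]["content"])
```

The same request shape covers multilingual generation, long-document summarization, and code tasks; only the message content changes.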
What Mistral Large 2 supports
Long Context Window
Processes up to 128,000 tokens in a single request, enabling analysis of lengthy documents, codebases, or extended conversations without truncation.
Multilingual Text Generation
Generates and understands text in dozens of languages including French, German, Spanish, Arabic, Hindi, Chinese, Japanese, and Korean.
Code Generation
Supports code generation and comprehension across 80+ programming languages, including Python, Java, C, C++, JavaScript, and Bash.
Single-Node Inference
Designed to run at high throughput on a single node despite having 123 billion parameters, reducing infrastructure complexity for deployment.
Instruction Following
Responds to complex, multi-step instructions in natural language, supporting task completion across writing, summarization, and question answering.
Function Calling
Supports function calling and tool use, allowing the model to interact with external APIs and structured workflows in agentic applications.
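As an illustration of the function-calling capability above, the sketch below defines a tool in the JSON-Schema style used by Mistral's API and attaches it to a request body. The tool name `get_exchange_rate`, its parameters, and the model identifier are all invented for illustration, not taken from Mistral's documentation.

```python
# Hypothetical tool definition: a function the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_exchange_rate",  # invented for this example
            "description": "Look up the current exchange rate between two currencies.",
            "parameters": {
                "type": "object",
                "properties": {
                    "base": {"type": "string", "description": "ISO currency code, e.g. EUR"},
                    "quote": {"type": "string", "description": "ISO currency code, e.g. USD"},
                },
                "required": ["base", "quote"],
            },
        },
    }
]

# Request body attaching the tool; "auto" lets the model decide
# whether a tool call is needed to answer the question.
request_body = {
    "model": "mistral-large-2407",  # assumed identifier
    "messages": [{"role": "user", "content": "What is 100 EUR in USD?"}],
    "tools": tools,
    "tool_choice": "auto",
}
```

When the model decides to call the tool, the response contains the function name and JSON arguments; the application executes the function and returns the result in a follow-up message.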
Benchmark scores
Scores represent accuracy — the percentage of questions answered correctly on each test.
| Benchmark | What it tests | Score |
|---|---|---|
| MMLU-Pro | Expert knowledge across 14 academic disciplines | 69.7% |
| GPQA Diamond | PhD-level science questions (biology, physics, chemistry) | 48.6% |
| MATH-500 | Undergraduate and competition-level math problems | 73.6% |
| AIME 2024 | American Invitational Mathematics Examination problems | 11.0% |
| LiveCodeBench | Real-world coding tasks from recent competitions | 29.3% |
| HLE (Humanity's Last Exam) | Questions that challenge frontier models across many domains | 4.0% |
| SciCode | Scientific research coding and numerical methods | 29.2% |
Common questions about Mistral Large 2
What is the context window size for Mistral Large 2?
Mistral Large 2 has a context window of 128,000 tokens, allowing it to process long documents or extended conversations in a single request.
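A back-of-the-envelope way to check whether a document is likely to fit in that window is to estimate its token count. The ~4 characters-per-token ratio below is a common rule of thumb for English text, not an exact figure; use a real tokenizer for precise counts.

```python
# Heuristic token-budget check for a 128,000-token context window.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # rough rule of thumb for English text

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """True if the text likely fits, leaving room for the model's reply."""
    return estimate_tokens(text) <= CONTEXT_WINDOW - reserved_for_output

doc = "lorem ipsum " * 20_000  # 240,000 characters
print(estimate_tokens(doc), fits_in_context(doc))  # → 60000 True
```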
How many parameters does Mistral Large 2 have?
Mistral Large 2 has 123 billion parameters. It is designed to run on a single node at high throughput despite this scale.
What languages does Mistral Large 2 support?
The model supports dozens of natural languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, as well as 80+ programming languages such as Python, Java, C, C++, JavaScript, and Bash.
What is the knowledge cutoff date for Mistral Large 2?
A specific training cutoff date is not listed in the available metadata for Mistral Large 2. For the most accurate information, consult Mistral's official documentation.
What types of tasks is Mistral Large 2 best suited for?
Based on its design, Mistral Large 2 is well suited for long-context document processing, multilingual text generation, code generation across 80+ languages, and single-node deployments requiring high throughput.
What people think about Mistral Large 2
Available community threads do not discuss Mistral Large 2 specifically, so no community sentiment or use-case patterns for this model can be reported here.
Start building with Mistral Large 2
No API keys required. Create AI-powered workflows with Mistral Large 2 in minutes — free.