Mistral Large 24.02
Single-node inference model with a 128k context window, supporting dozens of natural languages and 80+ programming languages.
Mistral Large 24.02 is a text generation model developed by Mistral, built on 123 billion parameters and designed to run on a single node for high-throughput inference. It features a 128,000-token context window, making it well suited to long-document processing and extended conversational tasks. The model supports dozens of natural languages, including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean.
Beyond natural language, Mistral Large 24.02 supports over 80 programming languages, including Python, Java, C, C++, JavaScript, and Bash, making it applicable to code generation and analysis tasks. Its single-node inference design means it can deliver high throughput without requiring distributed infrastructure. This combination of broad language coverage, large context capacity, and coding support makes it well-suited for multilingual applications, long-context document workflows, and software development assistance.
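Even a 128,000-token window has limits, so very long inputs are often split before being sent to the model. As an illustration only (not from Mistral's documentation), here is a minimal chunking sketch; the ~4-characters-per-token ratio is a crude heuristic, and a real tokenizer should be used to count tokens exactly.

```python
# Rough sketch: split a long document into chunks that fit a token budget.
# CHARS_PER_TOKEN is a heuristic, not the model's actual tokenizer behavior.

CONTEXT_WINDOW = 128_000          # Mistral Large 24.02 context size, in tokens
CHARS_PER_TOKEN = 4               # rough estimate for English-like text

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def chunk_document(text: str, budget_tokens: int) -> list[str]:
    """Split text on paragraph boundaries so each chunk stays under budget."""
    chunks, current, current_tokens = [], [], 0
    for para in text.split("\n\n"):
        t = estimate_tokens(para)
        if current and current_tokens + t > budget_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += t
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Example: 100 large paragraphs split into budget-sized chunks.
doc = "\n\n".join(f"Paragraph {i}: " + "word " * 1000 for i in range(100))
chunks = chunk_document(doc, budget_tokens=50_000)
print(len(chunks))
```

In practice each chunk would be sent as its own request (or summarized and merged), leaving headroom in the budget for the prompt and the model's response.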
What Mistral Large 24.02 supports
Long Context Window
Processes up to 128,000 tokens in a single request, enabling analysis of long documents, codebases, or extended conversations without truncation.
Multilingual Text Generation
Generates and understands text in dozens of languages including French, German, Spanish, Arabic, Hindi, Chinese, Japanese, and Korean.
Code Generation
Supports code generation and comprehension across 80+ programming languages, including Python, Java, C, C++, JavaScript, and Bash.
Single-Node Inference
Runs at high throughput on a single node despite its 123-billion-parameter size, reducing infrastructure complexity for deployment.
Instruction Following
Responds to detailed instructions and multi-step prompts, supporting structured task completion such as summarization, classification, and Q&A.
Function Calling
Supports function calling, allowing the model to invoke external tools or APIs based on structured prompts.
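Function calling generally works in three steps: the application describes its tools to the model as JSON schemas, the model replies with a tool name and JSON arguments, and the application executes the call and returns the result. The sketch below is illustrative only: the schema follows the common JSON-schema convention used by chat-completion APIs, and the `get_weather` tool and the simulated model reply are hypothetical, not Mistral's actual API objects.

```python
import json

# Hypothetical tool definition in the JSON-schema style used by
# chat-completion APIs (see Mistral's docs for the exact request format).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Local implementations the application exposes to the model.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"          # stand-in for a real weather lookup

REGISTRY = {"get_weather": get_weather}

# Simulated model reply: in practice this comes back from the API
# when the model decides a tool is needed.
model_tool_call = {"name": "get_weather",
                   "arguments": json.dumps({"city": "Paris"})}

def dispatch(call: dict) -> str:
    """Run the tool the model asked for and return its result."""
    func = REGISTRY[call["name"]]
    kwargs = json.loads(call["arguments"])
    return func(**kwargs)

result = dispatch(model_tool_call)
print(result)
```

The dispatcher's result would then be appended to the conversation as a tool message so the model can compose its final answer.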
Benchmark scores
Scores represent accuracy — the percentage of questions answered correctly on each test.
| Benchmark | What it tests | Score |
|---|---|---|
| MMLU-Pro | Expert knowledge across 14 academic disciplines | 69.7% |
| GPQA Diamond | PhD-level science questions (biology, physics, chemistry) | 48.6% |
| MATH-500 | Undergraduate and competition-level math problems | 73.6% |
| AIME 2024 | Problems from the 2024 American Invitational Mathematics Examination | 11.0% |
| LiveCodeBench | Real-world coding tasks from recent competitions | 29.3% |
| HLE | Humanity's Last Exam: frontier-difficulty questions across many domains | 4.0% |
| SciCode | Scientific research coding and numerical methods | 29.2% |
Common questions about Mistral Large 24.02
What is the context window size for Mistral Large 24.02?
Mistral Large 24.02 has a context window of 128,000 tokens, allowing it to process long documents or extended conversations in a single request.
How many parameters does Mistral Large 24.02 have?
The model has 123 billion parameters and is designed to run on a single node for efficient large-throughput inference.
Which programming languages does Mistral Large 24.02 support?
The model supports over 80 programming languages, including Python, Java, C, C++, JavaScript, and Bash.
Which natural languages does Mistral Large 24.02 support?
It supports dozens of natural languages, including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, among others.
What is the knowledge cutoff date for Mistral Large 24.02?
A knowledge cutoff date is not listed in the available metadata. For the most accurate information, refer to Mistral's official documentation.