MindStudio
Text Generation Model

Mistral Small 24.02

A text generation model designed for single-node inference, with a 128k-token context window and support for dozens of natural languages and 80+ coding languages.

Publisher Mistral
Type Text
Context Window 128,000 tokens
Training Data n/a
Input $1.00/MTok
Output $3.00/MTok
Provider Amazon Bedrock

128k context window across dozens of languages

Mistral Small 24.02 is a text generation model developed by Mistral, designed to run on a single node while supporting a 128,000-token context window. It covers dozens of natural languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, as well as over 80 coding languages such as Python, Java, C, C++, JavaScript, and Bash. At 123 billion parameters, the model is sized to deliver high-throughput inference without requiring multi-node infrastructure.

This model is well-suited for long-context applications where fitting large documents or extended conversations into a single prompt is necessary. Its broad language coverage makes it applicable to multilingual workflows, while its coding language support makes it useful for code generation and analysis tasks. The single-node inference design is a practical consideration for teams managing deployment costs and infrastructure complexity.
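Since the model is served through Amazon Bedrock, a request body can be sketched in Python before sending it with an SDK such as boto3. This is a sketch only: the model ID and the `[INST]` prompt template are assumptions based on Mistral's commonly documented Bedrock format, so verify both against the model listings in your AWS region before use.

```python
import json

# Assumed Bedrock model ID for Mistral Small 24.02 -- confirm in your region.
MODEL_ID = "mistral.mistral-small-2402-v1:0"

def build_request(prompt: str, max_tokens: int = 1024, temperature: float = 0.7) -> str:
    """Build a JSON request body using Mistral's instruction prompt format."""
    body = {
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(body)

request = build_request("Summarize the key obligations in the contract below.")
```

The resulting string would be passed as the `body` of a Bedrock `invoke_model` call along with `MODEL_ID`; response parsing depends on the provider's output schema.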

What Mistral Small 24.02 supports

Long Context Window

Supports up to 128,000 tokens in a single context, enabling processing of long documents or extended multi-turn conversations without truncation.
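Before sending a long document, it helps to check whether it plausibly fits the 128,000-token window. A minimal sketch, using the rough four-characters-per-token heuristic rather than Mistral's actual tokenizer (real token counts vary by language and content):

```python
CONTEXT_WINDOW = 128_000   # tokens, per the model metadata above
CHARS_PER_TOKEN = 4        # rough heuristic, not the real tokenizer

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate whether `text` plus an output budget fits the context window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# A ~500,000-character document estimates to ~125k tokens; with a 4k output
# budget it exceeds the 128k window.
print(fits_in_context("word " * 100_000))
```

For production use, an exact tokenizer count is preferable to this estimate.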

Multilingual Text Generation

Generates and understands text in dozens of natural languages including French, German, Spanish, Arabic, Hindi, Chinese, Japanese, and Korean.

Code Generation

Supports over 80 coding languages including Python, Java, C, C++, JavaScript, and Bash for code writing and analysis tasks.

Single-Node Inference

Delivers high throughput on a single node with its 123-billion-parameter architecture, avoiding the need for multi-node infrastructure.

Instruction Following

Responds to structured prompts and instructions, making it applicable for task-oriented workflows such as summarization, translation, and Q&A.
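Task-oriented prompts like summarization or translation can be assembled with an instruction template. A minimal sketch; the `[INST]` and `</s>` markers follow Mistral's commonly published chat format and are an assumption here, not something stated on this page:

```python
def format_chat(turns):
    """Format (user, assistant) turns into Mistral's [INST] template.

    `turns` is a list of (user_message, assistant_reply) pairs; pass None
    as the reply for the final, unanswered user turn.
    """
    parts = ["<s>"]
    for user, assistant in turns:
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

prompt = format_chat([
    ("Translate 'hello' to French.", "Bonjour."),
    ("Now translate it to German.", None),
])
```

The formatted string would go into the `prompt` field of the request body.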

Ready to build with Mistral Small 24.02?

Get Started Free

Benchmark scores

Scores represent accuracy — the percentage of questions answered correctly on each test.

Benchmark What it tests Score
MMLU-Pro Expert knowledge across 14 academic disciplines 52.9%
GPQA Diamond PhD-level science questions (biology, physics, chemistry) 38.1%
MATH-500 Undergraduate and competition-level math problems 56.3%
AIME 2024 Problems from the 2024 American Invitational Mathematics Examination 6.3%
LiveCodeBench Real-world coding tasks from recent competitions 14.1%
HLE Questions that challenge frontier models across many domains 4.3%
SciCode Scientific research coding and numerical methods 15.6%

Common questions about Mistral Small 24.02

What is the context window size for Mistral Small 24.02?

Mistral Small 24.02 supports a context window of 128,000 tokens, allowing large documents or long conversations to be processed in a single prompt.

Which natural languages does this model support?

The model supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, among others.

How many coding languages does Mistral Small 24.02 support?

The model supports over 80 coding languages, including Python, Java, C, C++, JavaScript, and Bash.

What infrastructure is required to run this model?

Mistral Small 24.02 is designed for single-node inference. Its 123 billion parameters allow it to run at high throughput on a single node without requiring multi-node setups.

Is a training data cutoff date available for this model?

A training data cutoff date is not available in the current metadata. For the most accurate information, refer to Mistral's official documentation.

Parameters & options

Max Temperature 1
Max Response Size 16,000 tokens
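Client code can clamp generation options to the limits in the table above before building a request. A minimal sketch; the option names `temperature` and `max_tokens` are assumed, generic parameter names, not taken from this page:

```python
MAX_TEMPERATURE = 1.0        # from the parameters table above
MAX_RESPONSE_TOKENS = 16_000 # from the parameters table above

def clamp_options(temperature: float, max_tokens: int) -> dict:
    """Clamp generation options to this model's documented limits."""
    return {
        "temperature": min(max(temperature, 0.0), MAX_TEMPERATURE),
        "max_tokens": min(max_tokens, MAX_RESPONSE_TOKENS),
    }

# Out-of-range values are pulled back to the documented maxima.
opts = clamp_options(temperature=1.5, max_tokens=32_000)
```

Clamping client-side gives clearer behavior than letting the API reject or silently adjust out-of-range values.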

Start building with Mistral Small 24.02

No API keys required. Create AI-powered workflows with Mistral Small 24.02 in minutes — free.