Text Generation Model

Ministral 3 3B

Designed for edge deployment, Ministral 3 3B delivers high performance across diverse hardware, including local setups.

Publisher Mistral
Type Text
Context Window 256,000 tokens
Training Data n/a
Input $0.10/MTok
Output $0.10/MTok
OPEN SOURCE
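With input and output both priced at $0.10 per million tokens, estimating request cost is simple arithmetic. The sketch below is a minimal illustration using the listed rates; the function name and the example token counts are hypothetical, not part of any official SDK.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_mtok: float = 0.10,
                      output_price_per_mtok: float = 0.10) -> float:
    """Estimate request cost from token counts at per-million-token rates."""
    return (input_tokens * input_price_per_mtok +
            output_tokens * output_price_per_mtok) / 1_000_000

# A 200,000-token prompt with a 4,000-token reply:
cost = estimate_cost_usd(200_000, 4_000)
print(f"${cost:.4f}")  # $0.0204
```

Even a prompt that nearly fills the 256,000-token context window costs only a few cents at these rates.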

Compact open-weight model for edge deployment

Ministral 3 3B is a 3-billion-parameter language model developed by Mistral AI as part of the Ministral 3 family. It is the smallest model in that family and is released as open-weight, meaning the model weights are publicly available for download and local use. The model supports a 256,000-token context window and includes both language and vision capabilities in a compact form factor.

Ministral 3 3B is designed specifically for edge deployment, making it suitable for running on local hardware, embedded systems, and resource-constrained environments. Its small parameter count allows it to operate efficiently across a wide range of hardware configurations without requiring cloud infrastructure. It is well-suited for developers building on-device applications, offline workflows, or latency-sensitive pipelines where a smaller footprint is a requirement.

What Ministral 3 3B supports

Long Context Window

Processes up to 256,000 tokens in a single request, enabling analysis of lengthy documents or extended conversations without truncation.
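When filling a large context window, it helps to check that a prompt leaves room for the reply. The sketch below uses a rough heuristic of ~4 characters per token for English text; exact counts depend on the model's actual tokenizer, and the function names and the 4,000-token output reservation are illustrative assumptions.

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text).
    Use the model's real tokenizer when exact counts matter."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_context: int = 256_000,
                    reserved_for_output: int = 4_000) -> bool:
    """Check whether the estimated prompt size leaves room for the reply."""
    return rough_token_count(prompt) + reserved_for_output <= max_context

# A ~500,000-character document (~125,000 estimated tokens) fits comfortably:
print(fits_in_context("word " * 100_000))  # True
```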

Text Generation

Generates coherent natural language output for tasks such as summarization, question answering, and instruction following.

Vision Understanding

Supports image input alongside text, allowing the model to interpret and respond to visual content as part of multimodal prompts.
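A multimodal prompt typically interleaves text and image parts in a single user message. The sketch below builds such a request body in the widely used OpenAI-compatible chat format; the model identifier, endpoint schema, and image URL are assumptions for illustration, so check your provider's documentation for the exact names it expects.

```python
import json

# "ministral-3-3b" is a hypothetical model identifier for this example.
payload = {
    "model": "ministral-3-3b",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this chart?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
}
print(json.dumps(payload, indent=2))
```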

Open-Weight Access

Released with publicly available model weights under an open license, allowing local deployment and fine-tuning without API dependency.

Edge Deployment

Optimized for running on local and resource-constrained hardware, including consumer devices, without requiring cloud infrastructure.


Benchmark scores

Scores represent accuracy: the percentage of questions answered correctly on each test.

Benchmark What it tests Score
MMLU-Pro Expert knowledge across 14 academic disciplines 52.4%
GPQA Diamond PhD-level science questions (biology, physics, chemistry) 35.8%
LiveCodeBench Real-world coding tasks from recent competitions 24.7%
HLE Questions that challenge frontier models across many domains 5.3%
SciCode Scientific research coding and numerical methods 14.4%

Common questions about Ministral 3 3B

What is the context window size for Ministral 3 3B?

Ministral 3 3B supports a context window of 256,000 tokens, allowing it to process large amounts of text in a single request.

Is Ministral 3 3B open source?

Yes, Ministral 3 3B is released as an open-weight model, meaning the model weights are publicly available for download, local deployment, and fine-tuning.

What hardware can Ministral 3 3B run on?

The model is designed for edge deployment and is intended to run across diverse hardware configurations, including local consumer setups and resource-constrained environments.

Does Ministral 3 3B support vision or image inputs?

Yes, according to Mistral's announcement, Ministral 3 3B includes vision capabilities alongside its language capabilities.

What is the training data cutoff for Ministral 3 3B?

The training data cutoff is listed as not available in the current metadata. Refer to Mistral's official documentation for the most up-to-date information.

What people think about Ministral 3 3B

Community reception of the Ministral 3 family release was generally positive, with threads on r/LocalLLaMA accumulating hundreds of upvotes and discussion focused on the breadth of the open-weight release, which spans 3B to 675B parameters. Users highlighted smaller models like the 3B variant as particularly useful for local and edge deployment scenarios.

Some community members noted the rapid pace of Mistral's model releases as a point of interest, while discussions also touched on benchmark comparisons and practical use cases for running smaller models on consumer hardware. The Ministral-3 dedicated thread drew focused conversation about the 3B and 8B variants and their suitability for on-device applications.


Parameters & options

Max Temperature 1
Max Response Size 16,000 tokens

Start building with Ministral 3 3B

No API keys required. Create AI-powered workflows with Ministral 3 3B in minutes, for free.