MindStudio
Text Generation Model

Ministral 3 8B

Built for edge deployment, it delivers high performance across diverse hardware, including local setups.

Publisher Mistral
Type Text
Context Window 256,000 tokens
Training Data n/a
Input $0.15/MTok
Output $0.15/MTok
OPEN SOURCE

Efficient open-source model built for edge deployment

Ministral 3 8B is a text generation model developed by Mistral AI, part of the Ministral 3 model family. It is open source and designed with edge deployment in mind, meaning it is optimized to run efficiently across a range of hardware configurations, including local setups without cloud infrastructure. The model supports a 256,000-token context window, enabling it to process and reason over long documents in a single pass.

Ministral 3 8B is well-suited for developers and organizations that need a capable language model deployable on-device or in resource-constrained environments. Its 8-billion parameter size makes it practical for local inference while still handling a broad range of text generation tasks. The open-source availability means it can be downloaded, fine-tuned, and self-hosted without requiring API access.
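Because the model can be self-hosted, a common pattern is to serve the weights behind an OpenAI-compatible endpoint (for example with vLLM or llama.cpp in server mode) and send standard chat-completions requests. The sketch below only builds the request body; the endpoint URL and model ID are placeholder assumptions, not official values.

```python
import json

# Hypothetical endpoint and model ID for a self-hosted server (e.g. vLLM or
# llama.cpp in server mode); both names are assumptions, not official values.
ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_ID = "ministral-3-8b"

def build_request(prompt: str, max_tokens: int = 512, temperature: float = 0.7) -> str:
    """Build an OpenAI-compatible chat-completions request body as JSON."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(payload)

body = build_request("Summarize this document in three bullet points.")
print(body)
```

From here, any HTTP client can POST the body to the local server, keeping all data on-device.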

What Ministral 3 8B supports

Long Context Window

Processes up to 256,000 tokens in a single request, allowing the model to handle long documents, codebases, or extended conversations without truncation.

Edge Deployment

Optimized to run on diverse hardware including local machines, making it suitable for on-device inference without relying on cloud infrastructure.

Text Generation

Generates coherent, contextually relevant text across tasks such as summarization, question answering, and instruction following.

Open Source

Released as an open-source model, allowing developers to download, self-host, and fine-tune the weights without proprietary restrictions.

Local Inference

Supports running entirely on local hardware setups, enabling private, offline use cases without sending data to external servers.
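Even with a 256,000-token window, long inputs still need budgeting so the prompt leaves room for the response. The helper below is a rough sketch: the four-characters-per-token ratio is a heuristic assumption for English text, not the model's actual tokenization, which would require the real tokenizer.

```python
CONTEXT_WINDOW = 256_000  # Ministral 3 8B's advertised context window, in tokens

def fits_in_context(document: str, reserved_for_output: int = 16_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough check that a document plus the reserved output budget fits the
    context window. The chars-per-token ratio is a heuristic for English
    text; use the model's actual tokenizer for exact counts."""
    estimated_tokens = len(document) / chars_per_token
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# A ~400,000-character document estimates to ~100,000 tokens, well within budget.
print(fits_in_context("x" * 400_000))  # -> True
```

For production use, replace the heuristic with a count from the model's own tokenizer, since token density varies by language and content type.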


Benchmark scores

Scores represent accuracy: the percentage of questions answered correctly on each test.

Benchmark What it tests Score
MMLU-Pro Expert knowledge across 14 academic disciplines 64.2%
GPQA Diamond PhD-level science questions (biology, physics, chemistry) 47.1%
LiveCodeBench Real-world coding tasks from recent competitions 30.3%
HLE Questions that challenge frontier models across many domains 4.3%
SciCode Scientific research coding and numerical methods 20.8%

Common questions about Ministral 3 8B

What is the context window size for Ministral 3 8B?

Ministral 3 8B supports a context window of 256,000 tokens, allowing it to process very long inputs in a single pass.

Is Ministral 3 8B open source?

Yes, Ministral 3 8B is released as an open-source model, meaning the weights can be downloaded and used or fine-tuned independently.

What hardware can Ministral 3 8B run on?

The model is built for edge deployment and is designed to run across diverse hardware configurations, including local setups and devices without dedicated cloud infrastructure.

Who developed Ministral 3 8B?

Ministral 3 8B was developed by Mistral AI and is part of the Ministral 3 model family.

What is the training data cutoff for Ministral 3 8B?

A specific training data cutoff date is not provided in the available metadata for this model.

What people think about Ministral 3 8B

Community discussion around Ministral 3 8B on Reddit has been generally positive, with users in the LocalLLaMA subreddit noting its release as part of a broader wave of Mistral model launches. The release thread received 280 upvotes and 61 comments, reflecting meaningful interest from the local inference community.

Some discussion focused on Mistral's rapid model release cadence rather than on specific benchmarks or limitations of the 8B variant. Users interested in base models and local deployment appear to be the primary audience engaging with this model.


Parameters & options

Max Temperature 1
Max Response Size 16,000 tokens
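The limits above (temperature capped at 1, responses capped at 16,000 tokens) can be enforced client-side before a request is sent. A minimal sketch, assuming these are hard maxima for this model:

```python
MAX_TEMPERATURE = 1.0          # maximum temperature listed for this model
MAX_RESPONSE_TOKENS = 16_000   # maximum response size listed for this model

def clamp_options(temperature: float, max_tokens: int) -> dict:
    """Clamp user-supplied sampling options to the model's listed limits."""
    return {
        "temperature": min(max(temperature, 0.0), MAX_TEMPERATURE),
        "max_tokens": min(max(max_tokens, 1), MAX_RESPONSE_TOKENS),
    }

print(clamp_options(1.5, 32_000))  # -> {'temperature': 1.0, 'max_tokens': 16000}
```

Clamping on the client keeps out-of-range values from ever reaching the server, which would otherwise reject or silently truncate them.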

Start building with Ministral 3 8B

No API keys required. Create AI-powered workflows with Ministral 3 8B in minutes — free.