Text Generation Model (Deprecated)

Mixtral 8x22B Instruct

A high-performance, cost-efficient sparse model that uses only 39B active parameters out of 141B total.

Publisher: Mistral
Type: Text
Context Window: 64,000 tokens
Training Data: September 2023
Input: $0.50/MTok
Output: $0.50/MTok
Provider: DeepInfra
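At the listed rates, estimating a request's cost is simple arithmetic: (input tokens + output tokens) × $0.50 per million. A quick sketch in Python, with illustrative token counts:

```python
# Cost estimate at $0.50 per million tokens for both input and output.
PRICE_PER_MTOK = 0.50  # USD; input and output are priced the same here

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens + output_tokens) * PRICE_PER_MTOK / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0055
```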


Mixtral 8x22B is a sparse Mixture-of-Experts (SMoE) model that activates only 39B of its 141B parameters for each token, delivering strong performance at a much lower inference cost than a comparably sized dense model.
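To make the "sparse" part concrete, here is a minimal NumPy sketch of the top-2 expert routing an SMoE layer performs per token. The shapes, weights, and expert count (8, matching Mixtral) are illustrative assumptions, not Mixtral's actual implementation:

```python
import numpy as np

def smoe_layer(x, gate_w, experts, top_k=2):
    """Route one token through its top-k experts and mix their outputs.

    x:       (d,) token hidden state
    gate_w:  (d, n_experts) router weights
    experts: list of callables, one feed-forward block per expert
    Only top_k experts run, which is why active parameters stay
    far below the total parameter count.
    """
    logits = x @ gate_w                  # (n_experts,) router scores
    top = np.argsort(logits)[-top_k:]    # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy setup: 8 experts (as in Mixtral), 2 active per token
rng = np.random.default_rng(0)
d, n_experts = 16, 8
gate_w = rng.normal(size=(d, n_experts))
experts = [lambda x, W=rng.normal(size=(d, d)): np.tanh(x @ W)
           for _ in range(n_experts)]
out = smoe_layer(rng.normal(size=d), gate_w, experts)
print(out.shape)  # (16,)
```

Because only two of the eight experts run for each token, per-token compute tracks the ~39B active parameters rather than the full 141B.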


Documentation & links

Parameters & options

Max Temperature: 1
Max Response Size: 64,000 tokens
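MindStudio makes the model call for you, but for readers curious how these parameters map onto the underlying provider, here is a hedged sketch against DeepInfra's OpenAI-compatible endpoint. The base URL and model id are assumptions drawn from DeepInfra's public conventions, not from this page:

```python
# Sketch only: assumes DeepInfra's OpenAI-compatible API and the
# Hugging Face model id "mistralai/Mixtral-8x22B-Instruct-v0.1".
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint
    api_key="YOUR_DEEPINFRA_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed model id
    messages=[{"role": "user",
               "content": "Summarize sparse MoE in one sentence."}],
    temperature=1.0,  # at the listed maximum of 1
    max_tokens=512,   # must stay within the 64,000-token response limit
)
print(response.choices[0].message.content)
```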

Start building with Mixtral 8x22B Instruct

No API keys required. Create AI-powered workflows with Mixtral 8x22B Instruct in minutes — free.