Text Generation Model (Deprecated)
Mixtral 8x7B Instruct
High-quality, efficient sparse model outperforming larger models in speed and benchmarks.
Publisher: Mistral
Type: Text
Context Window: 4,096 tokens
Training Data: September 2023
Input: $0.24/MTok
Output: $0.24/MTok
Provider: DeepInfra
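For a rough sense of what the listed pricing means in practice, here is a minimal sketch of a per-request cost estimate at the flat $0.24 per million tokens for both input and output. The token counts are illustrative placeholders, not figures from this page.

```python
# Illustrative cost estimate at the listed rate ($0.24 per million tokens,
# same price for input and output). Token counts below are made-up examples.
PRICE_PER_MTOK = 0.24  # USD per 1,000,000 tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at a flat per-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_MTOK / 1_000_000

# Example: a 3,000-token prompt with a 1,000-token reply
print(f"${request_cost(3_000, 1_000):.6f}")  # $0.000960
```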
Overview
Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and offers the best cost/performance trade-off overall; in particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
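The "sparse mixture of experts" design means each token is processed by only a small subset of the model's expert feed-forward networks (in Mixtral's case, 2 of 8 per layer), which is what makes inference fast relative to the total parameter count. The toy sketch below illustrates the top-2 routing idea with made-up dimensions; it is not Mixtral's actual implementation.

```python
import numpy as np

# Toy illustration of top-2 sparse MoE routing. Dimensions are invented
# for this sketch; this is not Mixtral's real code. Each token is routed
# to 2 of 8 "experts", so only a fraction of the parameters run per token.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

router = rng.normal(size=(d_model, n_experts))            # router weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # toy experts: one matrix each

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-2 experts."""
    logits = x @ router                # score each expert for this token
    top = np.argsort(logits)[-top_k:]  # indices of the 2 highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other 6 never run.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,)
```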
Resources
Documentation & links
Configuration
Parameters & options
Max Temperature: 1
Max Response Size: 2,500 tokens
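A minimal call sketch that keeps temperature and max_tokens within the limits listed above. The base URL and model ID are assumptions based on DeepInfra's OpenAI-compatible API, not values confirmed by this page; check the provider's documentation before relying on them.

```python
# Minimal sketch using the openai client against DeepInfra's
# OpenAI-compatible endpoint. The base URL and model ID are assumptions,
# not taken from this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint
    api_key="YOUR_DEEPINFRA_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed model ID
    messages=[{"role": "user", "content": "Summarize what a sparse MoE is."}],
    temperature=0.7,  # within the listed max temperature of 1
    max_tokens=1024,  # within the listed max response size of 2,500 tokens
)
print(response.choices[0].message.content)
```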
Start building with Mixtral 8x7B Instruct
No API keys required. Create AI-powered workflows with Mixtral 8x7B Instruct in minutes, for free.