Llama 3.2 90B Instruct
State-of-the-art large language model with strong language understanding, superior reasoning, and text generation.
Multilingual reasoning and dialogue at 90B scale
Llama 3.2 90B Instruct is a large language model developed by Meta, part of the Llama 3.2 collection of multilingual generative models. It uses an auto-regressive transformer architecture and has been fine-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety. The model accepts text input and produces text output, with a context window of 128,000 tokens.
This model is optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. It is instruction-tuned, meaning it is designed to follow user prompts and engage in conversational interactions across multiple languages. The 90B parameter scale makes it well suited for tasks requiring deeper language understanding and reasoning than the smaller 1B and 3B variants in the same Llama 3.2 family.
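As a rough sketch of how an instruction-tuned chat model like this is typically invoked, the snippet below assembles a request body in the common OpenAI-compatible chat-completions shape. The model identifier and field names are assumptions for illustration; check your provider's documentation for the exact values.

```python
# Minimal sketch of a chat request payload for an OpenAI-compatible
# endpoint. MODEL_ID is a hypothetical identifier, not an official one.
import json

MODEL_ID = "meta/llama-3.2-90b-instruct"  # assumption; verify with your provider

def build_chat_request(user_prompt,
                       system_prompt="You are a helpful assistant.",
                       max_tokens=512,
                       temperature=0.7):
    """Assemble the JSON body for a chat-completion style call."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_request("Summarize the Llama 3.2 release in one sentence.")
print(json.dumps(body, indent=2))
```

The system/user message split shown here is how instruction-tuned models generally separate behavioral guidance from the actual task.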
What Llama 3.2 90B Instruct supports
Multilingual Dialogue
Handles conversational interactions across multiple languages, optimized specifically for multilingual dialogue use cases as part of the Llama 3.2 instruction-tuned design.
Long-Context Processing
Processes inputs and maintains context across a 128,000-token context window, enabling work with lengthy documents or extended conversations.
Instruction Following
Fine-tuned with SFT and RLHF (reinforcement learning from human feedback) to follow user instructions accurately and align responses with human preferences for helpfulness and safety.
Agentic Retrieval
Supports agentic retrieval workflows where the model can reason over retrieved information to complete multi-step tasks.
Text Summarization
Optimized for summarization, condensing long-form text into concise outputs; one of the primary use cases for the instruction-tuned variant.
Reasoning
Applies multi-step reasoning to complex prompts, leveraging the model's 90-billion-parameter transformer architecture.
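When working within the 128,000-token window described above, it helps to budget tokens before sending a request. The sketch below uses a crude characters-per-token heuristic for English text; it is not the model's real tokenizer, which you should use for exact counts.

```python
# Rough context-budget check for a 128,000-token window.
# The 4-characters-per-token ratio is a coarse English-text heuristic,
# not the model's actual tokenizer.
CONTEXT_WINDOW = 128_000

def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt, reserved_for_output=4_096):
    """Check whether a prompt leaves room for the response."""
    budget = CONTEXT_WINDOW - reserved_for_output
    return estimate_tokens(prompt) <= budget

print(fits_in_context("Hello, world!"))   # short prompt fits easily
print(fits_in_context("x" * 600_000))     # far beyond the window
```

Reserving headroom for the model's output (here 4,096 tokens, an arbitrary choice) avoids requests that technically fit the input but leave no room for generation.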
Ready to build with Llama 3.2 90B Instruct?
Get Started Free
Common questions about Llama 3.2 90B Instruct
What is the context window for Llama 3.2 90B Instruct?
Llama 3.2 90B Instruct supports a context window of 128,000 tokens, allowing it to process long documents or extended multi-turn conversations in a single request.
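Documents that exceed even a 128,000-token window are usually split into overlapping chunks and processed in multiple requests. A minimal character-based sketch (characters standing in for tokens; a real pipeline would split on token boundaries with the official tokenizer) might look like:

```python
# Naive character-based chunking for documents larger than the
# context window. Chunk sizes are in characters, a stand-in for tokens.
def chunk_text(text, chunk_chars=400_000, overlap=2_000):
    """Split text into overlapping chunks; overlap preserves context
    across chunk boundaries."""
    if chunk_chars <= overlap:
        raise ValueError("chunk_chars must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

doc = "a" * 1_000_000
parts = chunk_text(doc)
print(len(parts), len(parts[0]))
```

Each chunk can then be summarized independently and the partial summaries combined in a final pass (the common map-reduce summarization pattern).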
What languages does this model support?
The model is part of Meta's multilingual Llama 3.2 collection and is optimized for multilingual dialogue use cases, though the specific list of supported languages is detailed in Meta's official model documentation.
How was this model trained and aligned?
Llama 3.2 90B Instruct uses an auto-regressive transformer architecture. The instruction-tuned version was trained using supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.
What is this model best suited for?
According to Meta's documentation, the model is optimized for multilingual dialogue, agentic retrieval, and summarization tasks. It is an instruction-tuned model designed for conversational and task-completion use cases.
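The retrieval half of an agentic-retrieval workflow can be illustrated with a toy retrieve-then-generate loop: score documents against the query, pick the best matches, and assemble them into a grounded prompt. The keyword-overlap scoring below is purely illustrative; real systems use embedding-based search.

```python
# Toy sketch of the retrieve-then-generate pattern behind agentic
# retrieval. Scoring is naive keyword overlap, for illustration only.
def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {query}"

docs = [
    "Llama 3.2 ships 1B and 3B text models for edge devices.",
    "The 90B model targets multilingual dialogue and summarization.",
    "Paris is the capital of France.",
]
print(build_grounded_prompt("What does the 90B model target?", docs))
```

The assembled prompt would then be sent to the model; the "agentic" part comes from letting the model decide when to issue further retrievals across a multi-step task.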
What is the knowledge cutoff date for Llama 3.2 90B Instruct?
A specific training cutoff date is not provided in the available metadata. For the most accurate information, refer to Meta's official model card on the Llama website.
Start building with Llama 3.2 90B Instruct
No API keys required. Create AI-powered workflows with Llama 3.2 90B Instruct in minutes, for free.