MindStudio
Text Generation Model

Gemini 3.1 Pro

Google's frontier reasoning model delivering enhanced software engineering, agentic reliability, and multimodal intelligence across a 1M-token context window.

Publisher: Google
Type: Text
Context Window: 1,048,576 tokens
Training Data: February 2026
Input: $2.00/MTok
Output: $12.00/MTok
Latest · Large Context · Reasoning · Multi-modal · Tools
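At the listed rates ($2.00 per million input tokens, $12.00 per million output tokens), per-request cost is straightforward to estimate. The helper below is a minimal sketch using only the prices shown above; actual billing may include other factors (caching, thinking tokens) not covered on this page.

```python
# Estimate request cost from the listed per-million-token prices.
INPUT_PRICE_PER_MTOK = 2.00    # USD per 1M input tokens (listed above)
OUTPUT_PRICE_PER_MTOK = 12.00  # USD per 1M output tokens (listed above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# A full-context request (1,048,576 input tokens) with a maximum-length
# response (65,536 output tokens):
print(round(estimate_cost(1_048_576, 65_536), 4))  # → 2.8836
```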

Frontier reasoning across a 1M-token context

Gemini 3.1 Pro is a frontier reasoning model developed by Google, released in February 2026 as a major upgrade to the Gemini 3 series. It supports multimodal inputs — including text, images, video, audio, and code — within a single model, and offers a context window of 1,048,576 tokens, equivalent to roughly 1,500 A4 pages. The model scores 77.1% on the ARC-AGI-2 benchmark and introduces a medium thinking level designed to balance cost, speed, and reasoning depth.

Gemini 3.1 Pro is built for developers, enterprises, and researchers working on demanding, multi-step workflows. It is particularly suited to agentic coding, structured planning, financial modeling, multimodal analysis, and workflow automation. The model is accessible through the Gemini API, Google AI Studio, Vertex AI, Gemini CLI, Android Studio, and the Gemini app for Pro and Ultra subscribers.

What Gemini 3.1 Pro supports

Long Context Window

Processes up to 1,048,576 tokens in a single request, enabling analysis of entire codebases, lengthy documents, or extended multi-turn conversations without truncation.
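The "roughly 1,500 A4 pages" figure can be sanity-checked with back-of-the-envelope arithmetic. The constants below are common rules of thumb (about 4 characters per token of English text, about 2,800 characters per A4 page), not figures from the model card.

```python
# Rough sanity check of the "~1,500 A4 pages" claim.
# Assumptions (rules of thumb, not from this page):
#   ~4 characters per token for English text
#   ~2,800 characters per A4 page
CONTEXT_TOKENS = 1_048_576
CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 2_800

pages = CONTEXT_TOKENS * CHARS_PER_TOKEN / CHARS_PER_PAGE
print(round(pages))  # → 1498, consistent with "roughly 1,500 pages"
```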

Multi-Step Reasoning

Applies structured reasoning chains to complex problems, achieving a 77.1% score on the ARC-AGI-2 benchmark across logic, planning, and inference tasks.

Multimodal Input

Accepts and reasons over text, images, video, audio, and code within a single unified model, without requiring separate specialized models per modality.

Agentic Task Execution

Supports autonomous, long-horizon task execution with improved tool orchestration and stability, suited for structured domains like finance and spreadsheet workflows.

Tool Use

Accepts tool definitions as inputs and can invoke external functions or APIs during a response, enabling integration with custom workflows and data sources.
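In practice, a tool definition is a JSON-schema-style function declaration passed alongside the prompt; when the model emits a function call, your code executes it and returns the result. The sketch below shows the general shape of such a declaration and a local dispatcher. The `get_stock_price` tool and its fields are illustrative assumptions, not something defined on this page, and the real round trip would go through the Gemini API's function-calling interface.

```python
# Sketch of a JSON-schema-style function declaration, the general shape
# used for function calling. The tool itself (get_stock_price) is a
# hypothetical example for illustration only.
get_stock_price_declaration = {
    "name": "get_stock_price",
    "description": "Look up the latest price for a stock ticker.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "Stock symbol, e.g. 'GOOG'.",
            },
        },
        "required": ["ticker"],
    },
}

def dispatch(call_name: str, args: dict, registry: dict):
    """Route a model-issued function call to local Python code."""
    return registry[call_name](**args)

# Local implementation the model's call would be routed to:
registry = {"get_stock_price": lambda ticker: {"ticker": ticker, "price": 101.5}}
print(dispatch("get_stock_price", {"ticker": "GOOG"}, registry))
```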

Code Generation

Produces and analyzes code across multiple programming languages, with measurable gains on SWE benchmarks and real-world software engineering environments.

Configurable Thinking

Offers a medium thinking level setting that allows users to tune the trade-off between reasoning depth, response speed, and token cost per request.


Benchmark scores

Scores represent accuracy — the percentage of questions answered correctly on each test.

Benchmark — What it tests — Score
GPQA Diamond — PhD-level science questions (biology, physics, chemistry) — 94.1%
HLE — Questions that challenge frontier models across many domains — 44.7%
SciCode — Scientific research coding and numerical methods — 58.9%
ARC-AGI-2 — Novel abstract reasoning and pattern recognition — 77.1%
SWE-bench Verified — Real GitHub issues requiring multi-file code fixes — 80.6%
SWE-bench Pro — Challenging real-world software engineering tasks — 54.2%
Terminal-Bench 2.0 — Agentic coding and terminal command tasks — 68.5%
τ²-bench Retail — Agentic tool use in retail scenarios — 90.8%
τ²-bench Telecom — Agentic tool use in telecom scenarios — 99.3%
MCP-Atlas Tool Use — Structured tool use via Model Context Protocol — 69.2%
BrowseComp — Complex web browsing and information retrieval — 85.9%
MMMLU — Multilingual and multimodal understanding — 92.6%

Common questions about Gemini 3.1 Pro

What is the context window size for Gemini 3.1 Pro?

Gemini 3.1 Pro has a context window of 1,048,576 tokens, which is approximately equivalent to 1,500 A4 pages of text.

What is the training data cutoff for Gemini 3.1 Pro?

Based on the listed metadata, Gemini 3.1 Pro's training data is dated February 2026. For the precise knowledge cutoff, refer to the official model card from Google DeepMind.

What input types does Gemini 3.1 Pro support?

Gemini 3.1 Pro supports multimodal inputs including text, images, video, audio, and code. It also accepts tool definitions for function calling and configurable numeric parameters.

Where can I access Gemini 3.1 Pro?

The model is available through the Gemini API, Google AI Studio, Vertex AI, Gemini CLI, Android Studio, and the Gemini app for Pro and Ultra subscribers. It can also be accessed via OpenRouter.

What benchmarks has Gemini 3.1 Pro been evaluated on?

Gemini 3.1 Pro achieves a 77.1% score on the ARC-AGI-2 benchmark, which Google states effectively doubles the reasoning performance of Gemini 3 Pro. It has also been evaluated on SWE benchmarks for software engineering tasks.

What people think about Gemini 3.1 Pro

Community reception on r/singularity was largely positive at launch, with the benchmark announcement thread accumulating over 2,300 upvotes and 528 comments. Users frequently highlighted the ARC-AGI-2 score and the 1M-token context window as notable technical achievements.

Some community members raised questions about hallucination rates, with a dedicated thread asking whether Google had addressed accuracy issues seen in prior Gemini versions. Practical use cases discussed included coding assistance, long-document analysis, and agentic workflows.


Parameters & options

Max Temperature: 2
Max Response Size: 65,536 tokens
Thinking Budget: Manual or Auto (default: Auto)
Thinking Budget Limit: number in the range 1–24,576; must be less than Max Response Size
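When the thinking budget is set manually, a client can enforce the constraints listed above before sending a request. The validator below is a sketch based solely on the limits stated on this page (range 1–24,576, less than Max Response Size); the function name and error messages are illustrative.

```python
# Validate a manual thinking-budget value against the constraints
# listed above. Names here are illustrative, not part of any API.
MAX_RESPONSE_TOKENS = 65_536
BUDGET_MIN, BUDGET_MAX = 1, 24_576

def validate_thinking_budget(budget: int) -> int:
    """Raise ValueError if the budget violates the listed constraints."""
    if not (BUDGET_MIN <= budget <= BUDGET_MAX):
        raise ValueError(f"budget must be in [{BUDGET_MIN}, {BUDGET_MAX}]")
    # Redundant given BUDGET_MAX < MAX_RESPONSE_TOKENS, but mirrors the
    # stated rule "must be less than Max Response Size":
    if budget >= MAX_RESPONSE_TOKENS:
        raise ValueError("budget must be less than Max Response Size")
    return budget

print(validate_thinking_budget(8_192))  # → 8192
```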

Start building with Gemini 3.1 Pro

No API keys required. Create AI-powered workflows with Gemini 3.1 Pro in minutes — free.