MindStudio
Text Generation Model

o1-pro

OpenAI's most powerful reasoning model, built to tackle the hardest problems with extended thinking time and enhanced compute.

Publisher OpenAI
Type Text
Context Window 200,000 tokens
Released December 2024
Knowledge Cutoff October 2023
Input $150.00/MTok
Output $600.00/MTok

Extended reasoning for complex, high-stakes problems

o1-pro is a text generation model developed by OpenAI and released in December 2024. It is built on the same foundation as the o1 model family but allocates significantly more compute and longer reflection time per query, which allows it to work through multi-step problems more carefully before producing a response. It supports a 200,000-token context window, can generate up to 100,000 tokens in a single output, and accepts both text and image inputs.

The model is designed for tasks where accuracy on difficult problems takes priority over response speed. It performs well on advanced mathematics, scientific reasoning, and complex coding challenges, with benchmark scores including 94.8% on MATH, 92.4% on HumanEval, and 77.3% on GPQA. o1-pro was initially available exclusively through the ChatGPT Pro subscription plan before becoming accessible via the OpenAI API in March 2025.

What o1-pro supports

Extended Reasoning

The model spends additional compute time reflecting before responding, allowing it to work through multi-step problems and catch errors that faster inference would miss.

Large Context Window

Supports up to 200,000 tokens of input, equivalent to roughly 300 pages of text, making it suitable for long documents and large codebases.

High Output Capacity

Can generate up to 100,000 tokens in a single response, enabling detailed, long-form answers to complex queries.

Image Input

Accepts image inputs alongside text prompts and returns detailed text responses grounded in both the visual and written context.

Advanced Math Solving

Achieves 94.8% pass@1 on the MATH benchmark, reflecting strong performance on graduate-level and competition mathematics.

Code Generation

Scores 92.4% on HumanEval, demonstrating reliable ability to write, analyze, and debug code across programming tasks.

Scientific Reasoning

Scores 77.3% on GPQA, a benchmark of graduate-level questions in biology, chemistry, and physics.


Common questions about o1-pro

What is the context window for o1-pro?

o1-pro supports a context window of 200,000 tokens for input, which is roughly equivalent to 300 pages of text.
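The 300-page figure follows from common rule-of-thumb conversion factors: roughly 0.75 English words per token and about 500 words per printed page. Both factors are approximations, not OpenAI specifications, but a quick sanity check reproduces the estimate:

```python
# Rough estimate of how many printed pages fit in a 200,000-token context.
# The conversion factors below are approximations for English prose,
# not official figures.
CONTEXT_TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # ~3/4 of a word per token, typical for English
WORDS_PER_PAGE = 500     # a dense page of plain text

pages = CONTEXT_TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE
print(round(pages))  # → 300
```

Actual capacity varies with language and content type; code and non-English text tokenize less efficiently than English prose.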

How much does o1-pro cost to use via the API?

As of its API release in March 2025, o1-pro is priced at $150 per million input tokens and $600 per million output tokens, making it one of the higher-priced models in OpenAI's API catalog.
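At these rates, per-request cost is straightforward to estimate from token counts. A minimal helper using the published prices (the token counts in the example are illustrative):

```python
# Estimate the API cost of a single o1-pro request from token counts,
# using the published rates of $150/MTok input and $600/MTok output.
INPUT_PRICE_PER_MTOK = 150.00   # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 600.00  # USD per million output tokens

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token answer.
print(f"${o1_pro_cost(10_000, 2_000):.2f}")  # → $2.70
```

Because output tokens cost four times as much as input tokens, long responses dominate the bill for most workloads.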

What is the knowledge cutoff date for o1-pro?

o1-pro was released in December 2024. OpenAI has indicated the o1 model family has a knowledge cutoff of October 2023.

How does o1-pro differ from the standard o1 model?

o1-pro uses more compute and longer reflection time per query compared to the standard o1 model. This is intended to produce more accurate and reliable answers on complex tasks, at the cost of slower response times and higher pricing.


What types of inputs does o1-pro accept?

o1-pro accepts both text and image inputs and returns text outputs. It does not generate images or audio.
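A mixed text-and-image request interleaves both input types in a single user message. The sketch below builds such a request body in the shape used by OpenAI's Responses API; the field names (`input_text`, `input_image`) are drawn from OpenAI's documentation but should be verified against the current API reference, and the image URL is hypothetical:

```python
# Sketch of a combined text + image request body for o1-pro, in the
# shape used by OpenAI's Responses API. Field names are assumptions
# based on OpenAI's docs; verify against the current API reference.
import json

request_body = {
    "model": "o1-pro",
    "input": [
        {
            "role": "user",
            "content": [
                {"type": "input_text",
                 "text": "What does this chart show?"},
                {"type": "input_image",
                 "image_url": "https://example.com/chart.png"},  # hypothetical URL
            ],
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

The response in all cases is text; there is no image or audio output mode.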

What people think about o1-pro

Community discussion around o1-pro has focused largely on its pricing and the timing of its API release, with many users expressing surprise that OpenAI made it available via API in March 2025 alongside GPT-4.5. Some users in the LocalLLaMA subreddit found the decision unusual given the model's high cost relative to other available options.

A recurring question across threads is whether the pricing of o1-pro and similar flagship models reflects underlying compute costs or a deliberate premium-positioning strategy. Practical use cases discussed include high-stakes reasoning tasks where accuracy is critical, though many users noted the cost makes it impractical for general or high-volume applications.


Parameters & options

Max Temperature 1
Max Response Size 100,000 tokens
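The response-size limit maps onto a request parameter. The sketch below clamps a caller-supplied cap to the model's 100,000-token maximum; `max_output_tokens` is the parameter name used by OpenAI's Responses API as of the March 2025 release, but check the current API reference before relying on it:

```python
# Cap o1-pro's response length below its 100,000-token output maximum.
# `max_output_tokens` is assumed from OpenAI's Responses API docs;
# verify the parameter name against the current API reference.
MODEL_MAX_OUTPUT = 100_000

def build_request(prompt: str, max_output_tokens: int) -> dict:
    """Build a request body, clamping the output cap to the model limit."""
    return {
        "model": "o1-pro",
        "input": prompt,
        "max_output_tokens": min(max_output_tokens, MODEL_MAX_OUTPUT),
    }

req = build_request("Summarize the attached spec.", 250_000)
print(req["max_output_tokens"])  # → 100000
```

Setting a cap well below the maximum is a practical guard against runaway output costs, given the $600/MTok output rate.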

Start building with o1-pro

No API keys required. Create AI-powered workflows with o1-pro in minutes — free.