MindStudio
Text Generation Model

DeepSeek V4 Flash

DeepSeek V4 Flash is an efficiency-focused MoE model with 284B total parameters (13B active) and a 1M-token context window. It's tuned for fast inference and high-throughput use cases while still holding up on reasoning and coding tasks.

Publisher: DeepSeek
Type: Text
Context Window: 1,000,000 tokens
Training Data Cutoff: April 2026
Input: $0.14/MTok
Output: $0.28/MTok
Provider: DeepInfra
Tags: Flagship · Latest · Reasoning
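As a quick illustration of the per-token rates above, here is a rough cost estimate in Python. The token counts in the example are hypothetical, and actual billing may differ:

```python
# Listed rates for DeepSeek V4 Flash (USD per 1M tokens).
INPUT_PRICE_PER_MTOK = 0.14
OUTPUT_PRICE_PER_MTOK = 0.28

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Example: a 200k-token prompt with a 4k-token response (hypothetical sizes)
print(round(estimate_cost(200_000, 4_000), 4))  # 0.0291
```

Even a prompt that uses a fifth of the 1M-token context window costs only a few cents at these rates, which is the point of an efficiency-focused model.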



Parameters & options

Max Temperature: 1.0
Max Response Size: 384,000 tokens
Reasoning Effort: Non-think · High · Max (default: High)

Non-think for fast responses, High for complex problem-solving, Max to push reasoning to its fullest extent.
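To make the parameter limits concrete, here is a minimal sketch of a request builder that validates options against the values listed above. The endpoint-style field names (`model`, `reasoning_effort`, `max_tokens`) are assumptions for illustration, not a documented MindStudio or DeepSeek API:

```python
# Hypothetical payload builder; field names are assumptions, only the
# limits (temperature <= 1.0, response <= 384,000 tokens, three effort
# levels, default "high") come from the listing above.
VALID_EFFORTS = {"non-think", "high", "max"}
MAX_RESPONSE_TOKENS = 384_000
MAX_TEMPERATURE = 1.0

def build_request(prompt: str, effort: str = "high",
                  temperature: float = 0.7,
                  max_tokens: int = 4_096) -> dict:
    """Validate options against the listed limits and build a payload."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    if not 0.0 <= temperature <= MAX_TEMPERATURE:
        raise ValueError("temperature must be between 0 and 1.0")
    if max_tokens > MAX_RESPONSE_TOKENS:
        raise ValueError("max_tokens exceeds the 384,000-token cap")
    return {
        "model": "deepseek-v4-flash",      # assumed identifier
        "reasoning_effort": effort,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Validating against the model's published caps client-side, before sending a request, turns a provider-side error into an immediate, descriptive one.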

Start building with DeepSeek V4 Flash

No API keys required. Create AI-powered workflows with DeepSeek V4 Flash in minutes — free.