Text and image to video generation
Kling 2.6 is a video generation model in the Kling family, capable of producing videos from text prompts or input images. It supports both text-to-video and image-to-video workflows, accepting text descriptions, image URLs, and selection-based inputs to guide generation. The model was added to MindStudio in March 2026 and carries a training date of December 2025.
Kling 2.6 is suited for creators and developers who need to generate video content programmatically without managing their own infrastructure. Its dual input modality — text and image — makes it applicable to a range of use cases including content creation, storyboarding, and visual prototyping. The model operates under the identifier kling-video-v2.6-std and is accessible through MindStudio without requiring separate API key configuration.
What Kling 2.6 supports
Text to Video
Generates video clips from natural language text prompts. Users describe a scene or action in text, and the model produces a corresponding video output.
Image to Video
Animates or extends a provided image into a video sequence. Accepts an image URL as input to anchor the visual content of the generated video.
Multimodal Input
Accepts a combination of text, image URLs, and select-type inputs within a single request. This allows fine-grained control over generation parameters alongside content inputs.
Large Context Window
Supports a context window of up to 10,000 tokens, allowing detailed and lengthy text prompts to guide video generation.
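Before sending a long prompt, it can be useful to sanity-check it against the 10,000-token limit described above. The sketch below uses the common rough heuristic of about four characters per token; Kling 2.6's actual tokenizer is not documented here, so this is a conservative pre-flight estimate, not an exact count.

```python
def fits_context_window(prompt: str, max_tokens: int = 10_000) -> bool:
    """Rough check that a text prompt fits Kling 2.6's 10,000-token
    context window.

    Uses the ~4 characters-per-token heuristic as an assumption; the
    model's real tokenizer may count differently, so treat a result
    near the limit with caution.
    """
    estimated_tokens = len(prompt) / 4
    return estimated_tokens <= max_tokens
```

A short scene description passes easily, while a prompt tens of thousands of characters long is flagged before the request is ever made.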
Common questions about Kling 2.6
What input types does Kling 2.6 accept?
Kling 2.6 accepts three input types: text (for written prompts), imageUrl (for image-to-video workflows), and select (for choosing from predefined options). This allows both text-to-video and image-to-video generation.
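The three input types above can be enforced client-side before a request is dispatched. The sketch below assembles a generation payload keyed by the model identifier from this page; the field names (`prompt`, `referenceImage`, `aspectRatio`) and the payload shape are illustrative assumptions, not MindStudio's documented request schema.

```python
from typing import Any

# Model identifier taken from the page metadata.
MODEL_ID = "kling-video-v2.6-std"

# The three input types Kling 2.6 accepts.
ALLOWED_INPUT_TYPES = {"text", "imageUrl", "select"}

def build_request(inputs: dict[str, tuple[str, Any]]) -> dict[str, Any]:
    """Assemble a generation request from typed inputs.

    `inputs` maps a field name to an (input_type, value) pair and is
    validated against the three accepted input types. The returned
    payload shape is a hypothetical sketch, not a documented schema.
    """
    for name, (input_type, _value) in inputs.items():
        if input_type not in ALLOWED_INPUT_TYPES:
            raise ValueError(f"{name}: unsupported input type {input_type!r}")
    return {
        "model": MODEL_ID,
        "inputs": {name: value for name, (_t, value) in inputs.items()},
    }

# Example: an image-to-video request guided by text and a select option.
request = build_request({
    "prompt": ("text", "A paper boat drifting down a rain-soaked street"),
    "referenceImage": ("imageUrl", "https://example.com/boat.png"),
    "aspectRatio": ("select", "16:9"),
})
```

Validating input types locally surfaces a clear error for unsupported values instead of a failed generation call.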
What is the context window size for Kling 2.6?
Kling 2.6 has a context window of 10,000 tokens, which allows for detailed text prompts when generating video content.
What is the training data cutoff for Kling 2.6?
According to the model metadata, Kling 2.6 has a training date of December 2025.
Do I need an API key to use Kling 2.6 on MindStudio?
No. Kling 2.6 is available directly through MindStudio without requiring users to configure or supply their own API keys.
What is the model identifier for Kling 2.6?
The model's identifier in MindStudio is kling-video-v2.6-std, and its slug is kling-video-v2-6-std.
What people think about Kling 2.6
Community discussion around AI video generation models like Kling 2.6 has been largely positive, with users highlighting notable improvements in motion realism and subject coherence compared to earlier generations. The viral "Will Smith Eating Spaghetti" thread, which garnered over 5,000 upvotes, is frequently cited as a benchmark for measuring progress in video generation quality over time.
Users commonly raise concerns about occasional artifacts, unnatural motion in complex scenes, and limitations when generating videos with multiple interacting subjects. The model is frequently discussed in the context of creative and entertainment use cases, including short-form content, meme generation, and visual storytelling.
Parameters & options
A text parameter accepting a description of what to exclude from the generated video.