Ray 2
Luma AI's flagship video generation model that creates strikingly realistic, physically accurate video clips from text or image inputs.
Realistic video generation from text or images
Ray 2 is a large-scale AI video generation model developed by Luma Labs, released in January 2025. It was trained with approximately 10 times the compute of its predecessor, Ray 1.6, and is built on a multi-modal architecture trained directly on video sequences rather than individual frames. This training approach gives the model an understanding of natural motion, lighting behavior, and physical object interactions. Ray 2 accepts text prompts, images, or video as input and generates clips ranging from 5 to 9 seconds, extendable up to 30 seconds, at resolutions up to 1080p with optional 4K upscaling.
Ray 2 supports multiple aspect ratios including 16:9, 9:16, 1:1, and 21:9, and includes keyframe control so users can define start frames, end frames, or both for precise scene direction. A speed-optimized variant called Ray 2 Flash delivers comparable visual quality in roughly one-third the render time, making it suitable for rapid iteration. The model is available through Luma AI's own platform and via Amazon Bedrock, where AWS serves as the exclusive cloud provider for fully managed access. It is used across industries including advertising, entertainment, architecture, fashion, film, and music production.
What Ray 2 supports
Text-to-Video
Generates video clips from natural language text prompts. Outputs range from 5 to 9 seconds and can be extended up to 30 seconds.
Image-to-Video
Animates a provided image into a video clip, using the image as a visual starting point. Accepts image URLs as direct input.
Keyframe Control
Allows users to specify start frames, end frames, or both to direct scene composition and transitions. Provides precise control over how a video begins and ends.
High-Resolution Output
Renders video at up to 1080p natively with optional 4K upscaling. Supports aspect ratios of 16:9, 9:16, 1:1, and 21:9.
Ray 2 Flash Variant
A speed-optimized version of Ray 2 that produces comparable visual quality in approximately one-third the standard render time. Designed for rapid prototyping and iterative workflows.
Multi-Modal Input
Accepts text prompts, static images, and existing video clips as inputs within a single workflow. Enables flexible creative pipelines across different source material types.
Cloud API Access
Available through Amazon Bedrock as a fully managed API, with AWS serving as the exclusive cloud provider. No infrastructure setup is required to call the model.
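As a rough sketch of what a Bedrock call might look like: video models on Bedrock are invoked asynchronously, with output written to S3. The model ID, payload field names, and S3 URI below are illustrative assumptions, not confirmed values; check the Bedrock model catalog for the actual schema.

```python
# Hypothetical request builder for a Ray 2 job on Amazon Bedrock.
# Field names ("prompt", "aspect_ratio", "duration") are assumptions
# for illustration, not the confirmed Ray 2 input schema.
def build_ray2_request(prompt, aspect_ratio="16:9", duration="5s"):
    return {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "duration": duration,
    }

def start_generation(prompt, output_s3_uri, model_id="luma.ray-v2:0"):
    """Submit an async video generation job via Amazon Bedrock.

    Requires AWS credentials and the boto3 SDK. Video generation on
    Bedrock goes through the asynchronous StartAsyncInvoke API rather
    than a synchronous call; the model ID here is an assumption.
    """
    import boto3  # third-party dependency, imported lazily

    client = boto3.client("bedrock-runtime")
    response = client.start_async_invoke(
        modelId=model_id,
        modelInput=build_ray2_request(prompt),
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": output_s3_uri}},
    )
    # The job runs in the background; poll GetAsyncInvoke with this ARN
    # to track completion, then fetch the clip from the S3 location.
    return response["invocationArn"]
```

The split between a pure request builder and the network call keeps the payload shape easy to inspect and test without AWS credentials.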
Common questions about Ray 2
What is the context window for Ray 2?
Ray 2 has a context window of 1,000 tokens, which applies to the text prompt input used to guide video generation.
How long are the videos Ray 2 produces?
Ray 2 generates clips between 5 and 9 seconds by default. Clips can be extended up to 30 seconds in total length.
What resolutions and aspect ratios does Ray 2 support?
Ray 2 outputs video at up to 1080p with optional 4K upscaling. Supported aspect ratios include 16:9, 9:16, 1:1, and 21:9.
When was Ray 2 released and who made it?
Ray 2 was released in January 2025 by Luma Labs (also known as Luma AI).
Where can I access Ray 2 via API?
Ray 2 is available through Luma AI's own platform and through Amazon Bedrock, where AWS is the exclusive cloud provider offering fully managed API access.
What is Ray 2 Flash and how does it differ from Ray 2?
Ray 2 Flash is a speed-optimized variant of Ray 2 that delivers comparable visual quality in roughly one-third the render time. It is intended for use cases that require faster iteration, such as prototyping.
Parameters & options
Start frame: Image URL to use as the start frame.
End frame: Image URL to use as the end frame.
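One plausible way these options map onto a generation request, loosely following Luma's keyframe concept (a start frame and an end frame attached to the prompt). The exact field names below ("keyframes", "frame0", "frame1", "model") are assumptions and should be verified against the current API reference.

```python
# Hypothetical helper that assembles start/end frame URLs into the
# keyframe portion of a request body. The "frame0"/"frame1" naming
# is an assumption modeled on Luma's keyframe concept.
def build_keyframes(start_url=None, end_url=None):
    keyframes = {}
    if start_url:
        keyframes["frame0"] = {"type": "image", "url": start_url}
    if end_url:
        keyframes["frame1"] = {"type": "image", "url": end_url}
    return keyframes

def build_generation_request(prompt, start_url=None, end_url=None):
    """Assemble a full request body; omits "keyframes" when neither
    frame is supplied, so plain text-to-video requests stay minimal."""
    body = {"prompt": prompt, "model": "ray-2"}
    keyframes = build_keyframes(start_url, end_url)
    if keyframes:
        body["keyframes"] = keyframes
    return body
```

Supplying only a start frame animates forward from that image; supplying both constrains how the clip begins and ends, matching the keyframe control described above.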
Start building with Ray 2
No API keys required. Create AI-powered workflows with Ray 2 in minutes, for free.