
How to Make an AI Short Film for Under $200: Full Production Workflow with Claude and Seedance

Learn the complete AI filmmaking workflow used to produce a short film for under $200 using Claude, Luma Canvas, Nano Banana Pro, and Seedance 2.0.

MindStudio Team

From Zero to Short Film: What This Workflow Actually Costs

Traditional short film production — even a stripped-down indie project — can easily hit $5,000 to $50,000 once you account for crew, equipment, location fees, and post-production. AI video generation has fundamentally changed that cost structure.

This guide covers a complete AI short film production workflow — from concept to final export — using Claude for creative development, Luma Canvas for scene visualization, Nano Banana Pro for consistent image generation, and Seedance 2.0 for video clips. Total budget: under $200. The result is a shareable, polished short film that maps to a real production process, not just random generation experiments.

If you follow the workflow in order, you’ll avoid the most common (and expensive) mistakes. If you skip ahead, you’ll probably pay for it in wasted generation credits.


The Tools in This Stack and What Each One Does

Before jumping into production, here’s an honest look at each tool and the specific job it performs in this workflow.

Claude: Story Development and Production Planning

Claude handles the creative and organizational backbone of the entire production. Using Claude’s Projects feature — which lets you maintain persistent context across conversations — you can keep your script, shot list, character notes, and visual direction all in one place.

This persistent context is the key advantage. It means Claude understands which scene you’re describing when you ask it to write a prompt for shot 14, without you re-explaining the whole film. That consistency matters enormously once you’re deep in production.

Luma Canvas: Scene and Environment Visualization

Luma Canvas is Luma AI’s creative workspace for image generation and style exploration. It’s most useful early in production for establishing visual tone — think mood boards, wide establishing shots, and environment concepts. It handles large-scale scene generation well.

Nano Banana Pro: Character-Consistent Image Generation

Keeping characters recognizable across scenes is the hardest problem in AI filmmaking. Nano Banana Pro is in this stack specifically for its ability to lock character appearances across multiple generations. You define a character’s visual parameters once, and the tool holds that reference as you generate shots from different angles, lighting conditions, and scenes.

Seedance 2.0: Image-to-Video Generation

Seedance 2.0 — developed by ByteDance — is where still images become video clips. It handles camera physics and subject motion well, producing output that feels cinematic rather than jittery or artificial. The model takes a source image plus a motion prompt and returns a clip ready for editing. It’s the most expensive tool in the stack, which is why everything before it is designed to minimize wasted generations.


Step 1: Develop Your Concept with Claude

The fastest way to waste your budget is generating video before your story is defined. Spend real time here — it’s cheap and everything downstream depends on it.

Set Up a Project in Claude

Start a new Claude Project and write a system prompt that defines your film’s creative parameters:

  • Genre and emotional tone (e.g., “quiet psychological drama, tense, muted color palette”)
  • Target length (a 2–3 minute short is the right scope for this budget)
  • Visual style references (describe films or aesthetics that match what you’re going for)
  • Hard constraints — locations, characters, any thematic requirements

This project context stays active throughout production. Every time you return to Claude for prompts, shot descriptions, or creative decisions, that foundation is there.

Write the Script

Ask Claude to generate a short script based on your parameters. A 2-minute film at standard pacing needs roughly 6–10 distinct shots. Keep the scope tight and the story self-contained.

A useful starting prompt:

“Write a 2-minute short film script in [genre]. Two characters maximum. One or two locations. The story should resolve visually — minimal dialogue.”

The minimal dialogue recommendation is practical, not stylistic. AI-generated lip sync adds complexity and cost. Let visuals carry the story where possible.

Build a Shot List

Once the script is locked, ask Claude to convert it into a structured shot list. Each entry should include:

  • Scene and shot number
  • Shot type (wide, medium, close-up, POV, over-the-shoulder)
  • Action description
  • Camera movement (static, slow push-in, pan left, tracking)
  • Lighting and mood notes

This shot list is your production contract. Every image and video clip you generate from this point maps to a row in this list. Deviating from it mid-production is how films lose visual coherence.
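If you want to treat the shot list as data rather than prose, a minimal sketch in Python is enough — the field names here are illustrative, not a schema any tool requires:

```python
# Illustrative shot-list records; field names are arbitrary, not tool-specific.
shot_list = [
    {
        "scene": 1,
        "shot": 1,
        "shot_type": "wide",
        "action": "Character enters the empty apartment, pauses at the door",
        "camera": "static",
        "lighting": "late afternoon, soft window light, muted palette",
    },
    {
        "scene": 1,
        "shot": 2,
        "shot_type": "close-up",
        "action": "Hand hesitates on the light switch",
        "camera": "slow push-in",
        "lighting": "same window light, deeper shadows",
    },
]

# Sanity check: every entry carries the fields the workflow depends on.
REQUIRED = {"scene", "shot", "shot_type", "action", "camera", "lighting"}
for entry in shot_list:
    assert REQUIRED <= entry.keys()
```

Keeping the list as structured data pays off later: the same records can feed prompt writing and generation tracking without retyping anything.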


Step 2: Build Your Visual Bible

This phase is about establishing how your film looks before you generate a single final shot. It’s cheap to iterate here. It’s expensive to iterate during video generation.

Define the Visual Style in Luma Canvas

Generate 8–12 concept images in Luma Canvas that represent your film’s aesthetic. These aren’t finished shots — they’re tone references. Experiment with:

  • Color temperature and saturation level
  • Lighting quality (hard direct light vs. soft diffused)
  • Time of day and atmosphere
  • Level of realism vs. stylization

When you find a look you want to commit to, save the exact prompt language that produced it. Those specific phrases — camera style, lighting descriptors, mood language — carry forward into every prompt you write from here. Minor variations in language produce major visual drift across a film.

Generate Character Reference Sheets

For each character, generate a reference set using Nano Banana Pro. The goal is 4–6 images per character showing:

  • Front-facing portrait
  • Three-quarter view
  • Full body in the costume they wear throughout the film
  • At least one image in your established lighting style

These reference images are your consistency anchor for the entire production. Use them as inputs wherever the tool supports character reference uploads.

Generate Location References

Run the same process for each location in your script. Generate 3–4 images per environment. You’re not aiming for finished shots — you’re establishing what the space looks like so your subsequent prompts can describe it accurately.


Step 3: Generate Scene Images for Each Shot

Work through your shot list sequentially. For each planned shot, you need one strong source image before you can generate video.

Write Image Prompts with Claude

Return to Claude with your shot list and start building image prompts. A strong prompt for this workflow includes:

  • Subject: character name, pose, action, expression
  • Environment: location description matching your location references
  • Lighting: source direction, quality, time of day
  • Composition: shot type translated into visual language (e.g., “character fills left third of frame, looking off-screen right”)
  • Style: your locked style language from the visual bible

You can ask Claude to write prompts for a block of shots at once. Feed it five shots from your list and ask for five prompts — then review and refine each one.
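To make the prompt structure mechanical, you can assemble the five components in a fixed order. A sketch (the joining style and example wording are assumptions, not requirements of any tool):

```python
def build_image_prompt(subject, environment, lighting, composition, style):
    """Join the five prompt components in a fixed order so every shot's
    prompt has the same shape and reuses the same style language."""
    return ", ".join([subject, environment, lighting, composition, style])

# Hypothetical example shot; names and wording are illustrative.
prompt = build_image_prompt(
    subject="Mara, seated, shoulders tense, neutral expression",
    environment="sparse kitchen matching location reference, bare table",
    lighting="single overhead source, hard shadows, late evening",
    composition="medium shot, character fills left third of frame, looking off-screen right",
    style="muted color palette, shallow depth of field, quiet psychological drama",
)
print(prompt)
```

A fixed assembly order is one way to enforce the earlier point about style drift: the style fragment is pasted verbatim into every prompt, never paraphrased.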

Generate and Select in Nano Banana Pro

Run each prompt through Nano Banana Pro using your character reference images as anchors. Generate 3–5 variations per shot. Select based on:

  • Character matches reference sheets
  • Composition matches intended shot type
  • Lighting is consistent with established visual style
  • Image is clean and readable enough for Seedance 2.0 to animate effectively

That last point matters more than it sounds. Overly complex or ambiguous compositions confuse video generation models. If the source image is hard to read, the video will be worse. Regenerate if something doesn’t pass this check — don’t rationalize a weak image forward.


Step 4: Animate Your Shots with Seedance 2.0

With a strong source image for each shot, you’re ready to generate video clips.

Write Motion Prompts

Each clip needs a motion prompt that describes what moves and how — separate from the image prompt, which describes what’s in the scene. Effective motion prompts include:

  • Subject motion: what is the character doing? (turning head slowly, walking toward camera, reaching for an object)
  • Camera motion: static, slow push-in, pan, tracking shot
  • Ambient motion: environmental detail that should move (curtains shifting, smoke, foliage)
  • Pace: is the motion deliberate and slow, or sharp and quick?

Example:

“Camera slowly pushes in. Character turns head slightly left, expression shifting from neutral to concern. Subtle ambient light flicker. Background softly out of focus.”

Claude can write these too. With your shot list in context, you can generate motion prompts for an entire scene in one pass.

Generate Clips in Seedance 2.0

Upload your source image, add your motion prompt, and set clip duration. For most narrative shots, 3–6 seconds is the practical range. Shorter clips are cheaper and give you more editorial flexibility. Longer clips risk wandering.

Generate 2–3 variations per shot. Key settings to manage:

  • Aspect ratio: Lock this early and don’t change it. 16:9 for cinematic, 9:16 for vertical platforms, 1:1 for social square formats.
  • Motion intensity: Lower settings for contemplative shots, higher for action or tension. Most drama scenes work best at moderate intensity.
  • Seed locking: When a generation produces great lighting physics or motion quality, lock the seed and vary the prompt to iterate from that foundation.
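One way to keep these settings consistent across a batch is to record each generation job as data before submitting it by hand. A hedged sketch (the field names are mine, not Seedance's API):

```python
# Hypothetical job record -- field names are illustrative, not Seedance's API.
def make_job(shot_id, image_path, motion_prompt, duration_s=4,
             aspect_ratio="16:9", motion_intensity="moderate", seed=None):
    assert 3 <= duration_s <= 6, "3-6 seconds is the practical range"
    return {
        "shot": shot_id,
        "image": image_path,
        "motion_prompt": motion_prompt,
        "duration_s": duration_s,
        "aspect_ratio": aspect_ratio,    # lock once, never change mid-film
        "motion_intensity": motion_intensity,
        "seed": seed,                    # set to iterate from a generation you liked
    }

job = make_job(14, "shots/s14_v2.png",
               "Camera slowly pushes in. Character turns head slightly left.")
```

Writing the job down first makes the aspect-ratio rule self-enforcing: every record defaults to the same value, and a deviation is visible before it costs credits.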

Step 5: Post-Production — Editing, Sound, and Final Export

You have clips. Now you build the film.

Edit in DaVinci Resolve

DaVinci Resolve is free, professional-grade, and handles everything this workflow requires. Import all your generated clips and assemble them in shot list order.

A few things to keep in mind when editing AI-generated footage:

  • Cut shorter than you think: AI clips often work better at 2–3 seconds than 5. Don’t hold longer than the emotion requires.
  • Prefer straight cuts: Simple cuts read better than complex transitions with AI footage. A well-timed cut carries more impact.
  • Watch rough cuts without audio first: If the visual rhythm feels off, fix it before you build audio around it.

Add Music and Sound Design

This is where a $200 film separates from an amateur experiment. Good audio does more to elevate perceived production quality than almost anything else.

For music:

  • Suno or Udio: AI music generation, pay-per-use or subscription. Generate score elements based on scene tempo and mood.
  • Epidemic Sound: licensed library music, affordable for short-term use.
  • Free Music Archive or Pixabay: royalty-free options if the budget is very constrained.

For sound effects, Freesound.org offers a large library of Creative Commons-licensed audio. Layer ambient room tone and environmental sounds under everything — it grounds AI footage and makes it feel real.

Color Grade and Finalize

Apply a consistent color grade across all clips in DaVinci Resolve’s Color page. A single LUT applied uniformly will unify footage that has slight visual inconsistencies between generation sessions. Free LUTs are widely available and can establish a cinematic look quickly.

Add title cards, credits, and any text overlays last. Keep typography clean and minimal.


The $200 Budget, Broken Down

Here’s a realistic cost breakdown for a 2–3 minute short film:

| Tool / Service | Estimated Cost |
| Claude (monthly subscription) | $20 |
| Luma Canvas (image generation credits) | $20–30 |
| Nano Banana Pro (character-consistent generation) | $30–50 |
| Seedance 2.0 (video generation credits) | $60–80 |
| Music (Suno or licensed library) | $10–20 |
| DaVinci Resolve | Free |
| Sound effects (Freesound.org) | Free |
| Total | $140–200 |

A few notes on keeping costs in range:

  • Generate all images before touching Seedance 2.0. Images are cheap. Video is not.
  • A 2-minute film with 30 clips at 4 seconds each equals 120 seconds of final footage. With 2–3 variations per clip, you’re generating 240–360 seconds of raw video. Know your numbers before you start.
  • Revise prompts in Claude before re-generating. A Claude conversation costs fractions of a cent. A bad video generation costs real credits.
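The clip math is worth checking before you spend anything. A quick sanity-check script using the numbers above:

```python
clips = 30           # planned clips in the edit
clip_seconds = 4     # average clip length
variations = (2, 3)  # generations per clip, per Step 4

final_footage = clips * clip_seconds
raw_low = clips * clip_seconds * variations[0]
raw_high = clips * clip_seconds * variations[1]

print(f"final cut: {final_footage}s")       # prints "final cut: 120s"
print(f"raw video: {raw_low}-{raw_high}s")  # prints "raw video: 240-360s"
```

Swap in your own shot count and clip length before committing to a budget; raw seconds generated, not final runtime, is what Seedance credits are spent on.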

How MindStudio Fits Into This Workflow

Running this workflow manually — Claude tab, Luma tab, Nano Banana Pro tab, Seedance tab — works fine for a one-off project. But if you’re producing AI video content regularly, the coordination overhead compounds fast.

MindStudio’s AI Media Workbench brings image and video generation models into a single workspace, with no separate accounts or API keys required. You can access the major image and video generation models alongside Claude from one interface, and — more importantly — chain them into automated workflows.

For a production workflow like this one, that means you could build an agent that takes a shot description as input, calls Claude to generate an image prompt, routes that prompt to an image generation model, and queues the result for video generation — all triggered in sequence rather than managed step by step.
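As a sketch of what that chained sequence looks like in code (every function here is a placeholder standing in for a model call, not MindStudio's actual API):

```python
# Placeholder pipeline sketch; none of these functions are a real API.
def write_image_prompt(shot_description):
    # Stand-in for a Claude call that turns a shot description into a prompt.
    return f"cinematic still: {shot_description}, locked style language"

def generate_image(prompt):
    # Stand-in for an image-model call; returns a fake asset reference.
    return {"image_id": hash(prompt) % 10_000, "prompt": prompt}

def queue_video(image):
    # Stand-in for queuing the image for video generation.
    return {"job": "video", "source": image["image_id"], "status": "queued"}

def run_shot(shot_description):
    """Chain the three stages in sequence, as an automated agent would."""
    return queue_video(generate_image(write_image_prompt(shot_description)))

job = run_shot("Character pauses at the doorway, wide shot, dusk")
print(job["status"])  # prints "queued"
```

The point of the sketch is the shape, not the internals: one trigger per shot description, with each stage's output feeding the next instead of being copied between browser tabs.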

MindStudio also has pre-built templates for content creation workflows that you can adapt for film production without starting from scratch. For teams or independent creators producing AI content at volume, this kind of pipeline automation meaningfully reduces the manual overhead that eats into production time.

You can try it free at mindstudio.ai. If AI video is a regular part of your work, it’s worth seeing how much coordination you can eliminate.


Common Mistakes That Blow the Budget

These patterns come up consistently with first-time AI filmmakers.

Starting video generation before visuals are defined. The temptation is immediate — you have a concept and want to see it moving. Every clip generated from an undefined visual direction is a likely waste. Build the visual bible first.

Underinvesting in audio. A film with imperfect visuals and strong audio reads as more professional than the reverse. Budget time and money here accordingly.

Accepting inconsistent character generations. If a character looks different in shot 7 than in shot 3, regenerate. One inconsistent character read will undermine the whole film’s credibility. Don’t rationalize it forward.

Over-generating options. Having 200 clips sounds like a good problem until you’re making editorial decisions across 200 clips. Three variations per shot, maximum. Be decisive.

Skipping the shot list. Improvising in generation can produce interesting individual images, but it almost never produces a coherent film. The shot list is the production structure that makes editing possible.


Frequently Asked Questions

How long does it take to make an AI short film using this workflow?

A focused 2–3 day sprint is realistic for an experienced creator working full-time on the project. Working part-time — evenings and weekends — typically takes 2–3 weeks for a first project. The image generation and video generation phases take the most time, primarily because you’re iterating on prompts and selecting between variations. The editing phase, if the shot list is solid, moves faster than most people expect.

Do I need filmmaking experience to use these tools?

No formal background is required, but basic film literacy helps. Knowing the difference between a wide shot and a close-up, understanding how camera angle affects emotional tone, and having some intuition for pacing will all improve your results. If you’re new to this, an hour watching video essays on film language before starting will pay off throughout the entire production.

How do I maintain visual consistency across scenes in AI video generation?

Consistency comes from three things: locked character reference images, consistent style prompt language, and the discipline to regenerate when something drifts. Save every prompt fragment that produces a strong result. When you return to generate shots for a new scene, use those exact phrases — don’t paraphrase. Small prompt variations compound into large visual inconsistencies across a complete film.

Can I monetize an AI short film made with these tools?

It depends on the specific tools and their current terms of service. Most AI generation platforms allow commercial use on paid plans, but the rights to generated content vary. Claude, Luma AI, and Seedance each have different policies. Read the terms for each platform before planning commercial distribution — don’t assume your subscription covers it.

What length of short film works best for this budget?

Two to three minutes is the sweet spot. Under that, you have room for more iterations and higher quality per shot. Over three minutes, you’re either compromising on quality or exceeding the budget — usually both. A tight 90-second to 2-minute film is often more effective than a sprawling 5-minute piece anyway. Constraint is good for storytelling.

Is Claude better than other AI models for scriptwriting and production planning?

Claude performs particularly well for long-horizon creative tasks that require maintaining consistency across a complex project — which describes film production accurately. Its Projects feature and long context window mean your creative direction stays coherent from script through shot list through prompt generation. Other models like GPT-4o and Gemini can handle similar tasks. The more important factor is using a model with persistent context management, not a fresh conversation for every request.


Key Takeaways

  • An AI short film under $200 is achievable with the right stack: Claude for creative development, Luma Canvas for visual references, Nano Banana Pro for character-consistent image generation, and Seedance 2.0 for video clips.
  • Pre-production is where the film is actually made. A clear script, visual bible, and shot list are what separate a coherent film from a collection of unrelated AI clips.
  • Character consistency is the hardest problem in AI filmmaking — solve it in the image generation phase, not in the edit.
  • Audio does more to elevate perceived quality than almost any visual adjustment. Don’t underspend here.
  • If you’re producing AI video content at volume, MindStudio’s AI Media Workbench lets you chain these tools into automated pipelines rather than managing them one tab at a time — try it free at mindstudio.ai.

Presented by MindStudio
