One Prompt Built an Entire Headphone Brand: 5 Things Claude Code + Higgsfield Generated Autonomously
A single Claude Code prompt produced a brand identity, 3 product lines, product photos, Instagram ads, and UGC videos. Here's exactly what was generated.
One Prompt. One Brand. Five Outputs Claude Code Didn’t Ask for Help With.
A single Claude Code prompt built a complete headphone brand from scratch last week — and the result wasn’t a mood board or a brief. It was a brand called Murmur: three distinct product lines, product photography for each, Instagram ads, and UGC-style videos, all generated without a human touching a design tool. That’s 5 categories of brand assets from one instruction.
The prompt that kicked it off was blunt: “Build me a headphone brand from scratch. I want you to do research, build the branding, build the product catalog, and for each of them, I want you to generate assets — a product photo, an Instagram ad, and a UGC video.” Claude Code handled the rest, using the Higgsfield MCP to connect to Higgsfield’s image and video generation infrastructure.
If you’ve been watching the AI creative tooling space, you know the claim “end-to-end brand generation” gets thrown around constantly. What’s different here is that the outputs are specific, documented, and reproducible — and the workflow that produced them has real structure underneath it.
Here’s what actually happened, and what it tells you about where autonomous creative pipelines are right now.
Claude Didn’t Just Generate Assets — It Built the Brand First
Before a single image was generated, Claude did market research. It analyzed the headphone market, identified positioning, defined a target buyer, established a visual identity, and named the brand Murmur. Then it created three product lines: an over-ear model (the flagship, called Halo), wireless earbuds, and open-back wired headphones.
Only after that scaffolding existed did it start generating assets.
For each product, it produced: a product photo, an Instagram ad, and a UGC video. The Halo’s product photo came back clean. The Instagram ad had a minor issue — duplicate header text — but that’s the kind of thing you fix with a single follow-up prompt, since Claude retains the reference image context. The UGC video showed a person wearing the headphones, listening to music, and smiling at the camera. It looked, by most accounts, indistinguishable from something shot with a real actor.
The Murmur demo is the clearest example yet of what “agentic creative work” actually means in practice. It’s not Claude generating one image when you ask. It’s Claude making a sequence of decisions — research, naming, product architecture, asset generation — without being prompted at each step. If you want to understand how these multi-step autonomous sequences are structured under the hood, the agentic workflow patterns behind Claude Code are worth reviewing before you build your own pipeline.
The Infrastructure Behind the Demo
The Murmur brand didn’t emerge from Claude alone. It required Higgsfield, a platform that aggregates access to the best image and video generation models, connected to Claude via the Higgsfield MCP.
Setup is three commands, pulled directly from the higgsfield.ai/mcp-cli page. You install the CLI, run the OAuth flow to authenticate, and install the Higgsfield agent skills. That’s it. The agent skills are the part most people skip past — they’re pre-built behavioral defaults that tell Claude how to approach generation tasks without you having to specify everything from scratch each time.
For the Murmur demo, the MCP was used (not the CLI). That distinction matters more than it sounds. The MCP exposes all of Higgsfield’s tools simultaneously, which means Claude has access to everything — but it also means significantly higher token costs per task. The CLI is purpose-built for agentic workflows: faster, more efficient, and cheaper for the same outputs. The Murmur demo used the MCP because it was running in Claude’s web interface, not Claude Code. When you move to Claude Code for production pipelines, the CLI is the right call.
Higgsfield’s Marketing Studio is what makes the video outputs possible. It’s not a generic video model — it’s a set of pre-trained styles: Hypermotion, Unboxing, UGC, and others. When Claude was told to create a launch video for the Halo using Marketing Studio’s Hypermotion variant, the output was fast cuts, zooms, close-up product details, and background animations. The kind of video that would have required a studio shoot and a post-production team six months ago.
What the Sensitive Content Block Reveals
Not everything worked on the first try. When Claude attempted to generate a 16x9 Hypermotion video for the Halo, Higgsfield flagged it as sensitive content and refunded the credits. It happened twice.
The fix is instructive. Instead of just retrying, the approach was to ask Claude to read the blocked prompt, identify which specific words or phrases triggered the flag, and then regenerate with those removed. It worked. The video that came through after that iteration is the one that appeared in the demo’s intro.
This is a pattern worth building into your workflow explicitly. Sensitive content blocks aren’t random — they’re triggered by specific language in the prompt. If you can get Claude to diagnose the flag rather than just retry blindly, you recover faster and you learn something you can bake into a skill file so it doesn’t happen again.
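As a rough sketch of that diagnostic step (the exact wording used in the demo isn't documented here), the follow-up prompt might read:

```
Read the prompt that was just blocked. List the specific words or phrases most likely
to have triggered the sensitive-content flag, regenerate the video with those removed
and everything else unchanged, and add the removed phrases to the hard rules section
of the Hypermotion skill file.
```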
The skill file structure is markdown: a name, a description, a “when to invoke” section, step-by-step instructions, and hard rules. The Hypermotion video skill was reverse-engineered from the best-performing prompt — someone identified the output they liked most, copied the prompt that produced it, and asked Claude to turn it into a reusable skill stored at .claude/skills/. Now every future Hypermotion request follows that recipe instead of improvising. For teams building content systems at scale, Claude Code’s built-in simplify and batch commands can help keep these skill files from ballooning into overengineered configurations.
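For illustration, a minimal skill file following that structure might look like the sketch below. The file name, wording, and rules are hypothetical, not the actual file from the demo.

```markdown
# Hypermotion Launch Video

## Description
Generate a Marketing Studio Hypermotion launch video for a single product.

## When to invoke
The user asks for a launch, hero, or teaser video for one hardware product.

## Steps
1. Pull the product name, key feature, and reference image from the current context.
2. Write the video prompt around fast cuts, zooms, and close-up product detail shots.
3. Submit the job, then log the result URL and job ID to the tracking sheet.

## Hard rules
- Never reuse phrasing that has previously triggered a sensitive-content flag.
- Do not invent product features that are not in the product catalog.
```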
The Reference Image Problem (and Why It Keeps Coming Up)
The Murmur demo worked well. A follow-up demo with a sleep supplement product didn’t — at first.
When Claude was asked to generate Instagram ads for the supplement, it produced images of a generic blue bottle labeled “sleep support.” The actual product looked nothing like that. The issue: no one told Claude to preserve the reference image exactly. It inferred what the product might look like from the description and generated accordingly.
The fix required dragging the actual product image into Claude Code and adding explicit language: “When you are creating these advertisements for the sleep supplement product, it has to appear as shown in this reference image every single time. It must appear exactly like this. Same color, same text. Don’t change anything.”
After that instruction, the outputs came back correct. The bottle appeared as it should. The ads had real copy — "Melatonin does not equal sleep. Try the formula 28,000 parents swear by" — and the Instagram story format the research suggested would convert.
This is one of the most consistent failure modes in AI image generation pipelines: the model will hallucinate product details unless you’re explicit about reference fidelity. It’s not a bug you work around once — it’s a constraint you encode into every skill file and every prompt template you build.
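Encoded as a hard rule in a skill file, that constraint might read something like this (wording illustrative):

```markdown
## Hard rules
- The product must appear exactly as in the attached reference image: same bottle,
  same color, same label text. Never redesign or re-label it.
```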
The same pipeline tested two different models for the supplement ads: Nano Banana 2 for some ad types, GPT Image 2 for others. Running multiple models within the same generation batch is how you get variation without having to manually configure each run.
The Tracking Layer That Makes This Scalable
A one-off brand generation is a demo. A system that generates 30 new ad creatives every Monday morning while you sleep is infrastructure.
The workflow that connects those two things runs through Google Sheets, populated via the GWS CLI — Google Workspace CLI — which connects Claude Code to Sheets, Docs, Gmail, Calendar, and Drive without the overhead of multiple MCP servers. If you haven’t set this up yet, the full walkthrough for Google Workspace CLI with Claude Code covers the exact configuration. The tracking schema captures: product, style, image or video, model, prompt, status, result URL, and job ID.
The creative slate tab is where the planning lives. It holds 30+ variations with priority scores, value propositions, headlines, avatar types, and styles. The status column is what makes automation possible: Claude picks up rows with a blank status, generates the assets, marks them complete, and logs the result URL and job ID. No manual tracking.
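To make the mechanics concrete, here is what a pair of rows under that schema might look like, one already processed and one waiting for the next run. The values are invented for illustration:

```
product | style       | image/video | model            | prompt                          | status   | result URL             | job ID
Halo    | Hypermotion | video       | Marketing Studio | "Fast cuts, close-up details"   | complete | https://example.com/a1 | hf-0147
Halo    | UGC         | video       | Marketing Studio | "Creator wearing Halo, smiling" |          |                        |
```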
The routine structure is: Sunday planning (Claude analyzes performance data and adds 50 new generation ideas to the sheet) and Monday execution (Claude picks 30 blank-status rows and generates them). You wake up Monday morning with a batch of completed assets, URLs, and job IDs already logged.
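Under the hood, the Monday run is a status-driven filter over the sheet. Here is a minimal Python sketch of that contract, with a list of dicts standing in for the sheet and a hypothetical stub in place of the actual Higgsfield call; in the real workflow Claude reads and writes the sheet through the GWS CLI rather than running a script like this.

```python
from typing import TypedDict


class SlateRow(TypedDict):
    product: str
    style: str
    type: str        # "image" or "video"
    model: str
    prompt: str
    status: str      # blank means "not yet generated"
    result_url: str
    job_id: str


def generate_asset(row: SlateRow) -> tuple[str, str]:
    """Hypothetical stand-in for the Higgsfield call; returns (result_url, job_id)."""
    return f"https://example.com/{row['product']}.mp4", "job-0000"


def run_monday_batch(sheet: list[SlateRow], batch_size: int = 30) -> None:
    # Pick up only rows with a blank status, up to the batch size.
    pending = [row for row in sheet if not row["status"].strip()][:batch_size]
    for row in pending:
        url, job_id = generate_asset(row)
        # Mark the row complete and log where the asset lives, so next week's
        # planning pass (and any retry) sees exactly what was produced.
        row["status"] = "complete"
        row["result_url"] = url
        row["job_id"] = job_id


if __name__ == "__main__":
    slate: list[SlateRow] = [{
        "product": "Halo", "style": "Hypermotion", "type": "video",
        "model": "Marketing Studio", "prompt": "Fast cuts, close-up details",
        "status": "", "result_url": "", "job_id": "",
    }]
    run_monday_batch(slate)
    print(slate[0]["status"], slate[0]["result_url"])
```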
The research layer that informs the creative slate is a 617-line markdown document called advertising masterclass.md — a deep research file on 2026 organic ad best practices across TikTok, Meta, and X. It covers what captures attention, what converts, and how it differs by platform. Claude references this file when building the creative slate, which is why the output includes specific angles: curiosity, contrarian, pattern interrupt, question, stat flash. These aren’t random — they’re drawn from documented best practices.
For teams building similar pipelines at scale, MindStudio offers a no-code path to the same kind of orchestration: 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows without writing the coordination layer from scratch.
The Non-Obvious Part: Skills Are Self-Improving
The Hypermotion skill didn’t come from a template. It came from identifying the best output in a batch, copying the prompt that produced it, and asking Claude to reverse-engineer a skill from it.
That’s a different mental model than most people have about AI workflows. You’re not designing the skill upfront — you’re discovering it from outputs you actually like, then codifying it. Every time you run the skill and give feedback (“I like A and B, I don’t like C”), the skill file gets updated. The next run is better than the last.
This is also why the skill file format matters. It’s not a system prompt buried in a config file. It’s readable markdown with explicit sections: name, description, when to invoke, steps, hard rules. You can open it, read it, edit it, and understand exactly what it’s doing. The hard rules section is where you encode things like “never use these five phrases that triggered content flags” or “always preserve the reference image exactly.”
The .claude/skills/ directory becomes a library of institutional knowledge about what works. If you’re building Claude Code skills for content automation, the same pattern applies: your best outputs become your recipes, and your recipes get better over time.
This is also where the abstraction question gets interesting. When your creative pipeline is defined by skill files — structured markdown documents that carry intent, rules, and steps — you’re essentially writing specs for your creative process. Tools like Remy take a similar approach to software: you write an annotated markdown spec, and a complete full-stack TypeScript application gets compiled from it, including backend, database, auth, and deployment. The source of truth is the document; the output is derived. The parallel isn’t exact, but the direction is the same — more of your work lives in readable, editable documents rather than buried in code or configuration.
What to Actually Do With This
If you want to replicate the Murmur demo, the path is straightforward. Go to higgsfield.ai/mcp-cli, copy the three CLI installation commands, and run them in Claude Code. Authenticate via the OAuth flow. Install the agent skills. That’s your foundation.
From there, the order of operations matters. Build your research layer first — the advertising masterclass approach, or something equivalent for your category. Set up your Google Sheets tracking schema with the GWS CLI before you start generating anything. The schema should include product, style, image/video, model, prompt, status, result URL, and job ID from day one. Retrofitting tracking onto an existing pipeline is painful.
Then generate a batch of 10-15 assets without worrying too much about consistency. Look at what comes back. Find the two or three outputs you actually like. Reverse-engineer skills from those. Now you have recipes.
The reference image instruction is non-negotiable if you’re working with real products. Drag the image into Claude Code. State explicitly that the product must appear exactly as shown. Make this a hard rule in every skill file that involves product imagery.
For the agentic workflow patterns that connect all of this — the Sunday planning routine, the Monday execution routine, the status-based filtering — the key insight is that the Google Sheet is the coordination layer. Claude doesn’t need to remember what it generated last week. The sheet does. That’s what makes the system scale beyond what any individual session can hold.
One opinion: the Murmur demo is more significant than it looks. Not because the individual outputs are perfect — some of them aren’t — but because the workflow is coherent. Research informs creative strategy. Creative strategy populates a tracking sheet. The tracking sheet drives generation. Generation results feed back into skill improvement. That loop, running on a weekly cadence, compounds. A team running this for six months will have a library of tested creative assets, documented skills, and performance data that no manual process could produce at the same cost.
The question isn’t whether AI can build a brand. Murmur answers that. The question is whether you’ve built the infrastructure to make it repeatable.