
How to Build an AI Ad Creative Agency Pipeline with Claude Code and Higgsfield in an Afternoon

Claude Code + Higgsfield CLI + Google Sheets as a production database. Here's the full pipeline for autonomous ad creative generation on a schedule.

MindStudio Team

You Can Build a Week’s Worth of Ad Creatives While You Sleep

Most ad creative workflows have a human bottleneck somewhere in the middle. You generate an idea, hand it to a designer, wait, iterate, wait again. Or you do it yourself and the iteration speed is limited by how fast you can click through tools. Either way, the constraint is human time.

You can close that gap in an afternoon. The pipeline covered here — Higgsfield CLI + GWS CLI + Claude Code + Google Sheets tracking schema — runs autonomously on a schedule, logs every generation with its prompt and job ID, and wakes up Monday morning with a batch of completed ad creatives waiting for review.

This is not a “prompt Claude and see what happens” workflow. It’s a production system with a database, reusable skills, and a planning layer that improves over time. Here’s how to build it.


What You’re Actually Building (And Why It Compounds)

The output isn’t just ads. It’s a system that generates ads, tracks them, learns from them, and generates better ones next week.

The Google Sheet is the core of this. Every generation gets a row: product, style, image or video, model used, the exact prompt, status, result URL, and job ID. That schema sounds boring until you realize it means Claude Code can look at 45 past generations, identify what worked, and plan 50 new variations without you touching anything.
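As a sketch, the row schema could be expressed as a small Python dataclass (field names are illustrative, chosen to mirror the columns above; this is not the actual pipeline code):

```python
from dataclasses import dataclass, asdict

@dataclass
class GenerationRow:
    """One row in the tracking sheet: a single Higgsfield generation.
    Field names mirror the columns described above. Illustrative only."""
    product: str
    style: str
    asset_type: str   # "image" or "video"
    model: str
    prompt: str
    status: str = ""  # blank means "not yet generated"
    result_url: str = ""
    job_id: str = ""

# A freshly planned row starts with blank status, URL, and job ID;
# the execution routine fills those in after generation completes.
row = GenerationRow(
    product="Example Product",
    style="hypermotion",
    asset_type="video",
    model="example-model",
    prompt="Fast-paced product launch video with fast cuts...",
)
record = asdict(row)  # ready to write back as one sheet row
```

The point of pinning the schema down this explicitly is that the agent can read and reason over it without ambiguity.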

The creative slate tab — the planning layer — holds 30+ variations with priority scores, value propositions, headlines, avatar types, and video styles. Claude populates this from your historical data and a research document you provide. The Monday routine picks up every row with a blank status and generates them. Sunday adds 50 new rows. The queue never empties.

The compounding effect: every generation you do makes the next batch better, because the agent has more data to reason about. A human creative director working alone can’t iterate that fast. A team of humans costs more than the API calls.


What You Need Before Starting

Accounts and subscriptions:

  • Higgsfield account on a paid plan (required for API access)
  • Claude Code (the desktop app, not just Claude.ai)
  • Google account (for GWS CLI access to Sheets, Docs, Drive)

Knowledge prerequisites:

  • You should be comfortable running terminal commands. The setup is three copy-paste commands, but you need to not panic when a terminal opens.
  • Basic familiarity with Claude Code projects helps. If you haven’t used Claude Code before, the Claude Code agentic workflow patterns post covers the mental model.

Assets you’ll want ready:

  • Product reference images (PNG or JPG). These matter more than you’d think — more on that in the troubleshooting section.
  • A rough sense of what ad formats you want: Instagram stories, square posts, 16:9 video, UGC-style, hypermotion, unboxing.

Building the Pipeline, Step by Step

Step 1: Create your project folder and install the Higgsfield CLI

Open Claude Code. Create a new blank folder — call it something like higgsfield-studio — and open it as your project.

Go to higgsfield.ai and navigate to the MCP and CLI page. You’ll see three commands. Copy all three and paste them into Claude Code with a prompt like:

“This project is being set up as a creative marketing studio. Install the Higgsfield CLI using the commands below, run the OAuth flow so I can sign in, and then install the Higgsfield agent skills.”

Claude Code will run the install, open a browser tab for OAuth, and then install the default agent skills. You’ll authorize the connection in the browser and come back to a confirmed session.

Why CLI and not MCP? The Higgsfield MCP exposes all tools simultaneously, which means every agent call carries the full tool manifest in context. That’s a significant token overhead for tasks where you only need a handful of operations. The CLI is purpose-built for agentic use: faster, cheaper, and the right default for anything running on a schedule. This distinction matters when you’re running 30+ generations per batch.

Now you have: Higgsfield CLI installed, authenticated, and default agent skills loaded into your project.

Step 2: Install the GWS CLI and set up your tracking sheet

The GWS CLI (Google Workspace CLI) connects Claude Code to your Google account — Sheets, Docs, Gmail, Calendar, Drive — via bash commands rather than MCP servers or raw API calls. It’s the same efficiency argument as the Higgsfield CLI: less overhead, faster execution, better for agents running in loops.

If you haven’t set this up before, the Google Workspace CLI with Claude Code automation post walks through the full setup. Once it’s installed and authenticated, come back here.

With both CLIs running, prompt Claude Code to:

“Look at all the assets I’ve generated in Higgsfield. Pull the job IDs, prompts, statuses, and asset types. Then use the GWS CLI to create a Google Sheet with this data organized by product and style. Include columns for: product, style, image/video, model, prompt, status, result URL, and job ID. Add a planning tab and a by-product summary tab.”

Claude will pull your Higgsfield history, structure it, and write it to a new Google Sheet. For a fresh account with 45 generations, this takes a few minutes.

Now you have: A live Google Sheet with your full generation history, structured for agent-readable tracking.

Step 3: Build your advertising research document

This step is optional in the sense that you can skip it. It’s not optional if you want the output to be good.

Open a new chat in your Claude Code project and ask it to do deep research on organic ad best practices for your target platforms — TikTok, Meta, X, or wherever you’re running. Ask it to produce a full markdown file called advertising-masterclass.md that lives in the project. Ask it to cover what captures attention, what converts, and how it differs by platform.

The resulting document in the demo runs 617 lines. It covers attention mechanics, platform-specific formats, headline patterns, CTA structures, and content archetypes. When your agent plans new creative variations, it reads this document. The difference in output quality between “generate some ads” and “generate ads informed by a 617-line research doc” is not subtle.
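A hypothetical skeleton for that document, using only the section topics named above (the headings are illustrative; your own research fills in the substance):

```markdown
# Advertising Masterclass

## Attention mechanics
What stops the scroll, and how quickly it has to happen.

## Platform-specific formats
TikTok, Meta, X: aspect ratios, lengths, native conventions.

## Headline patterns
Recurring structures that convert, with examples.

## CTA structures
Placement, phrasing, and timing of calls to action.

## Content archetypes
UGC, unboxing, hypermotion, demo, testimonial.
```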

You can also bring in external research — Twitter threads, YouTube transcripts, Perplexity summaries. Paste them in, ask Claude to synthesize them into the masterclass doc. This is how you give your agent the subject matter expertise it doesn’t have by default.

Now you have: A research document that acts as your agent’s creative director briefing.

Step 4: Generate your creative slate

Tag the advertising-masterclass.md file in a new Claude Code prompt (use @ to reference it). Then ask Claude to:

“Look at all the generations we’ve done. Read the advertising masterclass doc. Help me plan a batch of new variations — mix different value props, headlines, avatar types, and video styles. Give each one a priority score. Put the full creative slate into the Google Sheet as a new tab called ‘creative slate’.”

The output should be 30+ rows with columns for priority, value prop, headline, avatar type, style, and notes. Claude may not add a status column automatically — if it doesn’t, tell it to. That status column is what the Monday routine uses to know what hasn’t been generated yet.

Now you have: A prioritized creative slate in your Google Sheet, ready for batch generation.

Step 5: Build a reusable skill from your best output

This is where the system starts to compound. Find a generation you like — a specific video or image that hit the quality bar you want. Copy its prompt. Then open a new chat and say:

“This prompt above produced my favorite output from Higgsfield Marketing Studio. It was a hypermotion fast-paced product launch video with fast cuts, zooms, and close-ups. Turn this into a skill that lives in .claude/skills/ so that any time I ask for a hypermotion-style video, you use this as the recipe.”

Claude will create a markdown file in .claude/skills/ — something like hypermotion-video.md. The file structure is: name, description, when to invoke, steps, and hard rules. The hard rules section is where you put constraints like “always preserve the reference image exactly” and “never use [flagged phrases that triggered content blocks].”
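A minimal sketch of what such a skill file might contain, following the structure just described (the wording is illustrative, not the file Claude actually generates):

```markdown
# hypermotion-video

**Description:** Fast-paced product launch video with fast cuts, zooms,
and close-ups.

**When to invoke:** Any request for a "hypermotion" style video.

**Steps:**
1. Start from the validated base prompt below.
2. Swap in the current product name and attach its reference image.
3. Submit the generation and write the job ID back to the tracking sheet.

**Hard rules:**
- Always preserve the reference image exactly: same color, same text,
  same label.
- Never use phrases previously flagged by the content filter.
```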

Skills are how you stop pulling the slot machine lever. Without a skill, every generation is a fresh guess. With a skill, you’re running a recipe that you’ve already validated. For the Claude Code content marketing skill system, the same logic applies — skills are the unit of reusable quality.

Now you have: A .claude/skills/ directory with at least one validated skill that Claude Code will invoke by name.

Step 6: Set up your Sunday/Monday routines

In Claude Code, you can set up routines — prompts that inject on a schedule. Two routines run this system:

Sunday planning routine: Look at the Google Sheet and any performance data you’ve pulled in. Read the advertising masterclass. Add 50 new rows to the creative slate tab with blank status fields.

Monday execution routine: Go to the Google Sheet. Find all rows in the creative slate with a blank status. Generate the first 30. Mark each one as “complete” when done, write the result URL and job ID back to the sheet.

The blank-status filter is the key mechanism. It ensures the Monday routine never re-generates something already done, and the Sunday routine never overwrites existing rows. The system is idempotent by design.
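The filter itself is simple enough to sketch in a few lines of Python (the dict-based rows and batch size are illustrative; the real routine reads the sheet through the GWS CLI rather than running this code):

```python
def pending_rows(rows, batch_size=30):
    """Return the next batch of creative-slate rows to generate:
    only rows whose status is blank, up to batch_size.
    Re-running this after a batch completes naturally skips everything
    already marked "complete", which is the idempotency the Monday
    routine relies on."""
    todo = [r for r in rows if not r.get("status", "").strip()]
    return todo[:batch_size]

slate = [
    {"headline": "A", "status": "complete"},
    {"headline": "B", "status": ""},
    {"headline": "C", "status": ""},
]
print([r["headline"] for r in pending_rows(slate, batch_size=1)])  # ['B']
```

Because the selection key is "status is blank" rather than "row is new", the Sunday routine can append rows and the Monday routine can mark them complete without either one ever needing to know what the other did.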

Scale from there: planning twice a week, generation twice a week, batch sizes increasing as you validate the output quality.


The Failure Modes Worth Knowing About

Reference image drift. If you don’t explicitly tell the model to preserve the product’s appearance exactly — same color, same text, same label — it will hallucinate a plausible-looking product. The prompt needs to say something like “it must appear exactly as shown in this reference image, same color, same text, do not change anything.” Drag the actual image file into the prompt. This is the most common reason a first batch comes back useless.

Sensitive content blocks. Higgsfield’s content filter will occasionally reject prompts and refund your credits. When this happens, ask Claude to read the blocked prompt, identify which words or phrases likely triggered the filter, and regenerate with those removed. Then add those phrases to the hard rules section of your skill file so they never appear again. The skill file is your institutional memory for what the filter rejects.

Skill invocation failures. Claude Code sometimes runs a default Higgsfield skill instead of your custom one, especially right after you create it. Close and reopen the Claude Code app — this forces it to re-index the .claude/skills/ directory. Also check that voice dictation, if you use it, isn’t autocorrecting skill names. “Hypermotion” becoming “remotion” in a prompt will cause the wrong skill to fire.


Model selection for text accuracy. Image-to-video generation will mangle text on product labels more often than image-to-image. If label accuracy matters, either use image generation instead of video for text-heavy products, or design a label variant that’s just a logo — no small metadata text that the model will distort. The pipeline supports using different models for different ad types in the same batch: Nano Banana 2 for some rows, GPT Image 2 for others, based on what each model handles well.

Sheet schema drift. If you ask Claude to add columns mid-project without specifying where, it may add them in positions that break your routines’ column references. Define the schema explicitly in your project’s CLAUDE.md or in a system prompt, and tell Claude to always append new columns to the right rather than inserting them.
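One way to pin the schema down is a short section in CLAUDE.md (a hypothetical sketch, using the columns from the tracking sheet described earlier):

```markdown
## Tracking sheet schema (do not reorder)

Columns A-H, in this exact order:
product | style | image/video | model | prompt | status | result URL | job ID

- New columns are always appended to the right of "job ID", never
  inserted between existing columns.
- Routines reference columns by position; reordering breaks them.
```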


Where to Take This Further

The pipeline as described gets you to autonomous batch generation with a tracking database. A few natural extensions:

Bring in real performance data. Connect your Meta Ads Manager or TikTok analytics export to the Google Sheet. Now the Sunday planning routine can look at which value props actually converted, not just which ones Claude thought would convert. The research document becomes a feedback loop rather than a static briefing.

Extend the skill library. The hypermotion skill is one recipe. Build skills for UGC-style, unboxing, static Instagram story, carousel slide. Each skill encodes a validated output format. If you’re building skills for content repurposing as well as ad creative, the Claude Code skills for social media content repurposing post covers the same pattern applied to a different output type.

Connect to a publishing layer. Once you trust the output quality enough, the batch can flow directly to a scheduling tool or ad platform. The job IDs and result URLs are already in the sheet — a second agent can read those and queue them for posting.

Spec out a tracking app. The Google Sheet works well for a solo operator. If you’re running this for multiple clients or products, you’ll eventually want a proper database with a UI. Tools like Remy take a different approach to that problem: you write a spec — annotated markdown describing your data model, rules, and edge cases — and it compiles a complete TypeScript backend, SQLite database, and frontend from it. The spec is the source of truth; the generated code is derived output. Worth knowing about when the Sheet starts to feel limiting.

Build the multi-model comparison layer. The pipeline already supports running different models on different rows. Formalize this: add a “model” column to the creative slate, have the Sunday routine assign models based on ad type, and track which model produces better results for which format. Over time you’ll have empirical data on which model to use for which job.
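The assignment step can be sketched as a lookup keyed on ad type (the mapping below is illustrative, using only the models and the text-accuracy caveat mentioned earlier; in practice you would fill it from your own win-rate data in the sheet):

```python
def assign_model(ad_type):
    """Sketch: pick a generation model per ad type.
    Hypothetical mapping only; validate against your own results."""
    model_by_type = {
        "static_image": "Nano Banana 2",
        # Video models distort small label text more often, so
        # text-heavy products route to image generation instead.
        "text_heavy_label": "GPT Image 2",
    }
    return model_by_type.get(ad_type, "Nano Banana 2")

print(assign_model("text_heavy_label"))  # GPT Image 2
```

The Sunday routine would call this once per new slate row and write the result into the model column, so the Monday routine never has to decide anything.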

If you want to build this kind of pipeline without writing the orchestration code yourself, MindStudio offers a no-code path: 200+ models including the ones used here, 1,000+ integrations, and a visual builder for chaining agents and workflows. Useful if the CLI setup is a barrier for your team.

The system described here took an afternoon to set up. The value comes from running it for weeks — from the compounding effect of a creative slate that grows, a skill library that improves, and a tracking database that eventually tells you what actually works.

That’s the part no single generation session can give you.

Presented by MindStudio
