MindStudio

Automate Weekly Ad Generation with Claude Code: 4 Skill Files and Routines That Run Without You

Skill files as reusable prompt recipes plus Claude Code routines on a cron schedule — here's how to build a self-running creative pipeline.

MindStudio Team

4 Skill Files That Turn Claude Code Into a Self-Running Ad Machine

Claude Code can now schedule its own work. Four skill files, two routines, and a Google Sheet tracker — and you wake up to finished ad creative instead of a to-do list.

The specific mechanism is a skill file: a markdown document stored in .claude/skills/ that defines trigger conditions, hard rules, and pre-generation questions. It’s invoked via slash command or natural language. When Claude Code reads it, it stops improvising and follows a recipe. That distinction — improvised vs. recipe-driven — is the entire difference between outputs you can trust and outputs that are a coin flip.

Here’s what that system actually looks like, built out end to end.


The Problem With Prompting From Scratch Every Time

Most people using AI for ad creative are essentially pulling a slot machine lever. They type something vague, get something mediocre, iterate manually, and repeat. The outputs aren’t consistent because the inputs aren’t consistent. There’s no memory of what worked. There’s no enforcement of brand rules. There’s no way to hand it off.

The deeper problem is that this approach doesn’t scale. You’re still the bottleneck. You’re still the one who has to show up, type the prompt, evaluate the output, and decide what to do next. The AI is fast, but you’re slow — and the whole pipeline moves at your speed.


Skill files break that bottleneck. They’re not magic. They’re just structured documentation that the agent reads before acting. But the effect is significant: you go from “generate me an ad” producing wildly different results every time to a consistent, auditable, improvable process.


Skill File 1: The Hypermotion Video Recipe

The first skill worth building is for your highest-performing video format. The reason to start here is that video generation is the most expensive and most variable output — getting it wrong costs real money and real time.

The Hypermotion skill in this system was built by reverse-engineering a winning output. The creator took the exact prompt from their best-performing Higgsfield Marketing Studio generation, pasted it into Claude, and said: turn this into a skill that lives in .claude/skills/ so that any time I ask for a Hypermotion-style video, it’s always consistent.

That’s the right workflow. You don’t design skills from theory. You generate a bunch of outputs, find the one that actually worked, and extract the recipe from it.

The skill file itself is plain markdown. It has a name (Hypermotion Video), a description (Generate a Hypermotion style premium product launch video via Higgsfield Marketing Studio), a “when to invoke” section, hard rules (things the agent must never do), and pre-generation questions. The pre-generation questions are particularly important — in this case, Claude asked: “Do you want a model in the ad? UGC or product only?” That single question prevents a whole category of wrong outputs.
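A minimal sketch of what such a file might look like (the headings mirror the sections described above; the exact wording and layout are illustrative, not the creator's actual file):

```markdown
# Skill: Hypermotion Video

**Description:** Generate a Hypermotion-style premium product launch video
via Higgsfield Marketing Studio.

## When to invoke
- The user asks for a "Hypermotion" video, via /hypermotion or natural language.

## Hard rules
- Never use words previously flagged by the sensitive content filter (see list).
- The product must appear exactly as shown in the reference image.

## Pre-generation questions
1. Do you want a model in the ad? UGC or product only?
```

It is plain markdown on disk; the value is that the agent reads it before generating instead of improvising.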

The hard rules section is where you encode institutional knowledge. If certain words or phrases have historically triggered Higgsfield’s sensitive content filter — and they will, because the filter is aggressive and not always logical — you put those words in the hard rules and the agent never uses them again. The workaround for content blocks is itself instructive: have Claude read the rejected prompt, identify the flagged words, regenerate without them. Do that a few times and you have a list. Put that list in the skill file. Now the problem doesn’t recur.
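That retry-and-record loop is easy to sketch. Everything below is hypothetical scaffolding: `submit` stands in for whatever call sends the prompt to Higgsfield, and `identify_flags` stands in for asking Claude which words tripped the filter.

```python
def generate_with_retries(prompt, submit, identify_flags, banned, max_tries=3):
    """Retry a blocked generation, scrubbing known trigger words each time.

    `submit` returns the result on success or None when the content filter
    blocks the prompt; `identify_flags` names the words that caused the block.
    The growing `banned` set is what eventually gets written into the skill
    file's hard-rules section so the problem never recurs.
    """
    for _ in range(max_tries):
        # Drop every word already known to trip the filter.
        clean = " ".join(w for w in prompt.split() if w.lower() not in banned)
        result = submit(clean)
        if result is not None:
            return result, banned
        # Blocked again: learn the newly flagged words and retry.
        banned |= {w.lower() for w in identify_flags(clean)}
    return None, banned
```

After a few runs, `banned` is exactly the word list you paste into the skill file.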

One thing to watch: when you first create a skill, Claude Code may not immediately recognize it. The fix is to close and reopen the app. The skill will then appear and can be invoked via slash command (e.g., /hypermotion). If your dictation tool autocorrects “hypermotion” to something else — the creator’s Glido voice tool changed it to “remotion” — Claude will call the wrong skill. Worth knowing before you spend credits debugging what turns out to be a typo.


Skill File 2: The Instagram Ad Generator With Brand Lock

The second skill is for static ads, and it has one job above all others: keep the product looking exactly like the reference image.

This sounds obvious. It isn’t. Without explicit instruction, image generation models will interpret your product loosely. They’ll change the label color. They’ll drop the text. They’ll generate something that looks vaguely like your product but isn’t. In the demo, the first batch of sleep supplement ads came back with generic blue bottles that said “sleep support” — not the actual product. The reference image had been provided, but the agent hadn’t been told to treat it as inviolable.


The fix is a hard rule in the skill file: the product must appear exactly as shown in the reference image. Same color. Same text. No changes. This is the kind of constraint that feels obvious in retrospect but gets missed constantly in ad-hoc prompting.

The Instagram ad skill should also encode your copy strategy. This is where a research document like advertising_masterclass.md — a 617-line Claude-generated breakdown of what converts on TikTok, Meta, and X in 2026 — earns its place. You don’t paste the whole thing into every prompt. You store it in the project and reference it in the skill’s “when to invoke” section: “consult advertising_masterclass.md for copy angles and platform-specific formatting.” The agent reads it when it needs it. Your ads get better because the agent has done the research.

The skill should also specify which models to use for which formats. Higgsfield gives you access to Flux 2, GPT Image 1/1.5/2, and Nano Banana Pro, among others. Different models perform differently for different use cases. Encoding model selection in the skill means you’re not making that decision fresh every time — and it means your tracking data is actually comparable across runs.

If you’re thinking about how this kind of spec-driven approach applies beyond ad generation, Remy takes a similar philosophy to full-stack app development: you write an annotated markdown spec, and the complete TypeScript backend, database, auth, and deployment get compiled from it. The spec is the source of truth; everything else is derived output.


Skill File 3: The Weekly Planning Routine

Skills handle individual generation tasks. Routines handle the schedule.

Claude Code routines inject a prompt on a set cadence. They’re not a separate product — they’re a feature you configure by telling Claude Code what to do and when. The Sunday planning routine looks like this: every Sunday evening, read the Google Sheet tracker, pull in any available performance data from Instagram or Meta, analyze what’s working, and add 50 new generation ideas to the creative slate tab.

The Google Sheet tracker is the connective tissue here. It’s built and maintained by Claude Code via the GWS CLI — a command-line tool that gives the agent read/write access to Google Sheets, Docs, Gmail, Calendar, and Drive. The schema matters: job ID, status, prompt, model, sizing, result URL. That’s the minimum. With that data, the agent can look at past generations and make decisions — which models produced the best outputs, which copy angles haven’t been tested yet, which product lines are underrepresented.
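As a record type, that minimum schema looks something like this (the dataclass is illustrative; the real tracker is just columns in a Google Sheet):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackerRow:
    """One row of the creative tracker sheet."""
    job_id: str
    status: str = ""                  # blank = not yet generated
    prompt: str = ""
    model: str = ""                   # which model produced the output
    sizing: str = ""                  # aspect ratio / dimensions
    result_url: Optional[str] = None  # filled in after generation

def pending(rows: list[TrackerRow]) -> list[TrackerRow]:
    """Rows whose blank status marks them as still waiting to be generated."""
    return [r for r in rows if not r.status.strip()]
```

The blank-status convention is what lets the Sunday routine add work and the Monday routine find it without any extra coordination.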

The planning routine doesn’t generate anything. It thinks. It looks at the data, consults the advertising masterclass, and produces a prioritized list of what to make next. That list goes into the sheet with a blank status column. The blank status is the signal that something hasn’t been generated yet.

This separation of planning and execution is important. If you collapse them into one routine, you get a system that generates things without a clear record of why. Keeping them separate means you can audit the plan before execution, adjust priorities, and maintain a history of decisions.


For builders who want to skip the infrastructure work and get to the agent logic, MindStudio offers a no-code path: 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows — useful when you want the orchestration without writing the plumbing.


Skill File 4: The Monday Generation Routine

The Monday morning routine is the execution half of the pair. It reads the sheet, finds every row with a blank status, generates the creative, and marks each row complete with the result URL and job ID when it’s done.

The blank-status filter is what prevents duplicate work. Without it, you’d need to manually track what’s been generated. With it, the agent handles its own queue. You add items to the sheet (or the Sunday routine adds them), and the Monday routine processes them in order.

The generation routine should reference the relevant skill files for each row. If a row is tagged as “Hypermotion video,” the routine invokes the Hypermotion skill. If it’s tagged as “Instagram static ad,” it invokes the Instagram ad skill. The skill files do the heavy lifting; the routine just orchestrates the queue.
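Put together, the Monday routine is a filter plus a dispatch table. The row shape and skill names below are illustrative stand-ins; in the real system the rows live in the Google Sheet and the skills are the markdown files in .claude/skills/, not Python functions.

```python
# Hypothetical stand-ins for invoking the actual skill files.
SKILLS = {
    "hypermotion_video": lambda row: f"video://{row['prompt']}",
    "instagram_static":  lambda row: f"image://{row['prompt']}",
}

def run_generation_queue(rows: list[dict]) -> list[dict]:
    """Process every row with a blank status; leave completed rows alone."""
    for row in rows:
        if row.get("status", "").strip():
            continue                    # already generated: the dedup filter
        skill = SKILLS[row["format"]]   # route by the row's format tag
        row["result_url"] = skill(row)  # real version: URL from Higgsfield
        row["status"] = "complete"
    return rows
```

Everything interesting lives in the skills; the routine itself stays this small.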

This is also where the Higgsfield CLI matters more than the MCP connector. When you’re running an agent that makes dozens of API calls in a single session, the MCP’s behavior of loading all tools into context on every call becomes a real cost. The CLI avoids that overhead — it’s faster and cheaper for agent pipelines specifically because it doesn’t carry that context penalty. For a one-off generation in Claude web, the MCP connector is fine. For a Monday morning routine processing 30 items, use the CLI.

The Claude Code skill pattern for social media content repurposing is directly applicable here: the same skill-file architecture that handles YouTube-to-LinkedIn repurposing handles ad creative generation. The format is identical; only the domain changes.


Making Skills Self-Improving

The most underrated property of this system is that skills can be updated based on output quality.

After the Monday routine runs, you review the outputs. Some are good. Some aren’t. You tell Claude Code: “I liked outputs 2 and 4. I didn’t like 1, 3, and 5. Here’s why.” Claude Code updates the skill file to encode those preferences. The next run produces better outputs because the recipe has been refined.

This is the compounding effect. Week one, your skill files are rough. Week four, they’ve been refined by real output data. Week eight, they’re producing consistently usable creative. The system gets better without you having to redesign it from scratch — you’re just giving feedback on outputs and the skill files absorb it.

The same logic applies to the advertising masterclass document. It’s not static. You can update it with new research, new platform changes, new copy angles that are working. The agents reference it on every run. Better research document, better outputs.

For builders interested in how self-improvement loops work at the skill level, the AutoResearch approach to self-improving Claude Code skills is worth reading — it applies a similar feedback loop using binary scoring to automatically improve prompt quality overnight.


What This System Actually Produces


To be concrete about what “working” looks like here: the demo produced a full headphone brand — market research, brand identity, three product lines, product photos, Instagram ads, and UGC videos — from a single Claude prompt in approximately five minutes. That’s not a polished, production-ready campaign. The text rendering in videos is imperfect. Some images don’t match the reference exactly on the first pass. The sensitive content filter will block things that seem completely innocuous.

But that’s the wrong frame. The question isn’t whether a single run produces perfect outputs. The question is whether the system produces enough usable material, consistently enough, to change your testing velocity. If you’re currently producing 10 ad variants per week manually and this system produces 50 per week with one hour of your time, the math is different regardless of the imperfections.

The skill files are what make the outputs usable rather than random. Without them, you’re generating noise. With them, you’re generating testable variations of a known-good format.

The post on the Claude Code content marketing skill system covers the broader architecture of skill-based automation. If you're building this for content beyond ads, the same principles apply, and that post goes deeper on skill design patterns.


The Honest Ceiling

There’s a real ceiling here that’s worth naming. If you don’t know what good ad creative looks like — if you can’t evaluate whether a headline is compelling or a video is engaging — the system won’t save you. It will produce more output faster, but more bad output faster isn’t an improvement.

The advertising masterclass document is a partial solution. Bringing in real expertise — whether that’s a research document, a swipe file, or actual performance data from your ad accounts — raises the floor. But the system is only as good as the judgment you bring to evaluating and refining the skill files.

What the system genuinely solves is the production bottleneck. The creative thinking still has to come from somewhere. The skill files encode that thinking so it can be applied consistently at scale. That’s the actual value proposition, and it’s a real one — just not the one that gets promised in most AI marketing content.

Presented by MindStudio
