How to Chain Claude Code Skills into Scheduled Autonomous Pipelines: A Step-by-Step Guide
Chain Claude Code's modular skills into a scheduled pipeline that researches, writes, repurposes, and posts content with one human checkpoint.
You’re Doing the Work Your Pipeline Should Be Doing
Every week, the same sequence: find a topic worth writing about, draft a script, cut it into tweets and LinkedIn posts, wait for approval, publish. If you’re doing this manually, you’re spending 4–6 hours on work that a chained set of Claude Code skills can handle in under an hour of your time — most of which is a single approval step.
The pipeline looks like this: a scheduled topic research skill fires automatically, hands its output to a script-writing skill, which feeds a content repurposing skill, which pauses for your review, then posts. That’s the full loop — topic research → script writing → content repurposing → human-in-loop approval → post. Each skill is modular, each handoff is explicit, and the only moment you’re needed is the approval gate.
This post walks through how to build that pipeline from scratch using Claude Code skills, a scheduler, and a Telegram bridge for the approval step.
What You’re Actually Building (and Why It Holds Together)
The key mental model here comes from how skills work in Claude Code. A skill is a markdown file — a specialist playbook that tells Claude how to do one job reliably. The insight that makes chaining work is that a skill designed for one task can accept structured input from another skill’s output.
Take a transcription skill as an example: you build it once, and it becomes a reusable component in short-form video creation, newsletter generation, and blog drafting. The skill doesn’t care which pipeline called it. It just does its job and returns structured output.
That composability is what makes the pipeline below possible. You’re not building one monolithic automation — you’re building five small, testable, replaceable pieces and connecting them in sequence.
For teams managing multiple clients or content verticals, this also maps cleanly onto a multi-client skill architecture for content marketing, where shared skills live at the root level and client-specific overrides sit in individual folders.
What You Need Before Starting
Claude Code setup:
- Claude Code installed and working locally
- A Claude account (not just an API key — this matters for `/ultra review` later, and for the Channels/Telegram bridge)
- At least one project folder with a `claude.md` file
Skills to install:
- `skill creator` — install with `/plugin install skill creator` in your Claude Code terminal. This is the official Anthropic skill that drafts, tests, and packages new skills from plain-English descriptions. You’ll use it to build the research and repurposing skills.
- `GSD` (Get Shit Done) — install via the terminal command from the GSD repo, then run `/gsd-help` inside Claude Code to see available commands. This manages the plan → execute → verify phases for multi-step work.
Optional but recommended:
- `ClaudeMem` — for cross-session memory, so the pipeline remembers which topics it has already covered. Install via the plugin marketplace commands in Claude Code.
- A Telegram bot token — for the human-in-loop approval step. The Claude Code Channels + Telegram setup guide covers this end to end.
Knowledge assumed:
- You know how to open a Claude Code terminal session
- You’ve written or edited a `claude.md` file before
- You’re comfortable running shell commands
Building the Pipeline, Step by Step
Step 1: Install the skill creator and scaffold your first skill
Open Claude Code in your project folder and run:
```
/plugin install skill creator
```
Once installed, describe your topic research skill in plain English. Something like:
“Build a skill that searches for trending topics in [your niche] using web search, scores each topic by estimated audience interest and originality, and outputs a ranked list of 5 topics with a one-sentence rationale for each. Output as structured JSON.”
The skill creator will draft the skill, test it, iterate, and package it as a .md file you can reuse. You don’t touch the skill file format manually — that’s the point.
Now you have: a topic-research.md skill that produces consistent, structured JSON output every time it runs.
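The exact shape of that output is whatever you specified in the description. As a reference point, the prompt above would yield something like this — the field names and values here are illustrative, truncated to two of the five topics:

```json
{
  "topics": [
    {
      "rank": 1,
      "topic": "Why most AI content pipelines stall at the approval step",
      "rationale": "High audience interest, few existing deep-dives on the human-in-loop gate.",
      "interest_score": 8.5,
      "originality_score": 7.0
    },
    {
      "rank": 2,
      "topic": "Chaining modular skills vs. one monolithic automation",
      "rationale": "Maps directly to a pain point readers already search for.",
      "interest_score": 7.8,
      "originality_score": 6.5
    }
  ]
}
```

What matters is that the shape stays stable run to run; the downstream skill gets written against it.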
Step 2: Build the script-writing skill
Repeat the process for the script-writing skill. The critical detail here is that this skill must accept the JSON output from Step 1 as its input. Tell the skill creator:
“Build a skill that takes a JSON list of ranked topics (with rationale) and writes a 500-word video script for the top-ranked topic. The script should follow [your format: hook, three points, CTA]. Reference the brand voice profile from the shared brand context folder.”
That last sentence — referencing the shared brand context folder — is what separates a generic output from one that sounds like you. If you’ve built a voice-profile.md in a shared context folder, the skill will pull from it automatically when it runs.
Now you have: a script-writer.md skill that takes structured topic data and returns a formatted script.
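If you’re curious what the packaged file looks like (you won’t edit it by hand, but it helps to know what the skill creator produced), a skill is a markdown file with YAML frontmatter that tells Claude when to invoke it. A minimal sketch, where the frontmatter fields follow the standard skill format and the body text is illustrative:

```markdown
---
name: script-writer
description: Takes a JSON list of ranked topics and writes a 500-word video
  script for the top-ranked topic. Use when the pipeline needs a script drafted.
---

# Script Writer

1. Parse the incoming JSON and select the topic ranked 1.
2. Read voice-profile.md from the shared brand context folder.
3. Write a 500-word script: hook, three supporting points, CTA.
4. Return the script as plain markdown, nothing else.
```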
Step 3: Build the content repurposing skill
Same process. This skill takes the script output and produces platform-specific variants:
“Build a skill that takes a video script and outputs: one LinkedIn post (250 words, professional tone), three tweet-thread posts (280 chars each), and one Instagram caption (150 words, conversational). Use the shared brand voice profile. Output as a single JSON object with keys: linkedin, tweets, instagram.”
The structured JSON output is deliberate — it makes the approval step in Step 5 much cleaner, because you can display each variant separately rather than parsing a blob of text.
Now you have: a content-repurposer.md skill that turns one script into three platform-ready pieces.
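A run against the script from Step 2 returns a single object shaped like this (content illustrative, trimmed for length):

```json
{
  "linkedin": "Most teams spend 4-6 hours a week on content ops that a chained pipeline handles in one approval tap... (250-word post)",
  "tweets": [
    "1/ Your content pipeline shouldn't need you for every step. Here's the loop we automated:",
    "2/ Research -> script -> repurpose -> one human approval -> post. Each skill is modular.",
    "3/ The approval gate stays. That's editorial judgment, not overhead."
  ],
  "instagram": "One pipeline, three platforms, one approval tap... (150-word caption)"
}
```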
If you want to go deeper on the repurposing side, the Claude Code social media content repurposing skill guide covers platform-specific formatting in more detail.
Step 4: Create the orchestrator skill
This is the meta-skill that chains the others. It’s a short file — maybe 30 lines — that does five things:
- Calls `topic-research.md` and captures its output
- Passes that output to `script-writer.md` and captures its output
- Passes the script to `content-repurposer.md` and captures its output
- Writes all outputs to a predictable folder: `outputs/content-pipeline/[date]/`
- Sends a Telegram message with the draft content and a two-button approval prompt (Approve / Request Changes)
The folder structure matters. One of the most frustrating things about Claude Code out of the box is that outputs land wherever Claude decides to put them. Hardcoding an output path in the orchestrator skill means you always know where to look.
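Here is a sketch of what that orchestrator file can look like. The skill names and output path match what you built above; the wording is a starting point, not a fixed format:

```markdown
---
name: content-pipeline-orchestrator
description: Runs the full research, script, repurpose, approval content
  pipeline. Use when asked to run the content pipeline.
---

# Content Pipeline Orchestrator

1. Run topic-research.md and capture its JSON output.
2. Pass that JSON to script-writer.md and capture the script.
3. Pass the script to content-repurposer.md and capture the JSON variants.
4. Write every intermediate and final output to
   outputs/content-pipeline/[today's date]/ (create the folder if missing).
5. Send each variant to Telegram as its own message, followed by a
   two-button approval prompt (Approve / Request Changes).
6. Post nothing until an explicit approval arrives.
```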
For the Telegram step, you’re using Anthropic’s SDK bridge to connect Claude Code to your Telegram bot. The Claude Code Dispatch remote control guide explains how this connection works if you haven’t set it up yet.
Now you have: an orchestrator that runs the full pipeline and pauses for human review before anything gets posted.
Step 5: Add GSD for the execution phase
GSD’s value here is in the execute phase of complex multi-step runs. When you kick off the orchestrator, wrap it in a GSD session:
```
/gsd start
```
GSD spawns fresh sub-agents for each task in the pipeline, which means each skill runs with a clean context window rather than inheriting the accumulated noise from previous steps. This is the fix for context rot — the failure mode where Claude starts forgetting requirements halfway through a long session.
The plan → execute → verify structure also gives you checkpoints. If the script-writing step produces something off, GSD’s verify phase catches it before it reaches the repurposing step.
Now you have: a pipeline that runs cleanly across multiple steps without degrading mid-session.
Step 6: Schedule it
Claude Code supports cron-style scheduled tasks. Add a schedule entry that fires the orchestrator every weekday at a time that makes sense for your review cadence — say, 7:00 AM, so the draft is waiting for you when you start work.
The cron expression for weekdays at 7:00 AM:
```
0 7 * * 1-5
```
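On a VPS, that translates to an ordinary crontab entry. A sketch, assuming the project lives at `~/content-pipeline` and that you trigger the orchestrator through Claude Code’s non-interactive mode (`claude -p` runs a one-shot prompt; the path and prompt text here are placeholders for your own):

```bash
# Weekdays at 7:00 AM: run the pipeline and append output to a log.
# ~/content-pipeline is a placeholder for wherever your project lives.
0 7 * * 1-5 cd ~/content-pipeline && claude -p "Run the content pipeline orchestrator" >> logs/pipeline.log 2>&1
```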
If you’re running this on a VPS rather than your local machine (which you should, if you want it to run while your laptop is closed), the schedule lives on the server. Mark Kashef’s system does exactly this — his Meta Ads CLI report fires at 7:30 AM daily via a scheduled task, and the output arrives in Telegram before he opens his laptop.
Now you have: a fully scheduled pipeline that runs without you initiating it.
Step 7: Wire up ClaudeMem for topic deduplication
Without memory, the topic research skill will eventually resurface topics you’ve already covered. ClaudeMem fixes this by maintaining a cross-session SQLite database with vector search.
After installing ClaudeMem via the plugin marketplace, add one instruction to your topic research skill:
“Before scoring topics, run a semantic search against ClaudeMem for each candidate topic. Exclude any topic with a similarity score above 0.85 to previously covered topics.”
ClaudeMem’s three-layer retrieval — compact index first, timeline around relevant entries second, full details only when needed — means this check adds minimal tokens to each run. The repo reports roughly 10x token savings on retrieval compared to dumping all past context at session start.
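In the skill file itself, that instruction can live as its own section, with the threshold stated explicitly so it’s easy to tune later. A sketch; the heading and wording are yours to choose:

```markdown
## Deduplication check (requires ClaudeMem)

Before scoring candidate topics:
1. Run a semantic search against ClaudeMem for each candidate.
2. Exclude any candidate with similarity above 0.85 to a previously
   covered topic.
3. List each exclusion and its similarity score in the run output, so
   the threshold can be tuned over time.
```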
Now you have: a pipeline that remembers what it’s already published and won’t repeat itself.
When Things Break (and They Will)
The orchestrator loses track of which step it’s on. This is context rot. The fix is GSD — it’s designed specifically for this. If you’re not using GSD and the pipeline is long, you’ll hit this around step 3 or 4.
The repurposing skill ignores the brand voice profile. Check that the skill explicitly references the file path to your voice profile, not just a vague instruction to “use brand voice.” Claude needs a concrete file path to read.
The Telegram approval message never arrives. Usually a bot token issue or a network timeout on the Anthropic SDK bridge. Test the Telegram connection independently before debugging the pipeline.
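The Telegram Bot API makes that independent test a two-command job. These are standard public endpoints, nothing Claude-specific; substitute your own token and chat ID:

```bash
# Should return your bot's metadata if the token is valid
curl "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getMe"

# Should deliver a test message if the bot can reach your chat
curl -X POST "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/sendMessage" \
  -d "chat_id=<YOUR_CHAT_ID>" \
  -d "text=Pipeline connectivity test"
```

If both succeed and the approval message still never arrives, the problem is on the bridge side, not Telegram’s.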
ClaudeMem excludes topics it shouldn’t. The 0.85 similarity threshold is a starting point. If it’s too aggressive, lower it to 0.75. If topics are repeating, raise it to 0.90. Tune it after a week of runs.
The scheduled task fires but nothing happens. Check whether Claude Code is actually running on the machine when the cron fires. If you’re running locally, your laptop needs to be on. This is the main reason to move to a VPS for scheduled work.
One thing worth knowing about the /ultra review command: if you want to run a code review on the orchestrator skill itself before deploying it, you need Claude Code version 2.1.86 or later and a Claude account login — an API key alone won’t work. It’s not necessary for this pipeline, but it’s useful if you’re building something you’ll hand to a client.
Where to Take This Further
The pipeline above handles one content vertical. The natural next step is multi-client architecture: a root-level claude.md with shared skills, individual client folders with override claude.md files and per-client brand context, and per-client memory stored separately so one client’s topics don’t bleed into another’s.
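On disk, that architecture looks something like this (folder names illustrative):

```
content-pipelines/
├── claude.md              # shared conventions and global instructions
├── skills/                # topic-research.md, script-writer.md, ...
├── client-a/
│   ├── claude.md          # overrides: niche, cadence, platforms
│   ├── brand-context/     # voice-profile.md and style examples
│   └── memory/            # per-client topic memory
└── client-b/
    ├── claude.md
    ├── brand-context/
    └── memory/
```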
If you want to add an agent council layer — where multiple specialized agents (research, comms, distribution) coordinate via /standup and /discuss slash commands — that’s the direction Mark Kashef’s system goes. His entire hive mind, including agent conversations, scheduled jobs, and memory, runs on a local SQLite database with zero cloud cost. The architecture is more complex, but the foundation is identical to what you’ve built here: modular skills, structured handoffs, and a Telegram interface for human touchpoints.
For teams who want to build this kind of orchestration without writing the skill files themselves, MindStudio offers a visual builder for chaining agents and workflows across 200+ models and 1,000+ integrations — a different path to the same outcome.
The Claude Code agentic workflow patterns post covers five patterns that extend naturally from what you’ve built here, including parallel execution and self-correcting loops.
One opinion: the human-in-loop approval step is worth keeping even as you get more confident in the pipeline’s output quality. Not because the pipeline will fail — it mostly won’t — but because the approval moment is when you actually read what’s going out under your name. That’s not overhead. That’s editorial judgment, and it’s the part that’s hard to automate.
If the broader question is how to turn this kind of pipeline spec into a production application — say, a dashboard where clients can see pipeline status, approve content, and view history — Remy takes a different approach: you write the application as an annotated markdown spec, and it compiles into a complete TypeScript backend, SQLite database, auth, and frontend. The spec stays as the source of truth; the code is derived from it.
The pipeline you’ve built is the core. Everything else is additive.