How to Build a Content Repurposing Engine with Claude Code
Turn any transcript or blog post into LinkedIn posts, X threads, and newsletters automatically using Claude Code skills and MCP integrations.
The Problem With Manual Content Repurposing
You record a podcast episode. You write a long-form blog post. You produce a webinar. Each one contains more ideas than you’ll ever publish — because turning a 45-minute transcript into five LinkedIn posts, three X threads, and a newsletter takes hours of focused work most people don’t have.
The obvious answer is to batch this work. But batching still means doing it. What actually solves the problem is building a content repurposing engine using Claude Code: a system of connected skills that ingests raw source content — transcripts, articles, recordings — and automatically produces formatted, publish-ready content for every platform you care about.
This guide walks through exactly how to build that system. You’ll understand what skills to create, how to connect them with MCP integrations, and how to chain everything into an automated workflow that runs from a single input.
What a Content Repurposing Engine Actually Does
Before getting into the build, it’s worth being precise about what this system does — and doesn’t do.
A content repurposing engine is not a generic “summarize this” prompt. It’s a structured, multi-step workflow where each step produces a specific output format with specific quality standards. The engine:
- Ingests source content — a raw transcript, a blog post URL, or a document
- Extracts core ideas — identifies the key claims, stories, data points, and arguments
- Reformats for each platform — applies platform-specific structure, tone, and length rules
- Publishes or stages — either pushes to a scheduling tool or writes to a Notion database for review
Each of those steps is a separate Claude Code skill. That separation is what makes the system reliable. If your LinkedIn formatter produces bad output, you fix that one skill — you don’t touch the extraction logic or the newsletter builder.
If you’re new to how skills work structurally, this overview of Claude Code skills explains the fundamentals before you start building.
The Four Skills You Need
Skill 1: Source Ingestion and Idea Extraction
This is the entry point. It takes raw content — a transcript pasted in, a URL, or a file — and produces a structured JSON object with the core ideas, key quotes, main argument, supporting points, and any notable data or stats.
The output schema matters here. Every downstream skill depends on this structure, so it needs to be consistent. A good extraction schema looks like:
{
  "title": "string",
  "main_argument": "string",
  "key_points": ["string"],
  "notable_quotes": ["string"],
  "data_points": ["string"],
  "audience": "string",
  "tone": "string"
}
Write the skill as a code script rather than a markdown instruction. Code scripts outperform markdown instructions for agent tasks because they enforce output structure and handle edge cases predictably. Your extraction skill should validate the JSON output schema before passing it downstream.
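The validation step can be sketched as a small schema check — a minimal version assuming the field names above; the function name and error style are illustrative, not a fixed spec:

```python
import json

# Expected top-level types, mirroring the extraction schema above
REQUIRED_FIELDS = {
    "title": str,
    "main_argument": str,
    "key_points": list,
    "notable_quotes": list,
    "data_points": list,
    "audience": str,
    "tone": str,
}

def validate_extraction(raw_json: str) -> dict:
    """Parse the model's output and verify it matches the extraction schema."""
    data = json.loads(raw_json)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"{field} should be {expected_type.__name__}")
    # List fields must contain strings only
    for field in ("key_points", "notable_quotes", "data_points"):
        if not all(isinstance(item, str) for item in data[field]):
            raise ValueError(f"{field} must be a list of strings")
    return data
```

Rejecting malformed output here, before anything flows downstream, is what keeps the three formatter skills simple: they can trust their input.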
Skill 2: LinkedIn Post Generator
LinkedIn has specific conventions. Posts that perform well tend to have a strong opening line that creates tension or states a counterintuitive idea, a short body that delivers the insight, and a closing question or call to action. They avoid walls of text and use line breaks aggressively.
This skill takes the extracted ideas JSON and produces 3–5 LinkedIn post variants. Each variant should:
- Open with a hook (not “I recently learned that…”)
- Stay under 1,300 characters for the visible preview
- Use natural paragraph breaks, not bullet points
- End with a question or clear takeaway
The skill file (skill.md) should reference a separate linkedin_style_guide.md that contains your brand voice, examples of posts that have performed well, and formatting rules. Keeping your skill.md focused on process steps rather than cramming everything into one file makes both easier to maintain.
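The formatting rules above can be enforced with a small lint pass before a variant is staged — a sketch, where the function name and the specific checks are illustrative assumptions built from the guidelines in this section:

```python
def lint_linkedin_post(post: str, preview_limit: int = 1300) -> list[str]:
    """Return a list of rule violations for a generated LinkedIn variant."""
    problems = []
    if len(post) > preview_limit:
        problems.append(f"exceeds {preview_limit}-character preview limit")
    first_line = post.strip().splitlines()[0] if post.strip() else ""
    if first_line.lower().startswith("i recently learned"):
        problems.append("opens with a weak hook")
    # The guideline is paragraph breaks, not bullets
    if any(line.lstrip().startswith(("-", "*", "•")) for line in post.splitlines()):
        problems.append("uses bullet points instead of paragraph breaks")
    return problems
```

An empty list means the variant is safe to write to staging; anything else goes back to the generator for a retry.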
Skill 3: X Thread Writer
X threads work differently. A good thread has a premise tweet that can stand alone, numbered context tweets that each add one idea, and a closing tweet that either summarizes or asks for a response. Each tweet must fit in 280 characters.
This skill should output an array of tweet objects with character counts pre-calculated. If any tweet exceeds 280 characters, the skill should flag it before passing the output downstream.
A useful addition: generate two versions of the opening tweet. One punchy and direct, one that leads with a question. This lets you A/B test without running the whole skill again.
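The output contract described above might look like this — a sketch; the field names are illustrative, not a fixed spec:

```python
def build_thread(tweets: list[str], limit: int = 280) -> list[dict]:
    """Attach character counts and over-limit flags to each tweet before handoff."""
    thread = []
    for position, text in enumerate(tweets, start=1):
        thread.append({
            "position": position,
            "text": text,
            "char_count": len(text),
            "over_limit": len(text) > limit,  # flagged, not silently truncated
        })
    return thread
```

Flagging rather than truncating matters: a truncated tweet mid-sentence is worse than a retry, so any `over_limit` item should send that tweet back to the writer skill. Note that X counts URLs and some non-Latin characters differently than plain `len()`, so treat this as a first-pass check.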
Skill 4: Newsletter Section Builder
Newsletters need different treatment. They’re read in an inbox with more time and attention than social posts. The goal isn’t a hook — it’s a readable summary with enough depth to feel worth subscribing for.
This skill produces a newsletter-ready section: a short intro paragraph, the main body in 3–5 paragraphs, and a “key takeaway” block at the bottom. The output is formatted in HTML or Markdown depending on which email platform you use.
For teams using Beehiiv, ConvertKit, or Substack, the formatting requirements differ slightly. You can handle this with a platform parameter in the skill that adjusts output formatting accordingly.
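As one small example of the platform parameter, here is how the "key takeaway" block might be rendered — the platform-to-markup mapping is an illustrative assumption, not a documented requirement of any of these tools:

```python
def format_takeaway(takeaway: str, platform: str) -> str:
    """Render the 'key takeaway' block in the markup each platform expects.

    Which platforms want HTML vs. Markdown is an assumption here —
    check your own editor before relying on this mapping.
    """
    if platform in ("beehiiv", "convertkit"):   # HTML-based editors
        return f"<blockquote><strong>Key takeaway:</strong> {takeaway}</blockquote>"
    if platform == "substack":                  # Markdown-friendly editor
        return f"> **Key takeaway:** {takeaway}"
    raise ValueError(f"unsupported platform: {platform}")
```

Raising on an unknown platform is deliberate: a silent default would let a misconfigured run produce wrongly formatted drafts.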
Connecting Skills With MCP Integrations
Skills are the processing layer. MCP servers are what connect your skills to the actual tools where content lives and gets published. Understanding what MCP servers are and how they work is essential before wiring up the integrations.
For a content repurposing engine, you’ll typically need three MCP connections:
Notion MCP
Notion is the staging area. Every piece of generated content gets written to a Notion database first, where a human can review, edit, and approve before publishing. Your Notion database should have columns for:
- Source content title
- Platform (LinkedIn, X, Newsletter)
- Generated content (rich text)
- Status (Draft / Approved / Published)
- Generated date
The MCP integration with Notion handles both reading and writing. When the engine runs, it writes all outputs to the database. When you approve, a separate automation pushes approved items to the scheduling tool.
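Each generated output maps to one staging row. A sketch of that record, using the column names above — how the row actually reaches Notion is the MCP integration's job and is outside this snippet:

```python
from datetime import date

def staging_row(source_title: str, platform: str, content: str) -> dict:
    """Build the record written to the Notion staging database for one output."""
    allowed_platforms = {"LinkedIn", "X", "Newsletter"}
    if platform not in allowed_platforms:
        raise ValueError(f"unknown platform: {platform}")
    return {
        "Source content title": source_title,
        "Platform": platform,
        "Generated content": content,
        "Status": "Draft",                    # every row starts unapproved
        "Generated date": date.today().isoformat(),
    }
```

Starting every row at "Draft" enforces the human-review gate: nothing reaches the scheduler until a person flips the status to "Approved".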
Blotato or Buffer MCP
For actual scheduling, tools like Blotato connect directly to LinkedIn and X APIs. Using Blotato with Claude Code to schedule and publish social media posts walks through this integration in detail. The key thing to get right: Blotato expects specific payload structures, so your X thread writer and LinkedIn generator need to format output that matches those expected inputs exactly.
Email Platform MCP
ConvertKit, Beehiiv, and Mailchimp all have APIs. For newsletter content, the MCP either creates a draft campaign or appends a section to an existing draft. Check whether your platform’s API supports draft creation — not all of them do at every plan level.
Chaining the Skills Into a Single Workflow
With the four skills built and the MCP connections configured, you need to chain them. Chaining Claude Code skills into end-to-end workflows follows a specific pattern: each skill receives the output of the previous step as its input.
The full chain looks like this:
Input (transcript / URL / document)
→ Skill 1: Extraction → structured JSON
→ Skill 2: LinkedIn → post variants
→ Skill 3: X Thread → thread array
→ Skill 4: Newsletter → formatted section
→ MCP: Notion → write all outputs to staging DB
→ MCP: Blotato → schedule approved social posts
→ MCP: Email Platform → create newsletter draft
Each arrow is a handoff. The structured JSON from Skill 1 flows into Skills 2, 3, and 4 in parallel — those three don’t need to run sequentially. They all read from the same extraction output.
This is the key architectural decision: run the platform-specific skills in parallel, not in sequence. It cuts total processing time significantly when you’re handling longer source content.
For the orchestration layer, you can implement this as a sequential-to-parallel workflow pattern where the extraction runs first, then the three formatters run concurrently, then the MCP writes happen.
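The sequential-to-parallel pattern can be sketched with a thread pool — the callables here stand in for the skills, and how each one actually invokes Claude Code is outside this snippet:

```python
from concurrent.futures import ThreadPoolExecutor

def run_engine(source_text: str, extract, formatters: dict) -> dict:
    """Run extraction first, then every platform formatter concurrently.

    `extract` is Skill 1; `formatters` maps platform names to the
    formatter skills (Skills 2-4), all reading the same extraction output.
    """
    ideas = extract(source_text)              # Skill 1 must finish first
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, ideas) for name, fn in formatters.items()}
        return {name: future.result() for name, future in futures.items()}
```

Adding a platform later means adding one entry to `formatters` — the orchestration code doesn't change.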
Handling Brand Voice Consistently
One of the most common problems with repurposing at scale is drift — the outputs start sounding generic because nothing is anchoring the agent to your specific voice and style.
The solution is a shared brand context file: a document that every skill reads before generating output. It contains your writing style, vocabulary preferences, things you never say, your target audience description, and 5–10 examples of published content that represent your voice at its best.
This is the business brain pattern for Claude Code — a shared reference that all skills load from the same source, so updates propagate automatically across every skill in the system.
Your brand context file should be version-controlled and updated when you notice the outputs drifting. Think of it as the document that trains the system to sound like you, not like a generic content writer.
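A minimal loader for that shared file — the filename is an assumption; use whatever path your skills actually reference:

```python
from pathlib import Path

def load_brand_context(path: str = "brand_context.md") -> str:
    """Read the shared brand file so every skill prepends the same voice anchor.

    Failing loudly on an empty or missing file is intentional: generating
    without the voice anchor is exactly the drift this file exists to prevent.
    """
    context = Path(path).read_text(encoding="utf-8")
    if not context.strip():
        raise ValueError("brand context file is empty — outputs will drift")
    return context
```

Because every skill calls the same loader against the same file, an edit to the brand document propagates on the next run with no per-skill changes.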
Processing YouTube Transcripts and Other Video Sources
The engine described so far assumes text input — a pasted transcript or a blog post URL. But a lot of high-value source content comes from video.
YouTube automatically generates transcripts for most videos. You can pull them via the YouTube Data API's captions endpoint (note that downloading captions requires OAuth authorization for the video's owner) or via a transcript-fetching tool, then pass them directly into Skill 1. The extraction skill handles the messy, unformatted nature of auto-generated transcripts reasonably well, but it helps to add a pre-processing step that strips filler words and timestamps before extraction runs.
If you work with video content regularly, repurposing YouTube videos into multi-platform social posts with Claude Code covers the YouTube-specific workflow in more detail.
Podcast transcripts from tools like Descript, Otter.ai, or Riverside come pre-formatted with speaker labels and timestamps. Your Skill 1 prompt should handle these formats — either strip speaker labels before processing, or use them to identify key quotes attributed to specific speakers.
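That pre-processing step can be a few regular expressions — a sketch; the patterns below are illustrative and should be tuned to whatever your transcription tool actually emits:

```python
import re

# Illustrative patterns — adjust to your transcription tool's output format
TIMESTAMP = re.compile(r"\[?\b\d{1,2}:\d{2}(?::\d{2})?\]?")
SPEAKER_LABEL = re.compile(r"^[A-Z][\w .]*:\s*", re.MULTILINE)
FILLERS = re.compile(r"\b(um+|uh+|you know),?\s*", re.IGNORECASE)

def clean_transcript(raw: str) -> str:
    """Strip timestamps, speaker labels, and filler words before extraction."""
    text = TIMESTAMP.sub("", raw)
    text = SPEAKER_LABEL.sub("", text)
    text = FILLERS.sub("", text)
    return re.sub(r"[ \t]{2,}", " ", text).strip()
```

If you want speaker-attributed quotes instead, skip the `SPEAKER_LABEL` substitution and pass the labels through so Skill 1 can attach quotes to the right person.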
What Good Output Actually Looks Like
It’s worth being concrete about what you should expect from a well-tuned engine — and where it still falls short.
What works well:
- LinkedIn posts that sound like a real person, not a press release
- X threads where each tweet actually adds something instead of padding
- Newsletter sections that hit the key points without being dry summaries
Where you still need a human:
- Emotional resonance — the engine can identify a good story but sometimes flattens it
- Platform timing and context — the engine doesn’t know about news cycles or trending topics
- The opening hook — good hooks usually need a human edit to land properly
The realistic workflow isn’t fully automated publishing. It’s automated drafting. You drop in source content, the engine produces five LinkedIn drafts, two thread options, and a newsletter section, and you spend 10 minutes picking and lightly editing instead of two hours writing from scratch. That’s still a dramatic reduction in time spent.
Where Remy Fits
If you want to take this further — building a web interface where team members can submit source content, review outputs, approve posts, and track what’s been published — that’s a full-stack app problem, not just a workflow problem.
That’s where Remy comes in. Remy compiles a spec document into a full-stack application: a real backend, a SQL database, authentication, and a frontend. You describe what the app does in a structured spec, and the code is derived from that.
For a content repurposing engine, the spec might describe:
- A submission form where users paste a transcript or URL
- A processing queue that triggers the Claude Code skill chain
- A review dashboard showing generated content by platform and status
- Approval actions that push approved content to Blotato or to your email platform as a draft
You own the code, the database, and the deployment. And because the spec is the source of truth, updating the app means updating the spec — not manually editing TypeScript across five files.
You can try Remy at mindstudio.ai/remy if you want to wrap your repurposing workflow inside a real, deployed application your whole team can use.
FAQ
What source formats can a Claude Code content repurposing engine handle?
The engine can handle any text-based source: raw transcripts, pasted blog posts, markdown documents, and plain text files. With an MCP integration to a web scraping tool, it can also ingest content from URLs. For video content, you feed in the transcript — either from YouTube’s auto-generated captions, a transcription service like Otter.ai, or a tool like Descript. The extraction skill is format-agnostic as long as it receives text.
How do I make sure the outputs match my brand voice?
The most reliable approach is a shared brand context file that every formatting skill reads before generating output. This file contains your writing style, vocabulary preferences, audience description, and concrete examples of content that represents your voice. The business brain pattern for Claude Code explains how to set this up so all skills share the same context without duplication.
Can the engine publish directly without human review?
It can, but most teams don’t recommend it for social media. The better setup is to route all outputs to a staging area (typically a Notion database) where a human approves before publishing. This catches the 10–15% of outputs that need editing before they’re ready. For newsletters specifically, direct publishing is riskier because mistakes go to your entire list — a draft-and-review step is worth keeping.
How many platforms can one repurposing engine handle?
There’s no hard limit. The architecture supports adding a new skill for each new platform: Pinterest descriptions, YouTube video descriptions, Instagram captions, Substack notes. Each skill is independent, so adding one doesn’t affect the others. Most teams start with LinkedIn, X, and email, then add platforms once the core workflow is working reliably.
Do I need coding experience to build this?
You need to be comfortable writing structured text files (skill.md files and JSON schemas) and following step-by-step configuration for MCP servers. Claude Code handles the actual skill execution. Installing and customizing marketplace skills is a good starting point if you want to use pre-built skill templates rather than writing from scratch.
What’s the difference between this and just prompting ChatGPT to reformat content?
Consistency and repeatability. A one-shot prompt gives you a one-off result with no quality guarantee next time. A skill-based engine applies the same extraction logic, the same formatting rules, and the same brand voice every time it runs. The skill files act as standard operating procedures for your AI agent — documented, versioned, and improvable over time. You also get the MCP integrations that push outputs directly to your tools, which a chat interface doesn’t give you.
Key Takeaways
- A content repurposing engine is a chain of four skills: extraction, LinkedIn formatting, X thread writing, and newsletter building — each with a specific input schema and output structure.
- MCP integrations connect the skill chain to Notion (staging), Blotato or Buffer (social scheduling), and your email platform (newsletter drafts).
- Run platform-specific skills in parallel after extraction to reduce total processing time.
- A shared brand context file keeps all outputs sounding like you, not like generic AI content.
- The realistic output of this system is automated drafting, not fully automated publishing — expect to spend 10 minutes reviewing instead of two hours writing.
- To wrap this workflow in a proper team-facing application, Remy compiles a spec into a full-stack app with a real backend, database, and frontend your whole team can use.