
How to Build an AI News Digest Agent with Claude Code and Trigger.dev

Build a scheduled agent that monitors a YouTube channel every 8 hours, detects new videos, extracts key highlights, and delivers them automatically.

MindStudio Team

Why Manual News Monitoring Doesn’t Scale

Keeping up with AI news through YouTube is a losing battle. You subscribe to a channel, miss a few uploads, and suddenly there’s a backlog of hour-long videos you’ll never get through. Most of the content isn’t even worth your time once you sit down with it.

An AI news digest agent fixes this without friction. Every 8 hours, the agent checks your target YouTube channel for new uploads, feeds the content to Claude for summarization, and drops a clean digest in your inbox. You get the signal without the scroll.

This guide walks through building that agent end to end. You’ll use Trigger.dev for scheduling and task orchestration, the YouTube Data API v3 to detect new videos, Claude for highlight extraction, and Resend for email delivery. The result is a fully automated Claude-powered workflow that runs in the background and only contacts you when there’s something worth reading.

What the Agent Does

Before writing any code, it helps to understand the full execution flow.

Every 8 hours, the agent:

  1. Queries the YouTube Data API for videos published on a target channel within the last 8 hours
  2. Skips silently if nothing new was uploaded
  3. For each new video, fetches the title, description, and available transcript
  4. Sends that content to Claude with a structured prompt to extract key highlights
  5. Formats all summaries into a single HTML digest
  6. Delivers the digest to a specified email address

The architecture is deliberately simple: one Trigger.dev scheduled task, three external API calls (YouTube, Anthropic, Resend), and no database. Using the 8-hour publish window as the filter means you don’t need to track which videos you’ve already processed.

This same pattern applies to any content automation workflow—RSS feeds, newsletters, podcast transcripts. The YouTube version is just the most concrete example to build first.

What You’ll Need Before Starting

This is a TypeScript project. You don’t need to be a TypeScript expert, but you should be comfortable reading and editing it.

Accounts and API keys:

  • Trigger.dev account (free tier works for this)
  • Anthropic API key (from console.anthropic.com)
  • YouTube Data API v3 key (from Google Cloud Console)
  • Resend account for email delivery (free tier: 100 emails/day)

Local requirements:

  • Node.js 18 or higher
  • npm or pnpm
  • Trigger.dev CLI

One thing to sort out upfront: You need a channel’s actual ID, not its handle. Channel IDs start with UC. To find one, go to the channel page, open browser DevTools, and search the page source for "channelId". Several free tools online will also look it up by channel URL if you’d rather skip the manual step.
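If you'd rather script the lookup, the API itself can resolve a handle: the channels.list endpoint accepts a forHandle parameter and costs only 1 quota unit per call. Here's a minimal sketch using Node's built-in fetch (the helper names are our own, not part of the code you'll write later):

```typescript
// One-off lookup: resolve a handle like "@SomeChannel" to its UC… channel ID
// via the channels.list endpoint's forHandle parameter.

// Accept the handle with or without the leading "@"
function normalizeHandle(handle: string): string {
  return handle.startsWith("@") ? handle : `@${handle}`;
}

async function resolveChannelId(
  handle: string,
  apiKey: string
): Promise<string | null> {
  const url = new URL("https://www.googleapis.com/youtube/v3/channels");
  url.searchParams.set("part", "id");
  url.searchParams.set("forHandle", normalizeHandle(handle));
  url.searchParams.set("key", apiKey);

  const res = await fetch(url);
  const data = await res.json();
  return data.items?.[0]?.id ?? null; // null if the handle doesn't resolve
}
```

Run it once with your API key, copy the UC… ID it returns into your environment variables, and delete the script.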

Set Up Your Trigger.dev Project

Scaffold a new project with the Trigger.dev CLI (it runs via npx, so no global install is needed):

npx trigger.dev@latest init

Follow the prompts and choose the blank template. This creates a trigger/ folder where your tasks live.

Install the packages you’ll need:

npm install @anthropic-ai/sdk googleapis resend youtube-transcript
npm install -D @types/node

Your project structure will look like this:

/my-digest-agent
  /trigger
    youtube-digest.ts
  package.json
  trigger.config.ts

Configure Environment Variables

In your Trigger.dev dashboard, add these environment variables before deploying:

ANTHROPIC_API_KEY=sk-ant-...
YOUTUBE_API_KEY=AIza...
RESEND_API_KEY=re_...
DIGEST_RECIPIENT=you@yourdomain.com
YOUTUBE_CHANNEL_ID=UCxxxxxxxxxxxxxxxxxxxxxx

These are available inside your task at runtime via process.env.
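A missing variable only surfaces as a confusing runtime failure deep inside an API call, so it's worth failing fast with a small guard at the top of the task. This helper is our own addition, not part of the Trigger.dev SDK:

```typescript
// Fail fast with a descriptive error instead of letting an undefined
// variable surface later as an opaque API authentication failure.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at the top of the task:
// const apiKey = requireEnv("YOUTUBE_API_KEY");
```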

Fetch New Videos from YouTube

The YouTube Data API’s search.list endpoint accepts a publishedAfter parameter. Set it to 8 hours ago and you get only videos published since the last run—no state tracking needed.

Create trigger/youtube-digest.ts with the fetch logic:

import { google } from "googleapis";

const youtube = google.youtube({
  version: "v3",
  auth: process.env.YOUTUBE_API_KEY,
});

interface VideoItem {
  id: string;
  title: string;
  description: string;
  publishedAt: string;
  url: string;
}

async function getRecentVideos(channelId: string): Promise<VideoItem[]> {
  const eightHoursAgo = new Date(
    Date.now() - 8 * 60 * 60 * 1000
  ).toISOString();

  const response = await youtube.search.list({
    part: ["snippet"],
    channelId,
    type: ["video"],
    order: "date",
    publishedAfter: eightHoursAgo,
    maxResults: 10,
  });

  const items = response.data.items ?? [];

  return items.map((item) => ({
    id: item.id?.videoId ?? "",
    title: item.snippet?.title ?? "Untitled",
    description: item.snippet?.description ?? "",
    publishedAt: item.snippet?.publishedAt ?? "",
    url: `https://youtube.com/watch?v=${item.id?.videoId}`,
  }));
}

A search.list call costs 100 API units. The free YouTube quota is 10,000 units per day. Running every 8 hours against a single channel uses 300 units/day—well within limits.

Fetching Transcripts

Video descriptions are often thin. For richer highlights, pull the actual transcript when it’s available:

import { YoutubeTranscript } from "youtube-transcript";

async function getTranscript(videoId: string): Promise<string> {
  try {
    const entries = await YoutubeTranscript.fetchTranscript(videoId);
    return entries.map((e) => e.text).join(" ");
  } catch {
    // No captions available — caller handles fallback
    return "";
  }
}

Not every video has captions. Auto-generated ones sometimes take time to appear after upload. Always handle the empty case gracefully.

Extract Highlights with Claude

This is where the Claude API does the work. Given a video’s title, description, and optional transcript, Claude identifies the key points and returns structured output.

import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

interface VideoSummary {
  title: string;
  url: string;
  oneLiner: string;
  highlights: string[];
}

async function extractHighlights(
  video: VideoItem,
  transcript: string
): Promise<VideoSummary> {
  const content = transcript
    ? `Transcript:\n${transcript.slice(0, 6000)}`
    : `Description:\n${video.description}`;

  const message = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 512,
    messages: [
      {
        role: "user",
        content: `You are summarizing a YouTube video for an AI news digest. Be concise and factual.

Video title: ${video.title}

${content}

Return your response as JSON with this structure:
{
  "oneLiner": "One sentence describing what this video is about",
  "highlights": ["Key point 1", "Key point 2", "Key point 3"]
}

Focus on concrete facts, announcements, and insights. Skip filler content.`,
      },
    ],
  });

  const raw =
    message.content[0].type === "text" ? message.content[0].text : "";

  try {
    const parsed = JSON.parse(raw);
    return {
      title: video.title,
      url: video.url,
      oneLiner: parsed.oneLiner ?? video.title,
      highlights: parsed.highlights ?? [],
    };
  } catch {
    return {
      title: video.title,
      url: video.url,
      oneLiner: video.title,
      highlights: [video.description.slice(0, 200)],
    };
  }
}

Getting Better Results from Claude

A few prompt adjustments that make a real difference:

  • Be specific about your domain. If you’re tracking AI research content, say so: “You are summarizing AI research and product announcements.” This focuses Claude on what matters for that niche rather than treating every video the same way.
  • Cap transcript length. Auto-generated transcripts can exceed 10,000 tokens. Truncating to 6,000 characters keeps API costs low without sacrificing much.
  • Ask for JSON explicitly. Structured output is easier to format downstream and reduces parse failures. Newer Claude models also support native structured output mode if you want to enforce a schema without any prompt engineering.
  • Use Sonnet, not Opus, for this task. Summarization and highlight extraction don’t require Opus-level reasoning. Claude 3.5 Sonnet gives essentially the same output at a fraction of the cost.
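Short of switching to structured output mode, a defensive parser helps when Claude prefixes the JSON with a sentence of commentary. This sketch, a common workaround rather than an official API feature, simply slices out the outermost {…} block before parsing:

```typescript
// Defensive JSON extraction: grab the substring between the first "{" and
// the last "}" and parse that, ignoring any surrounding commentary.
function extractJson<T>(raw: string): T | null {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1 || end <= start) return null;
  try {
    return JSON.parse(raw.slice(start, end + 1)) as T;
  } catch {
    return null; // malformed JSON — caller falls back to the description
  }
}
```

Swapping this in for the bare JSON.parse in extractHighlights means a polite preamble like "Here is the JSON:" no longer triggers the fallback path.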

Deliver the Digest Automatically

Once you have summaries, format them into HTML and send via Resend:

import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY);

function formatDigest(summaries: VideoSummary[], since: Date): string {
  const header = `<h2>AI News Digest — ${summaries.length} new video${
    summaries.length !== 1 ? "s" : ""
  }</h2>`;

  const meta = `<p style="color: #666; font-size: 14px;">
    Published since ${since.toUTCString()}
  </p>`;

  const items = summaries
    .map(
      (s) => `
    <div style="margin-bottom: 24px; border-left: 3px solid #0070f3; padding-left: 16px;">
      <h3 style="margin: 0 0 6px 0;">
        <a href="${s.url}">${s.title}</a>
      </h3>
      <p style="margin: 0 0 8px 0; color: #444;">${s.oneLiner}</p>
      <ul>
        ${s.highlights.map((h) => `<li>${h}</li>`).join("")}
      </ul>
    </div>
  `
    )
    .join("");

  return `<div style="font-family: sans-serif; max-width: 640px;">
    ${header}${meta}${items}
  </div>`;
}

async function sendDigest(
  summaries: VideoSummary[],
  since: Date
): Promise<void> {
  const html = formatDigest(summaries, since);

  await resend.emails.send({
    from: "digest@yourdomain.com",
    to: process.env.DIGEST_RECIPIENT!,
    subject: `AI News Digest — ${new Date().toLocaleDateString()}`,
    html,
  });
}

If you prefer Slack, swap resend.emails.send for a fetch POST to your Slack incoming webhook URL. The formatting changes; the structure doesn’t.
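As a sketch of that swap, here's a Slack version using Block Kit sections in place of the HTML layout. The SLACK_WEBHOOK_URL variable and the block structure are our assumptions; the VideoSummary shape matches the interface defined earlier:

```typescript
// Same shape as the VideoSummary interface defined earlier
interface VideoSummary {
  title: string;
  url: string;
  oneLiner: string;
  highlights: string[];
}

// Build a Block Kit payload: one section per video, separated by dividers
function buildSlackPayload(summaries: VideoSummary[]) {
  return {
    blocks: summaries.flatMap((s) => [
      {
        type: "section",
        text: {
          type: "mrkdwn",
          text: `*<${s.url}|${s.title}>*\n${s.oneLiner}\n${s.highlights
            .map((h) => `• ${h}`)
            .join("\n")}`,
        },
      },
      { type: "divider" },
    ]),
  };
}

async function sendSlackDigest(summaries: VideoSummary[]): Promise<void> {
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSlackPayload(summaries)),
  });
}
```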

Wire Everything Together with Trigger.dev

Bring all the pieces into a single scheduled task:

import { schedules } from "@trigger.dev/sdk/v3";

export const youtubeDigestTask = schedules.task({
  id: "youtube-news-digest",
  cron: "0 */8 * * *", // runs at midnight, 8am, and 4pm UTC
  maxDuration: 300, // 5-minute timeout
  run: async () => {
    const channelId = process.env.YOUTUBE_CHANNEL_ID!;
    const windowStart = new Date(Date.now() - 8 * 60 * 60 * 1000);

    // Step 1: Check for new videos
    const videos = await getRecentVideos(channelId);

    if (videos.length === 0) {
      console.log("No new videos found. Exiting.");
      return { processed: 0 };
    }

    console.log(`Found ${videos.length} new video(s). Processing...`);

    // Step 2: Extract highlights for each
    const summaries: VideoSummary[] = [];

    for (const video of videos) {
      const transcript = await getTranscript(video.id);
      const summary = await extractHighlights(video, transcript);
      summaries.push(summary);
    }

    // Step 3: Send the digest
    await sendDigest(summaries, windowStart);

    console.log(`Digest sent with ${summaries.length} video(s).`);
    return { processed: summaries.length };
  },
});

Deploy and Test

With your environment variables configured in the Trigger.dev dashboard:

npx trigger.dev@latest deploy

This packages your code and registers the schedule. To test without waiting 8 hours, use the Run now button in the dashboard or start the local dev server:

npx trigger.dev@latest dev

Local dev runs the task immediately with live log output, so you can debug the YouTube fetch, Claude response, and email delivery before committing to a deploy.

Monitoring Multiple Channels

To watch several channels, define an array and loop:

const CHANNELS = [
  { id: "UCxxxxxx", name: "AI Explained" },
  { id: "UCyyyyyy", name: "Two Minute Papers" },
  { id: "UCzzzzzz", name: "Fireship" },
];

// Inside the task's run function:
const allSummaries: VideoSummary[] = [];

for (const channel of CHANNELS) {
  const videos = await getRecentVideos(channel.id);
  for (const video of videos) {
    const transcript = await getTranscript(video.id);
    const summary = await extractHighlights(video, transcript);
    allSummaries.push(summary);
  }
}

if (allSummaries.length > 0) {
  await sendDigest(allSummaries, windowStart);
}

Combine all channel summaries before sending so the recipient gets one digest per window, not one per channel.

Common Mistakes to Avoid

Exceeding YouTube API quotas. The free tier gives you 10,000 units per day. Each search.list call costs 100 units. Three channels × three runs per day × 100 units = 900 units. You have a large buffer, but monitoring 10+ channels at high frequency will hit the cap fast. Cache results or request a quota increase before scaling up.

Assuming transcripts are always available. Auto-generated captions can take several minutes to appear after a video goes live. If your task runs within that window, getTranscript will fail. The try/catch handles this gracefully, but be aware that early digests may rely on descriptions rather than full transcripts.

JSON parsing failures from Claude. Even with explicit JSON instructions, Claude occasionally returns explanatory text before the JSON block. The try/catch fallback in extractHighlights handles parse failures, but for a more robust solution use Anthropic’s structured outputs feature to enforce the response schema at the API level.

Task timeouts. Processing 10 videos sequentially—each with a transcript fetch and a Claude API call—can take two or three minutes. Set maxDuration to at least 300 seconds and consider processing videos in parallel with Promise.all if the list grows large.
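One way to parallelize safely is a small concurrency-limited mapper, so a long video list runs a few Claude calls at a time instead of all at once. The helper below is our own sketch, not a Trigger.dev utility:

```typescript
// Run fn over items with at most `limit` in flight at once, preserving order.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // shared cursor — safe because JS is single-threaded

  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

Inside the task, the sequential loop would become something like `await mapWithLimit(videos, 3, async (v) => extractHighlights(v, await getTranscript(v.id)))`, cutting wall-clock time without hammering the APIs.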

Timezone confusion in cron syntax. Trigger.dev interprets cron expressions in UTC. The schedule 0 */8 * * * runs at midnight, 8am, and 4pm UTC. If you want digests to align with your local timezone, adjust the cron accordingly.

How MindStudio Fits

If you want this same scheduled AI digest capability without writing TypeScript, or if you want to extend a Claude Code agent with additional infrastructure without building it from scratch, MindStudio offers two useful options.

The first is MindStudio’s Agent Skills Plugin (@mindstudio-ai/agent), an npm SDK that lets any AI agent—including those built on Claude—call 120+ typed capabilities as simple method calls. Instead of setting up Resend, handling retry logic, and writing email formatting code manually, the agent calls:

import { MindStudioAgent } from "@mindstudio-ai/agent";

const agent = new MindStudioAgent();

await agent.sendEmail({
  to: "you@yourdomain.com",
  subject: "AI News Digest",
  body: digestText,
});

The plugin handles rate limiting, retries, and authentication. Your agent code stays focused on the Claude calls and content processing rather than infrastructure. Methods like agent.searchGoogle(), agent.runWorkflow(), and agent.generateImage() follow the same pattern.

The second option is building the entire workflow in MindStudio’s visual builder—no code required. MindStudio supports autonomous background agents that run on a schedule, with built-in integrations for YouTube, email delivery, and Slack. You get the same YouTube-monitoring, Claude-summarizing, email-delivering automation, deployed without managing a TypeScript project or Trigger.dev account.

You can try MindStudio free at mindstudio.ai.

Extending the Agent

Once the basic digest is running reliably, a few additions make it significantly more useful.

Add a relevance filter. Include a short system prompt describing your role and interests—“I work in enterprise AI infrastructure”—and ask Claude to rate each video’s relevance on a scale of 1 to 5. Filter out anything below a 3 before building the digest. Over time, you get shorter digests with higher signal.
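Here's a sketch of what the filtering half looks like, assuming the prompt is extended to return a numeric relevance field. The ScoredSummary shape, the instruction wording, and the threshold of 3 are all illustrative:

```typescript
// VideoSummary extended with the relevance score the prompt now asks for
interface ScoredSummary {
  title: string;
  url: string;
  oneLiner: string;
  highlights: string[];
  relevance: number; // 1 (skip it) to 5 (must watch)
}

// Appended to the prompt alongside the oneLiner/highlights instructions
const RELEVANCE_INSTRUCTION =
  'Also include a "relevance" field: an integer from 1 to 5 rating how ' +
  "relevant this video is to someone working in enterprise AI infrastructure.";

// Keep only videos scoring at or above the threshold before building the digest
function filterRelevant(
  summaries: ScoredSummary[],
  threshold = 3
): ScoredSummary[] {
  return summaries.filter((s) => s.relevance >= threshold);
}
```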

Track trends across digests. Store each digest’s key themes in a lightweight KV store like Upstash Redis. Once you have a few weeks of history, ask Claude to compare the current digest’s topics against recent ones. Are more channels covering the same subject? Is the overall tone shifting? This turns a simple summary tool into a light trend detector.

Expand beyond YouTube. The same architecture—schedule, fetch, summarize with Claude, deliver—applies to RSS feeds, newsletters, and podcast transcripts. Wrap each source in its own fetch function and route everything through the same extractHighlights and sendDigest functions. One digest, many sources.

Post to Slack with threading. Instead of a single HTML email, post each video summary as a Slack message with highlights in a thread. Teams can react and discuss specific items without digging through a long email. Slack’s incoming webhooks are easy to integrate and cost nothing to use.

Frequently Asked Questions

Do I need a paid YouTube API quota for this to work?

No. The YouTube Data API free tier provides 10,000 units per day. This agent uses roughly 300 units per day monitoring a single channel at 8-hour intervals—well within the free quota. If you expand to many channels or much higher frequency, request a quota increase in Google Cloud Console before hitting the limit.

What happens if Claude returns invalid JSON?

The extractHighlights function wraps the JSON parse in a try/catch block. If parsing fails, the function falls back to using the video description as the highlight text and the video title as the one-liner. The digest still sends—just with less structured content for that video. For a more robust solution, Anthropic’s structured outputs feature (available on recent Claude models) enforces a valid JSON schema at the API level without requiring any special prompt handling.

Can this agent monitor multiple YouTube channels at once?

Yes. Define an array of channel IDs and loop over them within the same scheduled task. Collect all summaries before sending so your recipient gets one digest per 8-hour window rather than one per channel. If the channel list grows large, use Promise.all to process them in parallel and stay within Trigger.dev’s execution time limits.

How much does this cost to run monthly?

The rough breakdown:

  • Trigger.dev: free tier covers scheduled tasks; paid plans start at $5/month for higher volume
  • YouTube Data API: free within the 10,000 unit/day quota
  • Anthropic API: Claude 3.5 Sonnet costs ~$3 per million input tokens; summarizing 5 videos a day with 3,000-token transcripts is roughly 450,000 input tokens a month, or about $1.35, plus a few cents for output tokens
  • Resend: free tier covers 100 emails/day; paid plans start at $20/month

For a personal digest watching a handful of channels, the total monthly cost comes to a couple of dollars at most.

What if a video doesn’t have a transcript available?

The getTranscript function catches the error and returns an empty string. The extractHighlights function detects this and passes the video description to Claude instead. For channels with detailed descriptions, the summaries are still useful. For channels that rely entirely on spoken content with minimal descriptions, the highlights will be thinner—but the agent still delivers something.

Can I use a different AI model instead of Claude?

Yes. Any model with a chat completion API works here. Replace the Anthropic client with the OpenAI SDK, adjust the API call syntax, and the prompt stays largely the same. Claude performs particularly well for this task because it follows structured JSON instructions reliably and handles long transcript inputs cleanly, but the architecture isn’t tied to a specific model.

Key Takeaways

  • A Trigger.dev scheduled task with an 8-hour cron handles polling and orchestration without any custom infrastructure—deploy once and it runs indefinitely.
  • Using publishedAfter set to 8 hours ago eliminates the need for a persistent store of seen video IDs. The time window is the state.
  • Claude’s summarization accuracy improves with domain-specific prompts and explicit JSON schemas. Sonnet is the right model for this task; Opus isn’t necessary.
  • Transcript fallback to video description ensures the agent delivers something useful even when captions aren’t available.
  • The same schedule-fetch-summarize-deliver pattern extends cleanly to RSS feeds, newsletters, and other content sources.

If you’d rather build this kind of scheduled AI workflow without managing a TypeScript codebase, MindStudio’s visual builder handles the same automation with no setup required—or extend your existing Claude-based agents using the Agent Skills Plugin for email, search, and 120+ other capabilities out of the box.
