
How to Build a Voice Agent with Claude Code and ElevenLabs in 15 Minutes

Build a fully functional voice agent using Claude Code and ElevenLabs that books calendar appointments and answers questions from your website.

MindStudio Team

What You’re Actually Building Here

Voice agents used to require months of engineering work, a team of specialists, and a painful amount of infrastructure glue. That’s no longer true.

In this guide, you’ll build a fully functional voice agent that can answer questions about your website and book calendar appointments — using Claude Code and ElevenLabs — in roughly 15 minutes of actual setup time. No audio engineering background required. No proprietary telephony stack.

By the end, you’ll have a working agent that:

  • Listens and responds in natural spoken language via ElevenLabs text-to-speech and speech-to-text
  • Uses Claude as its reasoning engine to understand intent and generate responses
  • Checks a calendar and books appointments on command
  • Pulls in context from your website to answer visitor questions accurately

Let’s get into it.


Prerequisites and What You’ll Need

Before you start, make sure you have the following ready.

Accounts:

  • An ElevenLabs account — the free tier works for testing, but a paid plan gives you better voice quality and higher usage limits
  • An Anthropic API key for Claude — you’ll access this through the Anthropic Console
  • A Google account (for Google Calendar integration, if that’s your calendar of choice)

Tools:

  • Node.js 18+ installed locally
  • Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
  • Basic comfort with running terminal commands — you don’t need to write code, but you’ll execute a few commands

Time estimate: 15–25 minutes for a working prototype. Add another 30–60 minutes if you want to customize voice settings, extend the knowledge base, or harden the calendar logic.


Step 1 — Set Up Your ElevenLabs Voice

ElevenLabs handles both the voice output (text-to-speech) and voice input (speech-to-text) for your agent. Setting this up takes about three minutes.

Create a Conversational AI Agent in ElevenLabs

  1. Log into your ElevenLabs dashboard and navigate to Conversational AI in the left sidebar.
  2. Click Create Agent.
  3. Choose a base voice. ElevenLabs has dozens of presets — pick one that fits your use case. A professional, neutral voice works well for appointment booking. You can always change this later.
  4. Under the System Prompt field, paste a brief description of your agent’s role. Something like:

“You are a helpful assistant for [Your Business Name]. You answer questions about our services and help visitors book appointments. Be concise, friendly, and professional.”

  5. Note your Agent ID — you’ll need this shortly.
  6. Grab your ElevenLabs API key from Settings → API Keys.

Configure Voice and Turn-Taking

In the agent settings, you’ll see options for:

  • Interruption sensitivity — how easily the agent pauses when a user speaks mid-sentence
  • Response latency — ElevenLabs offers several latency tiers; “balanced” is a good starting point
  • Voice stability and similarity boost — leave these at defaults unless you want to tune the character of the voice

That’s it for ElevenLabs initial setup. The platform handles WebSocket connections, audio streaming, and voice activity detection out of the box.


Step 2 — Wire Up Claude Code as the Brain

Claude Code is Anthropic’s AI coding agent that runs in your terminal and can write, edit, and execute code. But for this use case, you’re going to use Claude as the reasoning layer that interprets user intent and decides what the agent should do.

Initialize Your Project

Open your terminal and create a new project directory:

mkdir voice-agent && cd voice-agent
npm init -y
npm install @anthropic-ai/sdk dotenv node-fetch

Create a .env file:

ANTHROPIC_API_KEY=your_key_here
ELEVENLABS_API_KEY=your_key_here
ELEVENLABS_AGENT_ID=your_agent_id_here
GOOGLE_CALENDAR_ID=your_calendar_id_here

Connect Claude to Your ElevenLabs Agent

ElevenLabs Conversational AI supports custom Large Language Model (LLM) endpoints. This means you can point ElevenLabs at Claude instead of using their default model. Here’s how:

  1. In your ElevenLabs agent settings, go to LLM Settings.
  2. Select Custom LLM.
  3. Point it to a lightweight proxy server you’ll run locally (or deploy) that forwards requests to the Anthropic API.

Create a file called claude-proxy.js:

const Anthropic = require("@anthropic-ai/sdk");
const http = require("http");
require("dotenv").config();

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const server = http.createServer(async (req, res) => {
  if (req.method !== "POST") return res.end();

  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    try {
      const payload = JSON.parse(body);

      const response = await client.messages.create({
        model: "claude-opus-4-5",
        max_tokens: 1024,
        system: payload.system || "",
        messages: payload.messages,
      });

      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(
        JSON.stringify({
          choices: [{ message: { content: response.content[0].text } }],
        })
      );
    } catch (err) {
      // Surface parse or API failures instead of leaving the connection hanging
      res.writeHead(500, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: err.message }));
    }
  });
});

server.listen(3000, () => console.log("Claude proxy running on port 3000"));

Run it with node claude-proxy.js. For production, you’d deploy this to a server and point ElevenLabs at the public URL.



Step 3 — Add Calendar Booking Capability

This is where the agent stops being a chatbot and starts being genuinely useful. You’ll give it the ability to check your calendar for open slots and create bookings.

Set Up Google Calendar Access

  1. Go to the Google Cloud Console and create a new project.
  2. Enable the Google Calendar API.
  3. Create a Service Account and download the JSON credentials file.
  4. Share your Google Calendar with the service account’s email address (give it “Make changes to events” permission).
  5. Save the credentials file as google-credentials.json in your project root.

Install the Google client library:

npm install googleapis

Create the Calendar Tool

Create calendar.js:

const { google } = require("googleapis");

const auth = new google.auth.GoogleAuth({
  keyFile: "google-credentials.json",
  scopes: ["https://www.googleapis.com/auth/calendar"],
});

const calendar = google.calendar({ version: "v3", auth });

async function getAvailableSlots(date) {
  const start = new Date(date);
  start.setHours(9, 0, 0, 0);
  const end = new Date(date);
  end.setHours(17, 0, 0, 0);

  const res = await calendar.freebusy.query({
    requestBody: {
      timeMin: start.toISOString(),
      timeMax: end.toISOString(),
      items: [{ id: process.env.GOOGLE_CALENDAR_ID }],
    },
  });

  const busy = res.data.calendars[process.env.GOOGLE_CALENDAR_ID].busy;
  // Return available 30-minute slots
  return generateSlots(start, end, busy);
}

// Build 30-minute slots between start and end that don't overlap busy ranges
function generateSlots(start, end, busy) {
  const slots = [];
  for (let t = new Date(start); t < end; t = new Date(t.getTime() + 30 * 60000)) {
    const slotEnd = new Date(t.getTime() + 30 * 60000);
    const overlaps = busy.some(
      (b) => t < new Date(b.end) && slotEnd > new Date(b.start)
    );
    if (!overlaps) slots.push(t.toISOString());
  }
  return slots;
}

async function bookAppointment({ name, email, date, time, duration = 30 }) {
  const startTime = new Date(`${date}T${time}`);
  const endTime = new Date(startTime.getTime() + duration * 60000);

  await calendar.events.insert({
    calendarId: process.env.GOOGLE_CALENDAR_ID,
    requestBody: {
      summary: `Appointment with ${name}`,
      description: `Contact: ${email}`,
      start: { dateTime: startTime.toISOString() },
      end: { dateTime: endTime.toISOString() },
      attendees: [{ email }],
    },
  });

  return { success: true, slot: `${date} at ${time}` };
}

module.exports = { getAvailableSlots, bookAppointment };

Register Calendar Tools with Claude

Update your proxy server to expose these as tool calls that Claude can invoke during a conversation. Claude’s function-calling API lets you define tools as JSON schemas — Claude will call them when it decides it needs calendar data to respond correctly.

Add this to your proxy:

const tools = [
  {
    name: "get_available_slots",
    description: "Check available appointment slots for a given date",
    input_schema: {
      type: "object",
      properties: {
        date: { type: "string", description: "Date in YYYY-MM-DD format" },
      },
      required: ["date"],
    },
  },
  {
    name: "book_appointment",
    description: "Book an appointment for a user",
    input_schema: {
      type: "object",
      properties: {
        name: { type: "string" },
        email: { type: "string" },
        date: { type: "string" },
        time: { type: "string", description: "Time in HH:MM format" },
      },
      required: ["name", "email", "date", "time"],
    },
  },
];

Pass tools into your client.messages.create() call. When Claude returns a tool_use content block, execute the matching function from calendar.js and return the result.
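That dispatch loop can be sketched as a small handler map. The stub implementations below stand in for the real calendar.js functions, so this is an illustration of the routing, not the production handlers:

```javascript
// Route a tool_use block to the matching function and wrap the result as a
// tool_result message. Handlers are stubbed for illustration; in the proxy
// they would call getAvailableSlots / bookAppointment from calendar.js.
const handlers = {
  get_available_slots: async ({ date }) => [`${date} 09:00`, `${date} 09:30`],
  book_appointment: async ({ date, time }) => ({
    success: true,
    slot: `${date} at ${time}`,
  }),
};

async function handleToolUse(block) {
  const handler = handlers[block.name];
  if (!handler) throw new Error(`Unknown tool: ${block.name}`);
  const result = await handler(block.input);
  // Claude expects the result echoed back with the originating tool_use id
  return {
    type: "tool_result",
    tool_use_id: block.id,
    content: JSON.stringify(result),
  };
}
```

Append the `tool_result` block to the conversation and call `client.messages.create()` again so Claude can compose its spoken reply from the result.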


Step 4 — Build the Website Q&A Layer

An appointment-booking agent is more useful when it can also answer questions about your business — services, pricing, hours, policies. This keeps users from bouncing when they have a quick question before booking.

Scrape and Embed Your Website Content

For a quick setup, you’ll pre-process your website content into a simple text file that gets injected into Claude’s system prompt.

npm install cheerio

Create scrape-site.js:

// Node 18+ ships a global fetch, so no HTTP client package is needed
const cheerio = require("cheerio");
const fs = require("fs");

async function scrapePages(urls) {
  let content = "";

  for (const url of urls) {
    const res = await fetch(url);
    const html = await res.text();
    const $ = cheerio.load(html);

    // Remove nav, footer, scripts
    $("nav, footer, script, style").remove();
    const text = $("body").text().replace(/\s+/g, " ").trim();
    content += `\n\n--- ${url} ---\n${text}`;
  }

  fs.writeFileSync("site-content.txt", content);
  console.log("Site content saved.");
}

scrapePages([
  "https://yoursite.com",
  "https://yoursite.com/services",
  "https://yoursite.com/pricing",
  "https://yoursite.com/faq",
]);


Run it once: node scrape-site.js. This generates a site-content.txt file you can regenerate whenever your site changes.

Inject Context into the System Prompt

Update your Claude proxy to read this file and include it in every system prompt:

const siteContent = fs.readFileSync("site-content.txt", "utf8");

const systemPrompt = `You are a helpful voice assistant for [Your Business].

Use the following website content to answer questions accurately:

${siteContent}

When a user wants to book an appointment, collect their name, email, preferred date, and preferred time. Then use the available tools to check slots and confirm the booking.

Keep responses concise — this is a voice conversation, so avoid long lists or complex formatting.`;

The “keep responses concise” instruction is important. Voice responses that would read fine as text often feel unnatural when spoken aloud. Claude is good at adapting when you explicitly prompt it for this.


Step 5 — Test Your Voice Agent

At this point you have:

  • ElevenLabs handling voice I/O
  • Claude handling reasoning and conversation
  • Google Calendar handling slot availability and booking
  • Your website content as a knowledge base

Run a Local Test

Start your proxy server:

node claude-proxy.js

If you’re testing locally and need ElevenLabs to reach your proxy, use ngrok to expose it:

ngrok http 3000

Copy the ngrok HTTPS URL and paste it into your ElevenLabs agent’s Custom LLM URL field.

Open the ElevenLabs agent test interface in your browser and start talking. Try:

  • “What services do you offer?”
  • “I’d like to book an appointment for next Tuesday.”
  • “Do you have anything open in the afternoon on Friday?”

Watch your terminal — you’ll see Claude’s reasoning as it decides when to call calendar tools versus when to answer from the website content.

Common Issues to Watch For

Latency too high: Claude Opus is slower than Haiku or Sonnet. If response time feels sluggish, switch to claude-haiku-4-5 in your proxy. You lose some reasoning quality but gain significant speed.

Tool calls not executing: Make sure your proxy handles the tool_use → function execution → tool_result loop correctly. Claude will stall if it calls a tool and doesn’t receive a result.

Agent talking too long: Add explicit instructions in your system prompt: “Limit all responses to 2–3 sentences unless the user asks for more detail.”

Calendar permissions error: Double-check that you’ve shared the calendar with the service account email, not just the project email.


Step 6 — Deploy for Real Use

A local setup is fine for testing, but for production you need to host the proxy server somewhere accessible.

Quick Deployment Options

Railway or Render — Both offer free tiers and deploy from a GitHub repo in a few clicks. Push your proxy code, set environment variables in the dashboard, and you’ll have a persistent public URL in minutes.

Vercel Edge Functions — If you want serverless, you can convert the proxy to a Vercel API route. Watch out for cold starts adding latency.

Fly.io — Good option if you want more control over region (lower latency if you pick a region close to your users).


Once deployed, update your ElevenLabs agent’s LLM URL to point at your production server. You can then embed the ElevenLabs voice widget on your website using their provided snippet, or use their SDK to build a custom UI.


How MindStudio Fits Into This Stack

If you want to extend this voice agent without writing more backend code, MindStudio’s Agent Skills Plugin is worth knowing about.

It’s an npm SDK (@mindstudio-ai/agent) that gives any AI agent — including the Claude Code-powered one you just built — access to 120+ typed capabilities as simple method calls. Things like agent.sendEmail(), agent.searchGoogle(), agent.runWorkflow(), and integrations with HubSpot, Salesforce, Google Workspace, Slack, and more.

So instead of writing custom code every time you want your voice agent to do something new — send a confirmation email after a booking, log the appointment to a CRM, notify your team in Slack — you call a single method. The SDK handles auth, rate limiting, and retries.

Here’s what that looks like in practice. After a successful booking, you might send a confirmation email and log the contact to your CRM:

const { MindStudio } = require("@mindstudio-ai/agent");
const agent = new MindStudio(process.env.MINDSTUDIO_API_KEY);

// Send confirmation email
await agent.sendEmail({
  to: userEmail,
  subject: "Your appointment is confirmed",
  body: `Hi ${userName}, your appointment is confirmed for ${slot}.`,
});

// Log to CRM
await agent.runWorkflow("hubspot-contact-create", {
  email: userEmail,
  name: userName,
  source: "voice-agent",
});

This lets your Claude Code agent stay focused on reasoning while MindStudio handles the integration layer. You can try MindStudio free at mindstudio.ai.

For teams that want to skip the code entirely, MindStudio’s visual builder can also be used to build AI agents without any of the proxy setup described above — it’s worth exploring if your use case evolves.


Frequently Asked Questions

How much does it cost to run a voice agent like this?

Costs depend on usage volume. ElevenLabs charges per character of text converted to speech — roughly $0.30 per 1,000 characters on their Starter plan. Claude API pricing varies by model; Claude Haiku is significantly cheaper than Opus for high-volume use. Google Calendar API is free within generous quotas. For a small business handling a few dozen calls per day, expect total API costs well under $50/month.

Can I use a different calendar system instead of Google Calendar?

Yes. The calendar module in this guide is modular by design. You can swap Google Calendar for Calendly (via their API), Microsoft Outlook (via Microsoft Graph API), or any other calendar tool that has an API. The tool definitions passed to Claude stay the same — you just change what happens when those tools are called.
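As a sketch of that swap, here is a drop-in in-memory backend with the same two exports as calendar.js. The slot list is an illustrative placeholder; the point is that the tool schemas and proxy code stay untouched when the backend changes:

```javascript
// Alternate backend with the same interface as calendar.js. Swapping this in
// (or a Calendly- or Outlook-backed version) requires no changes to the tool
// definitions Claude sees -- only to what runs when a tool is called.
const bookings = [];

async function getAvailableSlots(date) {
  const allSlots = ["09:00", "09:30", "10:00", "10:30"]; // illustrative hours
  const taken = bookings.filter((b) => b.date === date).map((b) => b.time);
  return allSlots.filter((t) => !taken.includes(t));
}

async function bookAppointment({ name, email, date, time }) {
  bookings.push({ name, email, date, time });
  return { success: true, slot: `${date} at ${time}` };
}

module.exports = { getAvailableSlots, bookAppointment };
```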

What’s the difference between using ElevenLabs’ built-in LLM versus connecting Claude?

ElevenLabs’ default LLM is optimized for low latency but has less sophisticated reasoning than Claude. For simple FAQ bots, the default works fine. For agents that need to interpret nuanced requests, handle multi-step booking flows, or reason about ambiguous calendar availability, Claude is meaningfully better. The tradeoff is slightly higher latency, which you can mitigate by using Claude Haiku instead of Opus.

How do I keep the website content up to date?


The simplest approach is a cron job that re-runs your scrape script nightly and restarts the proxy server with fresh content. A more robust approach is to use a vector database (like Pinecone or Weaviate) and retrieval-augmented generation (RAG) — this lets you update individual pages without re-embedding everything. For most small sites, the cron job approach is fine.
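That nightly refresh is a one-line cron entry. The project path and the use of pm2 as a process manager below are assumptions about your deployment:

```shell
# Re-scrape the site at 3:00 AM daily, then restart the proxy so it picks up
# the fresh site-content.txt (path and pm2 usage are illustrative)
0 3 * * * cd /srv/voice-agent && node scrape-site.js && pm2 restart claude-proxy
```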

Is this approach production-ready or just a prototype?

The architecture described here is production-capable but needs hardening before you put it in front of real users. Specifically: add error handling and fallback responses for when APIs are down, implement rate limiting on your proxy, add logging to track conversation quality, and test edge cases like users who give partial information or try to book outside business hours. The core plumbing is solid; the production work is in the error handling.
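For the rate-limiting piece, a minimal fixed-window limiter is enough to sketch the idea (in production, a library such as express-rate-limit is the sturdier choice):

```javascript
// Fixed-window per-IP rate limiter sketch. Counts requests per IP and
// rejects anything over `limit` within the current window.
const windows = new Map();

function allowRequest(ip, limit = 30, windowMs = 60000) {
  const now = Date.now();
  let entry = windows.get(ip);
  if (!entry || now - entry.start >= windowMs) {
    entry = { count: 0, start: now };
    windows.set(ip, entry);
  }
  entry.count += 1;
  return entry.count <= limit;
}

// In the proxy handler:
// if (!allowRequest(req.socket.remoteAddress)) { res.writeHead(429); return res.end(); }
```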

Can I add voice to an existing chatbot using this approach?

Yes, and this is actually a common use case. If you already have a Claude-powered chatbot with a system prompt and tools, you can add the ElevenLabs voice layer on top of it with relatively little effort. The main adjustment is rewriting the system prompt to instruct Claude to give shorter, spoken-friendly responses — the reasoning logic and tool calls can stay the same.


Key Takeaways

  • You can build a working voice agent with calendar booking and website Q&A in under 30 minutes using Claude Code, ElevenLabs, and the Google Calendar API
  • ElevenLabs handles voice input/output; Claude handles reasoning and tool orchestration; your calendar handles scheduling logic
  • Switching Claude model from Opus to Haiku is the fastest way to reduce latency if response times feel slow
  • Injecting website content directly into the system prompt is a fast, low-infrastructure approach to website Q&A — suitable for sites that don’t change frequently
  • For extending the agent’s capabilities without more backend code, MindStudio’s Agent Skills Plugin gives Claude access to 120+ pre-built integrations as simple method calls

If you want to go further — adding CRM logging, email confirmations, Slack notifications, or a no-code interface to manage the agent — MindStudio is worth a look. You can explore AI workflow automation that connects to everything your voice agent might need to hand off to, without writing another line of integration code.

Presented by MindStudio
