
Parallel Browser Agents: How to Run Multiple Claude Code Instances Simultaneously

Learn how to spawn multiple Claude Code browser agents in parallel to complete web automation tasks like form filling and lead outreach at scale.

MindStudio Team

Why Sequential Browser Agents Don’t Scale

If you’ve run browser tasks with Claude Code, you know the loop: give it a URL and a task, it plans and executes, it reports back. Clean and effective for a single task.

But once you’re dealing with 200 contact forms to fill, 500 company pages to scrape, or 1,000 job listings to collect data from, running one agent at a time becomes a serious bottleneck. If each task takes 2-3 minutes, 300 tasks means 10+ hours of sequential execution.

Parallel browser agents solve this directly. Instead of one agent working through a list end-to-end, you run 10 or 20 simultaneously — each handling its own batch in an isolated browser session. The same 300-task job completes in under an hour.

This guide covers how to set up parallel Claude Code browser agents from scratch: the architecture, the orchestration code, how to isolate sessions, and how to apply this pattern to practical use cases like lead outreach, data collection, and automated form filling.

What Claude Code Browser Agents Actually Are

Before parallelizing anything, you need to understand what Claude Code does with a browser and how it fits into the broader automation stack.

Claude Code as an Agentic Tool

Claude Code is Anthropic’s official agentic CLI tool. Install it as a global npm package and run tasks via the claude command in your terminal. Unlike a simple API call, Claude Code operates in a loop: it receives a task, plans steps, uses tools (bash commands, file operations, code execution), observes results, and adapts.

Install it with:

npm install -g @anthropic-ai/claude-code

For scripted, non-interactive use, the -p flag is your main entry point:

claude -p "Visit https://example.com/contact, fill out the form with name 'Jane Smith' and email 'jane@company.com', and submit it."

This is what your orchestrator calls for each task. One call, one task, one agent loop.

How Claude Code Interacts with Browsers

Claude Code has three main paths for browser work, each with different tradeoffs:

Playwright and Puppeteer scripts
Claude Code writes browser automation code and executes it via bash. The agent generates the Playwright script, runs it, reads the output, and adjusts if something fails. This is fast, headless-capable, and reliable for pages with consistent structure.

Computer use
Available in Claude 3.5 Sonnet and Claude 3.7 Sonnet, computer use lets Claude see a desktop environment directly and interact via screenshot, click, and keyboard inputs. More flexible for unpredictable or heavily dynamic UIs, but significantly slower — each action requires a full see-understand-act cycle.

LLM-native browser libraries
Tools like browser-use (Python) and Stagehand (TypeScript) provide high-level browser control APIs built for use with language models. They abstract away raw selector management and work well for tasks that don’t follow rigid page structures.

For parallel agents at scale, Playwright is the practical default: fast, headless by default, straightforward to configure per-instance, and well-documented.

The Three-Layer Architecture

Every reliable parallel browser agent system is built on three distinct layers:

  1. Task layer — your input data, structured and ready: a JSON file, CSV, or database table of URLs and form values
  2. Orchestration layer — code that reads tasks, controls concurrency, spawns agents, handles errors, and collects results
  3. Execution layer — individual Claude Code agents, each with its own isolated browser context, working one task at a time

These layers have clean separation. The orchestrator doesn’t touch browsers. Agents don’t manage concurrency. This makes the system easier to debug, modify, and scale.
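
The separation can be sketched in a few lines. Everything here is illustrative (the real execution layer would invoke `claude -p`, as shown later in this guide); only the shape of the layers matters:

```typescript
// Minimal three-layer sketch. runAgent stands in for a real Claude Code call.
interface Task { id: string; url: string }

async function runAgent(task: Task): Promise<string> {
  // Execution layer: one agent, one task, its own browser context.
  return `done:${task.id}`;
}

async function orchestrate(tasks: Task[], concurrency: number): Promise<string[]> {
  // Orchestration layer: batches tasks and collects results.
  // It never touches a browser itself.
  const results: string[] = [];
  for (let i = 0; i < tasks.length; i += concurrency) {
    const batch = tasks.slice(i, i + concurrency);
    results.push(...await Promise.all(batch.map(runAgent)));
  }
  return results;
}
```

The task layer is just the `Task[]` input, typically loaded from JSON or a database.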

Setting Up Your Environment

A consistent environment is the foundation for parallel runs that actually work.

Required Tools

Start with these:

  • Node.js 18+ — required for Claude Code
  • Claude Code CLI — npm install -g @anthropic-ai/claude-code
  • ANTHROPIC_API_KEY — set this environment variable; get a key at console.anthropic.com
  • Playwright — npm install playwright && npx playwright install chromium

Optional but useful:

  • GNU Parallel — brew install parallel (macOS) or apt install parallel (Ubuntu/Debian), for shell-based jobs
  • better-sqlite3 — npm install better-sqlite3, for task queue management at scale
  • Python 3.9+ — if you prefer Python for the orchestration layer

Project Structure

/parallel-agents/
  tasks.json           # Input task list
  results.json         # Output written after each batch
  orchestrator.ts      # Main orchestration script
  worker-prompt.md     # Reusable prompt template
  /logs/
    /screenshots/      # Error screenshots from agents
  package.json
  tsconfig.json

For jobs under a few hundred tasks, flat JSON files work fine. Above that — especially if you need multiple workers on different machines — switch to SQLite or PostgreSQL so agents can atomically claim tasks without race conditions.

Structuring Your Task List

Good task structure makes prompt generation mechanical. For a contact form campaign:

[
  {
    "id": "lead_acme",
    "url": "https://acme.com/contact",
    "fields": {
      "name": "Sarah Chen",
      "email": "sarah@yourcompany.com",
      "company": "Your Company Name",
      "message": "Hi, I wanted to reach out about..."
    }
  },
  {
    "id": "lead_widgets_co",
    "url": "https://widgetsco.com/contact-us",
    "fields": {
      "name": "Sarah Chen",
      "email": "sarah@yourcompany.com",
      "company": "Your Company Name",
      "message": "Hello, I came across your work and..."
    }
  }
]

The id field is important. Agents use it to name logs and error screenshots, making post-run debugging much easier.

Running Agents in Parallel: Three Methods

The right method depends on your technical preferences and job complexity.

Method 1: Background Processes with Bash

For small fixed-task jobs (under 50 items), shell-based parallelism is the fastest way to start. Run each claude -p call with & to background it, then wait:

#!/bin/bash

TASKS=(
  "Visit https://company-a.com/contact and fill the form: name='Alex Kim', email='alex@co.com'. Report success or failure."
  "Visit https://company-b.com/contact and fill the form: name='Alex Kim', email='alex@co.com'. Report success or failure."
  "Visit https://company-c.com/contact and fill the form: name='Alex Kim', email='alex@co.com'. Report success or failure."
)

mkdir -p logs

for i in "${!TASKS[@]}"; do
  claude -p "${TASKS[$i]}" > "logs/task_$i.log" 2>&1 &
  echo "Started agent $i (PID: $!)"
done

wait
echo "All agents complete"

To cap concurrency at N simultaneous agents:

#!/bin/bash
MAX_JOBS=5
CURRENT_JOBS=0

while IFS= read -r prompt; do
  if (( CURRENT_JOBS >= MAX_JOBS )); then
    wait -n
    ((CURRENT_JOBS--))
  fi

  claude -p "$prompt" > "logs/$(date +%s%N).log" 2>&1 &
  ((CURRENT_JOBS++))
done < task_prompts.txt

wait
echo "Done"

Method 2: GNU Parallel for Structured Shell Jobs

GNU Parallel gives you cleaner concurrency control, timeouts, retry support, and better output handling — all without any orchestration code:

# Run 5 agents at once, 2-minute timeout per task
cat task_prompts.txt | parallel \
  --jobs 5 \
  --timeout 120 \
  --results logs/ \
  --bar \
  'claude -p {}'

For CSV input where each row has an ID and a URL:

parallel --colsep ',' --jobs 8 \
  'claude -p "Visit {2}. Extract company name, phone, and email. Save JSON to /tmp/result_{1}.json"' \
  :::: tasks.csv

GNU Parallel handles partial failures gracefully and lets you resume interrupted runs — both important for production jobs.

Method 3: Programmatic Orchestration with the Claude Code SDK

For full control — dynamic task queues, retry logic, structured result aggregation — use the Claude Code SDK directly.

The @anthropic-ai/claude-code package exports a query function:

import { query, type SDKMessage } from "@anthropic-ai/claude-code";

async function runAgent(prompt: string): Promise<string> {
  const messages: SDKMessage[] = [];

  for await (const message of query({
    prompt,
    abortController: new AbortController(),
    options: { maxTurns: 20 },
  })) {
    messages.push(message);
  }

  const assistantMessages = messages.filter(m => m.type === "assistant");
  const last = assistantMessages[assistantMessages.length - 1];
  const content = last?.message?.content?.[0];
  return content?.type === "text" ? content.text : "";
}

Here’s a production-grade orchestrator with retry logic, incremental saving, and timeout management:

import { query, type SDKMessage } from "@anthropic-ai/claude-code";
import * as fs from "fs";

interface Task {
  id: string;
  url: string;
  fields: Record<string, string>;
}

interface Result {
  taskId: string;
  submitted: boolean;
  captchaBlocked: boolean;
  notes: string;
  success: boolean;
  timestamp: string;
  attempts: number;
}

function buildPrompt(task: Task): string {
  const fieldsList = Object.entries(task.fields)
    .map(([k, v]) => `- ${k}: ${v}`)
    .join("\n");

  return `
You are a browser automation agent. Submit a contact form.

Target URL: ${task.url}
Task ID: ${task.id}

Form fields:
${fieldsList}

Steps:
1. Use Playwright (headless) with userDataDir: /tmp/browser-${task.id}
2. Navigate to the URL
3. Find and fill the contact form
4. If CAPTCHA appears, stop and report captcha_blocked: true
5. Submit and wait for confirmation
6. If anything fails, save a screenshot to /tmp/screenshots/${task.id}_error.png

Return ONLY this JSON:
{"submitted": boolean, "captcha_blocked": boolean, "notes": "what happened"}
`.trim();
}

async function runTask(task: Task, attempt: number = 1): Promise<Result> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 180_000);

  try {
    const messages: SDKMessage[] = [];

    for await (const message of query({
      prompt: buildPrompt(task),
      abortController: controller,
      options: { maxTurns: 25 },
    })) {
      messages.push(message);
    }

    clearTimeout(timeout);

    const assistantMsgs = messages.filter(m => m.type === "assistant");
    const last = assistantMsgs[assistantMsgs.length - 1];
    const content = last?.message?.content?.[0];
    const text = content?.type === "text" ? content.text : "{}";
    const jsonMatch = text.match(/\{[\s\S]*\}/);
    const parsed = jsonMatch ? JSON.parse(jsonMatch[0]) : {};

    return {
      taskId: task.id,
      submitted: parsed.submitted ?? false,
      captchaBlocked: parsed.captcha_blocked ?? false,
      notes: parsed.notes ?? "",
      success: true,
      timestamp: new Date().toISOString(),
      attempts: attempt,
    };
  } catch (error) {
    clearTimeout(timeout);
    return {
      taskId: task.id,
      submitted: false,
      captchaBlocked: false,
      notes: String(error),
      success: false,
      timestamp: new Date().toISOString(),
      attempts: attempt,
    };
  }
}

async function runWithRetry(task: Task, maxAttempts = 2): Promise<Result> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runTask(task, attempt);
    if (result.submitted || result.captchaBlocked || attempt === maxAttempts) {
      return result;
    }
    console.log(`Task ${task.id} failed attempt ${attempt}, retrying in 5s...`);
    await new Promise(r => setTimeout(r, 5000));
  }
  return runTask(task, maxAttempts); // unreachable, but satisfies the compiler's return-path check
}

async function runParallel(tasks: Task[], concurrency: number, outputFile: string) {
  const results: Result[] = [];

  // Resume support: load existing results if present
  if (fs.existsSync(outputFile)) {
    const existing = JSON.parse(fs.readFileSync(outputFile, "utf-8")) as Result[];
    results.push(...existing);
    console.log(`Resuming: ${results.length} already done`);
  }

  const completedIds = new Set(results.map(r => r.taskId));
  const remaining = tasks.filter(t => !completedIds.has(t.id));

  console.log(`Running ${remaining.length} tasks at concurrency ${concurrency}`);

  for (let i = 0; i < remaining.length; i += concurrency) {
    const batch = remaining.slice(i, i + concurrency);
    // Wrap in a lambda so Array.map's index argument isn't passed as maxAttempts
    const batchResults = await Promise.all(batch.map(t => runWithRetry(t)));
    results.push(...batchResults);

    fs.writeFileSync(outputFile, JSON.stringify(results, null, 2));

    const submitted = batchResults.filter(r => r.submitted).length;
    const captcha = batchResults.filter(r => r.captchaBlocked).length;
    console.log(
      `[${i + batch.length}/${remaining.length}] ` +
      `${submitted} submitted, ${captcha} CAPTCHA, ${batch.length - submitted - captcha} failed`
    );

    if (i + concurrency < remaining.length) {
      await new Promise(r => setTimeout(r, 2000));
    }
  }

  const totalDone = results.filter(r => r.submitted).length;
  console.log(`\nFinal: ${totalDone}/${tasks.length} submitted`);
}

const tasks: Task[] = JSON.parse(fs.readFileSync("tasks.json", "utf-8"));
runParallel(tasks, 5, "results.json").catch(console.error);

Key capabilities built into this orchestrator:

  • Resume on restart — loads existing results and skips completed tasks
  • Per-task retry — automatically retries failed tasks (excluding CAPTCHA blocks)
  • Timeout per agent — aborts stuck agents after 3 minutes
  • Isolated browser contexts — each agent’s userDataDir is unique to its task ID
  • Incremental saves — results written after every batch, so partial runs aren’t lost

Isolating Browser Contexts and Managing Resources

Browser isolation is where most first-time parallel agent setups go wrong.

Why Isolation Matters

Without isolated contexts, agents sharing a browser profile can overwrite each other’s cookies and session state. If two agents log in to the same site simultaneously using a shared profile, they’ll interfere. One agent’s successful login can get immediately overwritten by the other’s.

The fix is simple: every agent gets its own userDataDir:

const { chromium } = require("playwright");

// launchPersistentContext returns a BrowserContext bound to the profile dir
const context = await chromium.launchPersistentContext(
  `/tmp/browser-${TASK_ID}`,
  {
    headless: true,
    args: [
      "--no-sandbox",
      "--disable-dev-shm-usage",
      "--disable-blink-features=AutomationControlled",
    ],
  }
);

const page = context.pages()[0] || await context.newPage();
// ... automation logic
await context.close();

This gives each agent completely separate cookies, local storage, session tokens, and download history.

Cleaning Up After Runs

Browser data directories accumulate quickly. Each profile can grow to 50-100MB with cached assets. After a run, clean up:

rm -rf /tmp/browser-task_* /tmp/browser-lead_* /tmp/screenshots/*

Or add cleanup to your orchestrator:

import * as fs from "fs";
import { globSync } from "glob";

function cleanup() {
  const dirs = globSync("/tmp/browser-*");
  dirs.forEach(d => fs.rmSync(d, { recursive: true, force: true }));
  console.log(`Cleaned ${dirs.length} browser directories`);
}

Memory Budget by Machine Size

Each headless Chromium instance uses roughly 200-400MB of RAM depending on page complexity. Account for this before setting your concurrency level:

Machine Size                  Recommended Concurrency
8GB RAM laptop                3-4 agents
16GB RAM workstation          6-8 agents
32GB cloud VM (8 vCPU)        15-20 agents
64GB cloud VM (16 vCPU)       30-40 agents

Run htop or your system monitor while a job is running. If you see memory pressure (frequent swapping), reduce concurrency: once the system starts swapping to disk, running more agents in parallel is slower than running fewer.
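
A rough sizing helper can turn this budgeting into code. This is a sketch, not from the article: the ~350MB-per-agent figure and the 2GB OS reserve are assumptions you should tune for your workload:

```typescript
import * as os from "os";

// Suggest a concurrency level from currently free RAM.
// perAgentMb: assumed footprint of one headless Chromium (~350MB, an estimate)
// reserveMb: headroom left for the OS and Node itself (assumption)
function suggestConcurrency(perAgentMb = 350, reserveMb = 2048): number {
  const freeMb = os.freemem() / (1024 * 1024);
  return Math.max(1, Math.floor((freeMb - reserveMb) / perAgentMb));
}

console.log(`Suggested concurrency: ${suggestConcurrency()}`);
```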

Writing Agent Prompts That Actually Work

The quality of your agent prompts determines how reliably parallel runs complete. Generic prompts produce inconsistent results; structured prompts with explicit fallback instructions work far better.

Specify the Exact Tool and Configuration

Don’t leave the agent to figure out how to open a browser. Tell it exactly what to use:

Use Playwright with chromium in headless mode.
Set userDataDir to /tmp/browser-{TASK_ID}.
Set a default timeout of 15 seconds for all selectors.

This prevents the agent from choosing a different tool, using a visible browser (which can cause display issues on headless servers), or hanging indefinitely on a missing element.

Enumerate Common Edge Cases

Contact forms and web pages are inconsistent. A prompt that handles edge cases explicitly will fail far less often:

Before filling the form:
- If a cookie consent banner is visible, accept it first
- If a chat widget is blocking the page, close it
- If you land on a login page instead of the contact form, the URL may have redirected — report "redirected_to_login" in your notes

When filling the form:
- If a dropdown is present for "department" or "inquiry type", select "General Inquiry"
- If a phone number field is required and not in your data, use "555-0100"
- If the submit button is disabled, wait up to 5 seconds for it to become active

After submitting:
- Wait up to 10 seconds for a success message
- Accept anything containing "thank you", "received", "submitted", or "sent" as confirmation

This kind of explicit handling reduces agent failures from unexpected UI states — which are common in real-world parallel runs.

Enforce Structured Output

Agents that return freeform text make result aggregation painful. Always ask for JSON:

Return ONLY a valid JSON object. No explanation text, no markdown.
Format: {"submitted": boolean, "captcha_blocked": boolean, "notes": "brief description"}

When parsing, use a regex to extract the JSON in case the agent wraps it in any extra text:

const jsonMatch = responseText.match(/\{[\s\S]*\}/);
const result = jsonMatch ? JSON.parse(jsonMatch[0]) : { submitted: false };
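
In practice the model sometimes wraps the JSON in code fences or prose despite instructions, and occasionally emits malformed JSON. A slightly more defensive parser (a sketch; the function name is illustrative) treats both cases as a failed task rather than a crash:

```typescript
function extractJson(text: string): Record<string, unknown> | null {
  // Greedy match spans from the first "{" to the last "}", which also
  // skips any fences or prose the model wraps around the JSON.
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as Record<string, unknown>;
  } catch {
    return null; // malformed JSON counts as a failure, not an exception
  }
}
```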

Real-World Use Cases in Detail

Lead Outreach at Scale

You have 400 companies with contact pages. You want to submit a personalized intro message to each one.

Build your task list. Export leads from your CRM or LinkedIn to JSON. Each record needs the contact page URL and the form values — name, email, company, and your message. If you’re personalizing messages per company, generate them with a separate Claude call before building the task list.

Choose concurrency based on domain diversity. If your leads are spread across 400 different domains, higher concurrency (10-15 agents) is safe. If many leads are at the same company or using the same form platform (e.g., HubSpot forms, Typeform), lower concurrency (5-8 agents) reduces the risk of triggering rate limits on those platforms.
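
Grouping the task list by hostname makes this decision mechanical. A sketch (the `LeadTask` shape mirrors the article's task schema):

```typescript
interface LeadTask { id: string; url: string }

// Bucket tasks by hostname so per-domain concurrency can be throttled —
// e.g. interleave one task per domain per batch.
function groupByDomain(tasks: LeadTask[]): Map<string, LeadTask[]> {
  const groups = new Map<string, LeadTask[]>();
  for (const task of tasks) {
    const host = new URL(task.url).hostname;
    const bucket = groups.get(host) ?? [];
    bucket.push(task);
    groups.set(host, bucket);
  }
  return groups;
}
```

If any bucket is much larger than the rest, lower concurrency or spread that domain's tasks across multiple runs.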

Plan for three outcome categories:

  • Submitted — log to CRM, no further action needed
  • CAPTCHA blocked — queue for manual review or retry with a CAPTCHA-solving service
  • Failed — check the error screenshot to understand why; usually a changed page layout or a required field the agent didn’t fill

With 10 parallel agents on 400 leads at 2-3 minutes per form, expect the run to complete in 80-120 minutes. Sequential execution of the same job would take 13-20 hours.

Web Data Collection

Collecting structured data from a business directory, job board, or listings site is well-suited to parallel agents. Each agent handles a batch of URLs and extracts the same schema from each page.

For 2,000 URLs split across 20 agents (100 each):

function chunkArray<T>(arr: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

const urlBatches = chunkArray(urls, 100);

const agentTasks = urlBatches.map((batch, i) => ({
  id: `collector_${i}`,
  prompt: `
    Visit each URL and extract: company name, phone, email (if visible), business hours.
    URLs: ${JSON.stringify(batch)}
    
    Save partial results every 10 URLs to /tmp/batch_${i}.json.
    Final output: JSON array with keys: url, name, phone, email, hours
  `
}));

After all 20 agents finish, merge results:

jq -s 'add' /tmp/batch_*.json > all_results.json

Automated Quality Assurance Testing

QA teams can use parallel browser agents to test forms, navigation, and UI elements across an entire site before a release. Each agent checks a set of pages, submits test data, verifies expected outcomes, and reports any anomalies.

Assign one agent per major section of your site — checkout flow, contact forms, user registration, search functionality. Run them simultaneously. A 20-page check that would take a QA engineer an hour completes in 5-10 minutes.

This pattern is particularly useful for regression testing after updates: run the same agent suite before and after deployment and compare results.

Automated Publishing to Multiple Platforms

If you publish content across multiple platforms — blog CMS, social schedulers, content hubs — parallel agents can handle the mechanical posting work. Each agent handles one platform, logging in and publishing the assigned content simultaneously.

This cuts a 30-minute multi-platform publishing workflow to under 5 minutes.

Handling Common Failures

Parallel runs surface failure modes that single-agent runs rarely expose.

Anti-Bot Detection and IP Restrictions

Multiple agents hitting the same domain simultaneously can trip rate limiters and bot detection systems. The warning signs: sudden increases in CAPTCHA rates, 403 responses, or forms that appear to submit but never actually send.

Practical mitigations:

  • Stagger agent start times — don’t start all agents at the same second. Add a random 1-10 second delay per agent before it begins
  • Realistic user agents — set a non-headless browser user agent string in your Playwright config. Headless Chromium’s default user agent is a known bot signal
  • Add behavioral delays — instruct agents to wait 1-3 seconds between actions, simulating human-speed interaction
  • Rotate IPs — for serious-scale work, route each agent through a different residential IP via a proxy service like Bright Data or Smartproxy. Residential proxies are harder for bot detection to flag than datacenter IPs
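
The stagger and behavioral-delay points reduce to two small helpers. A sketch (the 1-10 second and 1-3 second windows mirror the bullets above; names are illustrative):

```typescript
// Random start delay in [minS, maxS) seconds, returned in milliseconds.
function staggerMs(minS = 1, maxS = 10): number {
  return (minS + Math.random() * (maxS - minS)) * 1000;
}

// Human-speed pause between page interactions.
async function humanPause(minMs = 1000, maxMs = 3000): Promise<void> {
  const delay = minMs + Math.random() * (maxMs - minMs);
  await new Promise(resolve => setTimeout(resolve, delay));
}
```

Call `staggerMs()` once before each agent launches, and instruct agents (or your own Playwright code) to use something like `humanPause()` between actions.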

Task Duplication

Without careful coordination, two agents can claim and process the same task. For batch-processing approaches (where tasks are pre-assigned to agents), this doesn’t happen. For dynamic queue approaches, use atomic database operations:

// SQLite with better-sqlite3 — atomically claim one task
function claimTask(db: any): Task | null {
  return db.transaction(() => {
    const task = db.prepare(
      "SELECT * FROM tasks WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).get();

    if (!task) return null;

    db.prepare("UPDATE tasks SET status = 'claimed', claimed_at = ? WHERE id = ?")
      .run(new Date().toISOString(), task.id);

    return task;
  })();
}

SQLite with WAL mode handles concurrent writers reliably at the scale most parallel agent jobs require.

Dynamic and Multi-Step Forms

Static Playwright scripts break when forms have conditional logic, multi-step flows, or dynamically loaded fields. Claude Code handles these better because it observes page state and adapts, but you need to give it explicit instructions for the common cases:

If the form has multiple steps (Step 1 of 3, Next button, etc.):
  Complete each step before proceeding to the next.

If selecting a dropdown value reveals new form fields:
  Fill those additional fields as well.

If a verification checkbox appears ("I'm not a robot" without image):
  Click it and wait 2 seconds before proceeding.

If you see a session expired message:
  Refresh the page and start over.

The more edge cases you enumerate in your prompt, the fewer unexpected failures you’ll see across a large parallel run.

Debugging Failed Agents

When agents fail, you need to understand why without re-running the entire job. Instruct each agent to save artifacts on failure:

If you encounter any error:
1. Take a screenshot and save it to /tmp/screenshots/{TASK_ID}_error.png
2. Note the current URL
3. Note what action you were attempting
4. Include all of this in the "notes" field of your JSON output

After a run, review error screenshots as a batch:

# Open all error screenshots for review
open /tmp/screenshots/*_error.png

Most failures cluster into a few patterns — unexpected modals, changed page layouts, required fields not in your data. Once you identify the pattern, you can update your prompt and re-run only the failed tasks.
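
Rebuilding that retry list from saved results might look like the following. The field names follow the orchestrator's Result interface; the data is shown in memory rather than read from results.json:

```typescript
interface TaskRef { id: string; url: string }
interface RunResult { taskId: string; submitted: boolean; captchaBlocked: boolean }

// Keep only tasks that failed outright; CAPTCHA blocks go to manual
// review instead of an automated retry.
function buildRetryList(tasks: TaskRef[], results: RunResult[]): TaskRef[] {
  const failed = new Set(
    results.filter(r => !r.submitted && !r.captchaBlocked).map(r => r.taskId)
  );
  return tasks.filter(t => failed.has(t.id));
}
```

Write the filtered list back out as a new tasks file and point the orchestrator at it.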

Where MindStudio Fits Into This Workflow

The approaches above work well if you’re comfortable with TypeScript or Python and want to own the orchestration layer. But if you want parallel browser agent capability without writing and maintaining orchestration code, MindStudio handles the infrastructure for you.

MindStudio is a no-code platform for building and deploying AI agents. You can define a browser workflow — visit a page, fill fields, extract data, log results — and run it against hundreds of inputs without managing processes, memory limits, or retry logic yourself.

For the lead outreach use case in particular: connect a Google Sheet or CRM as your task source, define what each agent should do per contact page, and trigger the run. MindStudio’s background agents handle execution concurrency and route results back to your data source automatically.

The platform also connects directly to business tools you’re likely already using. Its 1,000+ pre-built integrations cover HubSpot, Salesforce, Airtable, Google Workspace, Slack, and more. Results from browser agent runs feed into your CRM or spreadsheet without a separate export-import step.

For developers who prefer to stay in code, MindStudio’s Agent Skills Plugin (@mindstudio-ai/agent) lets Claude Code agents call capabilities like agent.searchGoogle(), agent.runWorkflow(), or agent.sendEmail() as simple typed method calls. It handles rate limiting, authentication, and retries at the infrastructure level — useful when your browser agents need to trigger downstream actions after completing their tasks.

You can try MindStudio free at mindstudio.ai.

Frequently Asked Questions

How many parallel Claude Code browser agents can I run at once?

There’s no hard technical ceiling — the limits are your machine’s resources and Anthropic’s API rate limits.

Practically, a 16GB RAM workstation handles 6-8 agents comfortably. A 32GB cloud VM can run 15-20. Above that, you’ll want to distribute work across multiple machines.

Check your Anthropic API tier in the console to understand your rate limits. If agents start receiving rate-limit errors (429 responses), reduce concurrency or add delays between API calls.
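
A common way to add those delays is exponential backoff with jitter. A sketch (the base and cap values are arbitrary starting points, not Anthropic recommendations):

```typescript
// Delay before retry attempt n (1-based): base * 2^(n-1), capped,
// plus random jitter so parallel agents don't retry in lockstep.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return exp + Math.random() * 250;
}
```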

Do parallel browser agents share browser sessions?

Not if you set them up correctly. Each agent must use a unique userDataDir in Playwright, giving it a completely separate cookie store, session data, and browser profile. Without this isolation, agents accessing the same site can overwrite each other’s sessions — causing authentication failures and unpredictable behavior.

Always include the task ID or agent number in the userDataDir path.

What success rate should I expect for automated form submissions?

It depends heavily on the target sites:

  • Simple contact forms without bot protection: 85-95% success
  • Forms with standard image CAPTCHA: 30-60% (higher with a CAPTCHA-solving service integrated)
  • Sites with advanced bot detection (fingerprinting, behavioral analysis): 20-50%

Budget for roughly 60-75% of tasks succeeding on the first attempt across a mixed list. With retries and improved prompts, you can push that closer to 80%. The rest will be CAPTCHA blocks or pages with unusual layouts that the agent can’t navigate reliably.

Can I use computer use instead of Playwright for parallel agents?

Yes. Computer use agents can run in parallel the same way — each needs an isolated virtual display rather than a separate browser profile.

The tradeoff is speed. Computer use requires a screenshot-analyze-act cycle per step, which makes it significantly slower than Playwright for structured tasks. A form submission that Playwright handles in 20 seconds might take 2-3 minutes with computer use.

Use computer use when page structure is unpredictable or when you need to interact with non-web GUI elements. For contact forms and data extraction from known page layouts, Playwright is faster and more reliable at scale.

How do I prevent IP bans when running many agents simultaneously?

Running 10+ agents against the same domain simultaneously is a reliable way to trigger rate limiting or IP blocks. Mitigation approaches:

  • Stagger starts — delay each agent’s launch by 5-10 seconds rather than starting all simultaneously
  • Residential proxies — route each agent through a different residential IP. Services like Bright Data, Smartproxy, and Oxylabs provide pay-per-GB residential proxy access
  • Behavioral delays — 1-3 second pauses between page interactions mimic human browsing patterns
  • Spread the timeline — for large campaigns, run at lower concurrency over a longer window instead of hitting everything at once

If your lead list contains multiple contacts at the same company, they’ll all go through the same web server. Either lower concurrency for those domains or spread them across multiple runs.

What’s the difference between parallel agents and multi-agent hierarchies?

Parallel agents (as described in this guide) are N independent instances running the same task type concurrently. Each is unaware of the others — they’re processing separate items from the same list.

Multi-agent hierarchies involve an orchestrator agent that reasons and plans, with specialist subagents it delegates specific work to. Claude Code supports this natively via its Task tool. The orchestrator receives high-level goals and decides what subagents to spawn and what to tell them.

For browser automation at scale, flat parallelism is usually sufficient — tasks are independent and the work is well-defined. Multi-agent hierarchies add value when tasks are interdependent, when one agent’s results determine what the next agent should do, or when you need complex planning logic layered over the execution.

Key Takeaways

Parallel Claude Code browser agents follow a straightforward pattern once the architecture is clear. Here’s what matters most:

  • Running 10 agents in parallel cuts job time roughly 10x compared to sequential execution — the most obvious win for any large-scale browser task
  • The three-layer architecture (task list, orchestrator, execution) keeps the system maintainable and debuggable; don’t mix concerns across layers
  • Browser isolation via unique userDataDir paths is non-negotiable — without it, agents interfere with each other’s sessions in ways that are hard to debug
  • Structured prompts with explicit edge-case handling dramatically reduce failure rates; generic prompts produce inconsistent results at scale
  • Plan for 60-75% first-run success on contact form submissions; CAPTCHA blocks, changed page layouts, and bot detection are real variables
  • For teams that want this capability without managing orchestration infrastructure, MindStudio’s background agents handle the execution layer — connect your task list, define the workflow, and run at scale without process management code

Start with 5 parallel agents, verify your prompt handles the common edge cases, then scale concurrency up from there.