How to Build an OpenClaw-Like Agent Without Installing OpenClaw
Combine Claude Code Dispatch, a SQL memory database via MCP, and scheduled tasks to get OpenClaw-like agent behavior without the security risks.
The Case for Building Your Own Instead
The appeal of OpenClaw-style agents is obvious. You define a goal, and Claude figures out the plan — dispatching subtasks, pulling from memory built up over prior runs, and executing steps without you steering it manually. That’s the kind of autonomous, multi-agent behavior that turns a chatbot into something actually useful for ongoing workflows.
But OpenClaw and similar local agent frameworks have a real friction problem: installation. They typically require a local Python environment, elevated system permissions, and configuration that varies by OS. More importantly, they run with your machine’s full user-level access. Claude can read files, execute shell commands, and make network requests — and if something goes wrong in the planning phase, there are few guardrails to stop it.
This article walks through a specific architecture that gives you OpenClaw-like behavior — Claude as the orchestrator, multi-agent task dispatch, and persistent memory — using Claude Code Dispatch, a SQL memory database via MCP, and scheduled tasks. The result runs in the cloud, uses bounded tool access, and doesn’t require handing an AI agent the keys to your file system.
What OpenClaw Actually Does
OpenClaw is an open-source autonomous agent framework built on top of Claude. At its core, it:
- Accepts high-level natural language goals
- Breaks those goals into subtasks using Claude as the reasoning layer
- Dispatches subtasks to tools (file system, shell, search, code execution)
- Persists memory across sessions so the agent can build on prior work
- Operates autonomously without requiring step-by-step user input
Those last two points are what separate it from a simple prompt-response setup. An agent that remembers what it did on Tuesday can build on prior work rather than starting cold each time. And autonomous execution means the agent keeps going until the goal is done, not just until it produces one reply.
The problem isn’t the goal — it’s the execution environment. OpenClaw runs locally with broad system access, which is exactly what makes it powerful and exactly what creates the risk.
Why “Just Install It” Isn’t Always the Answer
There are three real concerns with running a local autonomous agent:
Security surface area. Any framework installed via pip or npm brings a dependency tree. Those dependencies have their own dependencies. A compromised package in that tree can run arbitrary code on your machine the moment the framework starts.
Blast radius of agent errors. When Claude makes a reasoning mistake inside an autonomous agent, the consequences happen with your user-level permissions. A misguided file write, an accidental deletion, an unexpected external API call — they all execute on your actual system.
Portability and deployment. A local agent that works on your Mac with Python 3.11 may not work on your Ubuntu CI server without significant configuration. Hosting it in the cloud is possible, but it means deploying a local-first system to an environment it wasn’t designed for.
None of these are hypothetical edge cases. Any sufficiently complex autonomous agent run will eventually produce an unexpected action. The question is how contained that action is when it happens.
The Architecture: Three Components That Compose Into One System
Rather than installing OpenClaw, you’ll build a system from three components:
- Claude Code Dispatch — Claude acts as an orchestrator, receiving a goal, planning multi-step execution, and routing work to specialized tools or sub-agents via tool calls.
- SQL Memory via MCP — A SQL database exposed as an MCP server gives Claude structured, persistent memory it can read and write across runs.
- Scheduled Tasks — A cron job or cloud scheduler triggers the agent on a defined interval, creating autonomy without requiring a continuously running process.
Each component is straightforward on its own. Together, they replicate the core behavior of OpenClaw in a form you can run in the cloud, grant limited tool access to, and debug when something goes wrong.
Set Up Claude Code Dispatch as Your Orchestrator
The Claude Code dispatch pattern uses Claude as a planning and routing layer. Rather than Claude directly executing every action, it reasons about what needs to happen and delegates work to specific tools or sub-agents.
Write the Orchestrator System Prompt
Start with a system prompt that explicitly frames Claude as a dispatcher. The key elements:
You are a task orchestrator. When given a goal:
1. Break it into discrete subtasks.
2. For each subtask, determine which tool or sub-agent should handle it.
3. Execute subtasks in the correct order, passing outputs forward.
4. Log your reasoning and each completed action to the memory database.
5. Return a summary of what was accomplished and what remains open.
This framing prevents Claude from trying to do everything in one response. It forces decomposition first, execution second.
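As a Python constant, the framing above is what the orchestration loop later in this article expects as its system prompt (the constant name is this article's convention, not an SDK requirement):

```python
# The dispatcher framing, verbatim, as the system-prompt constant
# the orchestration loop passes to the API on every iteration.
ORCHESTRATOR_SYSTEM_PROMPT = """\
You are a task orchestrator. When given a goal:
1. Break it into discrete subtasks.
2. For each subtask, determine which tool or sub-agent should handle it.
3. Execute subtasks in the correct order, passing outputs forward.
4. Log your reasoning and each completed action to the memory database.
5. Return a summary of what was accomplished and what remains open.
"""
```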
Define a Minimal Tool Set
Keep the tool list small and intentional. Every tool you add increases surface area. A useful starting set:
- read_memory(query) — queries the SQL memory database
- write_memory(key, value, tags) — stores information to the SQL database
- run_subagent(prompt, tools) — spawns a focused sub-agent for a narrow task
- search_web(query) — calls a search API
- send_notification(message) — pings a Slack channel or webhook
Notice what’s absent: file system access, shell execution, arbitrary code execution. Those are the high-risk tools you want to omit unless you have a specific, well-scoped reason to add them.
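Passed to the Anthropic Messages API, that tool set becomes a list of JSON schemas. A sketch of what the TOOLS list referenced by the orchestration loop might look like (the individual schemas and the `_tool` helper are this article's assumptions, not a fixed format beyond what the API requires):

```python
def _tool(name, description, properties, required):
    """Small helper that builds one Anthropic tool schema."""
    return {
        "name": name,
        "description": description,
        "input_schema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

S = {"type": "string"}  # shorthand for a string-typed parameter

TOOLS = [
    _tool("read_memory", "Query the SQL memory database.",
          {"query": S}, ["query"]),
    _tool("write_memory", "Store a record in the SQL memory database.",
          {"key": S, "value": S, "tags": {"type": "array", "items": S}},
          ["key", "value"]),
    _tool("run_subagent", "Spawn a focused sub-agent for one narrow task.",
          {"prompt": S, "tools": {"type": "array", "items": S}}, ["prompt"]),
    _tool("search_web", "Call a search API.", {"query": S}, ["query"]),
    _tool("send_notification", "Ping a Slack channel or webhook.",
          {"message": S}, ["message"]),
]
```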
The Dispatch Loop
When Claude calls run_subagent, it spawns a second Claude instance focused on one narrow task — for example, “Extract the key dates from this document and return them as structured JSON.” That sub-agent has its own restricted tool set, returns a result, and exits. The orchestrator receives the output and continues.
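A sketch of run_subagent under those assumptions. The client parameter is injectable so the function can be exercised without a live API key; a production version would also run the sub-agent's own tool-use loop rather than taking its first reply as final:

```python
def run_subagent(prompt: str, tools: list, client=None,
                 model: str = "claude-3-5-sonnet-20241022") -> str:
    """Spawn a focused sub-agent: one narrow task, its own restricted
    tool set, one result. Sketch only -- the sub-agent's internal
    tool-execution loop and error handling are elided."""
    if client is None:
        # Imported lazily so a stub client can be injected in tests.
        import anthropic
        client = anthropic.Anthropic()
    response = client.messages.create(
        model=model,
        max_tokens=2048,
        system="You are a focused sub-agent. Complete exactly one task "
               "and return only the result.",
        tools=tools,
        messages=[{"role": "user", "content": prompt}],
    )
    # Concatenate the text blocks of the reply into one string.
    return "".join(b.text for b in response.content
                   if getattr(b, "type", None) == "text")
```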
The orchestration loop in Python looks roughly like this:
```python
import anthropic

client = anthropic.Anthropic()

def orchestrate(goal: str):
    messages = [{"role": "user", "content": goal}]
    while True:
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=4096,
            system=ORCHESTRATOR_SYSTEM_PROMPT,
            tools=TOOLS,
            messages=messages,
        )
        if response.stop_reason == "tool_use":
            # Run the requested tools, then feed the results back.
            tool_results = execute_tool_calls(response.content)
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": tool_results})
        else:
            # "end_turn" or any other stop reason: Claude is done.
            return response.content
```
The loop continues until Claude returns a final answer rather than another tool call. Each iteration, Claude either uses a tool or declares it’s done.
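The loop leans on a helper, execute_tool_calls, that it leaves undefined. A minimal sketch of it, where the TOOL_IMPL mapping and the placeholder handlers are this article's assumptions (real handlers would hit the MCP memory server, a search API, and so on):

```python
from types import SimpleNamespace  # only used by the demo at the bottom

# Map tool names to plain Python implementations. Placeholders here;
# real handlers would call the MCP memory server, search API, etc.
TOOL_IMPL = {
    "read_memory": lambda query: f"(no memories matching {query!r})",
    "send_notification": lambda message: "sent",
}

def execute_tool_calls(content):
    """Execute every tool_use block in a response and return the
    tool_result blocks that form the next user turn."""
    results = []
    for block in content:
        if getattr(block, "type", None) == "tool_use":
            output = TOOL_IMPL[block.name](**block.input)
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": str(output),
            })
    return results

# Demo with a stand-in tool_use block (real blocks come from the SDK):
fake_call = SimpleNamespace(type="tool_use", name="send_notification",
                            id="toolu_01", input={"message": "hi"})
print(execute_tool_calls([fake_call]))
# → [{'type': 'tool_result', 'tool_use_id': 'toolu_01', 'content': 'sent'}]
```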
Build a SQL Memory Layer with MCP
Persistent memory is what separates an autonomous agent from a one-shot prompt. Without it, every run starts cold. With it, the agent accumulates context — what tasks it completed, what it learned, what still needs doing.
MCP (Model Context Protocol) is Anthropic’s open standard for giving AI models access to external tools and data sources. An MCP server exposes capabilities that Claude calls as native tool calls. You’ll use it to expose a SQL database as Claude’s memory.
Why SQL Beats a File
File-based memory (writing to a .txt or .json) is fragile. It doesn’t scale, it’s hard to query semantically, and concurrent writes can corrupt state. SQL gives you:
- Structured, queryable memory
- Transactional safety
- Easy filtering by date, tag, or session
- A defined schema that forces you to think about what you’re actually storing
The MCP Memory Server
Your MCP server exposes three core tools:
- store_memory(content, tags, session_id) — inserts a new memory record
- retrieve_memories(query, limit) — returns relevant memories via keyword or semantic search
- list_recent(n) — returns the N most recent entries
A minimal SQLite schema:
```sql
CREATE TABLE memories (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    content TEXT NOT NULL,
    tags TEXT,
    session_id TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```
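Before wrapping the schema in an MCP server, you can sanity-check it with Python's built-in sqlite3 module. The function names follow the tool list above; the keyword-based retrieval is a sketch (an MCP server could swap in semantic search behind the same interface):

```python
import sqlite3

def open_memory(path=":memory:"):
    """Open (or create) the memory database with the schema above."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS memories (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            content TEXT NOT NULL,
            tags TEXT,
            session_id TEXT,
            created_at DATETIME DEFAULT CURRENT_TIMESTAMP
        )""")
    return conn

def store_memory(conn, content, tags="", session_id=""):
    """Insert one memory record; created_at defaults to now."""
    conn.execute(
        "INSERT INTO memories (content, tags, session_id) VALUES (?, ?, ?)",
        (content, tags, session_id))
    conn.commit()

def retrieve_memories(conn, query, limit=10):
    """Keyword match against content and tags, newest first."""
    like = f"%{query}%"
    rows = conn.execute(
        "SELECT content FROM memories WHERE content LIKE ? OR tags LIKE ? "
        "ORDER BY created_at DESC LIMIT ?", (like, like, limit))
    return [r[0] for r in rows]

conn = open_memory()
store_memory(conn, "Completed briefing draft", tags="briefing", session_id="run-1")
print(retrieve_memories(conn, "briefing"))  # → ['Completed briefing draft']
```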
For production, replace SQLite with PostgreSQL. The schema stays identical; only the connection string changes.
Connecting It to Claude
Once your MCP server is running, register it in your agent’s MCP config file:
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["path/to/memory-server.js"]
    }
  }
}
```
For a cloud deployment, your MCP server runs as a small HTTP service and Claude connects over the network. Either way, Claude can now call memory.store_memory and memory.retrieve_memories as regular tool calls.
What Claude Should Store
Instruct Claude (in the system prompt) to log information at specific moments:
- After completing a subtask: what was done and the result
- When it encounters a fact it may need in a future run: log it with relevant tags
- When a subtask fails: log the reason so future runs can avoid the same error
- At the end of every session: write a summary of the full run
This builds episodic memory over time. On the next scheduled run, Claude’s first call should be retrieve_memories("recent tasks and current project state") — giving it context before it starts planning anything.
Configure Scheduled Tasks for Autonomous Execution
A scheduled trigger is what makes this system genuinely autonomous. Instead of you initiating the agent, a scheduler calls it on a defined interval.
Choosing Your Scheduler
Options from simplest to most robust:
Cron (Linux/Mac) — Works for personal projects and always-on machines. Add 0 9 * * 1-5 python /path/to/agent.py to run the agent weekdays at 9 AM. Downside: requires a machine that’s always running.
GitHub Actions — Free tier is generous and no always-on server is required. The schedule trigger supports cron syntax:
```yaml
on:
  schedule:
    - cron: '0 9 * * 1-5'

jobs:
  run-agent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python agent.py
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
```
Cloud schedulers — AWS EventBridge, GCP Cloud Scheduler, and Render Cron Jobs all support cron syntax and can trigger a Lambda, Cloud Function, or container. Good choice if you’re already deployed in a cloud environment.
State Lives in the Database, Not the Process
Your agent script should be stateless. The scheduler triggers it, the script reads current state from the SQL database, does its work, writes results back, and exits. The next scheduled run picks up where the last one left off.
This is a fundamental structural difference from how local agents typically work. OpenClaw-style frameworks often hold state in memory or local files that disappear when the process terminates. A database-backed agent survives restarts, deployment updates, and cloud function cold starts without losing history.
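The load-work-save shape can be sketched with a small pair of helpers. The run_state table here is an assumption for illustration (the article's schema only defines memories; you could equally keep run state in that table under a dedicated tag):

```python
import json
import sqlite3

def load_state(conn):
    """Read the most recent run's state; {} means a cold start."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS run_state (id INTEGER PRIMARY KEY, state TEXT)")
    row = conn.execute(
        "SELECT state FROM run_state ORDER BY id DESC LIMIT 1").fetchone()
    return json.loads(row[0]) if row else {}

def save_state(conn, state):
    """Append this run's state so the next scheduled run can pick it up."""
    conn.execute("INSERT INTO run_state (state) VALUES (?)",
                 (json.dumps(state),))
    conn.commit()

# Every scheduled run follows the same shape: load, work, save, exit.
conn = sqlite3.connect(":memory:")
state = load_state(conn)               # {} on the first run
state["runs"] = state.get("runs", 0) + 1
save_state(conn, state)
print(load_state(conn))                # → {'runs': 1}
```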
Add a Heartbeat Notification
If a scheduled agent fails silently, you won’t know until the output stops appearing. Add a simple success/failure notification:
```python
try:
    run_agent(goal)
    send_notification("✓ Agent run completed")
except Exception as e:
    send_notification(f"✗ Agent run failed: {e}")
```
Route that to Slack, email, or a webhook. You get observability without any monitoring infrastructure.
Wire It Together: A Working End-to-End Example
A concrete example makes this tangible. Here’s a research agent that monitors a topic and delivers a briefing each morning.
Goal: Every weekday at 7 AM, search for recent developments on a specified topic, compare them to prior findings in memory, and send a concise briefing.
Orchestration flow:
1. Agent starts → calls retrieve_memories("recent briefings and known topics") to load context
2. Orchestrator calls search_web("recent developments in [topic]") to get fresh results
3. Orchestrator calls run_subagent("Compare these search results against prior findings and identify what's new", tools=[read_memory]) — a focused sub-agent handles the comparison
4. Orchestrator receives the diff → calls run_subagent("Draft a 3-paragraph briefing based on these new developments", tools=[]) — another sub-agent writes the output
5. Orchestrator calls write_memory("morning briefing", briefing_text, tags=["briefing"]) to persist the result
6. Orchestrator calls send_notification(briefing_text) to deliver it
What makes this OpenClaw-like:
- Multi-step reasoning with subtask dispatch
- Persistent memory that accumulates across runs
- Autonomous scheduled execution
- No manual intervention after setup
What makes it safer:
- No file system access
- No shell execution
- Tool access limited to five defined functions
- Runs in a container or cloud function, not on your local machine
How MindStudio Makes This Faster
The architecture above is solid, but building it from scratch takes real time. You need to write the MCP server, implement the orchestration loop, configure the scheduler, handle retries, and manage API secrets — all before you write a single line of agent logic.
MindStudio handles that infrastructure layer so you focus on the agent behavior itself. You can build autonomous background agents that run on a schedule, use Claude as the reasoning engine, connect to data sources for persistent memory, and dispatch to sub-workflows — without standing up a custom MCP server or deployment pipeline.
The pieces map directly to what we built above:
- Scheduled execution — Background agents on MindStudio run on your defined schedule automatically, with no always-on server required.
- Persistent memory — Native memory primitives are built in. You don’t set up a separate database.
- Multi-agent dispatch — MindStudio’s workflow builder lets you chain agents together so one agent’s output becomes another’s input, replicating the dispatch pattern.
- 200+ models available — Claude Opus, Sonnet, and other models are available without a separate Anthropic account. You can also configure fallback models for production agents that need to stay running.
If you want to test the multi-agent pattern before committing to building and hosting the infrastructure yourself, MindStudio is a reasonable starting point. You can try it free at mindstudio.ai.
Frequently Asked Questions
What is OpenClaw and what does it do?
OpenClaw is an open-source autonomous agent framework built on Claude. It accepts a high-level goal, has Claude plan and execute the necessary steps using tools like web search, file access, and shell execution, and persists memory between sessions so the agent can build on prior work. The core trade-off is that it runs locally with broad system permissions, which creates security exposure that you have to manage explicitly.
Is it safe to run OpenClaw locally?
The risks are real but manageable. The main concern is that Claude operates with your user-level permissions. A planning error can result in unintended file modifications or command execution. Running it inside a sandboxed Docker container significantly limits the blast radius. That said, if controlled tool access is a priority, the architecture described in this article gives you more explicit control over what the agent can and cannot do, without relying on sandbox configuration.
What is MCP and how does Claude use it?
MCP (Model Context Protocol) is an open standard from Anthropic for connecting AI models to external tools and data sources. An MCP server exposes a typed set of tools that Claude can call. From Claude’s perspective, MCP tools are identical to any other tool call — it simply uses them. The standard defines how tools are described and connected, so you can swap or update MCP servers without changing your agent code. Anthropic publishes official MCP SDKs and server examples to get started.
How do I give Claude persistent memory without file system access?
Use a database exposed via MCP. A SQL schema with a memories table — storing content, tags, and a timestamp — lets Claude read and write structured memory through controlled tool calls. Claude never touches the file system directly; it interacts with memory through the interface your MCP server defines. This also makes memory auditable and queryable in ways flat files aren’t, which is useful when debugging why the agent made a particular decision.
Can I run a scheduled agent without managing a server?
Yes. GitHub Actions is the simplest option for most setups — it supports cron-syntax scheduling and runs your agent on GitHub’s infrastructure at no cost on the free tier. For higher-frequency schedules or production workloads, AWS EventBridge or GCP Cloud Scheduler can trigger a serverless function on any interval. Because the agent script is stateless (all state lives in the database), the triggering infrastructure can be as lightweight as possible.
How is this different from just calling the Claude API directly?
A single Claude API call is stateless and single-turn. The system described here adds three things that make it genuinely agentic: a planning loop that continues until a goal is complete rather than stopping after one response, persistent memory that accumulates across sessions, and autonomous scheduled execution. The dispatch pattern also adds structure — an orchestrator delegates to focused sub-agents rather than fitting everything into one context window. Those additions together are what create behavior that resembles a continuously working agent rather than an interactive assistant.
Key Takeaways
- You can replicate OpenClaw’s core behavior — multi-step reasoning, persistent memory, autonomous execution — without installing it locally or granting broad system access.
- Claude Code Dispatch turns Claude into an orchestrator that plans and routes work to sub-agents, keeping each agent’s scope narrow and its permissions minimal.
- SQL memory via MCP gives Claude structured, persistent memory that builds up across sessions without requiring file system access.
- Scheduled tasks (GitHub Actions, cloud schedulers, or cron) create autonomous operation without a continuously running process.
- The security improvement is real: limiting Claude’s tool access to a defined set via MCP removes the broad exposure that comes with local agent frameworks.
- If you’d rather skip the infrastructure work, MindStudio provides the same architecture — Claude, scheduled execution, persistent memory, multi-agent workflows — without building the pieces yourself.