Claude Code Remote Routines: Run Automations on Anthropic's Cloud While Your Laptop Is Closed
Your Laptop Can Be Off. Your Claude Agent Is Still Running.
Claude Code remote routines run on Anthropic’s infrastructure — 4 vCPUs, 16GB RAM, 30GB of disk — against a cloned copy of your GitHub repo. The environment spins up, does the work, and gets destroyed. You don’t need to be there. You don’t need your laptop open. You don’t even need to be awake.
That’s the actual promise here, and it’s worth being precise about what it means in practice, because the implementation details matter quite a bit if you want these automations to actually work.
The Infrastructure Behind Remote Routines
When Anthropic announced routines in Claude Code in April 2025, the headline was simple: schedule a prompt, have it run on the web. But the mechanics are more specific than that framing suggests.
A remote routine works by cloning your GitHub repository into a fresh cloud environment. Claude reads your claude.md file, your skills, your scripts — everything in the repo — and then executes the prompt you’ve configured. When the run finishes, that cloned environment is destroyed. Nothing persists locally. The session log stays accessible in your task history, and any changes Claude made to your codebase get pushed to a new branch. But the execution environment itself is gone.
This is a meaningful architectural choice. It means every run is stateless. Claude can’t rely on anything that lives outside your GitHub repo or outside the environment variables you’ve explicitly configured. If your automation was working fine locally because it was reading from a .env file, it will fail remotely — because .env files are in .gitignore and never reach the cloud environment.
The fix is straightforward but easy to miss: you set your API keys as environment variables in the Cloud Environment settings panel inside the scheduled task configuration. There’s a field for it. You name the variable, paste the value, and Claude can then access it during the run. Critically, you also need to tell Claude in your prompt to look for the key in the environment, not in a .env file — because by default, Claude may try to read from .env first and fail silently.
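The prompt-level instruction pairs naturally with a code-level habit: any script in your repo that needs a key should read it from the environment and fail loudly when the variable is missing, so the session log shows the real cause. A minimal sketch (the variable name `CLICKUP_API_TOKEN` is illustrative, not required by anything):

```python
import os
import sys

def get_api_key(name: str) -> str:
    """Read an API key from the environment; fail loudly if it is missing.

    A remote routine's cloud environment has no .env file, so os.environ
    is the only place a key can come from. The variable name passed in
    should match whatever you configured in the Cloud Environment
    settings panel.
    """
    value = os.environ.get(name)
    if value is None:
        # Exit with a clear message so the remote session log shows the
        # real cause instead of a silent downstream failure.
        sys.exit(f"Missing environment variable {name!r}. "
                 f"Set it in the Cloud Environment settings panel.")
    return value
```

Calling `get_api_key("CLICKUP_API_TOKEN")` at the top of a script turns the silent-failure mode described above into a one-line diagnosis in the session log.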
Limits, Plans, and What “Minimum Interval” Actually Means
Remote routines aren’t unlimited. The quotas break down by subscription tier:
- Pro plan: 5 remote routine runs per day
- Max plan ($200/month): 15 runs per day
- Team and Enterprise: 25 runs per day
The minimum scheduled interval is one hour. You can’t run a remote routine every 10 minutes — that’s what the /loop skill is for, and loop runs on your local machine with your session open. Remote routines are for the things you want running while you sleep: daily digests, weekly audits, Monday morning ClickUp snapshots, that kind of cadence.
If you hit your daily cap, organizations with extra usage enabled can keep running on metered overage billing. For most individual users on Pro, five runs a day is genuinely constraining — you’ll want to be deliberate about which automations deserve a cloud slot versus which ones can run locally.
The .env Problem (And Why It Trips Everyone Up)
This is the most common failure mode when migrating local automations to remote routines, and it’s worth dwelling on.
Your local Claude Code setup almost certainly has a .env file. It has your ClickUp API token, your YouTube Data API key, your Stripe secret, whatever. Claude knows to look there. When you’ve been running automations locally for weeks, you build up a mental model where “Claude has access to my keys” is just a given.
Remote routines break that assumption completely. The cloud environment has no .env file. It has no local cookies. It has no browser session state. It only has what’s in your GitHub repo and what you’ve explicitly added to the Cloud Environment panel.
Nate Herk, who documented his migration of local automations to remote routines, ran into this directly. His ClickUp automation worked fine on the first try because Claude figured out the environment variable lookup on its own. His YouTube comments automation didn’t — Claude kept looking for the key in .env, failing, and returning an error. The fix was adding an explicit instruction to the prompt: “My YouTube API key is available as an environment variable. Use it directly from the environment. Don’t look for a .env file.”
That one sentence is the difference between an automation that works and one that silently fails every night.
Network Access: Trusted vs. Full
When you configure a Cloud Environment, you choose a network access level. The default is Trusted, which restricts outbound requests to a vetted list of domains that Anthropic maintains — Anthropic’s own services, major cloud platforms such as Google, and version control hosts. Most well-known APIs are on this list.
Full access removes those restrictions. If Claude reads malicious content during a run and gets tricked into exfiltrating data, a Full environment won’t block that outbound request. Trusted would.
For private repos where you control all the inputs, the practical risk is low. But it’s worth knowing the tradeoff. Herk found he needed Full access to get ClickUp working in his initial tests — Trusted was blocking something in the ClickUp API flow. If you’re hitting unexplained failures, network access level is one of the first things to check.
You can also set Custom access to allow specific domains that aren’t on the Trusted list but that you’ve vetted yourself.
What Remote Routines Are Not: The Loop Comparison
The /loop skill — Claude Code’s built-in cron scheduler — is often confused with remote routines. They’re different tools for different jobs.
Loop creates cron jobs within a session. You can say “every 10 minutes, check my ClickUp for new tasks” and Claude will do exactly that, firing off in the same session window. The minimum interval is whatever you want — every minute, every five minutes. But the session has to stay open. Close the terminal tab and the crons die. Loop jobs also have a hard 3-day expiry, after which they auto-delete.
Remote routines survive restarts. They run on Anthropic’s cloud. They don’t need your machine on. But they have a one-hour minimum interval and a daily cap.
The decision rule is simple: if you need help right now on a project that’s running today, use loop. If you need something to run every Monday at 6am indefinitely, use a remote routine.
There’s also the middle ground of desktop scheduled tasks — local automations that run on a schedule but require the Claude desktop app to be open. These have no daily cap and can run as frequently as every minute, but they’re tied to your machine being awake. For teams that want 24/7 coverage without that constraint, keeping a Claude Code agent running continuously requires a different setup entirely.
What Persists, What Gets Destroyed
After a remote routine run, here’s what survives:
- Session logs: Accessible in your task history. You can review every run, see what Claude did, and diagnose failures.
- Code changes: If Claude modified files in your repo, those get pushed to a new branch.
- Nothing else: The cloud environment is destroyed. No local state, no cookies, no cached data.
This statelessness is actually a feature for most automation use cases. Each run starts clean. There’s no accumulated cruft from previous sessions. But it does mean you can’t build automations that rely on session memory — if you want Claude to remember what it found last week, you need to write that to a file in your GitHub repo during the run, so it’s available next time.
One pattern that works well here: have your routine write a brief summary of its findings to a markdown file in your repo at the end of each run. The next run reads that file first. You get a lightweight form of continuity without needing any external state management.
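A minimal sketch of that hand-off pattern in Python (the `reports/` path is illustrative — any path inside the repo works, since Claude pushes changed files to a branch):

```python
from datetime import date
from pathlib import Path

# Hypothetical location for the hand-off file inside the repo.
SUMMARY = Path("reports/last-run-summary.md")

def read_previous_summary() -> str:
    """First step of a run: recover what the previous run found."""
    if SUMMARY.exists():
        return SUMMARY.read_text()
    return "No previous run on record."

def write_summary(findings: str) -> None:
    """Last step of a run: persist findings for the next run to read."""
    SUMMARY.parent.mkdir(parents=True, exist_ok=True)
    SUMMARY.write_text(
        f"# Run summary ({date.today().isoformat()})\n\n{findings}\n"
    )
```

Because the file travels through the repo rather than the destroyed environment, each stateless run still starts with the last run’s conclusions.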
Connecting the Routine to Your Skills
Remote routines don’t run Python scripts directly. They inject a prompt into a real Claude session — the same interaction as if you’d typed that prompt yourself in Claude Code. Which means they can invoke your skills.
If you have a skill at .claude/skills/wins-engagement/skill.md, your routine prompt can just say “run the wins engagement skill.” Claude reads the claude.md file (which it does automatically on every remote run), finds the skill reference, loads the skill’s YAML front matter to confirm it’s the right one, then loads the full skill.md and executes the steps.
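For orientation, a skill file along these lines might look like the following — the front matter fields shown are the commonly used ones, and the steps are purely illustrative, not a real skill:

```markdown
---
name: wins-engagement
description: Fetch recent community wins posts, categorize them, and write an engagement summary to reports/.
---

## Steps

1. Load the API token from the environment (never from a .env file).
2. Fetch and categorize the new posts.
3. Write the summary markdown file and log any failures.
```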
This is the architecture that makes remote routines genuinely useful rather than just a novelty. You’re not writing a new automation from scratch for each scheduled task. You’re scheduling the skills you’ve already built and tested. The progressive context loading in Claude Code skills — where only the ~100-token YAML front matter gets read during skill search, not the full file — means even a repo with dozens of skills stays lightweight for each run. For a deeper look at how this skill-building process works, Claude Code’s AutoResearch pattern for self-improving skills is worth understanding before you start scheduling routines at scale.
The practical implication: build your skills first, test them locally until they’re reliable, then wrap them in a remote routine. Don’t try to do both at once.
The GitHub Repo Requirement (And What It Means for Your Setup)
Remote routines require a GitHub repository. This isn’t optional. The cloud environment has to have something to clone.
If you’ve been running Claude Code locally against a folder on your desktop, you need to push that to GitHub before remote routines will work. The repo can be completely private — Anthropic clones it for the run and then discards the clone. No one else sees it.
There’s a secondary implication here: your repo size matters. The cloud environment has 30GB of disk, which is generous, but if you’ve accumulated hundreds of megabytes of reference files, transcripts, and output artifacts in your repo, you’re cloning all of that on every run. For a repo that’s mostly markdown files and skill definitions, this is a non-issue. For a repo that’s grown into a sprawling knowledge base, it might be worth thinking about what actually needs to be in the repo versus what can live elsewhere.
The approach of building an AI second brain with Claude Code and Obsidian — keeping a raw/ folder and a wiki/ folder with an index.md — is one way to keep your knowledge organized without bloating the repo. The wiki structure gives Claude a navigable index rather than forcing it to scan everything.
Writing Prompts That Actually Work Remotely
Local automations can be a little sloppy. If Claude gets confused, you’re there to correct it. Remote routines have no such safety net.
The prompt is the entire instruction set. It needs to be specific enough that Claude can complete the task correctly on the first try, without asking clarifying questions. A prompt like “analyze my YouTube comments and give me a summary” is too vague for a remote routine. A prompt like “fetch the 50 most recent comments from my YouTube channel using the YOUTUBE_API_KEY environment variable, categorize them by sentiment and topic, and write a markdown summary to /reports/youtube-comments-{date}.md” is the right level of specificity.
A few things that help:
- Explicitly name the environment variables Claude should use
- Specify the output location and format
- Tell Claude what to do if it encounters an error (write a failure log, send a Slack message, whatever)
- Reference the specific skill by name if you want it to run a skill
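Putting those guidelines together, a full routine prompt might read like this — the skill name, variable, and paths come from the examples earlier in this piece, so treat them as placeholders for your own:

```text
Run the wins-engagement skill.

Authenticate with the YOUTUBE_API_KEY environment variable. Do not
look for a .env file. Fetch the 50 most recent comments, categorize
them by sentiment and topic, and write a markdown summary to
reports/youtube-comments-{date}.md. If any API call fails, write the
error details to reports/failures.log and stop instead of retrying.
```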
Before setting any routine to run on a schedule, run it manually using the “Run Now” button and watch it execute. Fix whatever breaks. Run it again. Only schedule it once you’ve seen it complete successfully without intervention. This is the same feedback cycle that makes Claude Code agentic workflows reliable — you iterate locally until the behavior is predictable, then automate.
The Broader Context: What This Architecture Enables
Remote routines are the fourth layer of the Four C’s framework that serious Claude Code users are building toward: Context, Connections, Capabilities, and Cadence. The first three are about what Claude knows and what it can do. Cadence is about when it acts without you.
Platforms like MindStudio approach this orchestration layer differently — offering 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows without writing the orchestration code yourself. If you want to go further and compile those workflows into a full deployable application, Remy is MindStudio’s spec-driven full-stack app compiler: you write a markdown spec with annotations, and it compiles into a complete TypeScript app with backend, database, auth, and deployment included. Remote routines in Claude Code are the DIY version of the same orchestration idea: you control the infrastructure, the repo, the skills, and the schedule.
Neither approach is universally better. Remote routines give you full control and keep everything in your GitHub repo. The tradeoff is that you’re responsible for the .env migration, the network access configuration, the prompt specificity, and the debugging when something fails at 3am.
The right question isn’t which approach is superior. It’s whether you want to own the infrastructure or have it managed for you.
What to Configure Before Your First Remote Routine
If you’re ready to set one up, the sequence is:
- Push your Claude Code project to a private GitHub repo if it isn’t already there
- Open the Claude desktop app, go to scheduled tasks, and create a new remote task
- Configure a Cloud Environment: name it, set network access (start with Trusted, switch to Full if you hit failures), and add your API keys as environment variables
- Write a specific, one-shot prompt — no ambiguity, explicit environment variable references, defined output location
- Set the schedule (minimum one hour interval)
- Hit “Run Now” and watch the session log before you let it run on schedule
The session history is your debugging tool. Every run is logged. If something fails, you can see exactly where Claude got stuck and why.
One last thing: if you’re building automations that interact with browser-based tools using cookies or local session state — school community automations, anything that requires a logged-in browser session — those won’t work remotely. The cloud environment has no cookies. You’ll need an API endpoint or a token-based authentication method. Remote control via Dispatch is one alternative for cases where you need to reach tools that don’t have clean API access.
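If the tool you rely on does offer token auth, the swap away from cookies is usually small. A sketch using only the Python standard library (the variable name and endpoint are hypothetical placeholders):

```python
import os
import urllib.request

def auth_headers() -> dict:
    """Build a bearer-token header from an environment variable --
    the remote-friendly replacement for a logged-in browser session.
    SERVICE_API_TOKEN is a hypothetical variable name."""
    token = os.environ["SERVICE_API_TOKEN"]
    return {"Authorization": f"Bearer {token}"}

def fetch(url: str) -> bytes:
    """GET a resource using token auth instead of cookies."""
    req = urllib.request.Request(url, headers=auth_headers())
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```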
The infrastructure is real. The constraints are real. Work within them and you get a Claude agent that runs while you’re asleep. That’s the actual value proposition, and it’s worth the setup cost.