
What Is the Conway Agent? Anthropic's Unreleased Always-On Background AI Revealed in the Code Leak

Conway is Anthropic's leaked always-on background agent with push notifications, GitHub subscriptions, and a proprietary extension format. Here's what we know.

MindStudio Team

Anthropic’s Hidden Agent: What the Code Leak Actually Revealed

A snippet of leaked code doesn’t usually make waves in the AI world. But when it points to an entirely unreleased product from one of the industry’s most secretive labs, people pay attention.

That’s exactly what happened with Conway — an internal Anthropic project that surfaced through a code leak and gave observers their first look at how the company is thinking about always-on, background AI agents. Conway is notable not just because it’s unreleased, but because of what it suggests about where Claude-based systems are headed: persistent, event-driven, and deeply integrated into developer workflows.

Here’s what we know, what we don’t, and why it matters for anyone building with multi-agent AI systems.


What Is the Conway Agent?

Conway appears to be Anthropic’s internal name for an always-on background agent — a Claude-powered system designed to run persistently rather than only when a user actively types a prompt.

Most AI assistants today are reactive. You ask a question, the model responds, and the session ends. Conway inverts that model. Based on references in leaked code and configuration files, it’s built to sit in the background, monitor external events, and act — or notify — without waiting to be asked.

The name “Conway” itself doesn’t appear in any official Anthropic documentation or product releases. It emerged from code that leaked publicly, likely from a repository or build artifact that wasn’t meant to be visible.

What makes Conway significant isn’t just that it exists. It’s what the implementation details suggest about Anthropic’s product thinking around agentic AI — specifically around the idea that useful AI doesn’t always wait for instructions.


How Conway Was Discovered

The Conway agent came to light through a code leak that exposed portions of Anthropic’s internal tooling. Developers and researchers who examined the leaked materials found references to:

  • A background agent architecture named “Conway”
  • A proprietary file extension format associated with it
  • Configuration patterns suggesting persistent, event-driven operation
  • Integration hooks that point toward GitHub and notification systems

This kind of accidental disclosure isn’t unprecedented. Internal tooling, codenames, and unreleased features routinely appear in leaked builds, misconfigured repositories, or published npm packages before companies catch the exposure. The leak didn’t reveal everything — there’s no full codebase, no product documentation, and no confirmed timeline — but the fragments are detailed enough to draw meaningful conclusions.

Anthropic hasn’t officially commented on Conway. That silence is itself informative: the company neither confirmed the leak’s authenticity nor issued a denial, which is the usual posture when the disclosed information is accurate but premature.


Key Features Revealed in the Code

Always-On Background Operation

The most distinctive aspect of Conway is its design as a persistent agent. Unlike Claude.ai or the Claude API in standard use, Conway isn’t triggered by user input alone. It’s built to run continuously, checking conditions and responding to signals from external systems.

This puts Conway in a different category than chatbots or one-shot agents. It’s closer to a daemon process — the kind of background service that developers already use to handle tasks like monitoring, alerting, and scheduled jobs. Except instead of running predetermined scripts, it applies Claude’s reasoning to decide what to do when something happens.
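The daemon analogy can be made concrete. The sketch below shows the general always-on pattern in miniature: a loop that drains an event queue and lets a reasoning step decide which events deserve action. Everything here is illustrative — the `decide` stub stands in for a model call, and nothing is taken from the leaked Conway code.

```python
import queue

def decide(event):
    """Stand-in for model reasoning: is this event worth acting on?"""
    return event.get("kind") == "pr_review" and event.get("blocking", False)

def run_agent(events):
    """Drain an event queue, acting only on events the reasoning step flags."""
    actions = []
    while True:
        try:
            event = events.get(timeout=0.1)
        except queue.Empty:
            break  # a real daemon would keep waiting; we stop for the demo
        if decide(event):
            actions.append(f"notify: {event['kind']}")
    return actions

events = queue.Queue()
events.put({"kind": "pr_review", "blocking": True})
events.put({"kind": "ci_pass", "blocking": False})
print(run_agent(events))  # → ['notify: pr_review']
```

The structural difference from a chatbot is visible in the loop: no user prompt appears anywhere — events arrive, and the agent's judgment determines what surfaces.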

Push Notifications

The leaked code references a push notification system as part of Conway’s architecture. This is a meaningful detail. Push notifications require a persistent connection to the agent — you can’t send a push notification to something that only exists when you’re talking to it.

This suggests Conway is designed to alert users when something notable happens, even if they’re not actively in a conversation. Think: “Your PR was reviewed and three comments need your attention” or “This repository just published a new version that breaks your integration” — delivered proactively, not surfaced only when you remember to ask.

GitHub Subscriptions

The leak shows Conway has subscription-based hooks into GitHub. This means it can watch repositories, branches, pull requests, issues, or commit streams and respond to those events.

The implications are practical. A developer-focused background agent that monitors GitHub activity could:

  • Summarize PR comments and flag blockers
  • Track CI/CD failures and escalate appropriately
  • Monitor dependency repositories for security advisories
  • Draft responses or suggested fixes based on issue activity

This isn’t speculative — GitHub event subscriptions are explicitly referenced in the leaked materials. The exact scope of what Conway tracks and what actions it can take from those events isn’t fully clear, but the integration is deliberate and specific.
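To ground what "subscription-based hooks into GitHub" could look like in practice, here is a minimal triage function over GitHub webhook payloads. The field names (`action`, `review.state`, `check_run.conclusion`) follow GitHub's public webhook payload schema; the triage rules themselves are invented for illustration, not taken from the leak.

```python
def triage(event_type, payload):
    """Return a notification string for events worth surfacing, else None."""
    if event_type == "pull_request_review" and payload.get("action") == "submitted":
        review = payload.get("review", {})
        if review.get("state") == "changes_requested":
            return f"PR #{payload['pull_request']['number']}: changes requested"
    if event_type == "check_run" and payload.get("action") == "completed":
        run = payload.get("check_run", {})
        if run.get("conclusion") == "failure":
            return f"CI failed: {run.get('name', 'unknown check')}"
    return None  # everything else is noise for this sketch

print(triage("pull_request_review",
             {"action": "submitted",
              "review": {"state": "changes_requested"},
              "pull_request": {"number": 42}}))
# → PR #42: changes requested
```

A Conway-style agent would presumably replace these hard-coded rules with model reasoning, which is exactly what separates it from a conventional webhook bot.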

Proprietary Extension Format

Conway uses a proprietary extension format — essentially a file type specific to this agent’s configuration and behavior. The details on this are sparse, but the existence of a dedicated extension format suggests Anthropic built Conway with a formalized packaging system for agent behavior.

This matters because it indicates Conway isn’t just a prompt template running on Claude. It has its own configuration layer — a way to define what the agent monitors, how it responds, what it’s allowed to do. That’s closer to an agent runtime than a wrapper around a language model.
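For a sense of what such a configuration layer might contain, here is a purely hypothetical agent manifest expressed as a Python dict, plus a loader check. The leak does not reveal Conway's actual format — every field name below is invented to illustrate the idea of packaged agent behavior (what it watches, how it notifies, what it may do).

```python
# Hypothetical manifest: none of these fields come from the leaked code.
AGENT_MANIFEST = {
    "name": "pr-watcher",
    "subscriptions": [
        {"source": "github", "events": ["pull_request_review", "check_run"]},
    ],
    "notify": {"channel": "push"},
    "permissions": ["read:repo", "send:notification"],  # what the agent may do
}

def validate(manifest):
    """Check that a manifest declares the minimum a runtime would need."""
    required = {"name", "subscriptions", "permissions"}
    missing = required - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing keys: {sorted(missing)}")
    return True

print(validate(AGENT_MANIFEST))  # → True
```

The point of the sketch is the separation of concerns: the manifest declares behavior and scope, while a runtime enforces it — which is what distinguishes an agent platform from a prompt template.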


Why This Is Different from Standard Claude Agents

Anthropic has published a lot about Claude’s multi-agent capabilities, including how Claude can operate as an orchestrator directing other agents or as a subagent executing specific tasks. But those frameworks are still fundamentally request-response in nature: something triggers Claude, Claude does work, Claude returns a result.

Conway represents a different paradigm. A few key distinctions:

Trigger source: Standard Claude agents are triggered by humans or other agents explicitly calling them. Conway is triggered by external events — GitHub activity, time-based conditions, system signals.

Persistence: Claude instances in typical use are stateless between calls. Conway implies a persistent process with ongoing awareness of subscribed event streams.

Initiative: Conway is described in ways that suggest it can act proactively — sending notifications, surfacing information, potentially taking actions — without a human prompt initiating each cycle.

This is sometimes called an “ambient agent” or “always-on agent” in the research literature. The idea is that the most useful AI isn’t the one you have to remember to ask — it’s the one that tells you what you need to know before you think to look.


What Conway Tells Us About Anthropic’s Roadmap

Anthropic’s public work has focused heavily on safety, alignment, and making Claude a capable model for general use. Conway suggests the product team is thinking beyond the chat interface.

A few things stand out:

Developer-first focus. The GitHub subscription feature isn’t aimed at general consumers. This is tooling for software teams. Anthropic appears to be building Conway as infrastructure for developer workflows, not a consumer product.

Competition with Copilot and Cursor. GitHub Copilot and tools like Cursor already operate in the IDE and development workflow space. An always-on Claude agent with deep GitHub integration would be a direct move into that territory.

Agentic AI as infrastructure. The proprietary extension format and persistent architecture suggest Anthropic is building Conway as a platform, not just a feature. Packaging agent behavior in a formalized format implies third-party extensibility is planned — even if not ready yet.

Timing with Model Context Protocol. Anthropic recently published the Model Context Protocol (MCP), a standard for connecting AI models to external data sources and tools. Conway’s event subscription model fits naturally with MCP’s design philosophy — agents that stay connected to live data rather than only working with information present at query time.


The “Always-On” Problem in AI Systems

Building a useful always-on agent is harder than it sounds. The reason most AI tools are still request-response isn’t laziness — it’s that persistent agents introduce real engineering challenges.

Signal-to-noise. An agent that monitors everything and alerts on everything drowns its user in noise. The hard problem is knowing what’s worth surfacing. This is a reasoning problem that benefits from a capable model — which is presumably why Anthropic is building this with Claude rather than rule-based logic.

Action scope. An agent that can only notify you is limited. An agent that can take actions on your behalf raises immediate questions about authorization, rollback, and accountability. Conway’s exact action scope isn’t clear from the leak, but this tension is inherent to any always-on agent design.

Cost. Running Claude continuously against an event stream isn’t free. Anthropic will need to think carefully about pricing and rate limiting for this kind of persistent use — different from per-query API pricing.

Privacy. An agent with persistent access to your GitHub repositories, notifications, and potentially more has significant access to sensitive information. Trust and permission models become critical.

These aren’t problems that make Conway a bad idea. They’re problems any serious always-on agent has to solve. The fact that Anthropic is building something in this space suggests they believe they have workable answers, even if those answers aren’t visible in the leak.


Building Always-On Background Agents Today with MindStudio

You don’t have to wait for Conway to ship to build always-on background agents of your own. MindStudio already supports exactly this kind of architecture, and it doesn’t require writing infrastructure code to get there.

MindStudio is a no-code platform for building AI agents and workflows. Among the agent types it supports are autonomous background agents that run on a schedule or trigger from external events — webhooks, emails, API calls, and integrations with tools like GitHub, Slack, HubSpot, and Google Workspace.

If the Conway architecture interests you — an agent that monitors a system, reasons about what’s happening, and takes action or sends a notification — you can build a working version of that pattern in MindStudio today. The platform includes:

  • 200+ AI models including Claude, so you can power your agent with the same model family Conway is built on
  • Webhook and API endpoint agents that trigger from GitHub events, form submissions, or any external system that can make an HTTP call
  • 1,000+ pre-built integrations including GitHub, Slack, and notification tools — no manual API wiring required
  • Scheduled agents that run on a cron-style schedule and check conditions, summarize data, or take actions

A practical example: you could build a MindStudio agent that subscribes to GitHub webhook events, uses Claude to reason about whether a PR comment needs urgent attention, and sends a Slack notification with a summary and suggested next step. That’s Conway’s core loop, running today, without waiting for Anthropic to ship.
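That core loop — receive an event, reason about urgency, notify only when warranted — can be sketched in a few lines. The functions below are placeholders, not real MindStudio or Anthropic APIs: `reason_about_event` stands in for a Claude call, and `send_slack_message` stands in for a Slack delivery step.

```python
def reason_about_event(event):
    """Placeholder for a model call that judges urgency and drafts a summary."""
    urgent = "blocker" in event.get("comment", "").lower()
    return {"urgent": urgent, "summary": event.get("comment", "")[:80]}

def send_slack_message(channel, text):
    """Placeholder for Slack delivery; here we just return the message."""
    return f"[{channel}] {text}"

def handle_pr_comment(event):
    """One pass of the loop: reason about the event, notify only if urgent."""
    judgment = reason_about_event(event)
    if judgment["urgent"]:
        return send_slack_message("#dev-alerts",
                                  f"Needs attention: {judgment['summary']}")
    return None  # non-urgent events are silently absorbed

print(handle_pr_comment({"comment": "Blocker: this breaks the auth flow"}))
```

In a no-code platform, each of these placeholders maps to a configured step rather than code you write — but the loop's shape is the same.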

You can start building for free at mindstudio.ai.


How Conway Fits Into the Broader Multi-Agent Landscape

Conway isn’t the only project in this space. Several companies are working on persistent, background-capable AI agents:

GitHub Copilot Workspace — Microsoft’s agentic coding environment, which can reason across a repository and execute multi-step tasks. Still primarily session-based, but moving toward more autonomous operation.

Devin (Cognition) — An autonomous software engineering agent designed to handle extended coding tasks independently. Closer to a worker agent than an always-on monitor.

Google’s Project Astra / Gemini Live — Google’s work on persistent, always-available AI that can maintain context over time and respond to multimodal input.

OpenAI’s Operator and Assistants — the Assistants API supports persistent threads and tool use; Operator takes this further with web-based action capability.

What makes Conway potentially distinctive is the combination of Claude’s reasoning, tight GitHub integration, and a formalized extension format that could allow third-party developers to package and distribute agent behaviors. That last piece — if it materializes — could make Conway a platform rather than just a product feature.


FAQ: Common Questions About the Conway Agent

What exactly is the Conway agent?

Conway is an unreleased Anthropic project, revealed through a code leak, that appears to be an always-on background AI agent. It’s designed to monitor external events — particularly GitHub activity — and respond proactively through push notifications and potentially automated actions, rather than waiting for a user to initiate a conversation.

Is Conway officially released or available to use?

No. As of publication, Conway has not been officially announced or released by Anthropic. It surfaced through a code leak and has not been confirmed by Anthropic in any public statement or documentation. There’s no timeline for when or whether it will ship as a product.

How is Conway different from Claude?

Claude is the underlying language model. Conway appears to be a product built on top of Claude — specifically an agent system with persistent background operation, event subscriptions, push notifications, and a proprietary extension format. Claude is the reasoning engine; Conway is the architecture that makes it always-on and event-driven.

What is the Conway proprietary extension format?

The leaked code references a proprietary file extension format specific to Conway. This suggests Conway uses a formalized configuration layer — likely a way to define what events an agent monitors, how it responds, and what it’s authorized to do. The exact specification of this format isn’t fully visible from the leak.

Can I build something like Conway today?

Yes. The core pattern — a background agent that monitors events, reasons about them using a language model like Claude, and sends notifications or triggers actions — is buildable today. Platforms like MindStudio support webhook-triggered agents, GitHub integrations, and Claude as the underlying model. You can implement the same loop Conway describes without waiting for Anthropic’s release.

What does Conway tell us about the future of AI agents?

Conway is a concrete signal that major AI labs are moving beyond chat interfaces toward persistent, event-driven agents. The combination of always-on operation, external event subscriptions, and a packaging format for agent behavior points toward AI as infrastructure that runs continuously in the background — more like a service than a tool you open when you need it.


Key Takeaways

  • Conway is Anthropic’s unreleased always-on background agent, revealed through a code leak — not an official product announcement.
  • Its core features include persistent background operation, push notifications, GitHub event subscriptions, and a proprietary extension format.
  • Conway represents a shift from reactive AI (respond when asked) to ambient AI (monitor, reason, and act continuously).
  • The GitHub integration signals a developer-first focus and likely competition with tools in the coding and DevOps workflow space.
  • The architecture fits naturally with Anthropic’s Model Context Protocol work, suggesting Conway may be part of a larger connected-agent infrastructure play.
  • You can build this kind of always-on, event-driven agent today using platforms like MindStudio without waiting for Conway to ship.

The broader lesson from Conway isn’t really about Anthropic specifically — it’s that the industry is converging on persistent, proactive agents as the next meaningful form factor for AI. Whether it’s Conway, a competing product, or something you build yourself, always-on agents are moving from research concept to real product faster than most expected.
