OpenAI Codex Got 3 New Features This Week: Chrome Plugin, Virtual Pets, and a Persistent Goal System

Codex shipped a Chrome browser plugin, virtual pets via /pet and /hatch, and the /goal persistent task system in the same week. Here's what each one does.

MindStudio Team

OpenAI Shipped Three New Codex Features in One Week — Here’s What Each One Actually Does

OpenAI dropped three distinct Codex features in the same release window: a Chrome browser plugin, virtual pets via /pet and /hatch, and a persistent task system called /goal. Together they span browser automation, cosmetic silliness, and what might be the most substantive change to how agentic coding tools work in 2025. They're not obviously related. They don't share a theme. And that's worth paying attention to.

This post covers all three, what they do in practice, and which one you should actually care about.


What Shipped and When

The Chrome plugin lets Codex control your browser directly. You install it from inside the Codex app: go to Plugins → find the Chrome option → hit the plus icon → install the extension in Chrome. Once connected, you can address the browser in a Codex chat using @Chrome and give it a URL or task.

The virtual pets feature adds a small animated character to your Codex environment. Type /pet to wake your pet. Type /hatch to generate a custom one. The hatch command uses AI image generation to create the character — expect it to take around 30 minutes, not 30 seconds.


The /goal feature is the one that’s generating serious discussion. It keeps a task alive across turns, running autonomously until the goal is complete or you stop it. Philip Corey, who works on Codex at OpenAI, described it as “our take on the Ralph loop — keep a goal alive across turns, don’t stop until it’s achieved.”
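Corey's "keep a goal alive across turns" description maps onto a simple control loop. Here's a minimal sketch of that pattern in Python. This is illustrative only, not Codex internals: `run_agent_turn` and `goal_is_achieved` are invented stand-ins for the real agent step and completion check.

```python
# Illustrative sketch of a "keep the goal alive" loop, not Codex internals.
# run_agent_turn and goal_is_achieved are hypothetical stand-ins.

def run_agent_turn(goal: str, history: list) -> str:
    """Stand-in for one agent step; a real loop would call the model here."""
    return f"progress after turn {len(history) + 1}"

def goal_is_achieved(goal: str, result: str) -> bool:
    """Stand-in completion check; real goals need a verifiable terminal state."""
    return "turn 3" in result  # pretend the goal completes on turn 3

def run_goal(goal: str, max_turns: int = 100) -> int:
    """Keep re-running the agent against the same goal until it is achieved."""
    history: list = []
    for _ in range(max_turns):
        result = run_agent_turn(goal, history)
        history.append(result)
        if goal_is_achieved(goal, result):
            return len(history)  # turns it took
    raise RuntimeError("turn budget exhausted before goal was achieved")

turns = run_goal("fix the failing eGPU driver tests")
```

The key design point is that the loop, not the user, decides when to stop: the objective stays pinned across turns, and the exit condition is a check on the work itself rather than the model deciding it has said enough.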


Why None of This Is Obvious From the Announcements

The Chrome plugin sounds straightforward but is currently buggy. In testing, connecting Codex to Chrome and asking it to read a page (@Chrome, look at futuretools.io/news and give me a breakdown of today's news) resulted in Codex opening a new tab, highlighting it, and then returning: “Chrome connected but direct page automation was blocked by an open extension UI.” It eventually fell back to a search. The plugin is working — it did take control of Chrome — but it’s not reliable yet. This was day-one behavior, so improvement is likely, but don’t build a workflow around it this week.

The pets feature is genuinely strange in context. OpenAI has been publicly talking about eliminating “side quests” — internal projects that distract from core mission work. Shipping virtual pets inside a professional coding tool in the same news cycle as that messaging is the kind of irony that multiple commentators flagged immediately. The pets are clearly a team-built Easter egg that made it into the release. They’re not useful. They are, however, a signal that the Codex team has some latitude to ship things for fun.

If you want to import someone else's pet rather than generating your own, there's a community site at codeex-pets.net where you can browse and import community-created pets. The angry Daario pet is one of the more popular ones. Each pet on the site provides a terminal command you can run to pull it into your Codex instance.

The /goal feature, by contrast, is not a side quest. It represents a qualitative shift in what a coding agent session means.


The Chrome Plugin in Practice

The setup flow is: Plugins → install Chrome extension → confirm in Chrome → Codex shows “connected.” From there, you reference the browser in chat with @Chrome.

What it’s supposed to do: open pages, read content, interact with elements, and return structured information back to your Codex session. What it actually does right now: opens tabs, highlights them, and occasionally gets blocked by extension UI conflicts. The fallback behavior (using search instead of direct page automation) means you don’t get a hard failure — you get a degraded result.

Browser control from a coding agent is genuinely useful when it works. The ability to have Codex read a live page, extract data, and immediately use that data in code it’s writing is a meaningful capability. The current implementation isn’t there yet, but the architecture is in place.

For comparison, Cursor added /orchestrate around the same time — a skill that “recursively spawns agents to tackle ambitious tasks with the Cursor SDK.” Both tools are clearly racing toward the same destination: agents that can reach outside the editor and interact with the broader computing environment. The Chrome plugin is Codex’s first move in that direction.


How the Pet System Actually Works


The /pet command wakes whatever pet is currently active. Out of the box, you get a small robot character. If you want something different, the path is:

  1. Go to Plugins → Skills → type hatch → install the “hatch pet” skill
  2. Start a new chat and type /hatch
  3. Describe what you want (e.g., “a cartoon wolf character”)
  4. Wait — this takes approximately 30 minutes
  5. When it finishes, do NOT type /pet yet — it won’t load the new pet automatically
  6. Go to Settings → Appearance → Pets → Custom Pets → Refresh → select your pet → click “Wake Pet”
  7. Now /pet will load your custom character

The process is not intuitive. Codex itself will give you wrong instructions about where to find the settings (it says “personalization” when the correct path is “appearance”). The pet persists outside the Codex window once active, which is either charming or annoying depending on your disposition.

The community site at codeex-pets.net is a faster path if you just want something interesting. You can browse pets like the Doom Guy, Alan Turing, or various animated characters, click into any of them, and get a terminal command to import them directly. The angry Daario has multiple variants — “yelling Daario” and “angry Daario” both appear in search results on the site.

This is worth knowing about if you use Codex for long sessions and want something ambient. It’s not worth knowing about if you’re evaluating Codex as a tool.

Interestingly, Claude Code shipped a similar feature, the /buddy command, around the same period. Anthropic's version goes further, with Tamagotchi-style rarity, stats, and species mechanics built into its pets.


The /goal Feature: What It Is and Why It Matters

/goal is the feature that deserves extended attention. The basic mechanic: you type /goal followed by a task description, and Codex works on it continuously across turns without requiring you to re-prompt. It doesn’t stop when it hits a natural pause point. It keeps going.

A16Z’s Andrew Chen tried it on a low-level eGPU + Mac device driver project — not a toy task — and reported it running for 14 hours overnight, still making progress. “Naturally unattended 24/7 LLM use will be several magnitudes bigger than me prompting actively over a normal workday,” he wrote.

Alex Finn’s assessment was blunter: “/goal is the biggest advancement in AI coding this year and it isn’t even close. It allows your AI agent to quite literally work for days without stopping.”

That’s a strong claim. But the 14-hour device driver run is a concrete data point, not a benchmark. Device driver work is low-level, iterative, and requires sustained context — exactly the kind of task where session-based AI tools break down because you lose thread between prompts.


The shift here is architectural, not just cosmetic. Session-based AI coding means you prompt, review, prompt again. The cognitive overhead of re-establishing context each time is real. /goal moves toward a model where you define the objective once and the agent manages its own continuity. That’s closer to how you’d work with a human contractor than a chatbot.

This connects to a broader pattern in how agentic tools are evolving. Platforms like MindStudio handle multi-agent orchestration across 200+ models with 1,000+ integrations — the infrastructure problem of chaining agents and keeping them on task is one the whole industry is working on. /goal is OpenAI’s answer for the single-agent, single-session version of that problem inside Codex specifically.


How to Use /goal Without Wasting It

The biggest mistake people make with /goal is writing the prompt themselves. Alex Finn’s meta-prompting technique is worth following exactly:

  1. Open any AI that has context on your project
  2. Say: “I’m working with Codex and want to use their new /goal feature. Please research the /goal feature. Then look at our project and give me three options for how we could use /goal to be maximally productive. Give me a highly detailed /goal prompt for each.”
  3. Take the best of the three prompts
  4. Go to the Codex CLI and type /goal followed by that prompt

The reasoning is sound. A hand-written /goal prompt tends to be too vague — it produces results that might as well have come from a normal prompt. The meta-prompting step forces specificity: the AI researching /goal will understand what kinds of objectives the feature handles well (persistent, inspectable, verifiable goals) and will write prompts that match that shape.
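To make "too vague" concrete, here is an invented before-and-after. Neither prompt comes from the post, and the file and endpoint names are made up; they're only meant to show the shape a /goal objective should have.

```
# Too vague: reads like a normal prompt, no terminal state:
/goal make the API faster

# /goal-shaped: persistent, inspectable, verifiable:
/goal Reduce p95 latency of GET /search below 200ms. Profile first,
commit each optimization separately, and verify by re-running
scripts/bench.sh after every change. Stop only when the benchmark
reports p95 < 200ms on three consecutive runs.
```

The second prompt gives the agent a measurable exit condition and a way to check its own progress, which is exactly what a multi-hour autonomous run needs.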

The AI Daily Brief host tried this with GPT-5.5, asking it to research /goal and then narrow down to projects in his own stack. GPT-5.5 responded: “Yes, this is a real /goal-shaped idea, but only after you separate two things. Building the system is a normal Codex project, but running the system every day against the new episode can become a /goal project. The key is: can the objective be made persistent, inspectable, and verifiable.”

That’s a useful heuristic. /goal is well-suited to tasks that have a clear terminal state, can be verified programmatically, and benefit from sustained iteration rather than a single large generation. It’s not well-suited to open-ended exploration or tasks where the definition of “done” shifts.

For anyone thinking about how this fits into a larger build — say, you’re using /goal to generate a backend and then want to wire it into a full-stack application — tools like Remy take a complementary approach: you write an annotated markdown spec, and it compiles a complete TypeScript backend, SQLite database, auth, and deployment from that spec. The spec is the source of truth; the generated code is derived output. That’s a different abstraction layer than /goal, but they’re solving adjacent problems.


What the Three Features Tell You About Codex’s Direction

Taken together, the Chrome plugin, pets, and /goal aren’t random. They sketch a direction.

The Chrome plugin is about expanding the agent’s reach beyond the editor. Right now it’s buggy, but the intent is clear: Codex should be able to interact with the browser as a first-class tool, not just generate code that interacts with browsers.


The pets are a morale artifact. They signal that the Codex team is shipping quickly and has room to be playful. That’s not nothing — teams that ship pets also tend to ship features fast.

The /goal feature is the one that changes the category. If you’re evaluating whether Codex is a “coding assistant” or a “coding agent,” /goal is the clearest evidence that OpenAI is building toward the latter. A 14-hour autonomous run on a device driver project is not a demo. It’s a proof of concept for a different kind of tool.

The comparison between GPT-5.4 and Claude Opus 4.6 on agentic tasks is worth reading if you’re deciding which model to anchor your coding workflow on — the /goal feature runs on Codex specifically, but the underlying model capabilities matter for how far it can get on complex tasks before needing intervention.

For anyone building autonomous workflows, the /goal pattern also connects to how persistent memory and goal-tracking work in agent systems more broadly — the ability to maintain state across sessions is the same problem /goal is solving at the task level.


The Honest Assessment

The Chrome plugin: promising, not ready. Use it experimentally, not in production.

The pets: genuinely silly, surprisingly fun, completely optional. The community site at codeex-pets.net is a nice touch. The 30-minute generation time for a custom pet is too long for what you get.

The /goal feature: this is the one. If you use Codex for anything beyond quick code generation — if you have multi-hour tasks, complex refactors, or projects that require sustained iteration — /goal changes the math on what you can hand off. The meta-prompting technique for generating the initial prompt is not optional; it’s the difference between /goal working and /goal producing the same output as a normal prompt.

The irony of shipping virtual pets the same week OpenAI announced it was cutting side quests will not be lost on anyone paying attention. But the /goal feature is the opposite of a side quest. It’s the main thread.

Presented by MindStudio
