How to Write a Codex /goal Prompt That Actually Works: The Meta-Prompting Technique in 5 Minutes
Writing a good /goal prompt for Codex is harder than it looks. This meta-prompting technique uses another AI to generate your /goal prompt — and it works.
Writing a Good /goal Prompt Takes 5 Minutes If You Do It This Way
Most people’s first /goal prompt looks something like this: /goal build me a REST API with authentication. Then they watch Codex spin for a while and produce something that’s roughly what a normal prompt would have given them. Not bad, but not the 14-hour autonomous coding session people are talking about.
The problem isn’t /goal. The problem is the prompt.
Alex Finn, who has been running /goal experiments publicly, put it directly: “/goal is useless if you don’t use it properly. I found basically any prompt I hand-write after /goal is never good enough.” His fix — and the one this post is about — is a meta-prompting technique: ask another AI to research the /goal feature and your specific project, then generate three detailed /goal prompts, then take the best one into Codex CLI. The whole setup takes about 5 minutes. The results are meaningfully different.
This post walks through exactly how to do that.
What You’re Actually Getting Out of This
Before the steps, it helps to understand what /goal is doing differently from a normal Codex prompt.
Philip Corey, who works on Codex at OpenAI, describes /goal as “our take on the Ralph loop — keep a goal alive across turns, don’t stop until it’s achieved.” A regular prompt runs once. /goal persists. It keeps iterating, hitting blockers, working around them, and continuing until the objective is complete or you stop it.
A16Z’s Andrew Chen ran it on a low-level eGPU + Mac device driver project — not a tutorial project, a real one — and it ran for 14 hours overnight, still making progress in the morning. That’s a qualitatively different thing from a session-based coding assistant.
The catch is that /goal needs a goal that’s actually shaped for persistence. It needs to be specific enough that Codex can verify progress, broad enough that it doesn’t terminate after one file, and structured so that Codex knows what “done” looks like. That’s hard to write by hand, especially for a project you’re deep in. You’re too close to it. You’ll either under-specify or over-specify.
That’s exactly what the meta-prompting technique solves. You use an AI that can look at your project with fresh eyes, research how /goal works, and generate prompts that are actually shaped for the feature.
What You Need Before Starting
You need two things:
Codex CLI with /goal available. This is the terminal-based Codex, not the web interface. The /goal feature runs in the CLI. If you haven’t installed it, the OpenAI Codex documentation covers setup. You’ll also want the relevant skills installed — if you’re doing anything involving image generation or browser control, install those skills first (more on that in troubleshooting).
A second AI with context about your project. This is the AI you’ll use to generate your /goal prompts. It can be ChatGPT (GPT-5.5 or better works well here), Claude, or any model that already has conversation history about your project. The key is that it either already knows your codebase or you can quickly give it context. If you’ve been building something in Claude Code, that’s a natural choice — it already has the project context. If you’re using ChatGPT, paste in your README, your current file structure, or a quick description of what you’re building.
That’s it. No special setup beyond what you’d need to run Codex normally.
The Technique, Step by Step
Step 1: Give your second AI the /goal research task
Open a conversation with whichever AI has context on your project. Give it this prompt, adapted to your situation:
I’m working with OpenAI Codex CLI and want to use their /goal feature. Please research how /goal works — specifically what kinds of tasks it’s well-suited for, how it maintains persistence across turns, and what makes a good /goal prompt versus a weak one. Then look at our project [describe your project briefly, or reference the existing conversation context] and give me three options for how we could use /goal to be maximally productive. For each option, write a highly detailed /goal prompt I could actually run.
The research step matters. You’re not just asking for prompts — you’re asking the AI to understand the feature first, then apply that understanding to your specific project. The output is usually three prompts that vary in scope: one ambitious, one focused, one somewhere in between.
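To make “highly detailed” concrete, here is the shape of output you’re hoping for. Every detail in this example is invented, for a hypothetical transcript-processing project (you’d paste it after /goal in Step 3):

Build and run a pipeline that converts every transcript in transcripts/ into structured insights. For each transcript, produce out/<name>.json containing a list of objects with "insight", "source", and "timestamp" fields. The goal is complete only when every transcript has a matching output file, scripts/verify_output.py exits 0, and a per-file summary is written to out/report.md. If a transcript fails to parse, log it to out/errors.log and continue instead of stopping.

Notice what this invented example has that “build me a pipeline” doesn’t: a concrete artifact, a machine-checkable completion state, and instructions for what to do when something breaks.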
Now you have three /goal prompts written by an AI that understands both the feature and your project.
Step 2: Evaluate the three prompts
Don’t just take the first one. Read all three and ask yourself:
- Is the goal verifiable? Can Codex tell when it’s done?
- Is there a clear output format or artifact? (A file, a passing test suite, a deployed endpoint — something concrete.)
- Is it persistent by nature? A goal that terminates after one function isn’t a /goal task. A goal that requires building a system, running it against data, and iterating is.
- Does it match your actual priority right now?
The AI Daily Brief host ran this exact process with GPT-5.5, asking it to research /goal and then narrow down to his specific projects. When he described a project that would take episode transcripts and turn them into chunked insights for knowledge workers, GPT-5.5 responded: “Yes, this is a real /goal-shaped idea, but only after you separate two things. Building the system is a normal Codex project, but running the system every day against the new episode can become a /goal project. The key is: can the objective be made persistent, inspectable, and verifiable.”
That kind of feedback — “this part is /goal-shaped, this part isn’t” — is exactly what you’re trying to get. If your second AI gives you that kind of nuance, you’re on the right track.
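In practice that split might look like this (both prompts invented): the build half is a normal Codex prompt, “Build a pipeline that chunks episode transcripts into tagged insights and writes them to out/”. The /goal half wraps the system in persistence: “Each day, fetch the newest episode transcript, run the chunking pipeline on it, validate the output against the schema, and fix whatever breaks until validation passes.” The first terminates when the code exists; the second has a reason to keep running.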
Now you have one /goal prompt that’s actually shaped for the feature.
Step 3: Run it in Codex CLI
Open your Codex CLI, navigate to your project directory, and type:
/goal [paste your chosen prompt here]
Then leave it. That’s the point. /goal is designed to run unattended. If you’ve enabled the image generation skill before running (relevant if your project involves generating assets), Codex will use it autonomously. If you’ve installed the Chrome plugin for browser control, it can use that too — though as of the current release, the Chrome plugin is still a bit buggy. During testing, it returned “Chrome connected but direct page automation was blocked by an open extension UI,” so if you’re relying on browser control, close any open extension UIs first.
For most coding tasks, you don’t need any extra skills. Just run the prompt and let it work.
Now you have an autonomous agent working on your project with a prompt that was actually designed for persistence.
Step 4: Check in, don’t interrupt
The temptation is to hover. Resist it.
/goal is doing something different from a normal session — it’s maintaining state across turns, hitting blockers, and finding its own way around them. Interrupting it mid-run often resets that context. Check in after a meaningful chunk of time (30 minutes minimum, longer for complex projects), read what it’s done, and decide whether to let it continue or redirect.
If you’re building something where the output is a running system rather than a static file, verify at each checkpoint that the system actually runs. /goal can make confident progress in the wrong direction if verification criteria aren’t built into the prompt. This is another reason the meta-prompting step matters — a well-written /goal prompt includes verification steps.
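What “verification steps” means concretely: continuing the invented transcript project from Step 1’s example, the /goal prompt could point Codex at a checkpoint script like this one. The paths and required fields are hypothetical:

```python
# scripts/verify_output.py -- hypothetical checkpoint script a /goal
# prompt can instruct Codex to run; the artifact layout and required
# fields are invented for this example.
import json
import sys
from pathlib import Path

OUT_DIR = Path("out")
REQUIRED_FIELDS = {"insight", "source", "timestamp"}

def main() -> int:
    files = sorted(OUT_DIR.glob("*.json"))
    if not files:
        print("FAIL: no output files in out/")
        return 1
    for path in files:
        data = json.loads(path.read_text())
        if not isinstance(data, list) or not data:
            print(f"FAIL: {path} is empty or not a list")
            return 1
        # Entries missing any required field count as malformed.
        bad = [i for i, item in enumerate(data) if not REQUIRED_FIELDS <= set(item)]
        if bad:
            print(f"FAIL: {path} has {len(bad)} malformed entries")
            return 1
    print(f"OK: {len(files)} files, all entries well-formed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The specifics don’t matter; what matters is that “done” becomes something Codex can run rather than something it has to judge.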
Now you have a workflow: meta-prompt → evaluate → run → verify.
Where Things Go Wrong
The prompt is too vague. “Build me a web app” is not a /goal prompt. It has no verifiable completion state. The AI will either terminate early or wander. The meta-prompting technique usually catches this because the second AI will push back or generate something more specific, but if all three of your generated prompts feel vague, go back and give the second AI more project context.
The Chrome plugin blocks progress. If your /goal task involves browser control and Codex gets stuck, check for open extension UIs in Chrome. The plugin is functional but has known issues at this stage — it’s new. For tasks that don’t require browser control, skip the Chrome plugin entirely.
/goal terminates early. Sometimes Codex decides the goal is complete before you think it is. This usually means the completion criteria in your prompt were too easy to satisfy. Go back to your second AI, share what happened, and ask it to rewrite the prompt with more explicit success criteria.
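The difference is usually specificity. “Process the transcripts” is satisfiable after one file. “The goal is complete only when every file in transcripts/ has a matching out/<name>.json and scripts/verify_output.py exits 0” (reusing the invented example from above) leaves no early exit.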
The second AI doesn’t know enough about /goal. If your second AI is working from outdated training data, it might not know what /goal is. In that case, paste in a brief description: “/goal is a Codex CLI feature that keeps a goal alive across turns and doesn’t stop until it’s achieved — it’s designed for long-running autonomous tasks.” Then ask for the prompts. The research step is about giving the AI enough context to generate useful output, not about it having memorized the feature.
You’re trying to use /goal for a one-shot task. Some things are just prompts. If you want Codex to write a single function, use a normal prompt. /goal is for tasks that benefit from iteration — building a system, running it against real data, fixing what breaks, repeating. If your task doesn’t have that shape, /goal won’t add much.
Where to Take This Further
The meta-prompting technique generalizes. It’s not just for /goal — it’s a pattern for any time you’re using a feature that rewards well-structured input. The idea is: use one AI to understand the feature and your project, then use that understanding to generate better input for the tool you’re actually using.
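If you run this pattern often, it reduces to a small script. Here’s a minimal sketch using the OpenAI Python SDK; the model name, template wording, and example arguments are all placeholders:

```python
# Sketch of the generalized meta-prompting pattern: one model researches
# a feature, then drafts structured input for the tool you actually use.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

TEMPLATE = """\
I'm using a tool feature and want well-structured input for it.
Feature: {feature}
Project context: {project}
First, explain what makes strong input for this feature versus weak input.
Then give me three options for using it productively on this project,
each with a fully written prompt I could actually run."""


def generate_prompts(feature: str, project: str, model: str = "gpt-4o") -> str:
    """Return the model's three candidate prompts as raw text."""
    response = client.chat.completions.create(
        model=model,  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": TEMPLATE.format(feature=feature, project=project)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_prompts(
        feature="Codex CLI /goal: keeps a goal alive across turns until achieved",
        project="a pipeline that turns podcast transcripts into tagged insights",
    ))
```

Swap the feature description and the same function drafts structured input for any tool that rewards it.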
If you’re interested in similar patterns for Claude Code, the Claude Code agentic workflow patterns post covers five workflow patterns including loops and multi-step orchestration — some of which map directly to what /goal is doing in Codex. Cursor also added /orchestrate around the same time as /goal, which “recursively spawns agents to tackle ambitious tasks with the Cursor SDK” — a similar idea from a different direction.
For the meta-prompting step itself, the quality of your second AI matters. If you’re working on a project that lives in a specific codebase, an AI with persistent memory of that codebase will generate better /goal prompts than one you’re briefing cold. The Claude Code AutoResearch self-improving skills post covers how to build that kind of persistent context — worth reading if you’re doing this regularly. And if you’re evaluating which model to use as your second AI for the meta-prompting step, the GPT-5.4 vs Claude Opus 4.6 comparison breaks down how the two perform on exactly this kind of structured reasoning task.
One thing worth thinking about as /goal becomes more capable: the prompts you write for it are starting to look less like instructions and more like specs. You’re describing a system’s behavior, its completion criteria, its edge cases. That’s a different kind of writing than a normal prompt. Tools like Remy take this further — you write an annotated markdown spec and it compiles a full-stack TypeScript application from it, treating the spec as the source of truth and the code as derived output. The direction is similar: the more precisely you can describe what you want, the more the tool can do autonomously.
For teams thinking about how to build agents that run workflows like this at scale, MindStudio handles the orchestration layer — 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows — which is useful when /goal-style autonomous work needs to connect to business systems rather than just a local codebase.
The /goal feature enables genuinely new behavior. Not because it’s technically unprecedented, but because it changes the relationship between you and the tool — from “I prompt, it responds” to “I set a direction, it works.” Getting that direction right is the whole job now. The meta-prompting technique is the fastest way to do that well.
If you run this and get interesting results, the community around Codex is active — the /goal experiments are happening in the usual places. Worth sharing what you find.