One Markdown File Controls Your Entire AI Second Brain — Here's How agents.md Works
A single agents.md file governs every AI action in your Obsidian vault. Edit it like a note and your agent behavior changes instantly.
The Single File That Controls Everything Your AI Second Brain Does
agents.md is a plain-text markdown file that sits in the root of your Obsidian vault and governs every action your AI agent takes. Not a config file in some hidden directory. Not a settings panel behind a login. A markdown file you can open, read, and edit in the same app you use to browse your notes.
That design choice — making the control plane human-readable and directly editable — is what separates this architecture from most “AI on your notes” setups. You don’t touch code to change agent behavior. You edit a note.
Here’s how it works, what’s actually inside it, and why the architecture is more interesting than it first appears.
What agents.md Actually Is
The system comes from Andrej Karpathy’s LLM Wiki on GitHub, which lays out a minimal architecture for a self-building knowledge base in markdown. The vault structure is deliberately sparse: a /raw folder for immutable source material, a /wiki folder for AI-generated pages, plus three root-level files — agents.md, index.md, and log.md.
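The layout described above is just five entries at the vault root. Sketched as a tree (the comments are my gloss, not part of the spec):

```
vault/
├── raw/        # immutable source material (clipped articles, transcripts)
├── wiki/       # AI-generated pages
├── agents.md   # the agent's operating instructions
├── index.md    # table of contents the agent maintains
└── log.md      # append-only record of agent actions
```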
agents.md is the one that does the work. It’s a prompt file. When you open a chat in Codeex pointed at your second brain project folder, the agent reads agents.md first and uses it as its operating instructions. Every decision the agent makes — how to process a new file, how to respond to a journal entry, how to update the CRM — is governed by what’s written in that file.
The initial version gets built by prompting Codeex: “Build out the wiki architecture based on Karpathy’s LLM wiki here [GitHub URL]. The current second brain folder is the folder that Obsidian is connected to. It is currently empty.” Codeex generates the folder structure and populates agents.md with the default operating rules. From that point on, you own the file.
What the Default agents.md Contains
Out of the box, agents.md defines two operations.
Ingest: When you add a source and ask the agent to process it, it reads the source from /raw, creates or updates wiki pages, updates relevant entity and concept pages, updates index.md, and appends an entry to log.md.
Query: When you ask a question, it checks the vault index, pulls relevant wiki pages, answers from captured content, and adds any new reusable synthesis back into the wiki.
That’s the whole default loop. Simple enough to read in two minutes. Specific enough that the agent behaves consistently across sessions.
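A minimal agents.md following that shape might read like this — illustrative wording, not the exact generated file:

```markdown
# Agent Operating Instructions

## Ingest
1. Read the new source file from `/raw`.
2. Create or update wiki pages covering its content.
3. Update any related entity and concept pages.
4. Update `index.md` with new or changed pages.
5. Append an entry to `log.md`.

## Query
1. Check `index.md` for relevant wiki pages.
2. Answer from captured content in `/wiki`.
3. Add any new reusable synthesis back into the wiki.
```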
The interesting part is what happens when you change it.
Editing agents.md Changes Agent Behavior Immediately
This is the non-obvious thing about the architecture. Because agents.md is just text, and because the agent reads it at the start of every interaction, any edit you make takes effect on the next run. No redeployment. No restart. No code change.
The video demonstrates this directly. After the initial setup, the author wanted processed files moved out of /raw into /raw/processed so the folder doesn’t accumulate indefinitely. The fix: open agents.md in Obsidian, find the ingest operation, add a step 6 — “Move the source file from the root raw directory to raw/processed.”
Next time the agent processes a file, it moves it. Done.
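In the file itself, that fix is a single appended line under the ingest operation (sketch, assuming the numbered-list format above):

```markdown
## Ingest
...
5. Append an entry to `log.md`.
6. Move the source file from the root raw directory to `raw/processed`.
```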
Same approach for a YouTube channel name issue. The agent was adding channel names to generated wiki pages instead of to the original source files. Fix: find the relevant line in agents.md, change “add the channel name to the generated wiki page front matter” to “add the channel name to the original source page front matter.” One sentence changed. Behavior changed.
You can also make these edits by prompting Codeex directly — “update the agents.md file to also cross-link any wiki pages generated or updated back to the original source page” — and it will modify the file for you. Either way, the file is the source of truth. The agent reads it; you edit it.
This is a meaningful design constraint. If you want to understand what your agent does, you read one file. If you want to change what it does, you edit one file. There’s no hidden logic elsewhere.
Adding New Capabilities by Extending the File
The real test of this architecture is whether it scales gracefully when you add new behaviors. The answer, based on the video, is yes — with some caveats.
The author adds two major capabilities after the initial wiki setup: a journal system and a CRM. Both get added by extending agents.md with new rule sections.
Journal rules: If a chat starts with the prefix journal, the agent treats the input as a journal entry rather than a wiki query. It saves the full conversation as a new markdown file in /journal, creates a short title from the content, uses the date and title as the filename, updates a journal index file, and logs the entry in log.md. Critically, the response is grounded in the wiki, past journal entries, and the CRM — not just the current message.
The prompt that generates these rules is worth reading in full because it shows how specific you need to be: “Your response to my journal entry should be grounded in content from the wiki in the same way you view the index and respond to my chat questions based on what’s in the wiki. Provide advice and insights to my journal entries based on what’s available in the wiki as well as your own LLM knowledge.”
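Rendered as an agents.md section, the journal rules might look like this — a sketch; the filename pattern `YYYY-MM-DD-title.md` is my assumption for "date and title as the filename":

```markdown
## Journal
- If a chat starts with the prefix `journal`, treat the input as a journal
  entry, not a wiki query.
- Save the full conversation as a new markdown file in `/journal`.
- Create a short title from the content; name the file `YYYY-MM-DD-title.md`.
- Update the journal index file and append an entry to `log.md`.
- Ground the response in the wiki, past journal entries, and the CRM, plus
  general LLM knowledge.
```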
CRM rules: If you tell the agent you’re giving it CRM information, it either creates a new file in /CRM named after the person or updates an existing one. CRM files are always named by person. There’s an index file in /CRM listing contacts alphabetically with a short bio. The agent can answer questions about contacts by querying that index.
The prompt: “If I tell you I’m giving you information for the CRM, either update the person in the CRM or add the person to the CRM. CRM files should always be a person’s name.”
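As an agents.md section, that might expand to something like the following — the index filename `/CRM/index.md` is my assumption; the source only says there is an index file in /CRM:

```markdown
## CRM
- If I say I'm giving you CRM information, create a new file in `/CRM` for
  the person, or update their existing file.
- CRM files are always named after the person.
- Maintain `/CRM/index.md`: contacts listed alphabetically, each with a
  short bio.
- Answer questions about contacts by querying that index.
```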
Both of these capabilities live entirely in agents.md. The agent knows to handle journal entries differently from wiki queries because agents.md says so. The agent knows to create CRM files with person names because agents.md says so. Remove those sections from the file and the behaviors disappear.
This is the architecture’s core bet: that a well-written prompt file is a sufficient control plane for a personal knowledge agent. For a single-user system with a defined set of operations, that bet holds up.
The Automation Layer and Why agents.md Scales There Too
Once you have agents.md defining the behavior, you can hand that behavior to an automated runner and it works without modification.
Codeex has an Automations feature. You create a new automation, set it to run hourly, point it at the second brain project, and give it a simple trigger: “If there are any unprocessed files inside the raw directory, please process them.” The model recommendation from the video is GPT-5.5 on high reasoning — use the strongest model you have available, since this runs unattended.
The automation reads agents.md the same way a manual chat session does. So when you edited agents.md to move processed files to /raw/processed, the automation picks that up automatically on its next run. When you added the journal and CRM rules, those don’t affect the automation because the automation only triggers on unprocessed raw files — but if you later added a rule to agents.md that the automation should handle, it would.
You can extend the automation to commit and push to a private GitHub repo after each processing run: “Once everything is processed, please commit and push the current version of the directory to the main branch on GitHub.” Now you have hourly processing plus hourly backup, both governed by the same agents.md file.
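The commit-and-push step the agent performs amounts to an ordinary git sequence. Here is a self-contained sketch that stands up a throwaway local bare repo in place of the private GitHub remote (all paths and the `vault-agent` identity are placeholders for the demo):

```shell
set -e
demo=/tmp/agents-md-backup-demo
rm -rf "$demo" && mkdir -p "$demo"

# Throwaway bare repo standing in for the private GitHub remote.
git init -q --bare "$demo/remote.git"
git clone -q "$demo/remote.git" "$demo/vault"
cd "$demo/vault"
git config user.email "agent@example.com"  # placeholder identity
git config user.name "vault-agent"

# Simulate a processing run touching the vault, then back it all up.
echo "- processed clip at $(date -u +%FT%TZ)" >> log.md
git add -A
git commit -q -m "Automated processing run"
git push -q origin HEAD:main
```

The point is that the backup is just "commit everything, push to main" — the agent needs no special tooling beyond git itself.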
The Obsidian Web Clipper is the input layer for this automation. Install it as a Chrome extension, configure it to save to the /raw folder in your vault, and any YouTube video or article you clip gets dropped there. The clipper automatically pulls full YouTube transcripts — not just metadata — which means a clipped video arrives in /raw as a complete text document ready for the agent to process. The automation picks it up within the hour, generates wiki pages, moves the source to /raw/processed, and commits to GitHub. Zero manual steps after the initial clip.
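The automation's trigger condition — "any unprocessed files inside the raw directory" — reduces to checking for files at the top level of /raw that haven't been moved into /raw/processed. A sketch against a throwaway demo vault (paths are placeholders):

```shell
set -e
vault=/tmp/agents-md-raw-demo
rm -rf "$vault" && mkdir -p "$vault/raw/processed"

echo "transcript text..." > "$vault/raw/clip-one.md"       # freshly clipped
echo "transcript text..." > "$vault/raw/processed/old.md"  # already handled

# Unprocessed = files at the top level of raw/, skipping the processed/ subfolder.
find "$vault/raw" -maxdepth 1 -type f -name '*.md'
```

If that `find` prints anything, the hourly run has work to do; if it prints nothing, it's a no-op.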
For teams thinking about this kind of agent infrastructure at scale, platforms like MindStudio handle the orchestration layer differently — 200+ models, 1,000+ integrations, and a visual builder for chaining agents — but the agents.md pattern is interesting precisely because it works at the individual level with no infrastructure beyond a local folder and a GitHub repo.
The Non-Obvious Part: agents.md Is Also a Debugging Tool
When the agent does something unexpected, agents.md is the first place to look.
In the video, the agent initially created 51 files when asked to build the wiki architecture. The fix wasn’t to debug code — it was to prompt: “Please remove all the extra crap and just build what’s explicitly called for in Karpathy’s game plan.” The agent pruned back to the minimal structure. But if that kept happening, you’d open agents.md, find the ingest rules, and tighten the language.
When the agent added channel names to the wrong location, the fix was a one-line edit in agents.md. When you want to add cross-linking between wiki pages and source files, you add a step to the ingest operation in agents.md.
The file functions as both the specification and the audit trail for agent behavior. If you share your vault with someone else, they can read agents.md and understand exactly what the agent will do. That’s a property most agent systems don’t have.
This connects to a broader pattern worth paying attention to. The WAT framework — Workflows, Agents, and Tools — separates these concerns explicitly: workflows define the sequence, agents handle the reasoning, tools do the execution. agents.md collapses all three into a single readable document. That’s a tradeoff. It’s simpler to manage but harder to test individual components. For a personal knowledge system, the simplicity wins.
If you’re building something more structured — say, a spec-driven application rather than a personal knowledge base — Remy takes a similar “document as source of truth” approach but compiles annotated markdown into a complete TypeScript backend, SQLite database, auth, and deployment. The spec is the source; the code is derived output. Different use case, same underlying intuition about where the control plane should live.
What the Graph View Reveals Over Time
Obsidian’s graph view shows connections between notes as a visual network. When you start, it’s a handful of nodes with sparse links. After a few weeks of clipping and processing, it becomes something genuinely useful — a map of how concepts in your saved content relate to each other.
The connections form because agents.md instructs the agent to cross-link related wiki pages and link back to source files. Every time the agent processes a new clip, it doesn’t just create a page for that content — it finds existing wiki pages that relate to it and adds links. The graph grows organically based on what you save.
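Mechanically, the cross-linking is just appending Obsidian-style `[[wikilinks]]` in both directions. A minimal sketch with hypothetical file names in a throwaway vault:

```shell
set -e
vault=/tmp/agents-md-link-demo
rm -rf "$vault" && mkdir -p "$vault/raw" "$vault/wiki"

printf '# Clip transcript\n' > "$vault/raw/some-clip.md"
printf '# Second Brains\n'   > "$vault/wiki/second-brains.md"

# Wiki page links back to its source; source links forward to the wiki page.
printf '\nSource: [[raw/some-clip]]\n'    >> "$vault/wiki/second-brains.md"
printf '\nWiki: [[wiki/second-brains]]\n' >> "$vault/raw/some-clip.md"
```

Obsidian's graph view reads those `[[...]]` links to draw the edges, so every bidirectional pair the agent writes becomes a visible connection.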
This is the Andrej Karpathy LLM wiki architecture working as intended: not a flat dump of transcripts, but an interconnected structure where clicking into a concept shows you every source that touched it. The journal layer adds another dimension — your own thinking, grounded in that structure, saved alongside it.
For anyone building on top of this pattern, the AI second brain with Claude Code and Obsidian approach is worth comparing. Claude Code and Codeex are both viable as the processing layer; the agents.md architecture works with either. The personal productivity agent patterns are also relevant if you want to extend beyond wiki and journal into task management or calendar integration.
How to Start
The minimal setup: install Obsidian (free), create a new vault, open the vault folder in Codeex as a project, and run the Karpathy wiki build prompt. You’ll have agents.md, index.md, log.md, and the /raw and /wiki folders in under ten minutes.
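If you wanted to create the skeleton by hand rather than via the build prompt, it's five filesystem operations — the path below is a stand-in for your actual Obsidian vault folder:

```shell
set -e
vault=/tmp/agents-md-vault-demo   # stand-in for your real vault path
rm -rf "$vault"
mkdir -p "$vault/raw" "$vault/wiki"
touch "$vault/agents.md" "$vault/index.md" "$vault/log.md"
ls "$vault"
```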
Install the Obsidian Web Clipper Chrome extension, point it at your vault’s /raw folder, and clip something. Then ask Codeex to process the files in /raw. Read what it generates. Then open agents.md and read what it says.
That’s the whole mental model. The file tells the agent what to do. You edit the file to change what the agent does. The automation runs the agent on a schedule. The graph grows.
The thing I find most interesting about this architecture is that it makes agent behavior legible without making it rigid. You can read agents.md and know exactly what will happen. You can change it in thirty seconds. And because it’s markdown in Obsidian, there’s no friction between “understanding the system” and “modifying the system.” They’re the same action.