How to Build a Self-Learning AI Skill System with a Learnings.md File and Wrap-Up Skill
Learn how to build a Claude Code skill system that captures what worked, what failed, and improves automatically after every session.
Why Your AI Coding Sessions Keep Starting From Zero
Every developer who uses Claude Code or a similar AI coding agent has hit the same wall: the agent nails something brilliant in one session, and by the next session it has no idea it ever figured that out. You re-explain the same context. You re-debug the same patterns. You re-establish the same conventions. The agent isn’t getting smarter over time — it’s just getting another clean slate.
The fix is a self-learning AI skill system built around two components: a Learnings.md file that accumulates knowledge across sessions, and a Wrap-Up Skill that captures what the agent learned before the session ends. When you wire these together inside a Claude workflow, you get an agent that actually improves the more you use it.
This guide walks through exactly how to build that system — from structuring the file to writing the Wrap-Up Skill to integrating it all with Claude Code so the feedback loop runs automatically.
What Makes a Skill System “Self-Learning”
Most AI agents are stateless by design. They process a prompt and return a response, then forget everything. Claude Code has some session persistence, but it doesn’t automatically carry forward what it discovered about your codebase, your preferences, or what approaches failed.
A self-learning skill system changes that by treating every session as a training opportunity. Here’s the basic model:
- Session starts — The agent reads a `Learnings.md` file to load context from previous sessions.
- Session runs — The agent works on tasks, encounters problems, finds solutions, builds patterns.
- Session ends — A Wrap-Up Skill analyzes the session and writes new learnings back to the file.
- Next session starts — The agent reads the updated file and is slightly better informed than before.
The loop compounds. What starts as a mostly empty file gradually becomes a rich reference document that shapes how the agent approaches your specific project, codebase, and working style.
This isn’t magic — it’s structured memory management. The “intelligence” comes from consistently capturing and surfacing the right information at the right time.
Designing the Learnings.md File
The quality of your self-learning system depends heavily on how well-structured the Learnings.md file is. A messy file produces messy context. A well-organized file gives the agent clear, actionable knowledge to work with.
Core Sections to Include
Here’s a proven structure that covers the most useful categories of learning:
```markdown
# Project Learnings

## What Works
<!-- Approaches, patterns, and solutions that have proven effective -->

## What Doesn't Work
<!-- Failed approaches, dead ends, antipatterns to avoid -->

## Codebase Patterns
<!-- Project-specific conventions, architecture decisions, naming patterns -->

## Tool & Library Notes
<!-- Quirks, gotchas, and useful behaviors discovered about dependencies -->

## Recurring Errors & Fixes
<!-- Common errors encountered and their solutions -->

## Session Notes
<!-- Timestamped brief summaries of what each session accomplished -->

## Open Questions
<!-- Things that need more investigation or were left unresolved -->
```
Keep Entries Concrete and Actionable
Vague entries are useless. Compare these two:
Vague: “Promises can be tricky.”
Useful: “Using Promise.all() on the data ingestion pipeline times out after 30 items. Switch to Promise.allSettled() with batching (max 10 at a time) for that module.”
Every entry in Learnings.md should be specific enough that an agent reading it cold knows exactly what to do or avoid without needing to re-investigate.
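The batching pattern that useful entry describes can be sketched in a few lines. The `worker` function and the batch size are illustrative stand-ins for whatever the real pipeline does:

```javascript
// Process items in fixed-size batches so no more than `batchSize`
// promises are in flight at once. Promise.allSettled never rejects,
// so one failing item can't abort the rest of its batch.
async function processInBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const settled = await Promise.allSettled(batch.map(worker));
    results.push(...settled);
  }
  return results;
}
```

An entry written at this level of specificity lets the agent reproduce the fix mechanically instead of rediscovering it.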
Timestamp Session Notes
The Session Notes section should always include the date and a brief summary of what was accomplished. This helps you (and the agent) understand the trajectory of the project over time, and makes it easy to spot when a learning has become outdated.
```markdown
## Session Notes

### 2025-01-15
Refactored the auth middleware to use JWT refresh tokens. Discovered that the `passport-jwt` version we're using doesn't handle expired tokens gracefully — had to add a manual expiry check before validation.

### 2025-01-12
Set up the test suite. Jest works fine but needs `--forceExit` flag or it hangs. Added to package.json scripts.
```
Building the Wrap-Up Skill
The Wrap-Up Skill is the mechanism that writes to Learnings.md at the end of a session. You can implement this in several ways depending on how you work with Claude Code.
Option 1: Slash Command in Claude Code
The simplest implementation is a custom slash command. In Claude Code, you can define project-specific commands in `.claude/commands/`. Create a file at `.claude/commands/wrap-up.md` with contents like:

```markdown
Review our session today and update the Learnings.md file at the project root.

Follow this process:

1. Identify 2-5 concrete things that worked well this session (solutions found, patterns that held up, efficient approaches).
2. Identify any approaches that failed or were abandoned, and why.
3. Note any new codebase patterns, conventions, or architectural decisions made.
4. Capture any library or tool quirks discovered.
5. Document any recurring errors and their fixes.
6. Add a dated entry to the Session Notes section summarizing what was accomplished.
7. Flag any unresolved questions in the Open Questions section.

Write clearly and specifically. Each entry should be actionable for someone (or an AI) reading it cold next session. Do not duplicate entries already in the file — extend or update them instead.

After updating the file, confirm what was added.
```
When you run /wrap-up at the end of a session, Claude reviews the conversation, extracts the relevant learnings, and writes them to the file.
Option 2: Automated Hook
Claude Code supports hooks that run automatically at lifecycle events. To trigger the wrap-up when Claude finishes responding, configure a `Stop` hook in your `.claude/settings.json` (or `settings.local.json` for personal configs):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node scripts/wrap-up-trigger.js"
          }
        ]
      }
    ]
  }
}
```
The trigger script can then call Claude with the wrap-up prompt programmatically. This approach is more hands-off but requires some scripting setup.
Option 3: Manual Wrap-Up Prompt
If you want the lightest-weight version, keep a standard wrap-up prompt in a text file and paste it at the end of each session. Less elegant, but it works. The key is discipline — if you skip the wrap-up, the system doesn’t learn.
What the Wrap-Up Skill Should Extract
Regardless of how you trigger it, the Wrap-Up Skill needs to be prompted to look for the right signals in the session. Good questions for the skill to answer include:
- What solutions did we arrive at after trying multiple approaches?
- What errors came up more than once and how were they resolved?
- Did we make any decisions about architecture or code organization?
- Were there any library or API behaviors that surprised us?
- What’s still unfinished or uncertain?
The more specific your wrap-up prompt, the more useful the output.
Integrating With Claude Code’s CLAUDE.md
Claude Code reads a CLAUDE.md file at the root of your project (and in subdirectories) to get persistent context. This is where you tell the agent to use your Learnings.md file.
Point Claude to Your Learnings File
Add a section to your CLAUDE.md like this:
```markdown
## Session Context

Before starting any work, read `Learnings.md` in the project root. This file contains accumulated knowledge from previous sessions, including:

- Approaches that have worked and should be reused
- Approaches that have failed and should be avoided
- Codebase conventions and architectural decisions
- Known errors and their fixes

Treat the contents of Learnings.md as high-confidence guidance unless explicitly told otherwise.
```
Add Session Closing Instructions
Also add a reminder at the end of your CLAUDE.md so Claude knows to run the wrap-up:
```markdown
## End of Session

When wrapping up, run `/wrap-up` to update Learnings.md with what was learned this session. Do not skip this step.
```
You can learn more about configuring persistent context in Claude Code workflows in MindStudio’s guide to Claude Code customization.
Making the Feedback Loop Actually Work
The structure above gives you the scaffolding. But a feedback loop only compounds value if it runs consistently and the entries stay useful over time. A few practices that make the difference:
Prune Outdated Learnings
Learnings go stale. If you upgrade a library, your notes about its quirks in the old version become noise — or worse, misleading guidance. Build a habit of reviewing Learnings.md monthly and removing or updating entries that no longer apply.
You can even ask Claude to do this: “Review Learnings.md and flag any entries that may be outdated given the current state of the codebase.”
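You can also automate the first pass. This sketch scans Session Notes headings for dates older than a cutoff, surfacing candidates for review — the `### YYYY-MM-DD` format matches the convention used earlier; the function name is illustrative:

```javascript
// Return the dates of Session Notes entries older than `cutoff`,
// as candidates for review or pruning.
function staleSessionNotes(markdown, cutoff) {
  const dates = [...markdown.matchAll(/^### (\d{4}-\d{2}-\d{2})/gm)]
    .map((m) => m[1]);
  return dates.filter((d) => new Date(d) < cutoff);
}
```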
Use Categories for Scoped Context
If your project grows, Learnings.md can get long. Split it into scoped files — Learnings-Auth.md, Learnings-Database.md, Learnings-Frontend.md — and point Claude to the relevant one based on the task at hand. Your CLAUDE.md can dynamically reference these:
```markdown
For auth-related work, also read Learnings-Auth.md.
For database queries, also read Learnings-Database.md.
```
Track What the Agent Gets Wrong Repeatedly
One of the highest-value entries you can add to Learnings.md is a pattern of repeated mistakes. If Claude keeps reaching for the wrong solution to a particular type of problem, document that explicitly:
```markdown
## What Doesn't Work

### State management in the checkout flow
Claude repeatedly tries to use local component state here. This doesn't work because the cart data is shared across three components. Always use the Zustand store (cartStore.ts) for anything touching cart state.
```
This kind of entry saves significant re-debugging time over many sessions.
Version Control Your Learnings File
Commit Learnings.md to your repository. This lets you track how the project’s accumulated knowledge evolves, roll back if a bad wrap-up overwrites useful content, and share learnings with team members using the same codebase.
Where MindStudio Fits Into This System
The manual implementation above works well, but if you’re building more complex agents or want to push this system further, the MindStudio Agent Skills Plugin adds a lot of capability with minimal setup.
The @mindstudio-ai/agent npm SDK lets Claude Code (and any other AI agent) call over 120 typed capabilities as simple method calls. For a self-learning skill system, this opens up some useful options.
For example, instead of writing wrap-up summaries only to a local file, you could use agent.runWorkflow() to trigger a MindStudio workflow that:
- Parses the session summary
- Formats and deduplicates learnings before writing them back
- Pushes a notification to Slack with what was learned
- Logs the session summary to a Notion database for cross-project knowledge tracking
You can wire this up in your wrap-up script without managing the infrastructure:
```javascript
import MindStudio from '@mindstudio-ai/agent';

const agent = new MindStudio();

const result = await agent.runWorkflow({
  workflow: 'session-learnings-processor',
  input: {
    rawNotes: sessionSummary,
    projectId: 'my-project',
    date: new Date().toISOString()
  }
});
```
The plugin handles rate limiting, retries, and auth so you’re not building plumbing — you’re building the skill logic.
For teams working across multiple projects, this becomes especially useful. A central MindStudio workflow can aggregate learnings across projects, identify patterns, and surface insights that would be invisible if each project’s Learnings.md lived in isolation.
You can try MindStudio free at mindstudio.ai.
Common Mistakes to Avoid
Even a well-designed system can underperform if these issues aren’t addressed.
Not Running the Wrap-Up Consistently
This is the most common failure mode. The system only learns if you consistently trigger the Wrap-Up Skill at the end of sessions. If you skip it three sessions in a row because you’re in a hurry, you lose the compounding effect.
If you use hooks or automation, this is less of a risk. If you’re triggering it manually, build it into your session-ending ritual — same as committing your changes.
Writing Entries That Are Too Generic
“Use async/await carefully” is not a learning — it’s noise. Every entry should be specific enough that an agent reading it knows exactly what behavior to adopt or avoid in what context. If you can’t be that specific, hold off on writing the entry until you can.
Letting the File Get Too Long
There’s a practical limit to how much context Claude can usefully load from Learnings.md. If the file gets to 200+ entries, the signal-to-noise ratio drops. Keep it pruned, or split it into domain-specific files as described earlier.
Conflicting Entries
If your codebase evolves, you might end up with contradictory entries: one section says “always use X approach” and another says “X approach fails here.” Claude will try to reconcile these, but it will sometimes pick the wrong one. Review for conflicts regularly and resolve them explicitly.
Skipping the “What Doesn’t Work” Section
Most people naturally gravitate toward documenting successes. But negative learnings — approaches that fail, patterns to avoid, dead ends — are often more valuable. A “don’t do this” entry can save hours of re-debugging.
Extending the System for Team Use
So far this guide has focused on individual developer use. But the same architecture scales to teams with a few adjustments.
Shared Learnings Files in the Repository
If you commit Learnings.md to a shared repo, everyone on the team benefits from accumulated knowledge. New team members get a fast-track orientation to the codebase. Senior developers stop answering the same questions.
The challenge is merge conflicts when multiple people update the file. Solve this by keeping Learnings.md append-only in pull requests — each person adds entries, never edits existing ones — and designating a maintainer who periodically cleans up and consolidates.
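The maintainer's cleanup pass can be partly mechanized. A cheap first step is dropping exact-duplicate bullet lines that accumulate in append-only files — a sketch only; consolidating near-duplicates still needs human or LLM judgment:

```javascript
// Remove exact-duplicate bullet lines, keeping the first occurrence.
// Non-bullet lines (headings, prose) pass through untouched.
function dedupeBullets(markdown) {
  const seen = new Set();
  return markdown
    .split('\n')
    .filter((line) => {
      const key = line.trim();
      if (!key.startsWith('- ')) return true;
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    .join('\n');
}
```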
Team-Level Wrap-Up Conventions
Agree on a standard format for entries across the team. This is especially important for the Codebase Patterns section, where inconsistent terminology across different contributors will confuse the agent trying to use them.
Integrating With Your AI Automation Workflows
For teams using AI in production workflows, not just development, the Learnings.md pattern applies broadly. Any AI agent that runs repeatedly on similar tasks can benefit from this system. You can use MindStudio’s autonomous background agents to run wrap-up processing automatically after each workflow execution, maintaining a persistent knowledge base without any manual intervention.
Frequently Asked Questions
What is a Learnings.md file and how is it different from CLAUDE.md?
CLAUDE.md is a project configuration file that gives Claude Code persistent instructions about how to behave in your project — coding conventions, tool preferences, project-specific rules. It’s static; you update it manually.
Learnings.md is dynamic. It’s a knowledge accumulation file that grows over time as the agent discovers things through actual work. The agent writes to it (via the Wrap-Up Skill), and reads from it at the start of each session. Think of CLAUDE.md as the project’s operating manual and Learnings.md as the project’s working memory.
Does this system work with AI agents other than Claude Code?
Yes. The Learnings.md pattern is model-agnostic — it’s just structured markdown that any AI agent can read and write to. The Wrap-Up Skill needs to be adapted to whatever interface your agent uses (a prompt, a tool call, a workflow step), but the core concept applies to GPT-4-based coding assistants, LangChain agents, CrewAI agents, and others.
The integration points differ. With Claude Code, you use CLAUDE.md and slash commands. With other systems, you might inject the learnings file contents into the system prompt or pass it as tool context.
How do I prevent the Learnings.md file from getting bloated over time?
Set a quarterly review cadence. Each quarter, go through the file and:
- Remove entries that apply to code you’ve deleted or refactored
- Update entries where the approach has evolved
- Consolidate similar entries into one clearer one
- Move resolved open questions out of the `Open Questions` section
You can also ask Claude to help: “Review Learnings.md and suggest which entries are outdated, redundant, or unclear.” Use its suggestions as a starting point, but make the final call yourself — the agent doesn’t always know what’s still relevant.
Can the Wrap-Up Skill make mistakes that corrupt the knowledge base?
Yes, it can. The Wrap-Up Skill is still an LLM summarizing a session, so it can mischaracterize what happened, draw wrong conclusions, or write confusing entries. This is why human review matters.
The best practice is to treat Learnings.md as a draft that you periodically review and edit, not an authoritative source you never touch. The agent’s wrap-up saves you 90% of the work — you just need to spot-check and correct the remaining 10%.
What’s the right cadence for running the Wrap-Up Skill?
Run it at the end of every meaningful coding session — anything lasting more than 30 minutes where you encountered a problem, made a decision, or discovered something new. If you pair-programmed with Claude for an hour and found three useful patterns, that’s worth capturing.
Skip it for very short sessions (quick config edits, trivial fixes) where nothing new was discovered. The goal is signal quality, not volume.
How does this relate to retrieval-augmented generation (RAG)?
Learnings.md is a lightweight, human-readable form of persistent context — you could think of it as manual RAG without the infrastructure overhead. True RAG systems use vector embeddings to retrieve relevant chunks from large knowledge bases automatically.
For most individual projects, Learnings.md is simpler and more than sufficient. If your project’s accumulated knowledge grows large enough that loading the whole file becomes inefficient, that’s when you might want to graduate to a RAG-based approach — embedding your learnings and retrieving only the most relevant chunks per session. Anthropic’s documentation on Claude context management covers some strategies for handling long-context scenarios.
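For a sense of what "manual RAG" looks like before committing to embeddings, here is a rough keyword-overlap retriever over the file's `##` sections. It is purely illustrative and far cruder than real vector search:

```javascript
// Split Learnings.md into top-level (##) sections.
function splitSections(markdown) {
  return markdown.split(/\n(?=## )/);
}

// Score each section by how many query words it contains and return
// the top k — a crude stand-in for embedding-based retrieval.
function topSections(markdown, query, k = 3) {
  const words = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return splitSections(markdown)
    .map((s) => ({
      s,
      score: [...words].filter((w) => s.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.s);
}
```

When even this level of selectivity stops being enough, that is a strong signal it is time to graduate to embeddings.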
Key Takeaways
- A self-learning AI skill system solves the stateless session problem by creating a feedback loop between sessions.
- `Learnings.md` stores accumulated knowledge in structured, actionable categories — successes, failures, patterns, errors, and open questions.
- The Wrap-Up Skill captures what the agent learned before the session ends — it can be a slash command, automated hook, or manual prompt.
- Pointing Claude to `Learnings.md` via `CLAUDE.md` ensures every new session starts with that context loaded.
- The system only works if you run the Wrap-Up Skill consistently and keep the file pruned and accurate.
- For teams or more complex workflows, tools like the MindStudio Agent Skills Plugin let you extend the system with structured processing, multi-project aggregation, and automated notifications.
The core idea is simple: stop treating every AI session as disposable. Start treating each session as a contribution to a growing body of project knowledge. Over weeks and months, the compounding effect is significant — not because the model is changing, but because the context it works from keeps getting better.
If you want to extend this pattern into broader automation workflows, MindStudio is worth exploring. The platform makes it straightforward to build agents that handle the knowledge management layer automatically, so the learning happens without breaking your focus.