How to Build a Self-Learning Claude Code Skill with a Learnings.md File
Add a Learnings.md file to any Claude Code skill and it will capture what worked, what failed, and what to do differently — improving automatically over time.
The Problem with Stateless AI Skills
Every time you start a Claude Code session, it begins fresh. No memory of previous runs, no record of what worked, no recollection of the 20 minutes it spent debugging an API quirk you’d already figured out three weeks ago.
Claude treats each session as an independent event. That’s fine for one-off tasks. But for skills — defined, repeatable workflows you run regularly — starting from zero every time is wasteful. The skill isn’t improving. You’re compensating manually by re-explaining context, watching it repeat mistakes, and gradually stuffing your system prompt with notes that pile up into an unmanageable wall of text.
A Learnings.md file solves this cleanly. Add it to any Claude Code skill, configure Claude to read it before starting and update it after finishing, and the skill accumulates useful knowledge across sessions automatically. This guide covers the full setup.
What Claude Code Skills Actually Are
Claude Code is Anthropic’s agentic coding tool. It runs in your terminal, reads and writes files, executes commands, and works through multi-step tasks without constant direction. It’s built for sustained effort — refactoring a codebase, generating tests, auditing dependencies, writing changelogs, running custom code reviews, and similar jobs that take more than a single prompt.
A skill, in this context, is a defined, repeatable workflow you’ve configured Claude to follow for a specific task. Skills are typically set up through:
- A `CLAUDE.md` file in the project root — Claude reads this automatically at the start of every session
- A system prompt or instructions block passed at invocation
- Example inputs and expected outputs that define what “done” looks like
Skills make Claude faster and more consistent on jobs you run repeatedly. What they usually don’t include is any mechanism for learning across runs. That’s the gap Learnings.md fills.
How the Learnings.md Pattern Works
The pattern is simple: a markdown file that lives in your project and accumulates structured observations over time.
Claude reads it before starting any task. After the task is done, it adds new entries. The next run starts with those entries already in context. Over time, the file builds a record of what approaches work, what causes failures, and what quirks are specific to this codebase.
No external database, no retrieval pipeline, no vector embeddings. Just a file that the previous version of Claude left notes in for the current version to read.
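Claude Code performs both steps itself, but the mechanics are easy to see in a few lines. A minimal sketch of the read-then-append loop (the file path, entry text, and helper names are illustrative, not part of any Claude Code API):

```python
from datetime import date
from pathlib import Path

LEARNINGS = Path("Learnings.md")  # hypothetical location; match your project layout

def read_learnings() -> str:
    """Load prior entries so they can be placed in the task context."""
    return LEARNINGS.read_text() if LEARNINGS.exists() else ""

def append_learning(task_type: str, observation: str, action: str,
                    confidence: str = "low") -> None:
    """Append one structured entry in the format this article recommends."""
    entry = (
        f"\n**{date.today().isoformat()} — {task_type}**\n"
        f"- Observation: {observation}\n"
        f"- Action: {action}\n"
        f"- Confidence: {confidence}\n"
    )
    with LEARNINGS.open("a") as f:
        f.write(entry)

# Before a run: prior notes become part of the prompt context.
context = read_learnings()
# After a run: the new observation is written down for the next session.
append_learning("Test generation",
                "/legacy modules use CommonJS while the test runner is ESM",
                "Skip /legacy files unless explicitly requested")
```

That is the entire mechanism: one read at the start, one append at the end.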
Why This Works
Claude doesn’t have persistent memory between sessions, but it doesn’t need it. What it needs is access to a well-structured file where useful context has already been written down. As long as Learnings.md is part of the context window at the start of each run, the skill has continuity.
This is the same principle behind any knowledge base: write things down clearly, and you don’t need to rely on memory. The difference here is that Claude both writes and reads the file — and it can do both reliably when you configure it correctly. Research into long-context AI behavior consistently shows that models are better at applying structured, well-organized context than reconstructing knowledge from scratch.
Why Markdown Works Here
Markdown is the right format because:
- Claude reads and writes it naturally, without any special formatting overhead
- Headings, bullet points, and code blocks provide structure without requiring a rigid schema
- It’s human-readable, so you can audit and edit it yourself
- It stays in your repo and gets versioned alongside your code
Setting Up the File and Configuring Claude
Step 1: Create Learnings.md
Add a Learnings.md file to the root of your project (or wherever your skill’s configuration files live). Start with a minimal structure:
```markdown
# Learnings

## What Has Worked
- (Nothing recorded yet)

## What Has Failed
- (Nothing recorded yet)

## Patterns and Preferences
- (Nothing recorded yet)

## Open Questions
- (Nothing recorded yet)
```
You can seed it manually if you already know relevant things about the project — known API quirks, file structure constraints, build tool behavior. Claude treats manually written entries the same as ones it wrote itself.
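If you scaffold projects with a script, the starter file can be generated rather than typed each time. A small sketch, assuming the file lives at the project root (`init_learnings` and the template are hypothetical helpers, not part of Claude Code):

```python
from pathlib import Path

TEMPLATE = """\
# Learnings

## What Has Worked
- (Nothing recorded yet)

## What Has Failed
- (Nothing recorded yet)

## Patterns and Preferences
- (Nothing recorded yet)

## Open Questions
- (Nothing recorded yet)
"""

def init_learnings(path: str = "Learnings.md") -> Path:
    """Create the starter file, but never overwrite existing notes."""
    target = Path(path)
    if not target.exists():
        target.write_text(TEMPLATE)
    return target

init_learnings()
```

The existence check matters: re-running project setup should never wipe accumulated learnings.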
Step 2: Add Instructions to CLAUDE.md
Your CLAUDE.md file is where Claude looks for context at session start. Add explicit read and write instructions:
```markdown
## Memory and Learning

**Before starting any task:**
Read `Learnings.md` in full. Apply all entries under "What Has Worked"
and "Patterns and Preferences." Avoid all patterns listed under
"What Has Failed."

**After completing any task:**
Update `Learnings.md` with new observations using this format:

**[Date] — [Task type]**
- Observation: [what you noticed]
- Action: [what to do or avoid going forward]
- Confidence: [high / medium / low]

Be specific. "Avoid relative imports in /utils — the build step
resolves them incorrectly" is useful. "Be careful with imports" is not.

Do not add:
- Observations already captured in the file
- General best practices (only project-specific ones)
- Redundant restatements of existing entries
```
The specificity rule is worth emphasizing: vague entries waste context space and don’t change behavior. Every entry should be actionable.
Step 3: Make the Update Step Non-Optional
Claude will sometimes skip updating the file if it treats the task as routine or if the instruction is phrased optionally. Fix this by making it explicit and unconditional:
```markdown
You MUST update Learnings.md before ending the session.
This is required even if nothing new was discovered.
If existing patterns held, add a brief note confirming that.
```
This also creates a useful audit trail — you can see which runs were uneventful versus which produced new insights.
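If you wrap skill runs in a script, the "must update" rule can also be checked mechanically: snapshot the file before the session and compare afterward. A sketch under that assumption (`digest` and `was_updated` are illustrative names, not existing tooling):

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Hash the file contents so a no-op session is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def was_updated(path: Path, before: str) -> bool:
    """True if the file changed since the pre-session snapshot."""
    return digest(path) != before

# Usage: snapshot before invoking the skill, check after it finishes,
# and fail the run (or warn) if Learnings.md was never touched.
```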
Step 4: Force Active Reading
Getting Claude to read the file is easy. Getting it to actually use what’s in it is a bit harder. One reliable technique: have Claude summarize the learnings file at the very start of the task. Summarizing forces it to process the content rather than skim past it before getting to the interesting work.
Add a line to your instructions:
```markdown
Start each task by summarizing the current Learnings.md entries
in 3–5 bullet points. Then proceed with the task.
```
Writing Learnings That Actually Help
The value of the file depends entirely on the quality of what goes into it. Here’s what useful entries look like — and what doesn’t work.
Useful: Specific API Behavior
```markdown
**2025-06-10 — Changelog generation**
- Observation: GitHub API returns commits in reverse-chronological order,
  but pagination starts from the oldest page when using `?per_page=100`
- Action: Always sort by date descending after fetching — don't rely on
  the API to maintain order across pages
- Confidence: High
```
Useful: File Structure Constraint
```markdown
**2025-06-12 — Dependency audit**
- Observation: The vendor/ directory contains a forked version of lodash
  with custom patches applied — it's not the published package
- Action: Exclude vendor/ from all automated dependency checks
- Confidence: High
```
Useful: Workflow Failure
```markdown
**2025-06-15 — Test generation**
- Observation: Generating tests for files in /legacy causes import
  resolution errors — legacy modules use CommonJS, test runner is ESM
- Action: Skip files in /legacy unless explicitly requested;
  flag them in output with a note explaining why
- Confidence: High
```
Not Useful: Vague Observations
- Be careful with the database layer
- Tests in /legacy can be tricky
- API calls sometimes behave unexpectedly
None of these say what to be careful about, what “tricky” means, or what “unexpectedly” refers to. Claude can’t act on them, and they’re worse than nothing because they take up space.
The confidence field does real work here. Low-confidence entries mark hypotheses — things Claude noticed once that might be a pattern, or might be a one-off. High-confidence entries mark established rules. Collapsing them into the same category means Claude can’t calibrate how firmly to apply a rule.
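Because entries follow a fixed format, the confidence field is also machine-checkable. A sketch of a parser that groups entry headers by confidence level (the regexes assume the exact entry format shown above; `entries_by_confidence` is a hypothetical helper):

```python
import re

SAMPLE = """\
**2025-06-10 — Changelog generation**
- Observation: Pagination starts from the oldest page
- Action: Sort by date descending after fetching
- Confidence: High

**2025-06-15 — Test generation**
- Observation: /legacy imports fail under the ESM test runner
- Action: Skip /legacy unless explicitly requested
- Confidence: Low
"""

def entries_by_confidence(text: str) -> dict[str, list[str]]:
    """Group entry headers by their Confidence line (high/medium/low)."""
    grouped: dict[str, list[str]] = {"high": [], "medium": [], "low": []}
    # Entries are blank-line-separated blocks; the first line is the header.
    for block in re.split(r"\n\s*\n", text.strip()):
        header = block.splitlines()[0].strip("* ")
        match = re.search(r"Confidence:\s*(high|medium|low)", block, re.I)
        if match:
            grouped[match.group(1).lower()].append(header)
    return grouped

print(entries_by_confidence(SAMPLE))
```

A report like this makes periodic review faster: low-confidence entries are the ones to confirm or delete first.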
Maintaining the File Over Time
Learnings.md is a shared document. Claude adds entries; you edit and curate them. Neither side does it alone.
Periodic Review
Review the file every few weeks and:
- Promote low-confidence entries that turned out correct over multiple runs
- Remove entries that are no longer relevant after a refactor, an API change, or a structural project shift
- Rewrite vague entries that Claude wrote unclearly
- Add your own observations about the codebase that Claude wouldn’t discover through normal task runs
The goal is a file that’s dense with signal. A 30-line file with sharp, accurate entries outperforms a 200-line file full of stale observations.
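Part of that review can be automated. A rough heuristic for surfacing entries worth rewriting — very short Observation lines, or ones built on filler words (the word list and length threshold are arbitrary starting points, not a rule):

```python
import re

# Heuristic only: words that usually signal an entry Claude can't act on.
VAGUE_MARKERS = {"careful", "tricky", "sometimes", "unexpectedly"}

def flag_vague_entries(text: str) -> list[str]:
    """Return headers of entries whose Observation line looks too vague."""
    flagged = []
    for block in re.split(r"\n\s*\n", text.strip()):
        obs = re.search(r"Observation:\s*(.+)", block)
        if obs is None:
            continue
        words = obs.group(1).lower().split()
        if set(words) & VAGUE_MARKERS or len(words) < 5:
            flagged.append(block.splitlines()[0].strip("* "))
    return flagged
```

Flagged entries still need a human judgment call; the script only tells you where to look.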
Versioning
Because the file lives in your repo, it’s versioned automatically. This gives you:
- Git history showing when specific learnings were added
- The ability to roll back entries that introduced bad behavior
- The option to branch learnings alongside code branches when working on separate workstreams
- Review of the file in PRs alongside the code it relates to
When to Split or Archive
Keep one Learnings.md per skill. If you’re running multiple distinct workflows in the same repository — a test generator, a changelog writer, a dependency auditor — give each its own file. Mixed learnings from unrelated workflows create confusion.
When a file grows past 80–100 lines of actual entries, consider archiving older sections into a learnings_archive.md and keeping only current, active entries in the main file.
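The archiving step can be scripted the same way. A sketch that moves dated entries past a cutoff into learnings_archive.md (the 30-day cutoff and function name are assumptions; adjust to how often the skill runs):

```python
import re
from datetime import date, timedelta
from pathlib import Path
from typing import Optional

CUTOFF_DAYS = 30  # assumption: tune to your skill's run cadence

def archive_old_entries(learnings: Path, archive: Path,
                        today: Optional[date] = None) -> int:
    """Move dated entries older than CUTOFF_DAYS into the archive file."""
    today = today or date.today()
    cutoff = today - timedelta(days=CUTOFF_DAYS)
    keep, old = [], []
    for block in re.split(r"\n\s*\n", learnings.read_text().strip()):
        m = re.match(r"\*\*(\d{4}-\d{2}-\d{2})", block)
        is_old = m is not None and date.fromisoformat(m.group(1)) < cutoff
        (old if is_old else keep).append(block)
    learnings.write_text("\n\n".join(keep) + "\n")
    if old:
        with archive.open("a") as f:
            f.write("\n\n".join(old) + "\n\n")
    return len(old)
```

Undated blocks (like the section headings) don't match the date pattern and stay in the main file.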
Common Mistakes That Break the Pattern
Skipping the Read Step
If Claude doesn’t read Learnings.md before starting work, the pattern fails immediately. Make the read step explicit and first in your CLAUDE.md instructions. The summarization technique covered above is the most reliable enforcement mechanism.
Letting the File Grow Without Trimming
Without cleanup, Learnings.md becomes a liability. Outdated entries cause Claude to apply stale rules. Contradictory entries produce unpredictable behavior. Long files dilute the high-value entries. Build in periodic maintenance — it takes ten minutes and keeps the pattern functional.
Mixing Skill Scopes
One file per skill, scoped tightly. A Learnings.md full of Django-specific observations is noise for a Next.js workflow in the same monorepo. Keep files matched to the workflow they support.
Treating It as a Substitute for Your System Prompt
Learnings.md handles dynamic, discovered knowledge — observations Claude accumulated through running the skill. Your system prompt and CLAUDE.md handle static, intentional knowledge — things you’ve decided Claude should always know. They’re complementary, not interchangeable. Don’t collapse your entire configuration into Learnings.md or use it to store instructions that belong in the system prompt.
Not Auditing What Claude Writes
Claude can be wrong. It can misattribute a failure, overgeneralize from a single incident, or write an entry that’s technically accurate but misleading in practice. Read the entries it adds. Correct what’s wrong. The file should reflect reality, not just Claude’s interpretation of events.
How MindStudio Fits Into Agent Workflows Like This
The Learnings.md pattern works well for Claude Code skills you run locally on a single project. But when skills need to run on a schedule, be shared across a team, or connect to external tools, managing context through local markdown files starts to show its limits.
MindStudio is a no-code platform for building AI agents and workflows, and it handles the state management and memory layer that Learnings.md approximates manually. You can build a workflow where context capture happens automatically as part of the agent’s orchestration logic — no manual file updates, no session-by-session review.
For developers who want to extend Claude Code specifically, MindStudio’s Agent Skills Plugin is worth knowing about. It’s an npm SDK (@mindstudio-ai/agent) that lets any Claude Code agent call 120+ typed capabilities — sending emails, running Google searches, generating images, triggering subworkflows — as simple method calls. Your agent gets structured access to external tools without managing API credentials, rate limiting, or retry logic.
If you’re already building Claude workflows and want to connect them to Airtable, Notion, Slack, HubSpot, or similar tools, MindStudio has 1,000+ pre-built integrations that handle the infrastructure. The average workflow build takes 15 minutes to an hour.
You can try MindStudio free at mindstudio.ai.
Frequently Asked Questions
What is a Claude Code skill?
A Claude Code skill is a defined, repeatable workflow that Claude follows to accomplish a specific task — generating tests, writing changelogs, auditing dependencies, refactoring modules, and so on. Skills are typically configured through a CLAUDE.md file, a system prompt, or both. They give Claude consistent, task-specific instructions so it doesn’t reason from scratch on every run.
How is Learnings.md different from CLAUDE.md?
CLAUDE.md holds static, intentional configuration — things you’ve decided Claude should always know about your project. Learnings.md holds dynamic, discovered knowledge — observations Claude accumulated by actually running the skill. CLAUDE.md is written by you; Learnings.md is written primarily by Claude, with your curation. Both are read at session start and serve different purposes.
Can Claude Code actually update its own files?
Yes. Claude Code can read and write files in your working directory — that’s a core part of how it handles agentic tasks. Writing new entries to Learnings.md at the end of a session works the same as writing to any other project file. You can restrict file access if needed, but in most setups Claude can write to the file automatically.
How large does Learnings.md get before it causes problems?
With the formatting guidelines in this article applied — specific entries, confidence levels, no redundancy — most files stay useful for several weeks before needing cleanup. If you’re running a skill multiple times a day, plan to review and prune every one to two weeks. Files over 80–100 lines of actual entries are worth trimming. Signal density matters more than absolute size.
Does this pattern work with AI models other than Claude?
Yes. The core mechanism — a markdown file that an agent reads before starting and updates after finishing — is model-agnostic. GPT-4, Gemini, and other models that support file access can use the same approach. The implementation details differ (CLAUDE.md is Claude-specific; other models have their own context-loading mechanisms), but the pattern works broadly across agentic AI systems.
Should I commit Learnings.md to version control?
In most cases, yes. Committing it means the file is versioned alongside your code, teammates can benefit from shared learnings, and you can audit changes through git history. The main exception is if the file contains sensitive information about your codebase — internal API behavior, credentials structure, proprietary architecture details — that shouldn’t be shared. In that case, add it to .gitignore and treat it as a local file.
Key Takeaways
- Learnings.md gives Claude Code persistent memory across sessions with no infrastructure required — just a markdown file in your project directory.
- Configure Claude to read first, update last — both steps are required, and the update step should be unconditional, not optional.
- Specific entries outperform vague ones — include the date, task type, observation, recommended action, and a confidence level for every entry.
- Maintain the file manually — review it periodically, remove stale entries, correct inaccuracies, and add knowledge Claude wouldn’t discover on its own.
- One file per skill, scoped tightly — don’t mix learnings from unrelated workflows, and don’t use the file as a replacement for your system prompt.
- If you want this kind of memory and state management at scale — shared across a team, running on a schedule, or integrated with external business tools — MindStudio handles that infrastructure without the manual overhead.