Most Claude Code mistakes are not intelligence problems. They're memory problems.
If you keep re-explaining your stack, architecture, and "please stop doing that" rules every session, your prompt is doing a job that CLAUDE.md should be doing instead.
Key Takeaways
- A strong CLAUDE.md file works like persistent project memory, not a giant catch-all prompt.
- The best entries are stable constraints, conventions, and decision logs, not temporary task chatter.
- Structured prompting beats generic prompting in adjacent software workflows, which is a useful design cue for CLAUDE.md files [1].
- Multi-step coding agents fail when context drifts over time, so durable guardrails matter more than one perfect chat prompt [2].
- Real-world users report better outputs when CLAUDE.md captures stack choices, "always do" rules, and known mistakes [3][4].
What is a CLAUDE.md file for?
A CLAUDE.md file should store durable project instructions Claude Code can reuse across sessions, so you stop spending tokens restating the same rules and the model stops behaving like a smart contractor with amnesia. Think of it as project memory, not a task brief [3][4].
That distinction matters. If you cram temporary requests, brainstorming notes, and random one-off fixes into the file, it turns noisy fast. What works better is a stable layer: stack, architecture choices, quality bar, banned patterns, review expectations, and recurring gotchas.
I've found that the file gets stronger when I ask one question: "Would I want Claude to remember this next week?" If yes, it belongs. If not, it probably belongs in the current chat instead.
Structured prompting research backs this instinct. In ReqFusion, a PEGS-based structured format beat generic prompting by a wide margin in software requirements extraction, improving F1 from 0.71 to 0.88 under the same multi-provider setup [1]. Different task, same lesson: structure helps models find the right signal.
How should you structure CLAUDE.md so memory sticks?
A good CLAUDE.md file is easier for the model to use when it is split into clear, predictable sections with explicit labels. Structure reduces ambiguity, surfaces constraints earlier, and makes the file more reusable across sessions and tasks [1].
Here's the pattern I like:
- Start with project identity. Name the app, the stack, and the architectural shape in plain language.
- Add non-negotiables. These are rules Claude should follow every time.
- Add conventions. File organization, naming, testing style, API patterns, UI patterns.
- Add a decisions log. Short entries with the "what" and "why."
- Add known pitfalls. Repeat mistakes are exactly what project memory should prevent.
This is the wrong way to write it:
```text
We use Next.js and TypeScript and please be careful with auth and make sure code is clean and don't break anything and use our patterns from before.
```
This is better:
```markdown
# Project Identity
- App: B2B analytics dashboard
- Stack: Next.js App Router, TypeScript, tRPC, Drizzle, Postgres
- UI: Tailwind + shadcn/ui

# Non-Negotiables
- Do not add REST endpoints unless explicitly requested.
- Run tests and lint before proposing completion.
- Prefer editing existing patterns over introducing new abstractions.

# Conventions
- Server actions only for simple form workflows.
- Data access goes through tRPC procedures.
- Use Zod for runtime validation at boundaries.

# Decisions Log
- We use Drizzle, not Prisma, to keep SQL visible.
- We use toast notifications for non-critical feedback, not modals.

# Known Pitfalls
- Do not create a new DB client per request.
- Do not move shared types into UI components.
```
That's not fancy. That's the point.
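A structure this predictable is also easy to check automatically. Here's a minimal sketch of a script that verifies a CLAUDE.md still contains the expected top-level sections; the section names mirror the example above and are assumptions to adjust for your own file.

```python
# Minimal sketch: verify a CLAUDE.md contains the expected top-level sections.
# The section names below follow the example layout; adjust them to your file.
REQUIRED_SECTIONS = [
    "# Project Identity",
    "# Non-Negotiables",
    "# Conventions",
    "# Decisions Log",
    "# Known Pitfalls",
]

def missing_sections(text: str) -> list[str]:
    """Return the required headings that are absent from the file text."""
    return [s for s in REQUIRED_SECTIONS if s not in text]

# Example: missing_sections(Path("CLAUDE.md").read_text())
```

Running something like this in CI or a pre-commit hook keeps the file from silently losing its shape as it grows.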
If you want help turning messy notes into this kind of structure, tools like Rephrase are useful because they can rewrite rough instructions into a cleaner prompt format before you drop them into CLAUDE.md.
What belongs in project memory versus the current prompt?
Project memory should hold persistent rules and context, while the current prompt should hold the task, scope, and immediate success criteria. Keeping those layers separate makes Claude Code more predictable and reduces instruction collisions [4].
This is where a lot of people quietly sabotage themselves. They use CLAUDE.md like a dumping ground. Then Claude follows outdated instructions, misses the actual task, or over-prioritizes old context.
Here's the clean separation:
| Put in CLAUDE.md | Put in the current prompt |
|---|---|
| Stack and frameworks | The feature or bug you want fixed |
| Coding conventions | Scope for this session |
| "Always do" checks | Acceptance criteria |
| "Never do" constraints | File targets for this task |
| Architecture decisions | Edge cases unique to the task |
| Recurring bug patterns | Deadline or speed tradeoffs |
What's interesting is that agent safety research points in the same direction. AgentHazard shows that computer-use agents often become unsafe or unreliable through multi-step accumulation, where each local step looks reasonable but the overall trajectory drifts [2]. In plain English: if your persistent context is muddy, the agent can wander into bad decisions gradually.
That's why I prefer a thin but strict CLAUDE.md plus a precise current prompt. One handles memory. One handles execution.
How do you write rules Claude Code actually follows?
Claude Code is more likely to follow rules that are specific, testable, and operational rather than vague style advice. Rules should tell the model what to do, what not to do, and how to verify completion in concrete terms [2][4].
"Write clean code" is weak. "Run lint and tests before claiming completion" is strong.
"Respect the architecture" is weak. "All database access must go through /server/db/* and never from React components" is strong.
Here's a simple before-and-after I'd actually use.
| Before | After |
|---|---|
| Be careful with state management | Use Zustand only in client state islands. Do not add Redux. Prefer server state via tRPC + React Query. |
| Keep commits small | Propose changes in one bounded feature slice. If scope expands, stop and ask before continuing. |
| Don't break auth | Do not modify auth middleware, session shape, or role checks without explicit approval. |
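The strongest "after" rules are the ones you can actually execute. As a hedged sketch, a rule like "run lint and tests before claiming completion" can become a small gate script; the tool names in the usage comment (ruff, pytest) are assumptions, so substitute whatever your project uses.

```python
# Hedged sketch: turning "run lint and tests before claiming completion"
# into a check you can actually run, instead of style advice.
import subprocess

def definition_of_done(commands) -> bool:
    """Run each verification command in order; True only if every one exits 0."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

# Example gate before letting the agent declare a task finished
# (ruff and pytest are placeholder tool choices):
# definition_of_done([["ruff", "check", "."], ["pytest", "-q"]])
```

If a rule can't be rephrased into something a script like this could enforce, it's probably still too vague for the model too.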
Community examples line up with this. One Reddit user described better results by storing a "Project Memory" block with stack, conventions, key decisions, and explicit "never do" / "always do" rules [3]. Another recommended keeping CLAUDE.md under control with concise rules, bug logs, verification requirements, and narrower scopes [4].
I'd add one more opinionated rule: write for enforcement, not inspiration. Your file is not a manifesto. It's an operating manual.
How often should you update CLAUDE.md?
You should update CLAUDE.md whenever a decision becomes durable enough that repeating it manually would be wasteful. The best update moments are after architecture choices, recurring mistakes, and new conventions that should persist into future sessions [3][4].
The biggest miss I see is people setting the file once and never touching it again. That defeats the whole idea of project memory.
I update mine after three kinds of events: when we reject a tool or pattern, when Claude repeats a mistake, and when a temporary decision becomes standard. That last one matters a lot. Today's workaround becomes tomorrow's convention surprisingly often.
A lightweight template for updates looks like this:
```markdown
## New Decision
- We now use background jobs for email fan-out.
- Reason: request-time sends were causing timeout spikes.

## New Pitfall
- Do not send bulk email in API request handlers.
- Use the queue worker and status polling pattern instead.
```
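Updates happen more often when they cost almost nothing. Here's a hypothetical helper (the `log_decision` name is mine, not an official tool) that appends a dated entry in the update-template shape, so recording a durable decision is one function call.

```python
# Hypothetical helper: append a dated "New Decision" entry to CLAUDE.md,
# matching the lightweight update template's shape.
from datetime import date
from pathlib import Path

def log_decision(decision: str, reason: str, path: str = "CLAUDE.md") -> None:
    """Append a '## New Decision' block with today's date to the memory file."""
    entry = (
        f"\n## New Decision ({date.today().isoformat()})\n"
        f"- {decision}\n"
        f"- Reason: {reason}\n"
    )
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(entry)
```

The date stamp is a small addition worth keeping: it makes it easy to spot stale decisions when you prune the file later.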
If you want more prompt design breakdowns like this, the Rephrase blog has more articles on practical prompting workflows across coding and non-coding tools.
What does a strong CLAUDE.md starter template look like?
A strong starter template gives Claude Code stable identity, clear rules, and enough decision history to avoid repeating mistakes. It should be opinionated enough to guide behavior but short enough to stay readable [1][3][4].
Here's a compact version you can adapt:
```markdown
# Project Identity
- Product:
- Users:
- Stack:
- Architecture:

# Non-Negotiables
- Always:
- Never:
- Ask before:

# Code Conventions
- Backend:
- Frontend:
- Validation:
- Testing:

# File and Folder Rules
- Prefer editing:
- Avoid creating:
- Shared utilities live in:

# Decisions Log
- Decision:
  Why:
- Decision:
  Why:

# Bug Log / Pitfalls
- Mistake:
  Fix:
- Mistake:
  Fix:

# Definition of Done
- Lint passes
- Tests relevant to the change pass
- No changes outside agreed scope
```
That's enough to start. Don't wait for the perfect version. Write version one, use it for a week, then tighten it.
The main shift is simple: stop treating Claude Code like it should remember everything from chat alone.
Give it a durable memory layer. Keep that layer structured. Keep it current. Keep it strict. If you do that, Claude starts acting less like a talented stranger and more like a teammate who actually knows your codebase. And if you don't want to handcraft that structure every time, Rephrase can help turn rough notes into cleaner, reusable prompt blocks fast.
References

Documentation & Research
1. ReqFusion: A Multi-Provider Framework for Automated PEGS Analysis Across Software Domains - The Prompt Report (link)
2. AgentHazard: A Benchmark for Evaluating Harmful Behavior in Computer-Use Agents - arXiv cs.AI (link)

Community Examples
3. I gave Claude Code persistent memory and it mass produces features like a senior engineer now - r/PromptEngineering (link)
4. Stop writing repetitive prompts. Use a CLAUDE.md file instead (Harness Engineering) - r/PromptEngineering (link)