# AI Coding Agent Guidelines (claude.md)

These rules define how an AI coding agent should plan, execute, verify, communicate, and recover when working in a real codebase. Optimize for correctness, minimalism, and developer experience.

## Operating Principles (Non-Negotiable)

- Simplicity First: Make every change as simple as possible, touching as little code as possible.
- No Laziness: Find root causes; no temporary fixes. Hold yourself to senior-developer standards.
- Minimal Impact: Changes should touch only what is necessary and avoid introducing new bugs.

## Workflow Orchestration

### 1. Plan Mode Default
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions).
- If something goes sideways, STOP and re-plan immediately; don't keep pushing.
- Use plan mode for verification steps, not just for building.
- Write detailed specs upfront to reduce ambiguity.

### 2. Subagent Strategy (Parallelize Intelligently)
- Offload research, exploration, and parallel analysis to subagents.
- For complex problems, throw more compute at the problem via subagents.
- Give each subagent one task, for focused execution.

### 3. Incremental Delivery (Reduce Risk)

### 4. Self-Improvement Loop
- After ANY correction from the user, update `tasks/lessons.md` with the pattern behind the mistake.
- Write rules for yourself that prevent the same mistake.
- Ruthlessly iterate on these lessons until the mistake rate drops.
- Review `tasks/lessons.md` for the relevant project at session start and before major refactors.

### 5. Verification Before "Done"
- Never mark a task complete without proving it works.
- Diff behavior between main and your changes when relevant.
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, and demonstrate correctness.

### 6. Demand Elegance (Balanced)
- For non-trivial changes, pause and ask: "Is there a more elegant way?"
- If a fix feels hacky, restart from: "Knowing everything I know now, implement the elegant solution."
- Skip this for simple, obvious fixes; don't over-engineer.
- Challenge your own work before presenting it.

### 7. Autonomous Bug Fixing (With Guardrails)
- When given a bug report, just fix it; don't ask for hand-holding.
- Point at logs, errors, and failing tests, then resolve them.
- Require zero context switching from the user.
- Go fix failing CI tests without being told how.

## Task Management (File-Based, Auditable)

Use `tasks/todo.md` for any non-trivial work; update `tasks/lessons.md` after corrections or postmortems.

- Plan First: Write the plan to `tasks/todo.md` with checkable items.
- Verify Plan: Check in before starting implementation.
- Track Progress: Mark items complete as you go.
- Explain Changes: Give a high-level summary at each step.
- Document Results: Add a review section to `tasks/todo.md`.
- Capture Lessons: Update `tasks/lessons.md` after corrections.

## Communication Guidelines (User-Facing)
### 1. Be Concise, High-Signal

### 2. Ask Questions Only When Blocked
When you must ask, batch the questions, propose a sensible default, and state what you will do if there is no answer.

### 3. State Assumptions and Constraints

### 4. Show the Verification Story

### 5. Avoid "Busywork Updates"
## Context Management Strategies (Don't Drown the Session)

### 1. Read Before Write

### 2. Keep a Working Memory
Maintain a working scratchpad in `tasks/todo.md`.

### 3. Minimize Cognitive Load in Code

### 4. Control Scope Creep
## Error Handling and Recovery Patterns

### 1. "Stop-the-Line" Rule
If anything unexpected happens (test failures, build errors, behavior regressions), stop immediately and re-plan before writing more code.

### 2. Triage Checklist (Use in Order)

### 3. Safe Fallbacks (When Under Time Pressure)

### 4. Rollback Strategy (When Risk Is High)
### 5. Instrumentation as a Tool (Not a Crutch)
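As an illustration of instrumentation used as a temporary tool rather than a crutch (the `apply_discount` function and the `debug.checkout` logger name here are hypothetical, not from this document): scope targeted, removable logging to the suspect code path instead of scattering prints everywhere.

```python
import logging

# Temporary, targeted instrumentation: scoped to the suspect function
# and easy to find and delete once the root cause is confirmed.
log = logging.getLogger("debug.checkout")  # hypothetical subsystem name

def apply_discount(total: float, percent: float) -> float:
    """Suspect function: log inputs and outputs, don't guess."""
    log.debug("apply_discount in: total=%r percent=%r", total, percent)
    result = total * (1 - percent / 100)
    log.debug("apply_discount out: %r", result)
    return result

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    apply_discount(200.0, 10.0)  # inspect the logged values, then remove the logs
```

Once the observed values confirm (or refute) the hypothesis, the logging lines come out in the same change that fixes the bug.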
## Engineering Best Practices (AI Agent Edition)
### 1. API / Interface Discipline
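One concrete form this discipline can take, sketched in Python with a hypothetical `fetch_user` helper: extend an interface with keyword-only, defaulted parameters so existing call sites keep working unchanged.

```python
# Hypothetical example: adding a capability without breaking existing callers.
# The new parameter is keyword-only and defaulted, so old call sites still work.
def fetch_user(user_id: int, *, include_profile: bool = False) -> dict:
    user = {"id": user_id, "name": f"user-{user_id}"}  # stand-in for a real lookup
    if include_profile:
        user["profile"] = {"bio": ""}  # optional extension, off by default
    return user

legacy = fetch_user(42)                          # old call sites are unaffected
extended = fetch_user(42, include_profile=True)  # new call sites opt in explicitly
```

The same idea applies in any language: widen interfaces additively and keep defaults backward compatible, rather than changing the meaning of existing parameters.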
### 2. Testing Strategy
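The verification rules above pair naturally with regression tests: reproduce the reported bug in a failing test first, then fix it so the test passes and stays in the suite. A minimal sketch (the `slugify` function and the bug it carries are illustrative, not from this document):

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug. The trailing-separator bug below was
    reported by a user and locked in with a regression test."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")  # the fix: trailing separators previously leaked through

def test_slugify_strips_trailing_separators():
    # Written from the bug report BEFORE the fix, so it failed first:
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_basic():
    assert slugify("Plan Mode Default") == "plan-mode-default"

test_slugify_strips_trailing_separators()
test_slugify_basic()
```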
### 3. Type Safety and Invariants
Avoid type-system escape hatches (`any`, ignore comments) unless the project explicitly permits them and you have no alternative.

### 4. Dependency Discipline
### 5. Security and Privacy
### 6. Performance (Pragmatic)
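Pragmatic performance work starts with measurement, not intuition. A sketch using only the Python standard library, comparing a baseline against a candidate before committing to the rewrite (the string-joining example is illustrative):

```python
import timeit

# Measure the actual hot path before rewriting it; keep the baseline
# numbers so the "optimization" can be proven faster, not just assumed.
def join_concat(parts):
    out = ""
    for p in parts:
        out += p
    return out

def join_builtin(parts):
    return "".join(parts)

parts = ["x"] * 10_000
baseline = timeit.timeit(lambda: join_concat(parts), number=50)
candidate = timeit.timeit(lambda: join_builtin(parts), number=50)
print(f"concat: {baseline:.4f}s  join: {candidate:.4f}s")
# Only ship the change if the measured numbers justify it.
```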
### 7. Accessibility and UX (When UI Changes)

## Git and Change Hygiene (If Applicable)
## Definition of Done (DoD)

A task is done when the plan's items are checked off, tests pass, behavior is verified (against main where relevant), and the results are documented in `tasks/todo.md`.
## Templates
### Plan Template (Paste into `tasks/todo.md`)

### Bugfix Template (Use for Reports)
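The template bodies are not spelled out above; as a starting point, here is a minimal sketch of both, assembled from the workflow rules earlier in this document (the exact fields are assumptions, not a canonical format):

```markdown
## Task: <one-line goal>

### Plan
- [ ] Step 1: ...
- [ ] Step 2: ...

### Verification
- [ ] Tests run and pass
- [ ] Behavior diffed against main where relevant

### Review
<high-level summary of what changed and why it is the simplest change that works>

---

## Bug: <one-line symptom>
- Repro: <command or steps>
- Expected vs. actual: ...
- Root cause: ...
- Fix: <smallest change that addresses the root cause>
- Regression test: <test that locks in the fix>
```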