@gsannikov
Last active February 10, 2026 13:18
Claude Code prompts: Part 2 (ADR workflow) + Part 3 (Context management for Agent Teams & Subagents). Paste into Claude Code to generate slash commands.

Context Manager for Claude Code Agent Teams & Subagents

You are setting up a context management layer for a Claude Code project. Generate the following 4 files exactly as specified, then explain how to use them.

1. Create .claude/commands/context-audit.md

---
description: Audit current context usage across active agents and subagents
---

Analyze the current project's context allocation:

1. **Check current usage**: Run the `/cost` command or check the status bar to understand current token consumption.

2. **Inventory agent profiles**: List all `.claude/agents/*.md` files and categorize each:
   - **Advisory** (read-only analysis): agents that only read code — architect, security, analyst, validator
   - **Executor** (read-write): agents that modify files — developer, frontend, devops

3. **Estimate context cost per agent**: For each agent profile:
   - Count the lines in the profile itself and estimate tokens (rough heuristic: ~4 characters per token, so a typical line costs ~10 tokens)
   - Identify which files/globs the agent typically loads based on its instructions
   - Flag agents that load broad patterns (e.g., `**/*.ts`) vs scoped ones (e.g., `src/auth/*.ts`)

4. **Output a context budget table**:

| Agent | Type | Profile Size (lines) | Typical File Load | Risk |
|-------|------|---------------------|-------------------|------|
| architect | advisory | ~80 | broad (reads whole modules) | medium |
| developer | executor | ~60 | scoped (specific files) | low |
| ... | ... | ... | ... | ... |

Risk levels:
- **Low**: Agent loads < 20 files, scoped globs
- **Medium**: Agent loads 20-50 files or uses moderate globs
- **High**: Agent loads 50+ files or uses `**/*` patterns
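The line-count and glob checks in steps 3-4 reduce to a small script. This is a heuristic sketch, not a Claude Code feature: the ~4-characters-per-token estimate is an approximation, and only the glob-breadth part of the risk rubric is modeled (the file-count thresholds would need an actual glob expansion).

```python
import glob
import re

BROAD_GLOB = re.compile(r"\*\*/\*")  # flags patterns like **/*.ts

def audit_profile_text(text: str) -> dict:
    """Heuristic context-cost estimate for one agent profile."""
    lines = text.splitlines()
    est_tokens = len(text) // 4  # rough: ~4 characters per token
    risk = "high" if BROAD_GLOB.search(text) else "low"
    return {"lines": len(lines), "est_tokens": est_tokens, "risk": risk}

if __name__ == "__main__":
    # Emit rows for the context budget table above.
    for path in sorted(glob.glob(".claude/agents/*.md")):
        with open(path) as f:
            row = audit_profile_text(f.read())
        print(f"| {path} | {row['lines']} | ~{row['est_tokens']} tokens | {row['risk']} |")
```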

5. **Recommend execution mode per agent**:
   - Agents with **low** context needs → spawn as **subagent** (Task tool with focused prompt)
   - Agents with **high** context needs or needing sustained reasoning → spawn as **Agent Teams member** (TeamCreate + Task with team_name)
   - Agents that need to debate or exchange findings → **Agent Teams** (they need SendMessage)

6. **Suggest optimizations**:
   - Which agent profiles should add file scope restrictions
   - Which agents are loading redundant files
   - Whether any agents should be merged or split

2. Create .claude/commands/dispatch-subagent.md

---
description: Build a clean handoff payload and dispatch a focused subagent
---

Before spawning a subagent via the Task tool, build a minimal handoff payload. This prevents context pollution in the parent session.

**Step 1 — Extract the task spec** from the current conversation:
- What needs to be done (1-3 sentences, imperative voice)
- Input files (exact paths, not globs — e.g., `src/auth/login.ts` not `src/auth/*`)
- Expected output (file path + format)
- Constraints (what NOT to modify, what tests must pass)

**Step 2 — Strip reasoning traces**: The subagent does NOT need to know:
- What approaches you considered and rejected
- Why you chose this approach over alternatives
- Your exploration history or debugging journey

Only pass: conclusions, decisions, and the specific spec.

**Step 3 — Build the payload** as a structured prompt for the Task tool:

```
Task: [1-3 sentence description of what to build/fix/analyze]

Input files:
- path/to/file1.ts
- path/to/file2.ts

Expected output:
- path/to/output.ts (new file with X interface)
- OR: modifications to path/to/existing.ts

Constraints:
- Do not modify path/to/protected.ts
- Must pass: npm test -- --grep "auth"
- Follow existing patterns in src/auth/

Context (decisions only):
- We chose JWT over session cookies (see ADR-012)
- The auth middleware must run before route handlers
```
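For repeatability, the payload can be assembled with a small helper. A sketch only: the class and field names are illustrative, not part of any Claude Code API.

```python
from dataclasses import dataclass, field

@dataclass
class SubagentPayload:
    """Minimal handoff payload: conclusions and spec, no reasoning traces."""
    task: str
    input_files: list
    expected_output: str
    constraints: list = field(default_factory=list)
    decisions: list = field(default_factory=list)

    def render(self) -> str:
        """Render the structured prompt in the format shown above."""
        parts = [f"Task: {self.task}", "", "Input files:"]
        parts += [f"- {p}" for p in self.input_files]
        parts += ["", "Expected output:", f"- {self.expected_output}"]
        if self.constraints:
            parts += ["", "Constraints:"] + [f"- {c}" for c in self.constraints]
        if self.decisions:
            parts += ["", "Context (decisions only):"] + [f"- {d}" for d in self.decisions]
        return "\n".join(parts)
```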

**Step 4 — Select model tier**:

| Task Scope | Model | Subagent Type | When |
|-----------|-------|---------------|------|
| < 50 lines of change | haiku | Task(model="haiku") | Simple edits, formatting, boilerplate |
| 50-500 lines | sonnet | Task(model="sonnet") | Feature implementation, refactoring |
| Architectural judgment needed | opus | Promote to Agent Teams | Design decisions, complex debugging |
| Read-only analysis | haiku | Task(subagent_type="Explore") | Codebase search, pattern finding |

**Step 5 — Dispatch**: Use the Task tool with the payload as the prompt. Set `subagent_type` to `general-purpose` for implementation tasks or `Explore` for read-only research.

Do NOT pass the full conversation history. The payload IS the prompt.
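The tier table reduces to a small selection function. A sketch: the returned strings are labels rather than API values, and the over-500-line branch is our assumption, since the table does not cover it.

```python
def select_model(lines_of_change: int, needs_judgment: bool = False,
                 read_only: bool = False) -> str:
    """Map a task's scope to a model tier per the matrix above."""
    if needs_judgment:
        return "opus"    # promote to Agent Teams for architectural work
    if read_only:
        return "haiku"   # Task(subagent_type="Explore")
    if lines_of_change < 50:
        return "haiku"   # simple edits, formatting, boilerplate
    if lines_of_change <= 500:
        return "sonnet"  # feature implementation, refactoring
    return "split"       # beyond the table: restructure into smaller dispatches
```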

3. Create .claude/commands/context-clean.md

---
description: Checkpoint session state and clean context when usage is high
---

Run this when your session is getting long or you notice degraded responses.

**Step 1 — Check current usage**: Note the token count from the status bar or `/cost` output.

**Step 2 — Assess**:
- If below 60% of 200K (< 120K tokens used): report status, no action needed.
- If between 60-80%: create checkpoint, continue in current session.
- If above 80%: create checkpoint, recommend starting a fresh session.

**Step 3 — Create checkpoint**: Write a structured summary of the current session:

```
## Session Checkpoint — [date]

### Completed
- [bullet point per completed task with file paths]

### Modified Files
- `path/to/file.ts` — added auth middleware
- `path/to/test.ts` — new test for login flow

### Pending Tasks
- [ ] Wire auth into API routes
- [ ] Add error handling for expired tokens

### Active Decisions (must carry forward)
- Using JWT with 15-minute expiry (ADR-012)
- Auth middleware runs before all /api/* routes
- Test database uses SQLite in-memory

### Context Usage
- Estimated: [X]K / 200K tokens ([Y]%)
```

**Step 4 — Recommend action**:

| Usage | Action |
|-------|--------|
| < 60% | Continue working. No cleanup needed. |
| 60-80% | Consider using `/compact` to compress context. Checkpoint saved above as fallback. |
| > 80% | Start a fresh session. Paste the checkpoint above as your opening message. |

**Step 5 — If starting fresh**: Copy the checkpoint and paste it into a new Claude Code session. The new session starts with full context of what was done and what remains, without the accumulated tool outputs and exploration that filled the original context.
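The thresholds above can be encoded directly. A minimal sketch using the 200K default and the 60/80% cutoffs from this command:

```python
CONTEXT_LIMIT = 200_000  # default Claude Code session budget

def checkpoint_action(tokens_used: int, limit: int = CONTEXT_LIMIT) -> str:
    """Return the recommended action for the current usage level."""
    usage = tokens_used / limit
    if usage < 0.60:
        return "continue"                # no cleanup needed
    if usage <= 0.80:
        return "checkpoint-and-compact"  # save checkpoint, consider /compact
    return "checkpoint-and-restart"      # save checkpoint, start fresh session
```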

4. Create .claude/agents/context-rules.md

# Context Management Rules

All agents in this project follow these context discipline rules.

## File Loading

- **Scope aggressively**: Never glob entire directories (`**/*.ts`). Always scope to specific subdirectories or file patterns (`src/auth/**/*.ts`).
- **Check before loading**: Before reading a file, check if the information is already available from a previous tool call in this session.
- **Use line ranges**: Prefer reading specific line ranges over full files when you only need a section. Use the `offset` and `limit` parameters on the Read tool.
- **Prefer Grep over Read**: When looking for specific patterns, use Grep to find relevant files first, then read only the matches.

## Subagent Handoffs

- When dispatching to a subagent, follow the `/dispatch-subagent` format.
- **Never forward full conversation history** to subagents. Build a focused payload.
- **Strip reasoning traces**: Send conclusions and decisions, not the thought process that produced them. The subagent doesn't need to know what you tried and rejected.
- **Specify exact files**: Pass file paths, not directory globs. The subagent should read only what it needs.

## Session Hygiene

- Monitor context usage throughout the session.
- When approaching 60% usage, run `/context-clean` to checkpoint.
- Advisory agents (read-only) should complete and return results promptly — don't hold context open for analysis you've already finished.
- Executor agents working on multi-file changes should checkpoint after each logical unit of work.

## Budget Guidelines

These are target budgets per dispatch. Exceeding them means the task should be restructured.

| Role | Target Budget | Rationale |
|------|--------------|-----------|
| Advisory agents (analyst, security) | < 30K tokens | Read-only analysis, no file editing overhead |
| Executor agents (developer, frontend) | < 80K tokens | Need room for file reads + edits + test output |
| Orchestrator / parent session | Reserve 40K tokens | Must retain capacity for final synthesis and decisions |
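The budget table can double as a pre-dispatch check. A sketch with the table's numbers hard-coded:

```python
# Target budgets per dispatch, from the table above (tokens).
BUDGETS = {"advisory": 30_000, "executor": 80_000}
ORCHESTRATOR_RESERVE = 40_000  # keep free in the parent session

def within_budget(role: str, est_tokens: int) -> bool:
    """True if a planned dispatch fits the target budget for its role."""
    return est_tokens < BUDGETS[role]
```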

## Agent Teams vs Subagents — When to Use Which

| Use Agent Teams when... | Use Subagents when... |
|------------------------|----------------------|
| Agents need to debate or exchange findings | Task is self-contained with clear input/output |
| Multiple agents work on interconnected files | Agent works on independent files |
| Sustained reasoning across multiple turns | Single-turn: do task, return result |
| You need 3 or more perspectives on the same code | You need one agent to do one focused job |

Agent Teams = parallel processes with shared filesystem (expensive, powerful).
Subagents = function calls with own stack (cheap, focused).

In practice: use Agent Teams for the outer loop, subagents for inner tasks each team member dispatches.

After generating all files:

  1. Confirm all 4 files were created with the correct paths
  2. Show the directory structure under .claude/:
    .claude/
    ├── agents/
    │   ├── context-rules.md      ← NEW: shared budget rules
    │   └── [your-agents].md
    └── commands/
        ├── context-audit.md      ← NEW: map context cost
        ├── context-clean.md      ← NEW: checkpoint + cleanup
        ├── dispatch-subagent.md  ← NEW: clean handoff builder
        └── [your-commands].md
    
  3. Explain the workflow:
    • Start with /context-audit to understand your current context allocation across agents
    • Use /dispatch-subagent before spawning any focused task — it builds clean payloads
    • Run /context-clean when sessions get long — checkpoints state for fresh sessions
    • All agents automatically load context-rules.md for shared discipline (place it in .claude/agents/ so Agent Teams members see it)
  4. Context budget reference: Default is 200K tokens per Claude Code session. Plan all budgets around this limit.

Prompt: Generate implement-adr Skill + ADR Writing Guide

Copy this entire block and paste it into Claude Code. It will generate two skill files and a process guide automatically.


I need you to set up an ADR-driven implementation workflow for my project. Do the following three things in order:

1. Create the implement-adr skill

Create the file .claude/commands/implement-adr.md with the following content. This is a slash command skill that implements any Architecture Decision Record using Claude Code's native capabilities.

The skill should contain:

Header: "Implement any ADR using Claude Code native features. Usage: /implement-adr path/to/ADR-file.md"

Completion Gates (all must pass before declaring done):

Gate 1 - Library Code:

  • All new modules/classes written
  • All new functions have callers (no orphan code)

Gate 2 - Integration Wiring:

  • New code is called from existing entry points (API routes, CLI commands, UI)
  • No new module exists without being imported and used by production code
  • Config files and registrations are updated so the feature is reachable by users

Gate 3 - Migration & Data:

  • If the ADR replaces an existing pattern, existing data/files are migrated (not left as "future work")
  • Source-of-truth files are populated (not just the directory structure)

Gate 4 - UI (if applicable):

  • UI components created/updated, pages wired, build passes

Gate 5 - Tests Match ADR:

  • Every test case listed in the ADR Testing section has a passing test
  • Every use case in the ADR is validated
  • No test is skipped or stubbed

Gate 6 - Existing Tests Green:

  • All pre-existing tests still pass, build passes

Anti-Patterns to avoid:

  • Library-only delivery: writing a module but never wiring it into entry points
  • Empty source-of-truth: creating directory structure without populating content
  • Orphan config: writing handlers that nothing calls
  • "Tests pass = done": not checking if ALL ADR-listed tests are covered
  • Summarizing instead of building
  • Asking permission to continue (the ADR IS the permission)

8-Phase Native Workflow:

Phase 1 - Codebase Analysis: Use the explore subagent to scan files referenced in the ADR, identify existing patterns, find conflicts, map dependencies. Model: Sonnet.

Phase 2 - Task Planning: Use the plan subagent to parse ADR Decision sections, extract workstreams with dependencies, build dependency graph. Critical: the plan MUST include integration wiring tasks, not just library code. Model: Opus.

Phase 3 - Agent Swarm Dispatch: Deploy parallel subagents -- independent workstreams run in parallel, dependent tasks wait. Model: Sonnet per subagent.

Phase 4 - Background Agent for Tests: Run the full test suite as a background agent. Non-blocking. Continue implementing while tests run.

Phase 5 - PostToolUse Hooks: After each file edit, auto-run linting. TypeScript files: lint --fix. Python files: ruff check --fix.

Phase 6 - Integration Wiring (MANDATORY -- this is where most ADR implementations fail): Wire new modules into entry points, migrate existing data, update registrations and configs, run end-to-end smoke test.

Phase 7 - RALPH Debugging (when failures occur, switch to Opus): Reflect (what went wrong?) -> Analyze (logs, diffs) -> Learn (pattern recognition) -> Plan (fix strategy) -> Hypothesize (test prediction). Max 3 iterations per subagent.
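The RALPH loop above can be sketched as a bounded retry structure. The callables are placeholders for the agent's actual reasoning steps; the three-iteration cap matches the phase description.

```python
def ralph_debug(failure, reflect, analyze, learn, plan, hypothesize, apply_fix,
                max_iterations=3):
    """Bounded Reflect -> Analyze -> Learn -> Plan -> Hypothesize loop (sketch)."""
    for _ in range(max_iterations):
        cause = reflect(failure)        # Reflect: what went wrong?
        evidence = analyze(cause)       # Analyze: logs, diffs
        pattern = learn(evidence)       # Learn: pattern recognition
        strategy = plan(pattern)        # Plan: fix strategy
        hypothesize(strategy)           # Hypothesize: predict the test outcome
        failure = apply_fix(strategy)   # None means the fix worked
        if failure is None:
            return True
    return False  # escalate: cap reached without a green run
```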

Phase 8 - Verification: Open the ADR, read the Testing section line by line, verify every test case and use case has a passing test.

Model Selection Matrix:

  • Explore/Analyze: Sonnet (fast scan)
  • Plan/Design: Opus (deep reasoning)
  • Implement: Sonnet (fast coding)
  • Debug (RALPH): Opus (complex problem-solving)
  • Verify: Haiku (quick validation)

2. Create the ADR writing guide

Create the file .claude/commands/write-adr.md with the following content. This is a helper skill that generates well-structured ADRs that the implement-adr skill can parse.

The skill should contain:

Header: "Generate a well-structured ADR for any technical decision. Usage: /write-adr 'description of what you want to build'"

Instructions for Claude: When the user describes what they want to build, generate an ADR with these sections:

Section 1 - Title and Metadata:

# ADR-NNN: [Short descriptive title]
Status: Proposed
Date: [today]

Section 2 - Context: Why this decision? What problem are we solving? What exists today that needs to change?

  • Be specific about current pain points
  • Reference existing code/systems affected
  • State constraints and requirements

Section 3 - Decision: What are we building? Break into subsections per component:

  • Each subsection should describe one module/feature
  • Include a table mapping components to their responsibilities
  • Specify interfaces between components
  • Call out dependencies explicitly

Section 4 - Testing & Verification: THIS IS THE MOST IMPORTANT SECTION FOR implement-adr. Each line becomes a verification gate.

Format as:

Unit Tests table: `| Test Name | Expected Result |`. Each test should be specific and named (e.g. `test_something_specific`).

Use Cases: UC-1: [Scenario name]

  1. Step-by-step flow
  2. Expected behavior at each step
  3. Verification criteria

Tips for writing good Testing sections:

  • Be specific: "test_user_login_with_valid_credentials" not "test login"
  • Include edge cases: what happens when things fail?
  • Every use case should be runnable as an end-to-end flow
  • If you can't describe how to test it, you can't describe how to build it
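As an illustration of the naming tip, a hypothetical pytest pair; the `login()` helper and its contract are invented for the example.

```python
# Hypothetical system under test: a minimal credential check.
def login(username: str, password: str) -> bool:
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

# Specific, named tests -- one behavior each, with a failure edge case.
def test_user_login_with_valid_credentials():
    assert login("alice", "s3cret") is True

def test_user_login_with_invalid_password_fails():
    assert login("alice", "wrong") is False
```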

Section 5 - Consequences:

  • Positive: what improves
  • Negative: what breaks, what gets harder, what new complexity is added
  • Migration: what existing data/code needs to change

ADR Template (ready to copy):

# ADR-NNN: [Title]

**Status**: Proposed
**Date**: YYYY-MM-DD

## Context

[Why this decision? What problem? What exists today?]

## Decision

### Component 1: [Name]
[What it does, how it interfaces with other components]

### Component 2: [Name]
[What it does, how it interfaces with other components]

## Testing & Verification

### Unit Tests
| Test | Expected Result |
|------|-----------------|
| test_specific_thing | Specific outcome |

### Use Cases

**UC-1: [Scenario]**
1. [Step]
2. [Step]
3. [Verification]

## Consequences

### Positive
- [What improves]

### Negative
- [What breaks or gets harder]

3. Show the process guide

After creating both files, output this guide to the user:


Your ADR workflow is ready. Here is how to use it:

Step 1: Write an ADR

Run /write-adr "I want to build [describe your feature]". Claude will generate a structured ADR with all the sections that implement-adr needs. Pay special attention to the Testing section -- every line there becomes a verification gate.

Step 2: Review and save the ADR

Review the generated ADR. Edit anything that does not match your intent. Save it somewhere in your project (e.g., docs/decisions/ADR-001-my-feature.md).

Step 3: Implement the ADR

Run /implement-adr docs/decisions/ADR-001-my-feature.md. Claude will:

  • Scan your codebase with /explore (Phase 1)
  • Break the ADR into parallel workstreams with /plan (Phase 2)
  • Dispatch subagents for parallel implementation (Phase 3)
  • Run tests in background (Phase 4)
  • Auto-lint after every file edit (Phase 5)
  • Wire everything into existing entry points (Phase 6)
  • Debug failures with the RALPH loop (Phase 7)
  • Verify every test case and use case from your ADR (Phase 8)

Step 4: Verify

Implementation is only done when all six gates pass. Claude will not stop early.

That is it: the ADR is the datasheet, the implementation is the firmware. Spec in, verified feature out.


Now create both files and show me the guide.
