@siberianmi
Created December 18, 2025 01:06
agent-developers

---
name: agent-developers
description: Use when executing implementation plans with independent tasks in the current session - dispatches a subagent to work on tasks until all are complete or context is exhausted, with code review between tasks, enabling fast iteration with quality gates
---

Subagent-Driven Development

Executes implementation plans by dispatching a fresh subagent to work through tasks until context exhaustion, with code review after each run. Enables high-quality, fast iteration within the current session.

Announce: "I'm using the agent-developers skill to execute this plan."

When to Use

  • Have implementation plan ready to execute
  • Staying in current session (not switching to parallel session)
  • Tasks are mostly independent
  • Want continuous progress with quality gates
  • Need code review between tasks

When NOT to Use

  • Need to review plan with human first (use executing-plans for checkpoint reviews)
  • Tasks are tightly coupled and interdependent (manual execution better)
  • Plan needs revision (use brainstorming first)
  • No plan exists yet (use writing-plans first)

Core Principle

Operate a single subagent until it either completes all tasks or runs out of context; when it exits, a review agent checks its work one commit at a time. The result: high quality, fast iteration.

vs. Executing Plans (parallel session):

  • Same session (no context switch)
  • Fresh subagent to execute tasks (no context pollution)
  • Subagent works until all tasks are complete or context pressure occurs
  • Code review after agent completes work (catch issues early)
  • Faster iteration (no human-in-loop between tasks)
  • Automatic quality gates

Process

Step 1: Load Plan

  1. Read plan file completely
  2. Create TodoWrite with all tasks from plan
  3. Verify plan has proper structure (tasks, steps, verification)
  4. Create a list of tasks broken out by line numbers.
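
Where the plan follows a predictable heading convention, step 4 can be automated. A minimal sketch, assuming tasks are marked with `### Task N: Title` headings (the function name and plan format here are illustrative, not part of any tool API):

```python
import re

def index_tasks(plan_text: str) -> list[dict]:
    """Scan plan text for '### Task N: Title' headings and record
    the 1-indexed, inclusive line range each task spans."""
    lines = plan_text.splitlines()
    tasks = []
    for lineno, line in enumerate(lines, start=1):
        match = re.match(r"### Task (\d+): (.+)", line)
        if match:
            if tasks:
                tasks[-1]["end"] = lineno - 1  # previous task ends just above
            tasks.append({"num": int(match.group(1)),
                          "title": match.group(2).strip(),
                          "start": lineno})
    if tasks:
        tasks[-1]["end"] = len(lines)  # last task runs to end of file
    return tasks
```

The resulting `start`/`end` pairs become the "[Lines 37-95]" ranges handed to the subagent in Step 2.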

Step 2: Execute Task with Subagent

Create a subagent that will work on tasks until all are completed or its context is exhausted.

Dispatch implementation subagent using Task tool:

Task: Implement Tasks from Plan

You are implementing tasks from [full-path-to-plan-file]. It has the following unfinished tasks:

**Task List:**
* Task 1: Short description [Lines 37-95]
* Task 2: Short description #2 [Lines 96-145]
* Task 3: Short description #3 [Lines 146-195]
* Task 4: Short description #4 [Lines 196-245]
* Task 5: Short description #5 [Lines 246-295]

Read each task carefully one by one. Only read the lines for a single task at a time.

**IMPORTANT: DO NOT READ A SECOND TASK UNTIL YOU COMPLETE THE FIRST**

1. PASSIVE MONITORING:
   - After each tool use, you receive a system warning with token counts
   - Parse: <system_warning>Token usage: X/200000; Y remaining</system_warning>
   - Track your usage percentage continuously

2. THRESHOLDS:
   - 75% (150K tokens): Enter "wrapping up" mode - finish current task only
   - 80% (160K tokens): Execute graceful termination

3. YOUR WORKFLOW FOR EACH TASK:
   A. Read the task from the plan file (use the line numbers above)
   B. Implement exactly what the task specifies
   C. Write tests (following TDD if task says to)
   D. Verify implementation works
   E. Commit your work
   F. Check your remaining context
   G. Start the next task OR begin graceful shutdown and report back
   H. If no tasks remain, report back

Work from: [absolute-directory-path]

4. WRAPPING UP MODE (75-80%):
   - Complete only the current atomic task
   - Reject any new task requests
   - Begin preparing termination report
   - Start saving intermediate state

5. GRACEFUL SHUTDOWN PROCEDURE:
   When 80% context threshold is reached:
   a. Stop accepting new tasks immediately
   b. Complete only the current in-progress operation
   c. Update the docs/PROGRESS.md
   d. Document:
      - Tasks completed
      - Tasks in progress (with current state)
      - Tasks not started
      - Recommended next steps
      - Exact context usage at termination
   e. Save all work-in-progress files
   f. Exit with clear termination message

6. COMMUNICATION:
   Always inform when entering wrapping up mode:
   "⚠️ Context usage at 75%. Entering wrap-up mode. Will terminate at 80%."

Subagent reports back with summary of work and results.
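
The monitoring and threshold rules in the prompt above can be sketched as a small decision function. The warning format and the 75%/80% cutoffs come from the prompt; the function itself is a hypothetical illustration, not an agent API:

```python
import re

# Matches the warning format the subagent is told to parse.
WARNING_RE = re.compile(
    r"<system_warning>Token usage: (\d+)/(\d+); \d+ remaining</system_warning>")

def next_action(message: str) -> str:
    """Map a token-usage warning onto the skill's thresholds:
    below 75% keep working, 75-80% wrap up, 80%+ shut down."""
    match = WARNING_RE.search(message)
    if not match:
        return "continue"  # no warning present; keep working
    used, limit = int(match.group(1)), int(match.group(2))
    pct = used / limit
    if pct >= 0.80:
        return "graceful_shutdown"
    if pct >= 0.75:
        return "wrap_up"
    return "continue"
```

At `wrap_up` the subagent finishes only its current task; at `graceful_shutdown` it writes the termination report and exits.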

Step 3: Review Subagent's Work

Get git SHAs for review:

# Get commit before task
git log --oneline -2

# Identify: base_sha (before task) and head_sha (after task)

Dispatch code-reviewer subagent using Agent tool:

Agent: code-reviewer

You are reviewing Task N implementation.

**What was implemented:**
[Paste subagent's report]

**Requirements (from plan):**
[Paste the specific task from plan file]

**Code changes:**
- Base commit: [base_sha]
- Head commit: [head_sha]

Review the implementation:
1. Read the plan task requirements
2. Check git diff between commits
3. Verify all requirements met
4. Check code quality
5. Identify any issues

Report:
- Strengths
- Issues (Critical/Important/Minor)
- Assessment (Ready/Needs work)

Code reviewer returns: Strengths, Issues categorized by severity, Assessment

Step 4: Apply Review Feedback

If issues found:

  • Critical issues: Fix immediately before proceeding
  • Important issues: Fix before next task
  • Minor issues: Note for later or batch fix
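
The severity policy above amounts to a simple precedence check: Critical blocks everything, Important blocks the next task, Minor is deferred. A sketch (function and return values are hypothetical labels, not a tool interface):

```python
def triage(issues: dict[str, list[str]]) -> str:
    """Decide the next step from categorized review issues,
    following the severity precedence described above."""
    if issues.get("critical"):
        return "fix_now"               # block until fixed and verified
    if issues.get("important"):
        return "fix_before_next_task"
    if issues.get("minor"):
        return "note_for_later"        # batch-fix or track separately
    return "proceed"
```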

Dispatch follow-up subagent if needed:

Task: Fix code review issues

Fix these issues from code review of Task N:

**Critical:**
- [List critical issues]

**Important:**
- [List important issues]

Work from: [absolute-directory-path]

Report: What you fixed, verification results

Step 5: Mark Complete, Next Task

  1. Mark task as completed in TodoWrite
  2. Move to next task
  3. Repeat steps 2-5 for each task

Step 6: Final Review

After all tasks complete, dispatch final code-reviewer:

Agent: code-reviewer

Review the complete implementation of [feature-name].

**Plan requirements:**
[Link to plan file or paste full requirements]

**Code changes:**
- Base commit: [sha before all work]
- Head commit: [sha after all work]

Review:
1. All plan requirements met
2. Architecture follows design
3. Code quality
4. Test coverage
5. Any gaps or issues

Report:
- Completeness assessment
- Code quality assessment
- Issues found
- Recommendation (Ready/Needs work)

Step 7: Complete Development

After final review passes:

  1. Run final verification (full test suite, build, etc.)
  2. Announce completion with summary
  3. Transition to branch completion:
    • "All tasks complete and reviewed. Ready to finish development branch?"
    • Use finishing-a-development-branch skill for next steps (merge/PR/cleanup)

Workflow Comparison

| Aspect | Subagent-Driven | Executing Plans | Manual |
|---|---|---|---|
| Session | Same session | Parallel session | Same session |
| Execution | Subagent per task | Human executes batches | Human executes |
| Review | Automatic after each | Human checkpoints | As needed |
| Context | Fresh each task | Continuous | Continuous |
| Speed | Fast (no waiting) | Medium (checkpoint delays) | Slow |
| Quality | High (reviews) | High (checkpoints) | Variable |

Output Format

Progress reports should be concise:

## Task N: [Name] - Complete ✅

**Implementation:**
[Brief summary of what was implemented]

**Review:**
- Strengths: [Key strengths]
- Issues: [None/List critical or important issues]
- Status: [Ready/Fixed and verified]

**Next:** Task N+1: [Name]

Constraints

  • MUST: Dispatch a fresh subagent to complete tasks (don't implement manually)
  • MUST: Review code when an agent terminates its work, before proceeding
  • MUST: Fix Critical issues immediately: dispatch a new subagent and give it the issues as its first task
  • MUST: Use absolute paths when dispatching subagents
  • It is unacceptable to skip code review between tasks
  • It is unacceptable to proceed with unfixed Critical issues
  • It is unacceptable to dispatch multiple implementation subagents in parallel (causes conflicts)
  • It is unacceptable to implement tasks manually instead of using subagents (breaks fresh context principle)
  • NEVER skip verification steps specified in plan
  • ALWAYS wait for review before next task

Anti-Patterns

❌ Manual implementation: Implement task yourself → Context pollution, no fresh eyes
✅ Subagent per task: Dispatch clean subagent → Fresh context, follows TDD naturally

❌ Skip reviews: "Task is simple, no need for review" → Issues compound
✅ Review everything: Review after each task → Catch issues early when cheap

❌ Parallel subagents: Dispatch 3 tasks to 3 subagents simultaneously → Git conflicts, chaos
✅ Sequential tasks: One task, review, next task → Clean progression

❌ Ignore Critical issues: "I'll fix later" → Builds on broken foundation
✅ Fix immediately: Stop, fix Critical issues, verify, then continue → Solid foundation

Examples

Example 1: Complete Workflow

User: "Execute the email-validation plan using agent-developers"

Agent: "I'm using the agent-developers skill to execute this plan."

[Reads plan: /docs/plans/2024-12-06-email-validation.md] [Creates TodoWrite with 3 tasks]


Task 1: Email Validation Function

[Dispatches implementation subagent]

Task: Implement Task 1: Email Validation Function

You are implementing Task 1 from /docs/plans/2024-12-06-email-validation.md.

Read that task carefully. Your job is to:

  1. Implement exactly what the task specifies
  2. Write tests (following TDD)
  3. Verify implementation works
  4. Commit your work
  5. Report back

Work from: /Users/dev/project


Subagent reports:

Implemented email validation function with tests.

Tests written (3):

  • test_valid_email
  • test_invalid_email_no_at
  • test_invalid_email_no_domain

Test results:

$ pytest tests/validation/test_email.py -v
===== 3 passed in 0.12s =====

Files changed:

  • Created: src/validation/email.py
  • Created: tests/validation/test_email.py

Committed: "feat: add email validation function"


Agent (Review):

[Gets git SHAs: base=abc1234, head=def5678]

[Dispatches code-reviewer]

Agent: code-reviewer

You are reviewing Task 1 implementation.

What was implemented: [Pastes subagent's report]

Requirements: Task 1: Email Validation Function

  • Create validation function using email-validator library
  • Test valid and invalid emails
  • Return boolean

Code changes:

  • Base: abc1234
  • Head: def5678

Review and report.


Code Reviewer reports:

Strengths:

  • Clean TDD approach (tests first, pass)
  • Good test coverage (valid + 2 invalid cases)
  • Follows Python conventions

Issues: None

Assessment: Ready


Agent: "Task 1 Complete ✅ - No issues found. Moving to Task 2."

[Marks Task 1 complete]


Task 2: Integrate Validation

[Dispatches implementation subagent] [Subagent implements Task 2] [Agent reviews, finds Important issue: Missing error logging] [Agent dispatches fix subagent] [Fix verified] [Marks Task 2 complete]


Task 3: Add API Tests

[Dispatches implementation subagent] [Subagent implements Task 3] [Agent reviews - Ready] [Marks Task 3 complete]


Final Review:

[Dispatches final code-reviewer for complete feature]

Final Reviewer reports:

Completeness: All plan requirements met
Quality: High - clean code, good tests, proper error handling
Issues: None
Recommendation: Ready


Agent:

All tasks complete and verified!

Summary:

  • 3 tasks completed
  • All tests passing (8/8)
  • Code reviewed and approved
  • Commits: 4 clean commits

Ready to finish development branch?

Example 2: Handling Review Feedback

Task 2 Review finds Critical issue:

Code Reviewer: Critical: Email validation not handling None input - will crash if None passed

Agent:

Critical issue found. Fixing before proceeding.

[Dispatches fix subagent]

Task: Fix code review issue - None handling

Fix this Critical issue: Email validation crashes on None input

Work from: /Users/dev/project

Add None check to validate_email function and test case.


Fix Subagent:

Added None handling:

def validate_email(email: str | None) -> bool:
    if email is None:
        return False
    # ... rest of validation

Added test:

def test_none_email():
    assert validate_email(None) is False

Test results:

$ pytest tests/validation/test_email.py -v
===== 4 passed in 0.13s =====

Committed: "fix: handle None in email validation"


Agent:

Critical issue fixed and verified. Task 2 complete ✅

Moving to Task 3.
