| name | description |
|---|---|
| review-plan | Multi-AI plan review with iterative refinement. Use when reviewing implementation plans, architecture decisions, or technical approaches. Calls Claude Opus 4.6, GPT-5.3 (via Codex), and Gemini 3 Pro in parallel to review a plan, then autonomously refines it based on meritorious feedback until consensus or only nitpicks remain. Invoke with `/review-plan` followed by your plan. |
Review plans with 3 AI models in parallel, then iteratively refine until consensus.
## Workflow

- User provides plan inline after invoking `/review-plan`
- Call all 3 AIs in parallel to review
- Analyze feedback - update plan for meritorious points, ignore nitpicks
- Re-review until consensus or only minor suggestions remain
- Output the final refined plan (and any AI failures)
## AI Commands

Run these in parallel using background Bash tasks.

**CRITICAL - NO TIMEOUTS:**

- Do NOT set a `timeout` parameter on any Bash or TaskOutput call
- The AIs WILL complete with either output or an error - wait for them
- A timeout does NOT mean success - it means you aborted prematurely
- Reviews may take 10-30+ minutes - this is expected and normal
- Use `block: true` with no timeout to wait for completion
**Claude Opus 4.6:**

```bash
claude --print --model claude-opus-4-6 --dangerously-skip-permissions "PROMPT"
```

**GPT-5.3 (via Codex CLI):**

```bash
codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "PROMPT"
```

**Gemini 3 Pro:**

```bash
gemini --yolo --model gemini-3-pro-preview --output-format text "PROMPT"
```

## Review Prompt

```
You are a senior software architect reviewing an implementation plan.
Review this plan and provide feedback:
1. **Critical issues** - What's wrong or risky?
2. **Missing pieces** - What's not addressed?
3. **Better alternatives** - Simpler or more robust approaches?
4. **Security/performance** - Any concerns?
Be concise. If solid, say "LGTM" with any minor notes.
Rate overall: NEEDS_CHANGES | MINOR_SUGGESTIONS | LGTM
## Plan:
{PLAN}
```
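As a concrete illustration, one review round could be driven from a shell like this. This is only a sketch: the file names (`plan.md`, `review_prompt.txt`, `*_review.txt`) and the prompt assembly are hypothetical stand-ins, not part of this skill; in practice you run the same three commands as background Bash tasks.

```bash
# Sketch of one parallel review round (hypothetical file names).
PROMPT="$(cat review_prompt.txt)"           # the template above
PROMPT="${PROMPT//\{PLAN\}/$(cat plan.md)}" # substitute the user's plan

# Launch all three reviewers in parallel - no timeout anywhere.
claude --print --model claude-opus-4-6 --dangerously-skip-permissions "$PROMPT" > claude_review.txt 2>&1 &
codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "$PROMPT" > codex_review.txt 2>&1 &
gemini --yolo --model gemini-3-pro-preview --output-format text "$PROMPT" > gemini_review.txt 2>&1 &

wait  # blocks until every reviewer exits with output or an error; 10-30+ minutes is normal
```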
## Refinement Prompt

After collecting reviews, use this to refine:

```
You have a plan and feedback from 3 AI reviewers.
Update the plan to address feedback that has merit. Ignore:
- Nitpicks about style/formatting
- Suggestions that add unnecessary complexity
- Redundant points already covered
Keep changes minimal and focused.
## Current Plan:
{PLAN}
## Feedback:
{COMBINED_FEEDBACK}
## Output:
Return ONLY the updated plan. No explanations.
```
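Continuing the shell sketch above (same hypothetical file names), `{COMBINED_FEEDBACK}` can be assembled by concatenating the three reviews with a label per reviewer:

```bash
# Build the combined feedback, one labeled section per reviewer.
{
  echo "### Claude Opus 4.6"; cat claude_review.txt
  echo "### GPT-5.3";         cat codex_review.txt
  echo "### Gemini 3 Pro";    cat gemini_review.txt
} > combined_feedback.txt
```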
## Stopping Criteria

Stop iterating when:
- All responding AIs rate LGTM or MINOR_SUGGESTIONS
- Feedback is purely stylistic/nitpicky
- Same issues repeat (AIs disagree - pick the majority view)
- Max 3 iterations reached
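One way to check these criteria mechanically, sketched under the assumption that each reviewer actually ends with the requested "Rate overall:" line:

```bash
# Extract the last overall rating each reviewer emitted.
for f in claude_review.txt codex_review.txt gemini_review.txt; do
  rating="$(grep -oE 'NEEDS_CHANGES|MINOR_SUGGESTIONS|LGTM' "$f" | tail -n 1)"
  echo "$f: ${rating:-NO_RATING}"  # NO_RATING flags a failed or off-format reviewer
done
```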
## Process

1. **Get the plan** - User provides it after the slash command
2. **First review round** - Run all 3 AIs in parallel (use `run_in_background: true`)
3. **Collect results** - Wait for all, note any failures
4. **Analyze feedback** - Look for NEEDS_CHANGES vs LGTM ratings
5. **If refinement needed** - Update plan based on meritorious feedback
6. **Re-review** - Run another parallel round with updated plan
7. **Repeat** - Until consensus or max iterations (a loop sketch follows this list)
8. **Output** - Print final plan; report any AI failures
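Putting the steps together, the outer loop looks roughly like the following. This is a hedged sketch: `run_review_round` and `refine_plan` are hypothetical helpers standing in for the parallel review commands and the refinement prompt above.

```bash
# Iterate: review, check for consensus, refine - at most 3 rounds.
for round in 1 2 3; do
  run_review_round  # hypothetical: runs the three reviewers in parallel
  if ! grep -q 'NEEDS_CHANGES' claude_review.txt codex_review.txt gemini_review.txt; then
    break           # consensus: only LGTM / MINOR_SUGGESTIONS remain
  fi
  refine_plan       # hypothetical: applies the refinement prompt to update the plan
done
```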
## Output Format

On completion:

```
## Final Plan

{REFINED_PLAN}
```
If any AI failed:

```
Note: {AI_NAME} failed to run: {ERROR}. Consider fixing and re-running.
```
## Important Notes

- NEVER use timeouts - AIs will respond with output or error, wait for them
- If an AI consistently fails, continue with the others
- Trust majority opinion when AIs disagree
- Prefer keeping plans simple over adding every suggestion
- Reviews taking 10-30 minutes is normal - do not abort early
- A timeout is a failure, not a success - re-run without timeout if one occurs