
@BrianLeishman
Created February 6, 2026 23:07
Claude Code skill: Multi-AI plan review with iterative refinement (Opus, GPT-5.3, Gemini 3 Pro)
---
name: review-plan
description: Multi-AI plan review with iterative refinement. Use when reviewing implementation plans, architecture decisions, or technical approaches. Calls Claude Opus 4.6, GPT-5.3 (via Codex), and Gemini 3 Pro in parallel to review a plan, then autonomously refines it based on meritorious feedback until consensus or only nitpicks remain. Invoke with /review-plan followed by your plan.
---

Multi-AI Plan Review

Review plans with 3 AI models in parallel, then iteratively refine until consensus.

Workflow

  1. User provides plan inline after invoking /review-plan
  2. Call all 3 AIs in parallel to review
  3. Analyze feedback - update plan for meritorious points, ignore nitpicks
  4. Re-review until consensus or only minor suggestions remain
  5. Output the final refined plan (and any AI failures)

AI Commands

Run these in parallel using background Bash tasks.

CRITICAL - NO TIMEOUTS:

  • Do NOT set a timeout parameter on any Bash or TaskOutput call
  • The AIs WILL complete with either output or an error - wait for them
  • A timeout does NOT mean success - it means you aborted prematurely
  • Reviews may take 10-30+ minutes - this is expected and normal
  • Use block: true with no timeout to wait for completion

Claude Opus 4.6:

claude --print --model claude-opus-4-6 --dangerously-skip-permissions "PROMPT"

GPT-5.3 (via Codex CLI):

codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "PROMPT"

Gemini 3 Pro:

gemini --yolo --model gemini-3-pro-preview --output-format text "PROMPT"
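The three commands above can be launched as backgrounded jobs followed by a bare `wait`. This runnable sketch substitutes stand-in functions for the real CLIs so it works anywhere; the `run_*` function names and `/tmp/review-*.txt` file paths are illustrative, not part of the skill.

```shell
# Stand-in functions replace the real CLIs so this sketch runs anywhere;
# swap in the claude/codex/gemini commands shown above.
run_opus()   { echo "LGTM"; }               # stand-in for the claude call
run_codex()  { echo "MINOR_SUGGESTIONS"; }  # stand-in for the codex call
run_gemini() { echo "LGTM"; }               # stand-in for the gemini call

run_opus   > /tmp/review-opus.txt   2>&1 &
run_codex  > /tmp/review-codex.txt  2>&1 &
run_gemini > /tmp/review-gemini.txt 2>&1 &
wait  # no timeout: block until every reviewer finishes or errors out
```

The bare `wait` is the whole point: it blocks indefinitely with no timeout, matching the "NO TIMEOUTS" rule above.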

Review Prompt Template

You are a senior software architect reviewing an implementation plan.

Review this plan and provide feedback:
1. **Critical issues** - What's wrong or risky?
2. **Missing pieces** - What's not addressed?
3. **Better alternatives** - Simpler or more robust approaches?
4. **Security/performance** - Any concerns?

Be concise. If solid, say "LGTM" with any minor notes.
Rate overall: NEEDS_CHANGES | MINOR_SUGGESTIONS | LGTM

## Plan:
{PLAN}
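As an illustration, the template can be spliced together with a heredoc before passing it to a reviewer CLI. The example plan text and the `PLAN`/`PROMPT` variable names are hypothetical; the template is abbreviated here.

```shell
# Hypothetical prompt assembly: splice the plan into the review template.
PLAN="Add a caching layer in front of the user service."
PROMPT=$(cat <<EOF
You are a senior software architect reviewing an implementation plan.

Review this plan and provide feedback.
Rate overall: NEEDS_CHANGES | MINOR_SUGGESTIONS | LGTM

## Plan:
${PLAN}
EOF
)
printf '%s\n' "$PROMPT"
```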

Refinement Prompt Template

After collecting reviews, use this to refine:

You have a plan and feedback from 3 AI reviewers.

Update the plan to address feedback that has merit. Ignore:
- Nitpicks about style/formatting
- Suggestions that add unnecessary complexity
- Redundant points already covered

Keep changes minimal and focused.

## Current Plan:
{PLAN}

## Feedback:
{COMBINED_FEEDBACK}

## Output:
Return ONLY the updated plan. No explanations.
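One way to build `{COMBINED_FEEDBACK}` is to concatenate the collected review outputs under per-reviewer headers. A sketch, with made-up file names and sample contents standing in for real reviewer output:

```shell
# Illustrative assembly of COMBINED_FEEDBACK from three review files;
# file names and sample contents are made up for the sketch.
printf 'LGTM\n'                                 > /tmp/review-opus.txt
printf 'NEEDS_CHANGES: missing rollback step\n' > /tmp/review-codex.txt
printf 'LGTM\n'                                 > /tmp/review-gemini.txt

COMBINED_FEEDBACK=""
for name in opus codex gemini; do
  COMBINED_FEEDBACK="${COMBINED_FEEDBACK}### ${name}
$(cat "/tmp/review-${name}.txt")

"
done
printf '%s' "$COMBINED_FEEDBACK"
```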

Consensus Detection

Stop iterating when:

  • All responding AIs rate LGTM or MINOR_SUGGESTIONS
  • Feedback is purely stylistic/nitpicky
  • Same issues repeat (AIs disagree - pick the majority view)
  • Max 3 iterations reached
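The rating-based part of the stopping rule can be checked mechanically: iterate only while at least one review is rated NEEDS_CHANGES. A runnable sketch with sample review files (names and contents are illustrative):

```shell
# Mechanical form of the stopping rule: iterate only while some review
# is rated NEEDS_CHANGES. Sample files stand in for real reviewer output.
printf 'LGTM\n'              > /tmp/r1.txt
printf 'MINOR_SUGGESTIONS\n' > /tmp/r2.txt
printf 'LGTM\n'              > /tmp/r3.txt

if grep -q 'NEEDS_CHANGES' /tmp/r1.txt /tmp/r2.txt /tmp/r3.txt; then
  echo "iterate"      # at least one reviewer wants changes
else
  echo "consensus"    # only LGTM / minor suggestions remain
fi
```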

Execution Steps

  1. Get the plan - User provides it after the slash command
  2. First review round - Run all 3 AIs in parallel (use run_in_background: true)
  3. Collect results - Wait for all, note any failures
  4. Analyze feedback - Look for NEEDS_CHANGES vs LGTM ratings
  5. If refinement needed - Update plan based on meritorious feedback
  6. Re-review - Run another parallel round with updated plan
  7. Repeat until consensus or max iterations
  8. Output - Print final plan; report any AI failures
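The outer loop of steps 5-7, capped at three iterations, might look like this sketch; the `rating` assignment is a stand-in for refining the plan and re-reviewing it.

```shell
# Sketch of the outer refinement loop, capped at 3 rounds. The rating
# assignment is a stand-in for refining the plan and re-reviewing.
iteration=1
rating="NEEDS_CHANGES"   # would come from parsing the collected reviews
while [ "$iteration" -le 3 ] && [ "$rating" = "NEEDS_CHANGES" ]; do
  echo "review round $iteration"
  rating="LGTM"          # stand-in: refine plan, re-review, re-rate
  iteration=$((iteration + 1))
done
echo "final rating: $rating"
```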

Output Format

On completion:

## Final Plan

{REFINED_PLAN}

If any AI failed:

Note: {AI_NAME} failed to run: {ERROR}. Consider fixing and re-running.

Tips

  • NEVER use timeouts - AIs will respond with output or error, wait for them
  • If an AI consistently fails, continue with the others
  • Trust majority opinion when AIs disagree
  • Prefer keeping plans simple over adding every suggestion
  • Reviews taking 10-30 minutes is normal - do not abort early
  • A timeout is a failure, not a success - re-run without timeout if one occurs