@BrianLeishman
Created February 6, 2026 23:07
Claude Code skill: Multi-model code review (Opus, GPT-5.3, Gemini 3 Pro)
---
name: code-review
description: Use when you want a comprehensive code review of current branch changes from multiple AI perspectives. This skill orchestrates parallel reviews from Claude Opus 4.6, GPT-5.3 (via Codex), and Gemini 3 Pro, synthesizes their feedback, then immediately fixes all valid issues and iterates until clean.
---

Multi-Model Code Review

Review current branch changes with 3 AI models in parallel, synthesize feedback, fix all valid issues, and iterate until clean.

Workflow

  1. Determine the target branch (usually main or master)
  2. Stage any new/untracked files so reviewers can see them
  3. Call all 3 AIs in parallel to review
  4. Wait for all results, note any failures
  5. Synthesize feedback — evaluate each issue for validity
  6. Fix all valid issues immediately
  7. Stage fixes and re-run reviews — iterate until only repetitive nitpicks remain

Step 1: Determine Target Branch

git remote show origin | grep 'HEAD branch' | cut -d' ' -f5

Fall back to main or master if that command fails.
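The detection plus fallback can be sketched as one shell snippet (the fallback below assumes a local main or master branch exists):

```shell
# Ask the remote for its default branch; fall back to main, then master.
target_branch=$(git remote show origin 2>/dev/null | grep 'HEAD branch' | cut -d' ' -f5)
if [ -z "$target_branch" ]; then
  if git show-ref --verify --quiet refs/heads/main; then
    target_branch=main
  else
    target_branch=master
  fi
fi
echo "Reviewing against: $target_branch"
```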

Step 2: Run All Reviews in Parallel

Stage any untracked files first so they appear in the diff (git add new files only).
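One hedged way to stage only the untracked files, leaving tracked-but-modified files alone, is:

```shell
# Stage only untracked files; tracked files with modifications are left as-is.
# Run this from inside the repository under review.
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git ls-files --others --exclude-standard -z | xargs -0 -r git add --
fi
```

`git ls-files --others --exclude-standard` lists untracked files while honoring .gitignore, so ignored artifacts are not accidentally staged.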

Run these as background Bash tasks (run_in_background: true), then wait for all to complete.

IMPORTANT: Do NOT paste the diff into the prompt. All agents have their own tools to view git changes - let them use those tools.

CRITICAL - NO TIMEOUTS:

  • Do NOT set a timeout parameter on any Bash or TaskOutput call
  • The AIs WILL complete with either output or an error - wait for them
  • A timeout does NOT mean success - it means you aborted prematurely
  • Reviews may take 10-30+ minutes - this is expected and normal
  • Use block: true with no timeout to wait for completion

Claude Opus 4.6:

claude --print --model claude-opus-4-6 --dangerously-skip-permissions "PROMPT"

GPT-5.3 (via Codex CLI):

codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "PROMPT"

Gemini 3 Pro:

gemini --yolo --model gemini-3-pro-preview --output-format text "PROMPT"
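As a plain-shell sketch of the fan-out (log file names are illustrative; inside Claude Code, prefer run_in_background Bash tasks over raw '&'):

```shell
# $PROMPT holds the review prompt template; each reviewer writes to its own
# log file so output is not interleaved.
PROMPT='You are a senior software engineer doing a code review.'  # use the full template here

claude --print --model claude-opus-4-6 --dangerously-skip-permissions "$PROMPT" > opus.log 2>&1 &
codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" \
  --dangerously-bypass-approvals-and-sandbox "$PROMPT" > codex.log 2>&1 &
gemini --yolo --model gemini-3-pro-preview --output-format text "$PROMPT" > gemini.log 2>&1 &

wait   # no timeout: block until all three reviewers finish
```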

Review Prompt Template

Use this same prompt for all 3 agents (replace <target-branch> with actual branch name):

You are a senior software engineer doing a code review.

Review the current branch changes against <target-branch>. Use your tools to view the git diff and examine the code.

Provide feedback on:
1. **Bugs/Issues** - Logic errors, edge cases, potential crashes
2. **Security** - Vulnerabilities, injection risks, auth issues
3. **Performance** - Inefficiencies, N+1 queries, memory leaks
4. **Code Quality** - Readability, naming, duplication, complexity
5. **Best Practices** - Idiomatic patterns, error handling, testing

Be specific - reference line numbers and file names.
Prioritize: critical issues first, nitpicks last.

Step 3: Synthesize and Evaluate

Present findings, but critically evaluate each issue before acting:

  • Is the issue valid? Does it describe a real problem, not a hypothetical one?
  • Is the fix correct? Would the suggested approach actually improve things?
  • Discard invalid feedback. Reviewers sometimes hallucinate issues or misread code.
  • Do NOT skip valid issues because they seem "small." If a fix is valid and correct, it gets done. Size is irrelevant — always do things the right way.

Present synthesis in this format:

## Code Review Summary

### Consensus Issues
[Issues flagged by 2+ reviewers - highest priority]

### Unique Insights
[Valid issues raised by only one reviewer]

### Discarded
[Issues that were evaluated and found invalid, with brief reason]

Step 4: Fix All Valid Issues

Immediately implement all fixes. Do not ask for permission — the user invoked the review to get things fixed.

After fixing, run build checks (cargo check, npm run build, etc.) to verify nothing broke.
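A hedged sketch of that verification step, choosing the check from common repo marker files (the file names and commands below are conventions, not guarantees; adjust for your stack):

```shell
# Pick a build check based on which project marker file exists.
run_build_check() {
  if [ -f Cargo.toml ]; then
    cargo check
  elif [ -f package.json ]; then
    npm run build
  elif [ -f go.mod ]; then
    go build ./...
  else
    echo "no recognized build system; verify manually"
  fi
}
run_build_check
```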

Step 5: Stage Fixes and Iterate

CRITICAL: Reviewers see git state, not your working tree. After applying fixes:

  1. Stage all modified files with git add so the changes appear in the diff
  2. Then re-run the full 3-model review
  3. Synthesize and fix again

Without staging, git diff master...HEAD only shows committed changes — your fixes would be invisible to the review agents, and they'd report the exact same issues again.
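The stage-and-verify step before re-running the reviews can be sketched as:

```shell
git add -A           # stage all fixes; do NOT commit, the user decides when to commit
git status --short   # verify: no '??' (untracked) or ' M' (unstaged) entries remain
```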

Stop iterating when:

  • New feedback is repetitive (same issues re-raised that were already addressed)
  • Only subjective nitpicks remain (style preferences, not correctness)
  • Reviewers find no new issues

Typically 2-3 rounds are sufficient.

Important: Do NOT commit the fixes yourself during iteration — only stage them. The user decides when to commit.

Handling Failures

If any AI fails:

  • Report which one failed and the error
  • Still present results from successful reviews
  • Continue with fixes based on available feedback

A timeout is a failure, not a success. If you hit a timeout, that means you aborted the review prematurely. Do not assume "all is good" from a timeout - you simply didn't get the review. Re-run without a timeout.

Tips

  • NEVER use timeouts - AIs will respond with output or error, wait for them
  • NEVER paste diffs - Let each AI use its own tools to view the changes
  • Gemini is slower than Claude/Codex - this is normal, wait for it
  • Trust consensus when AIs disagree on subjective matters
  • Reviews taking 10-30 minutes per round is normal - do not abort early