| name | description |
|---|---|
| code-review | Use when you want a comprehensive code review of current branch changes from multiple AI perspectives. This skill orchestrates parallel reviews from Claude Opus 4.6, GPT-5.3 (via Codex), and Gemini 3 Pro, synthesizes their feedback, then immediately fixes all valid issues and iterates until clean. |

Review current branch changes with 3 AI models in parallel, synthesize feedback, fix all valid issues, and iterate until clean.
- Determine the target branch (usually `main` or `master`)
- Stage any new/untracked files so reviewers can see them
- Call all 3 AIs in parallel to review
- Wait for all results, note any failures
- Synthesize feedback — evaluate each issue for validity
- Fix all valid issues immediately
- Stage fixes and re-run reviews — iterate until only repetitive nitpicks remain
Determine the target branch with `git remote show origin | grep 'HEAD branch' | cut -d' ' -f5`. Fall back to `main` or `master` if that fails.
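Putting detection and the fallback together, a minimal sketch (the `target` variable name is illustrative):

```bash
# Detect the remote's default branch; fall back to a local main/master if detection fails.
target=$(git remote show origin 2>/dev/null | grep 'HEAD branch' | cut -d' ' -f5)
if [ -z "$target" ]; then
  if git show-ref --verify --quiet refs/heads/main; then
    target=main
  else
    target=master
  fi
fi
echo "Reviewing against: $target"
```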
Stage any untracked files first so they appear in the diff (`git add` new files only).
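One possible way to script that step (a sketch; NUL separation keeps unusual filenames safe):

```bash
# Stage new/untracked files only, leaving existing tracked modifications alone for now.
untracked=$(git ls-files --others --exclude-standard)
if [ -n "$untracked" ]; then
  git ls-files --others --exclude-standard -z | xargs -0 git add --
fi
```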
Run these as background Bash tasks (`run_in_background: true`), then wait for all to complete.
IMPORTANT: Do NOT paste the diff into the prompt. All agents have their own tools to view git changes - let them use those tools.
CRITICAL - NO TIMEOUTS:
- Do NOT set a `timeout` parameter on any Bash or TaskOutput call
- The AIs WILL complete with either output or an error - wait for them
- A timeout does NOT mean success - it means you aborted prematurely
- Reviews may take 10-30+ minutes - this is expected and normal
- Use `block: true` with no timeout to wait for completion
Claude Opus 4.6:

```bash
claude --print --model claude-opus-4-6 --dangerously-skip-permissions "PROMPT"
```

GPT-5.3 (via Codex CLI):

```bash
codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "PROMPT"
```

Gemini 3 Pro:

```bash
gemini --yolo --model gemini-3-pro-preview --output-format text "PROMPT"
```

Use this same prompt for all 3 agents (replace `<target-branch>` with actual branch name):
```
You are a senior software engineer doing a code review.

Review the current branch changes against <target-branch>. Use your tools to view the git diff and examine the code.

Provide feedback on:
1. **Bugs/Issues** - Logic errors, edge cases, potential crashes
2. **Security** - Vulnerabilities, injection risks, auth issues
3. **Performance** - Inefficiencies, N+1 queries, memory leaks
4. **Code Quality** - Readability, naming, duplication, complexity
5. **Best Practices** - Idiomatic patterns, error handling, testing

Be specific - reference line numbers and file names.
Prioritize: critical issues first, nitpicks last.
```
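For illustration only, the three reviews might be launched in parallel from a plain shell as below; in practice each command runs as its own background Bash task, and `review-prompt.txt` plus the log file names are hypothetical:

```bash
# Launch all three reviewers in parallel, each writing to its own log file,
# then block until every one of them has finished -- no timeout anywhere.
PROMPT="$(cat review-prompt.txt)"   # hypothetical file holding the prompt above

claude --print --model claude-opus-4-6 --dangerously-skip-permissions "$PROMPT" > review-claude.log 2>&1 &
codex exec -c model="gpt-5.3-codex" -c model_reasoning_effort="high" --dangerously-bypass-approvals-and-sandbox "$PROMPT" > review-codex.log 2>&1 &
gemini --yolo --model gemini-3-pro-preview --output-format text "$PROMPT" > review-gemini.log 2>&1 &

wait   # returns only after all three background jobs complete
```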
Present findings, but critically evaluate each issue before acting:
- Is the issue valid? Does it describe a real problem, not a hypothetical one?
- Is the fix correct? Would the suggested approach actually improve things?
- Discard invalid feedback. Reviewers sometimes hallucinate issues or misread code.
- Do NOT skip valid issues because they seem "small." If a fix is valid and correct, it gets done. Size is irrelevant — always do things the right way.
Present synthesis in this format:
```
## Code Review Summary

### Consensus Issues
[Issues flagged by 2+ reviewers - highest priority]

### Unique Insights
[Valid issues raised by only one reviewer]

### Discarded
[Issues that were evaluated and found invalid, with brief reason]
```

Immediately implement all fixes. Do not ask for permission — the user invoked the review to get things fixed.
After fixing, run build checks (`cargo check`, `npm run build`, etc.) to verify nothing broke.
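A rough sketch of choosing the build check from the files present (the real project may need a different command):

```bash
# Run whichever build check matches the project type; adjust for the actual project.
if [ -f Cargo.toml ]; then
  cargo check
elif [ -f package.json ]; then
  npm run build
elif [ -f go.mod ]; then
  go build ./...
fi
```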
CRITICAL: Reviewers see git state, not your working tree. After applying fixes:
- Stage all modified files with `git add` so the changes appear in the diff
- Then re-run the full 3-model review
- Synthesize and fix again
Without staging, `git diff <target-branch>...HEAD` only shows committed changes — your fixes would be invisible to the review agents, and they'd report the exact same issues again.
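One way to make the fixes visible to the next round without committing anything (commands are illustrative; new files created during fixing would still need a plain `git add`):

```bash
# Stage the fixes (do NOT commit) and confirm they now show up as staged changes.
git add -u                   # stage modified tracked files
git status --short           # quick sanity check of what reviewers will see
git diff --cached --stat     # staged-but-uncommitted changes for the next round
```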
Stop iterating when:
- New feedback is repetitive (same issues re-raised that were already addressed)
- Only subjective nitpicks remain (style preferences, not correctness)
- Reviewers find no new issues
Typically 2-3 rounds is sufficient.
Important: Do NOT commit the fixes yourself during iteration — only stage them. The user decides when to commit.
If any AI fails:
- Report which one failed and the error
- Still present results from successful reviews
- Continue with fixes based on available feedback
A timeout is a failure, not a success. If you hit a timeout, that means you aborted the review prematurely. Do not assume "all is good" from a timeout - you simply didn't get the review. Re-run without a timeout.
- NEVER use timeouts - AIs will respond with output or error, wait for them
- NEVER paste diffs - Let each AI use its own tools to view the changes
- Gemini is slower than Claude/Codex - this is normal, wait for it
- Trust consensus when AIs disagree on subjective matters
- Reviews taking 10-30 minutes per round is normal - do not abort early