description: AI Self-Correction Loop - A universal pattern where AI owns the full feedback loop: writing automated checks first, implementing to satisfy them, and iterating until all validations pass.
Transition from "AI as autocomplete" to "AI as owner" by enforcing a closed feedback loop across any domain. The AI must verify its own work through automated checks before presenting results.
- Checks First: Define objective success criteria (tests, scripts, or automated validations) before writing feature code.
- The 30-Second Rule: Automated checks MUST complete in under 30 seconds to maintain high-velocity AI iteration.
- AI-Centric Feedback: Validation output must be designed for AI consumption, not human review. It should include precise error locations, stack traces, and state dumps.
- AI Ownership of Tooling: The AI chooses the most appropriate validation tools that meet the speed and feedback quality requirements.
- Iterate until Green: Run checks, read failures as actionable feedback, and fix implementation until all criteria are met.
- No Half-Baked Results: Do not present code that "might work." Only present "code + passing checks."
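The closed loop can be pictured as a small driver script. This is a minimal sketch, assuming the checks are exposed through a single shell command (the `make check` entry point, time budget, and iteration cap here are illustrative assumptions, not part of the pattern):

```python
import subprocess
import time

CHECK_CMD = ["make", "check"]   # hypothetical entry point for the project's automated checks
TIME_BUDGET_S = 30              # checks must stay under this to keep iteration fast
MAX_ITERATIONS = 10             # bail out instead of looping forever on the same failure

def run_checks() -> tuple[bool, str]:
    """Run the checks once and return (passed, combined output) for the AI to read."""
    start = time.monotonic()
    result = subprocess.run(CHECK_CMD, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    output = result.stdout + result.stderr
    if elapsed > TIME_BUDGET_S:
        output += f"\nWARNING: checks took {elapsed:.1f}s (> {TIME_BUDGET_S}s); make them more targeted."
    return result.returncode == 0, output

for attempt in range(1, MAX_ITERATIONS + 1):
    passed, feedback = run_checks()
    if passed:
        print(f"All checks green after {attempt} run(s).")
        break
    # In the real loop the AI reads `feedback`, diagnoses the root cause,
    # and edits the implementation before the next run.
    print(f"Attempt {attempt} failed; feedback:\n{feedback}")
else:
    print("Checks still failing after the iteration cap; re-examine the checks or the approach.")
```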
Step 1: Document Objective Checks
- Create or update an `objective-checks.md` file (or a specific section in the project spec) that lists the success criteria in plain English.
- This document serves as the source of truth for both humans and AI.
- For each check, define:
- Scenario: What is being tested?
- Expected Outcome: What constitutes success?
- Verification Method: How will it be automated?
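One way to keep these three fields explicit is to mirror each documented check as a small structured record. The sketch below is purely illustrative; the scenario, test path, and field values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ObjectiveCheck:
    """Mirror of one entry in objective-checks.md."""
    scenario: str             # what is being tested
    expected_outcome: str     # what constitutes success
    verification_method: str  # how it is automated

# Hypothetical example entry, kept in sync with the plain-English document.
login_rate_limit = ObjectiveCheck(
    scenario="A user submits 5 wrong passwords within one minute",
    expected_outcome="The 6th attempt is rejected with HTTP 429",
    verification_method="pytest tests/test_login_rate_limit.py::test_sixth_attempt_rejected",
)
```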
Step 2: Choose and Implement AI-Centric Automation
- Select tools that provide the fastest possible feedback loop (aim for <10s, max 30s).
- Tool Selection Criteria:
- Speed: Can it run the specific check in seconds? (e.g., unit/integration tests over full system E2E where possible).
- Precision: Does it point to the exact file and line of failure?
- Rich Context: Does it provide enough state (variable values, logs) for the AI to diagnose without manual probing?
- Crucial: Ensure the automated checks exactly match the human-readable documentation from Step 1.
- Write the validation code or scripts.
- Verify Failure: Run the checks and verify they fail as expected. This proves the check is valid and the feature is currently missing or broken.
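As a concrete sketch, suppose a rate-limit check like the Step 1 example is automated with pytest. The tool choice, file names, and the Flask-style `create_app` factory are assumptions, not part of this pattern:

```python
# tests/test_login_rate_limit.py -- hypothetical targeted check.
# Mirrors the "login rate limit" entry in objective-checks.md.
import pytest

from myapp import create_app  # assumed Flask-style application factory


@pytest.fixture
def client():
    app = create_app(testing=True)
    return app.test_client()


def test_sixth_attempt_rejected(client):
    """Scenario: 5 wrong passwords in one minute. Expected: the 6th attempt returns 429."""
    for _ in range(5):
        client.post("/login", json={"user": "alice", "password": "wrong"})
    response = client.post("/login", json={"user": "alice", "password": "wrong"})
    assert response.status_code == 429, f"expected 429, got {response.status_code}"
```

Before implementing, run only this file (for example `pytest tests/test_login_rate_limit.py -x`) and confirm it fails; a check that cannot fail proves nothing.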
Step 3: Research & Implement
- Research the objective checks, the current codebase, and relevant best practices to determine the optimal implementation strategy.
- Implement the feature by applying the best practices and patterns discovered during this research.
- Goal: Use the research to implement the best solution that satisfies the checks while maintaining high architectural quality.
Step 4: Run Checks & Capture Machine-Readable Feedback
- Execute the validation suite.
- Time Constraint: If checks take >30s, refactor them to be more targeted (e.g., test only the affected module).
- Critical: Capture ALL output (stderr, logs, trace files). Treat all warnings, console errors, and log exceptions as failures.
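A minimal sketch of such a capture step, again assuming pytest: `-W error` promotes warnings to failures and `--junitxml` produces a machine-readable report (both are standard pytest options); the test and report paths are arbitrary:

```python
import subprocess

# Run only the affected tests, turn warnings into failures, and emit a machine-readable report.
result = subprocess.run(
    ["pytest", "tests/test_login_rate_limit.py", "-W", "error", "--junitxml=check-report.xml"],
    capture_output=True,
    text=True,
)

# Persist everything: exit code, stdout, and stderr all count as feedback.
with open("check-output.log", "w") as log:
    log.write(f"exit code: {result.returncode}\n")
    log.write("--- stdout ---\n" + result.stdout)
    log.write("--- stderr ---\n" + result.stderr)
```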
Step 5: Analyze & Fix using High-Signal Feedback
- Read the machine-readable output directly.
- Actionable AI Feedback includes:
- Precise Locations: `file:line:column` format for immediate jumping.
- State Diffing: "Expected 'A', but got 'B'" with a clear diff.
- Contextual Dumps: Local variable values at the time of failure, HTML snapshots, or raw API response bodies.
- Traceability: Full stack traces or call chains leading to the error.
- Use this diagnostic data to identify the root cause. Do not guess.
- Refine the implementation based on that diagnosis.
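As one possible shape of this analysis step, the JUnit XML report produced in Step 4 can be reduced to exact failure locations and messages. The attribute names below follow common JUnit-style output; treat them as an assumption and adapt the parsing to whatever report your tools emit:

```python
import xml.etree.ElementTree as ET

# Reduce the report to "file:line  test  message" lines the AI can act on directly.
tree = ET.parse("check-report.xml")
for case in tree.iter("testcase"):
    for failure in case.findall("failure") + case.findall("error"):
        location = f"{case.get('file', '?')}:{case.get('line', '?')}"
        message = (failure.get("message") or "").strip()
        print(f"{location}  {case.get('name')}  {message}")
        # failure.text carries the full traceback / state dump for deeper diagnosis.
```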
Step 6: Repeat until Green
- Re-run validations after every fix.
- Continue until 100% of checks pass.
The AI is "done" only when:
- Documentation Sync: The `objective-checks.md` (or equivalent) is up-to-date and matches the implementation.
- Automated Success: All new and related existing checks pass green.
- Clean Feedback: No new errors, warnings, or regressions were introduced (check logs/output).
- Complete Patch: The diff includes the documentation update, the automated checks, and the implementation.
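A completion gate can make these criteria mechanical. The sketch below assumes a git workflow with staged changes and the file layout used in the earlier examples; the path patterns are illustrative only:

```python
import subprocess

# Files touched by the patch (assumes changes are staged; adjust the diff range to your workflow).
diff = subprocess.run(
    ["git", "diff", "--cached", "--name-only"], capture_output=True, text=True, check=True
).stdout.splitlines()

required = {
    "documentation update": any(path.endswith("objective-checks.md") for path in diff),
    "automated checks": any(path.startswith("tests/") for path in diff),
    "implementation": any(path.startswith("src/") for path in diff),
}

checks_green = subprocess.run(["pytest", "-W", "error"]).returncode == 0

missing = [name for name, present in required.items() if not present]
if missing or not checks_green:
    print(f"Not done yet. Missing from patch: {missing or 'nothing'}; checks green: {checks_green}")
else:
    print("Done: documentation, checks, and implementation are in the patch and everything is green.")
```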