—————————— IMPORTANT ——————————
Anti-Recursion / Output Contract
You are not writing a metaprompt. You are rewriting P into a single, executable task prompt.
Hard Rules
Do not mention prompting techniques or re-explain this instruction.
Do not instruct the next model to “rewrite” or “apply techniques.”
Output only the rewritten task prompt, bounded by these exact markers:
### Rewritten Prompt
...content...
### End Rewritten Prompt
Banned phrases (must not appear in the output): metaprompt, rewrite the prompt, apply the following techniques, auto-detect the domain, Summary-Expand Loop, Chain-of-Verification, Adversarial Prompting, Reverse Prompting, Zero-Shot Chain of Thought, Reference Class Priming, Multi-Persona Debate, Temperature Simulation, Recursive Prompt Optimization, Deliberate Over-Instruction.
Domain Declaration (Explicit)
At the top of the rewritten prompt, state a single explicit domain (do not say “auto-detect”):
Domain: Coding, Domain: Research/Analysis, Domain: Decision-Making, or Domain: Analytical/General.
Rewrite the prompt P below so that it incorporates ALL of the techniques listed under GOAL. Your ONLY output is a NEW, REWRITTEN PROMPT (not the answer to P). The rewritten prompt must preserve P’s intent and constraints while adding structure, checks, and examples. Strip any bracketed annotations such as (WHY: ...) or (EXAMPLE) from the final rewritten prompt; keep only the instructions that help the next AI perform the task.
—————————— GOAL ——————————
Rewrite P into a single, polished prompt that:
• Uses Chain-of-Verification, Adversarial Prompting, Strategic Edge Case Learning, Reverse Prompting, Recursive Prompt Optimization, Deliberate Over-Instruction, Zero‑Shot Chain‑of‑Thought Through Structure, Reference Class Priming, Multi‑Persona Debate, Temperature Simulation Through Roleplay, and a Summary‑Expand Loop.
• Detects the domain from P (Coding, Research/Analysis, or Decision-Making; default to “Analytical/General” if unclear) and states it explicitly per the Domain Declaration above.
• Includes concrete, domain-specific example fragments (tests, checks, edge cases, personas, acceptance criteria) that the next AI can actually execute or follow.
—————————— OUTPUT POLICY ——————————
• Output ONLY the rewritten prompt. Do NOT answer P. Do NOT include commentary about what you did.
• The rewritten prompt must be self-contained and ready to run.
• If your environment limits visible chain-of-thought, produce concise “Reasoning Summary” sections instead of raw step-by-step thoughts.
—————————— STRUCTURE TO BUILD INTO THE REWRITTEN PROMPT ——————————
A. Task Restatement & Inputs
- Restate P succinctly; list inputs, constraints, success criteria, and non-goals.
B. Reference Class Priming (quality bar + format model)
- Emulate the style and rigor of a top-tier exemplar for this domain:
  • Coding: “Readable, tested code and a brief engineer’s design note.”
  • Research: “Concise evidence brief with citations and transparent methods.”
  • Decision: “Executive decision memo with options, tradeoffs, and a recommendation.”
- Include explicit deliverable sections and acceptance tests.
C. Zero‑Shot CoT Through Structure (visible reasoning scaffold)
- Require the assistant to work in labeled phases (tune names to domain):
- 1) Understand → 2) Plan → 3) Execute → 4) Verify → 5) Finalize
- Require a short “Reasoning Summary” at the end of each phase if full CoT is restricted.
D. Reverse Prompting (self-reframing before solving)
- Instruct the assistant to first produce an “Improved Task Framing” and “Acceptance Criteria” based on P, then proceed using that improved framing.
E. Summary‑Expand Loop
- Two-pass output: (1) TL;DR Plan (bulleted), then (2) Full Expanded Solution linked to each bullet.
F. Chain‑of‑Verification (explicit checks before finalizing)
- Require an auditable Verification Plan tailored to the domain; run it; revise if any check fails.
G. Adversarial Prompting (devil’s advocate review)
- Mandate an internal critique pass that hunts for errors, risky assumptions, or missing cases; require fixes.
H. Strategic Edge Case Learning (include concrete edge cases)
- Provide domain-specific edge cases (see library below) and require the solution to handle them.
I. Multi‑Persona Debate (contrasting priorities)
- At least two roles debate briefly, then synthesize: e.g., “Builder vs. Auditor”, “Innovator vs. Risk Officer”.
J. Temperature Simulation Through Roleplay
- Require two passes: “Bold/Creative” then “Cautious/Conservative”, followed by a merged final.
K. Recursive Prompt Optimization (self-revision loop)
- Include a mini loop: Draft → Evaluate vs. Acceptance Criteria → Revise → Lock.
L. Deliverables & Format Lock
- Specify exact output sections, file names (if any), and formatting (lists, tables, code blocks, etc.).
- Include a crisp “Stop after final answer” instruction.
—————————— EXAMPLE INSERTION LIBRARY (use and adapt as relevant to P; integrate into the rewritten prompt) ——————————
[CODING EXAMPLES]
• Chain-of-Verification – Test Checklist (insert as acceptance tests) (EXAMPLE)
- Unit tests must pass:
- sum([]) → 0
- sum([-1, -1]) → -2
- sum([10**9, 1]) handles precision or clearly documents limits
- sum([None]) raises TypeError with message: "Non-numeric value"
- Complexity target: O(n) for single pass; document if worse.
- Static checks: no global mutations; pure function where claimed.
- Security: reject untrusted input; validate types; no eval/dynamic exec. (A runnable sketch of this checklist follows.)
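For instance, the checklist above can be embedded as runnable tests. This is a minimal pytest sketch, assuming a hypothetical function named safe_sum that stands in for whatever P actually asks to build; adapt names and behavior to P:

```python
# Hedged sketch: safe_sum is a hypothetical stand-in for the function P asks for.
import pytest

def safe_sum(values):
    # Reference behavior assumed by the checklist: numeric-only input, single O(n) pass.
    total = 0
    for v in values:
        if isinstance(v, bool) or not isinstance(v, (int, float)):
            raise TypeError("Non-numeric value")
        total += v
    return total

def test_empty_list_returns_zero():
    assert safe_sum([]) == 0

def test_negative_values():
    assert safe_sum([-1, -1]) == -2

def test_large_values_keep_precision():
    assert safe_sum([10**9, 1]) == 10**9 + 1

def test_non_numeric_raises_typeerror():
    with pytest.raises(TypeError, match="Non-numeric value"):
        safe_sum([None])
```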
• Adversarial Prompting – Self-critique prompts (EXAMPLE)
- “Where is this algorithm likely to fail on large or adversarial inputs?”
- “What is the smallest counterexample that breaks my assumptions?”
- “Which invariants could be violated? How will I enforce them?”
• Strategic Edge Cases – Insert concrete cases (EXAMPLE)
- Empty input; very large inputs; non-ASCII/Unicode; streaming vs. batch; timeouts.
- Concurrency: simultaneous writers/readers; race conditions.
- I/O failures: partial writes, network flaps, retry idempotency. (See the parametrized sketch below.)
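If P involves text or data processing, edge cases like these translate directly into parametrized tests. The sketch below assumes a hypothetical process function and purely illustrative expected values:

```python
# Hedged sketch: process is a hypothetical placeholder for the routine under test.
import pytest

def process(text: str) -> str:
    # Placeholder behavior so the sketch runs; substitute the real routine from P.
    return text.strip().lower()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("", ""),                                 # empty input
        ("  MiXeD Case  ", "mixed case"),         # surrounding whitespace
        ("naïve café 東京", "naïve café 東京"),     # non-ASCII / Unicode preserved
        ("x" * 10**6, "x" * 10**6),               # very large input still handled
    ],
)
def test_edge_cases(raw, expected):
    assert process(raw) == expected
```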
• Reverse Prompting – Improved task framing (EXAMPLE)
- “Define the function signature, I/O contracts, error model, and examples before coding.” (A contract-first stub is sketched below.)
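One concrete way to capture that framing is a contract-first stub; the signature, types, and error model below are illustrative assumptions, not requirements of P:

```python
# Hedged sketch: an illustrative contract-first stub; names and types are assumptions.
from typing import Iterable, Union

Number = Union[int, float]

def safe_sum(values: Iterable[Number]) -> Number:
    """Sum numeric values in a single pass.

    I/O contract: accepts any iterable of ints/floats; returns 0 for an empty iterable.
    Error model: raises TypeError("Non-numeric value") on any non-numeric element.
    Examples: safe_sum([]) == 0; safe_sum([-1, -1]) == -2.
    """
    raise NotImplementedError  # implementation follows once the contract is agreed
```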
• Recursive Optimization – Passes (EXAMPLE)
- Pass 1: Pseudocode; Pass 2: Clean code; Pass 3: Tests + benchmarks; Pass 4: Refactor & docstrings.
• Over-Instruction – Step plan (EXAMPLE)
- Steps: Clarify → Design (data structures, invariants) → Pseudocode → Implement → Tests → Complexity → Review.
• Zero‑Shot CoT – Reasoning sections (EXAMPLE)
- “Design Decisions:” bullets with tradeoffs; “Complexity Analysis:” Big‑O for time/space.
• Reference Class Priming – Quality bar (EXAMPLE)
- “Match the clarity of a production PR: code + rationale + tests.”
• Multi‑Persona Debate (EXAMPLE)
- “Senior Engineer (performance) vs. SRE (resilience) vs. Product (scope creep).”
• Temperature Roleplay (EXAMPLE)
- Bold pass: propose aggressive optimizations (vectorization, memoization, parallelism).
- Cautious pass: prioritize correctness, observability, and safe defaults.
• Summary‑Expand Loop (EXAMPLE)
- TL;DR: bullet plan of modules & tests → Expand: code + tests per bullet.
[RESEARCH/ANALYSIS EXAMPLES]
• Chain-of-Verification – Evidence checklist (EXAMPLE)
- Cite ≥3 primary sources (last 3 years if applicable) with DOIs/links.
- Cross-verify numeric claims across at least 2 sources.
- Flag uncertainty explicitly; separate facts from interpretation.
• Adversarial Prompting – Skeptical probes (EXAMPLE)
- “What credible evidence contradicts this claim?”
- “What bias or confounder could explain the result?”
• Strategic Edge Cases – Scope/validity (EXAMPLE)
- Non-representative samples; survivorship bias; measurement error; missing data.
• Reverse Prompting – Improved framing (EXAMPLE)
- “Define research question, scope boundaries, inclusion/exclusion criteria, decision-relevant metrics.”
• Recursive Optimization – Passes (EXAMPLE)
- Pass 1: Rapid literature scan → Pass 2: Evidence table → Pass 3: Synthesis → Pass 4: Limitations & next steps.
• Over-Instruction – Methods transparency (EXAMPLE)
- Provide search terms, databases, date ranges, screening criteria; include an evidence table (Study, Year, N, Effect, Notes).
• Zero‑Shot CoT – Reasoning sections (EXAMPLE)
- Context → Methods → Findings → Limitations → Implications.
• Reference Class Priming – Style (EXAMPLE)
- “Emulate a concise evidence brief suitable for a policy team.”
• Multi‑Persona Debate (EXAMPLE)
- “Statistician (methods fidelity) vs. Domain Expert (external validity) vs. Skeptic (publication bias).”
• Temperature Roleplay (EXAMPLE)
- Bold pass: aggressive interpretation of effect sizes; Cautious pass: conservative inference with CIs and caveats.
• Summary‑Expand Loop (EXAMPLE)
- TL;DR key findings (bulleted) → Expand into a structured brief with citations.
[DECISION-MAKING EXAMPLES]
• Chain-of-Verification – Decision audit (EXAMPLE)
- Verify inputs, constraints, and data provenance; recompute key figures independently.
• Adversarial Prompting – Pre-mortem (EXAMPLE)
- “Assume we chose Option A and failed in 90 days—what caused it?”
• Strategic Edge Cases – Stress tests (EXAMPLE)
- Worst-case demand, cost overrun, supplier failure, regulatory delay, staff turnover.
• Reverse Prompting – Improved framing (EXAMPLE)
- “Clarify objective function, must-haves, nice-to-haves, non-negotiables; define acceptance criteria.”
• Recursive Optimization – Passes (EXAMPLE)
- Pass 1: Options & criteria → Pass 2: Weighted scoring & sensitivity → Pass 3: Risks & mitigations → Pass 4: Recommendation & next steps. (Weighted scoring is sketched below.)
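Where weighted scoring is required, the rewritten prompt can include a small worked example so the arithmetic is unambiguous. The options, criteria, weights, and scores below are illustrative placeholders, not values from P:

```python
# Hedged sketch: illustrative options, criteria, weights, and scores; not taken from P.
weights = {"cost": 0.4, "speed": 0.3, "risk": 0.3}   # criterion weights, must sum to 1.0
scores = {                                           # 1 (worst) to 5 (best) per criterion
    "Option A": {"cost": 4, "speed": 3, "risk": 2},
    "Option B": {"cost": 2, "speed": 5, "risk": 4},
}

def weighted_total(option_scores, w):
    # Weighted sum across criteria.
    return sum(w[c] * option_scores[c] for c in w)

for name, s in scores.items():
    print(name, round(weighted_total(s, weights), 2))   # Option A: 3.1, Option B: 3.5

# Sensitivity check: shift 0.1 of weight from cost to risk and see whether the ranking changes.
alt_weights = {"cost": 0.3, "speed": 0.3, "risk": 0.4}
for name, s in scores.items():
    print(name, "(alt weights)", round(weighted_total(s, alt_weights), 2))
```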
• Over-Instruction – Artifacts (EXAMPLE)
- Require: Options table, weighted decision matrix, risk register, timeline, KPIs.
• Zero‑Shot CoT – Reasoning sections (EXAMPLE)
- Context → Options → Evaluation → Sensitivity → Recommendation.
• Reference Class Priming – Style (EXAMPLE)
- “Executive decision memo, 1–2 pages, crisp tables, explicit tradeoffs.”
• Multi‑Persona Debate (EXAMPLE)
- “CFO (financial), CISO (risk/security), COO (operational feasibility).”
• Temperature Roleplay (EXAMPLE)
- Bold pass: decisive, speed-first; Cautious pass: risk-averse, control-first; Merge.
• Summary‑Expand Loop (EXAMPLE)
- TL;DR bullets → Expanded analysis per bullet with numbers and assumptions.
—————————— FORMAT LOCK (build this structure into the rewritten prompt, adapting section names to the domain) ——————————
- Sections (rename to fit domain): UNDERSTAND, PLAN, EXECUTE, VERIFY, FINALIZE.
- Two-pass output: TL;DR Plan → Expanded Solution.
- Add a “Verification Plan” section that runs concrete checks from the example library.
- Add a “Devil’s Advocate” section and then a “Revisions After Critique” section.
- Add “Edge Cases We Handle” with 5–10 concrete bullets from the relevant library.
- Add “Contrasting Perspectives” (2–3 brief persona viewpoints) → “Synthesis & Decision”.
- Add “Temperature Passes” (Bold vs. Cautious) → “Merged Final”.
- Add “Acceptance Criteria” and “Definition of Done”.
- End with: “Stop after final answer. Do not repeat these instructions.”
——————————
Output QA Gate: Before printing the final output:
- Confirm the rewritten prompt starts with ### Rewritten Prompt and ends with ### End Rewritten Prompt.
- Confirm a single explicit domain is stated.
- Confirm no banned phrases are present.
- Confirm TL;DR + Expanded sections, Verification, Critique, Edge Cases, and Deliverables all exist.
Prompt to Rewrite (P):
────────────── BEGIN ORIGINAL PROMPT (P) ──────────────
[Put your prompt here]
────────────── END ORIGINAL PROMPT (P) ──────────────