A planning-first prompting pattern: think → plan → verify → execute.
Optimize for correctness and control, not speed.
Tell the model what you want, and explicitly forbid it from producing solutions yet.
Prompt pattern
I want to solve X.
Do not write code or solutions yet.
First, help me think through the problem.
Why
Prevents reflexive answers and forces reasoning mode.
(Fowler [1])
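If you'd rather script this than type it, here is a minimal sketch in Python. `llm` is a placeholder for whatever chat client you use, not a real API; the mechanics that matter are the single accumulated history and the explicit "do not solve yet" instruction.

```python
# Sketch only: `llm` is a placeholder, not a real API -- wire it to
# your chat client of choice (OpenAI, Anthropic, a local model, ...).
def llm(messages: list[dict]) -> str:
    raise NotImplementedError("call your chat model here")

# One conversation, accumulated turn by turn.
history: list[dict] = [{
    "role": "user",
    "content": (
        "I want to solve X.\n"
        "Do not write code or solutions yet.\n"
        "First, help me think through the problem."
    ),
}]
history.append({"role": "assistant", "content": llm(history)})
```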
Make the model restate the problem.
Prompt pattern
Summarize the problem in your own words.
List constraints, assumptions, and unknowns.
Why
Surfaces misunderstandings early.
(Fowler [1], Wang et al. [2])
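In the sketch this is just the next turn. The `print` is the point: read the restatement yourself, because a wrong summary here is the cheapest bug you will ever catch.

```python
# Continues the first sketch; stubs repeated so this stands alone.
def llm(messages: list[dict]) -> str:
    raise NotImplementedError("call your chat model here")

history: list[dict] = []  # conversation so far, as in the first sketch

history.append({"role": "user", "content":
    "Summarize the problem in your own words.\n"
    "List constraints, assumptions, and unknowns."})
restatement = llm(history)
history.append({"role": "assistant", "content": restatement})

print(restatement)  # human checkpoint: verify before any planning
```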
Ask for steps, not answers.
Prompt pattern
Propose a step-by-step plan to solve this.
Do not execute the steps.
Why
Plan-and-Solve prompting outperforms zero-shot Chain-of-Thought on multi-step reasoning tasks.
(Plan-and-Solve Prompting [2])
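If you want the plan to be machine-addressable later, ask for a numbered list (that clause is an addition of this sketch, not part of the pattern above) and parse the step lines out:

```python
import re

def llm(messages: list[dict]) -> str:  # placeholder, as in the first sketch
    raise NotImplementedError("call your chat model here")

history: list[dict] = []  # conversation so far

history.append({"role": "user", "content":
    "Propose a step-by-step plan to solve this, as a numbered list.\n"
    "Do not execute the steps."})
plan_text = llm(history)
history.append({"role": "assistant", "content": plan_text})

# Pull out the "1. ..." lines so later stages can address steps
# individually. Assumes the model honored the numbered-list ask.
steps = re.findall(r"^\s*\d+[.)]\s+(.+)$", plan_text, flags=re.M)
```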
Interrogate before execution.
Prompt pattern
What are the risks, edge cases, or weak assumptions in this plan?
Where could this fail?
Why
Models are effective at self-critique when asked explicitly.
(World-model planning framing [3])
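Scripted, the interrogation is just another turn in the same conversation. Nothing is executed; the critique simply becomes context that the revised plan will be conditioned on.

```python
def llm(messages: list[dict]) -> str:  # placeholder, as before
    raise NotImplementedError("call your chat model here")

history: list[dict] = []  # conversation so far

history.append({"role": "user", "content":
    "What are the risks, edge cases, or weak assumptions in this plan?\n"
    "Where could this fail?"})
critique = llm(history)
history.append({"role": "assistant", "content": critique})
```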
Freeze the approach to avoid drift.
Prompt pattern
Revise the plan based on critique.
This is the final plan we will follow.
Why
Prevents silent re-planning mid-task.
(Fowler [1], Modular Agentic Planner [5])
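One way to make the freeze concrete: store the revised plan as an immutable value and quote it back verbatim from then on, so drift is visible rather than silent. The numbered-list parsing follows the convention from the planning sketch.

```python
import re

def llm(messages: list[dict]) -> str:  # placeholder, as before
    raise NotImplementedError("call your chat model here")

history: list[dict] = []  # conversation so far

history.append({"role": "user", "content":
    "Revise the plan based on the critique, as a numbered list.\n"
    "This is the final plan we will follow."})
final_plan_text = llm(history)
history.append({"role": "assistant", "content": final_plan_text})

# Freeze: an immutable tuple the rest of the session quotes verbatim.
FINAL_PLAN: tuple[str, ...] = tuple(
    re.findall(r"^\s*\d+[.)]\s+(.+)$", final_plan_text, flags=re.M))
```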
Important
It's totally OK to stop right here, giggle to yourself, and run off with the plan. You are under no obligation to keep using AI. Go, do the thing. You got this!
Never ask for the whole solution at once.
Prompt pattern
Execute step 1 only.
Explain your reasoning.
Stop when finished.
Why
Stepwise execution improves reliability and debuggability.
(CoT research [2], PEARL [4])
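Packaged as a function (the name and signature are illustrative, not from any cited paper), this is one call per plan step:

```python
def execute_step(llm, history: list[dict],
                 plan: tuple[str, ...], i: int) -> str:
    """Run exactly one plan step; the prompt forbids running ahead."""
    history.append({"role": "user", "content":
        f"Execute step {i + 1} only: {plan[i]}\n"
        "Explain your reasoning.\n"
        "Stop when finished."})
    result = llm(history)
    history.append({"role": "assistant", "content": result})
    return result
```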
Check alignment before moving on.
Prompt pattern
Does this step align with the plan?
What should we verify before step 2?
Why
Uses the model as both implementer and reviewer.
(Logical CoT for planning [6])
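To make the verdict machine-checkable, this sketch adds a fixed first-line token. The ALIGNED/MISALIGNED convention is an invention of the sketch, not of the cited work, and the naive first-line parse is the brittle spot you would harden first.

```python
def verify_step(llm, history: list[dict], i: int) -> bool:
    """Have the model review its own step against the frozen plan."""
    history.append({"role": "user", "content":
        f"Does this step align with the final plan? Answer ALIGNED or "
        f"MISALIGNED on the first line, then explain what we should "
        f"verify before step {i + 2}."})
    verdict = llm(history)
    history.append({"role": "assistant", "content": verdict})
    # Naive parse: "MISALIGNED" does not start with "ALIGNED",
    # so startswith() separates the two tokens correctly.
    return verdict.strip().upper().startswith("ALIGNED")
```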
Advance deliberately, one step at a time.
Why
Mirrors agentic planning systems used in research and practice.
(PEARL [4], MAP [5])
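Put together (reusing the illustrative helpers from the sketches above), the loop mirrors the flowchart at the end of this piece, including the misaligned-back-to-critique edge:

```python
# Stubs repeated so the sketch stands alone; working bodies are in
# the earlier sketches.
def llm(messages: list[dict]) -> str: raise NotImplementedError
def execute_step(llm, history, plan, i) -> str: ...
def verify_step(llm, history, i) -> bool: ...
history: list[dict] = []
FINAL_PLAN: tuple[str, ...] = ()

i = 0
while i < len(FINAL_PLAN):
    execute_step(llm, history, FINAL_PLAN, i)
    if verify_step(llm, history, i):
        i += 1  # aligned -> advance deliberately
    else:
        # misaligned -> back to critique before retrying this step
        history.append({"role": "user", "content":
            "That drifted from the final plan. Critique the drift, "
            "then redo the step within the plan."})
        history.append({"role": "assistant", "content": llm(history)})
```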
- Planning first beats prompting straight for output
- Smaller steps > longer chains
- Ask for plans before artifacts
- Lock decisions explicitly
- Treat the model like a junior engineer, not a vending machine
References

1. Martin Fowler & Xu Hao. An Example of LLM Prompting for Programming. https://martinfowler.com/articles/2023-chatgpt-xu-hao.html
2. Lei Wang et al. (ACL 2023). Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models. https://aclanthology.org/2023.acl-long.147.pdf
3. Shibo Hao et al. (2023). Reasoning with Language Model is Planning with World Model. https://arxiv.org/abs/2305.14992
4. Simeng Sun et al. (2023). PEARL: Prompting Large Language Models to Plan and Execute Actions over Long Documents. https://www.microsoft.com/en-us/research/wp-content/uploads/2023/05/Pearl_Quality.pdf
5. Taylor Webb et al. (2023). Improving Planning with Large Language Models: A Modular Agentic Architecture. https://arxiv.org/abs/2310.00194
6. Liu et al. (2024). Logical Chain-of-Thought Instruction Tuning for Symbolic Planning. https://ethical-ai-chief.medium.com/new-paper-teaching-llms-to-plan-logical-chain-of-thought-instruction-tuning-for-symbolic-planning-16cbbe90d555
```mermaid
flowchart TD
    A[Start: Intent] --> B[Block Output<br/>No solutions yet]
    B --> C[Shared Understanding<br/>Restate problem + constraints]
    C --> D[Generate Plan<br/>Step-by-step only]
    D --> E[Critique Plan<br/>Risks, assumptions, edge cases]
    E --> F[Lock Plan<br/>Freeze approach]
    F --> G[Execute One Step]
    G --> H[Verify Against Plan]
    H -->|Aligned| I{More Steps?}
    I -->|Yes| G
    I -->|No| J[Done]
    H -->|Misaligned| E
```
BTW, ChatGPT helped write this.