A cheat sheet for "sneaking up" on a task (a.k.a. "Chain of Thought") when working with an AI to accomplish a goal

Chain-of-Thought (CoT) / “Sneaking Up” on a Task — TL;DR

A planning-first prompting pattern: think → plan → verify → execute.
Optimize for correctness and control, not speed.


1. State Intent, Block Output

Tell the model what you want and explicitly forbid solutions.

Prompt pattern

I want to solve X.
Do not write code or solutions yet.
First, help me think through the problem.

Why: Prevents reflexive answers and forces the model into reasoning mode.
(Fowler [1])


2. Force Shared Understanding

Make the model restate the problem.

Prompt pattern

Summarize the problem in your own words.
List constraints, assumptions, and unknowns.

Why: Surfaces misunderstandings early.
(Fowler [1], Wang et al. [2])


3. Generate an Explicit Plan

Ask for steps, not answers.

Prompt pattern

Propose a step-by-step plan to solve this.
Do not execute the steps.

Why: Plan-then-solve prompting outperforms plain zero-shot Chain-of-Thought on multi-step reasoning tasks.
(Plan-and-Solve Prompting [2])


4. Critique the Plan

Interrogate before execution.

Prompt pattern

What are the risks, edge cases, or weak assumptions in this plan?
Where could this fail?

Why: Models are effective at self-critique when asked explicitly.
(World-model planning framing [3])


5. Lock the Plan

Freeze the approach to avoid drift.

Prompt pattern

Revise the plan based on critique.
This is the final plan we will follow.

Why: Prevents silent re-planning mid-task.
(Fowler [1], Modular Agentic Planner [5])
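
If you drive a model through an API rather than a chat window, the planning phase (steps 1-5) can be scripted as one multi-turn conversation. The sketch below is illustrative only: it assumes the OpenAI Python SDK, and the model name, example goal, and `ask` helper are placeholders, not part of this cheat sheet.

```python
# Minimal sketch of the planning phase (steps 1-5).
# Assumes the OpenAI Python SDK; the model name, example goal, and the
# `ask` helper are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # keep every turn so the model sees the whole conversation

def ask(prompt: str) -> str:
    """Send one user turn, record the reply, and return it."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

goal = "Migrate our nightly CSV import to a streaming pipeline."  # example goal

# 1. State intent, block output
ask(f"I want to solve this: {goal}\n"
    "Do not write code or solutions yet. First, help me think through the problem.")

# 2. Force shared understanding
ask("Summarize the problem in your own words. "
    "List constraints, assumptions, and unknowns.")

# 3. Generate an explicit plan
ask("Propose a step-by-step plan to solve this. Do not execute the steps.")

# 4. Critique the plan
ask("What are the risks, edge cases, or weak assumptions in this plan? "
    "Where could this fail?")

# 5. Lock the plan
final_plan = ask("Revise the plan based on that critique. "
                 "This is the final plan we will follow.")
print(final_plan)
```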


Important

It's totally OK to stop right here, giggle to yourself, and run off with the plan. You are under no obligation to keep using AI. Go, do the thing. You got this!


6. Execute One Step Only

Never ask for the whole solution at once.

Prompt pattern

Execute step 1 only.
Explain your reasoning.
Stop when finished.

Why: Stepwise execution improves reliability and debuggability.
(CoT research [2], PEARL [4])


7. Verify Against the Plan

Check alignment before moving on.

Prompt pattern

Does this step align with the plan?
What should we verify before step 2?

Why: Uses the model as both implementer and reviewer.
(Logical CoT for planning [6])


8. Repeat: Execute → Verify → Continue

Advance deliberately, one step at a time.

Why: Mirrors agentic planning systems used in research and practice.
(PEARL [4], MAP [5])
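
The execution loop (steps 6-8) can be scripted the same way. This is again a sketch under assumptions: the OpenAI Python SDK, an `ask` helper like the one in the planning sketch, a hard-coded step count, and a manual "continue?" check standing in for real human review.

```python
# Minimal sketch of the execute -> verify -> continue loop (steps 6-8).
# Assumes the OpenAI Python SDK; `final_plan`, the model name, the step count,
# and the manual confirmation are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
final_plan = "1. ...\n2. ...\n3. ..."  # paste the locked plan from steps 1-5 here
history = [{"role": "user", "content": "Here is the plan we locked in:\n" + final_plan}]

def ask(prompt: str) -> str:
    """Send one user turn, record the reply, and return it."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

step_count = 3  # however many steps the locked plan contains

for step in range(1, step_count + 1):
    # 6. Execute one step only
    print(ask(f"Execute step {step} only. Explain your reasoning. Stop when finished."))

    # 7. Verify against the plan
    print(ask("Does this step align with the plan? "
              f"What should we verify before step {step + 1}?"))

    # 8. A human stays in the loop: continue only after reviewing the output.
    if input("Continue to the next step? [y/N] ").strip().lower() != "y":
        break
```

If verification surfaces a misalignment, loop back to the critique step rather than pushing forward, as the flowchart below shows.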


Rules of Thumb

  • Planning beats prompting for output
  • Smaller steps > longer chains
  • Ask for plans before artifacts
  • Lock decisions explicitly
  • Treat the model like a junior engineer, not a vending machine

Key Sources

  1. Martin Fowler & Xu Hao
    An Example of LLM Prompting for Programming
    https://martinfowler.com/articles/2023-chatgpt-xu-hao.html

  2. Lei Wang et al. (ACL 2023)
    Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
    https://aclanthology.org/2023.acl-long.147.pdf

  3. Hao et al. (2023)
    Reasoning with Language Model is Planning with World Model
    https://arxiv.org/abs/2305.14992

  4. Sun et al. (Microsoft Research, 2023)
    PEARL: Prompting Large Language Models to Plan and Execute Actions over Long Documents
    https://www.microsoft.com/en-us/research/wp-content/uploads/2023/05/Pearl_Quality.pdf

  5. Li et al. (2023)
    Improving Planning with Large Language Models: A Modular Agentic Architecture
    https://arxiv.org/abs/2310.00194

  6. Liu et al. (2024)
    Logical Chain-of-Thought Instruction Tuning for Symbolic Planning
    https://ethical-ai-chief.medium.com/new-paper-teaching-llms-to-plan-logical-chain-of-thought-instruction-tuning-for-symbolic-planning-16cbbe90d555


```mermaid
flowchart TD
    A[Start: Intent] --> B[Block Output<br/>No solutions yet]
    B --> C[Shared Understanding<br/>Restate problem + constraints]
    C --> D[Generate Plan<br/>Step-by-step only]
    D --> E[Critique Plan<br/>Risks, assumptions, edge cases]
    E --> F[Lock Plan<br/>Freeze approach]

    F --> G[Execute One Step]
    G --> H[Verify Against Plan]

    H -->|Aligned| I{More Steps?}
    I -->|Yes| G
    I -->|No| J[Done]

    H -->|Misaligned| E
```
BTW, ChatGPT helped write this.
