Here are some prompting techniques to take you through 2026:

Don’t ask for solutions. Ask for pressure, friction, reframing, and contradictions.

Treat ChatGPT as a model of a system, not an oracle

Instead of "What's the best architecture?", ask "Give me 3 architectures that fail in different ways." Instead of "How should we design this?", ask "Walk me through the system's internal pressures and where cracks are likely to form.”

Use progressive adversarial prompting

The sequence:

  1. Baseline: "Give me the naive, default answer."
  2. Undermine it: "Now critique that answer as if it will be deployed at scale under budget constraints."
  3. Stress test: "What assumptions in your critique are brittle?"
  4. Synthesis: "Propose a version that survives that stress test."

You’re forcing the model into iterative self-correction loops. That produces higher-quality reasoning than any single prompt will.
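If you want to run this sequence as a script instead of by hand, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the example problem are placeholders, not part of the gist:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The four steps of the adversarial sequence, verbatim from above.
STEPS = [
    "Give me the naive, default answer.",
    "Now critique that answer as if it will be deployed at scale under budget constraints.",
    "What assumptions in your critique are brittle?",
    "Propose a version that survives that stress test.",
]

def adversarial_loop(problem: str, model: str = "gpt-4o") -> str:
    # Keep every step in one message history so each turn critiques
    # the previous answer instead of starting from scratch.
    messages = [{"role": "user", "content": problem}]
    answer = ""
    for step in STEPS:
        messages.append({"role": "user", "content": step})
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
    return answer  # the synthesized, stress-tested version

print(adversarial_loop("Design the ingestion pipeline for our event data."))
```

The point is the loop, not the SDK: each call sees its own previous answer plus the next adversarial instruction.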

Push for deltas, not wholes

  • "Given the context so far, what's the smallest meaningful improvement I'm not seeing?"
  • "What's the decision tree here? Give me only the branches that actually matter."

Ask for reframings instead of answers

Models are strongest at changing the shape of a problem, not giving you “the” solution.

Useful prompts

These force the model to reinterpret the situation rather than improvise solutions.

  • "Reframe this problem from the perspective of constraints instead of requirements."
  • "Reframe this as a coordination problem, not a technical one."
  • "Reframe it assuming time-to-market dominates correctness."

Failure-oriented prompts

  • "What assumptions in this design are the most fragile?"
  • "Which decisions here create long-term regret?"

Make the model reveal its epistemic boundaries.

  • "Tell me which parts of your reasoning depend on guesswork."
  • "Highlight the steps where you interpolated missing information."
  • "Give me the claims that are least defensible and why."
  • "Separate your answer into 'high-confidence reasoning' and 'low-confidence speculation'."
  • "Where might you be hallucinating structure that isn't really present in the problem?"

For when you need insight, not answers.

  • "Explain this from the perspective of someone who strongly disagrees with the plan."
  • "Tell me why a competent engineer would reject this approach outright."
  • "Explain the problem assuming the current direction is actively harmful."
  • "Invert the goal: what would we do if we wanted this project to fail?"
  • "Argue that the 'obvious' solution is actually the wrong one."

Targeted at your delta-driven thinking.

  • "What is the smallest conceptual shift that would meaningfully improve this design?"
  • "Which assumption, if adjusted slightly, unlocks the most leverage?"
  • "Give me one refinement that reduces long-term complexity."
  • "What's the next local decision we should interrogate, not the global one?"
  • "Give me the smallest experiment that disproves our current direction."

These refine your own framing, not the model’s output.

  • "Rewrite my initial question to expose the assumptions I smuggled in."
  • "Give me five more incisive versions of my question."
  • "What is the version of my question that a principal engineer would ask?"
  • "Rewrite my question from the POV of someone skeptical it even defines a real problem."
  • "Transform my question into a decision-making problem instead of a solution-seeking one."

Demand epistemic hygiene

You will get higher signal if you explicitly request uncertainty and reasoning boundaries.

Examples:

  • "Give me the parts of your answer that rely on unknowns."
  • "Which of your claims are the least certain and why?"
  • "Highlight any leaps of logic you had to make."

This forces the model to expose the scaffolding of its reasoning instead of presenting a smooth fiction.

Use the model as a skeptical collaborator

Tell it to adopt a role that constrains its incentives, e.g.:

  • The hostile reviewer preparing to tear your proposal apart: forces defensive thinking and preemptive critique.

Role → Pressure → Better reasoning.
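If you are driving this through the API rather than the chat UI, the role goes in the system message. A sketch under the same SDK assumption; the reviewer wording is illustrative and proposal.md is a hypothetical file:

```python
from openai import OpenAI

client = OpenAI()

HOSTILE_REVIEWER = (
    "You are a hostile reviewer preparing to tear this proposal apart. "
    "Hunt for the weakest assumptions, the unpriced risks, and the failure "
    "modes the author hopes nobody notices. Do not soften the critique."
)

with open("proposal.md") as f:  # hypothetical file holding the proposal text
    proposal = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": HOSTILE_REVIEWER},
        {"role": "user", "content": proposal},
    ],
)
print(response.choices[0].message.content)
```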

Don’t ask for creativity; ask for constraint manipulation

“Be creative” is vague. “Relax one core constraint and show how the design changes” is concrete.

Try:

  • "If latency were irrelevant but cost mattered, what architecture emerges?"
  • "If availability requirements doubled, what breaks first?"

This generates insights that matter in real engineering discussions.

Keep a stable context thread and interrogate deviations

The model reasons best with continuity. For longer sessions:

  • Anchor with: "Here are the two assumptions we agreed on earlier. Check whether your next answer violates them."
  • Request: "Compress our conversation so far into a 10-line architectural brief and use that from now on."

This eliminates drift and gives you the kind of consistency you would expect from a real collaborator.
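The same discipline can be scripted for long automated sessions: periodically compress the history into a brief, then re-anchor every turn on that brief plus the agreed assumptions. A rough sketch under the same SDK assumption; the listed assumptions are illustrative, not from the gist:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

ASSUMPTIONS = [
    "Reads dominate writes by roughly 10:1.",        # illustrative assumptions,
    "The team can only operate managed services.",   # not from the gist
]

def compress(history: list[dict]) -> str:
    """Ask the model to compress the conversation so far into a short brief."""
    request = {
        "role": "user",
        "content": "Compress our conversation so far into a 10-line "
                   "architectural brief and use that from now on.",
    }
    response = client.chat.completions.create(model=MODEL, messages=history + [request])
    return response.choices[0].message.content

def anchored_turn(brief: str, question: str) -> str:
    """Re-anchor each new turn on the brief and the agreed assumptions."""
    anchor = (
        "Here are the assumptions we agreed on earlier; check whether your "
        "answer violates them:\n- " + "\n- ".join(ASSUMPTIONS) +
        "\n\nCurrent brief:\n" + brief
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": anchor},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```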

Occasionally strip ChatGPT of its intelligence

A surprisingly powerful technique:

  • "Answer as if you have no domain knowledge and can only use first principles."

This avoids pattern-matching and yields genuinely original decompositions.

Meta-technique: Ask for the alternatives to your prompt

This one is underused but devastatingly effective:

  • "Rewrite my question in 5 smarter, more incisive variants. Then answer the best one."

This upgrades your thinking while upgrading the model’s output.
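Scripted, the meta-technique is two calls: one to sharpen the question, one to answer the variant the model itself judged best. A sketch under the same SDK assumption, with illustrative prompt wording:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def ask_the_better_question(question: str) -> str:
    # Step 1: have the model propose sharper variants of the question.
    messages = [{
        "role": "user",
        "content": "Rewrite my question in 5 smarter, more incisive variants "
                   "and say which one is best and why:\n\n" + question,
    }]
    variants = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant", "content": variants.choices[0].message.content})

    # Step 2: answer the variant the model itself judged best.
    messages.append({"role": "user", "content": "Now answer the variant you judged best."})
    answer = client.chat.completions.create(model=MODEL, messages=messages)
    return answer.choices[0].message.content

print(ask_the_better_question("Should we rewrite this service in another language?"))
```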
