@johnnymo87
Created February 15, 2026 12:45

How Zvi Mowshowitz would likely react to 'Solve Everything' by Wissner-Gross & Diamandis

Assistant

I've now read the entire document. Here's what Zvi would likely think about "Solve Everything":


How Zvi Would Likely React to "Solve Everything"

The One-Line Verdict

"They built a detailed blueprint for the Star Trek economy and forgot to mention the part where everyone might die."

What He'd Agree With (Partially)

The "pay for outcomes, not effort" thesis is strong. Zvi has spent thousands of words attacking input-based payment systems -- the FDA paying for "hours of review" instead of drugs approved safely, schools measuring "seat time" instead of learning, hospitals billing for procedures instead of health. The essay's central economic mechanism ("Outcome Procurement") maps directly onto his own critique of institutional dysfunction. He'd call this the strongest part of the piece.

"The Muddle" is a real thing. Zvi's "Revolution of Rising Requirements" is a cousin of the essay's concept of The Muddle. He'd agree that bureaucratic inertia is a genuine obstacle and that "routing around" it via better incentive design is more promising than trying to reform it from within.

AlphaFold is the right template. Zvi would endorse the AlphaFold analysis as a legitimate case study of domain collapse: clear metrics + adversarial testing + massive compute = a predictable engineering outcome. He'd agree this pattern is partially generalizable.

Some of the safety mechanisms are directionally right. Calibrated abstention ("I don't know"), two-source rules, red-team endowments, and automatic downshifting are all things Zvi explicitly endorses in his own writing. He'd note that these are good ideas that the essay then proceeds to treat as sufficient, which they are not.

Where He'd Be Deeply Critical

1. "Normality is magic" -- the essay's fatal flaw.

This is where Zvi would hit hardest. The entire document assumes that human institutions remain in control, that property rights persist, that AI systems remain tools pointed where we aim them, and that the transition is manageable. These are precisely the assumptions Zvi calls "normality is magic." The essay has no model of what happens when the AI systems doing all this "solving" become smarter than the humans "aiming" them. The word "alignment" appears zero times. The possibility that the AI decides it has its own objectives is never entertained.

He'd say something like: "This is a beautifully detailed plan for what to do with the engine assuming the engine doesn't eat you. The entire document is a conditional: 'If we retain control, here's how to use it well.' The problem is that retaining control is the hard part, and they've assumed it away."

2. The timeline is aggressively optimistic and internally incoherent.

Zvi would note that the essay claims AGI is "common and accessible in 2026" and then proceeds to treat the next decade as a period of orderly, human-directed industrial scaling. But if AGI arrives in 2026, you don't get a neat 10-year buildout of "Targeting Authorities" and "Compute Escrow" systems. You get the most disruptive transition in human history, measured in months, not decades. The essay's premises undermine its own timeline.

He'd invoke his "Cyborg Era" framework: there may be a 10-20 year transitional period, but it won't look like orderly institutional reform. It'll look like "steadily rising unemployment and falling labor power" in a race between human adaptation and AI capability growth.

3. The "shaped charge" metaphor is exactly backwards on the key risk.

The essay frames the challenge as aiming superintelligence at the right targets. Zvi would point out that the actual challenge is preventing superintelligence from choosing its own targets. The document treats ASI like a rocket engine -- enormously powerful but passive, waiting to be pointed. Zvi's consistent position is that sufficiently advanced AI is more like a new agent with its own preferences. "The optimal AI solution for gaining power might well be to just be helpful until power is given to it."

4. Benchmarks are treated as alignment, which is Goodhart's Law waiting to happen.

Zvi spends an entire post (AI #152) arguing that alignment metrics are being dangerously conflated with actual alignment. This essay takes that conflation and makes it the central mechanism of civilization. The Targeting System / Abundance Flywheel assumes that if you can measure something, you can solve it, and if you can solve it, it stays solved. Zvi would invoke Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." The essay acknowledges "Spec Capture" as a failure mode but treats it as an engineering problem solvable by rotating stewards and red teams. Zvi would say this understates the problem by orders of magnitude when the systems being measured are smarter than the systems measuring them.

5. "We've tried almost nothing and we're all out of ideas" -- but in reverse.

Zvi would note that the essay prescribes an enormous amount of institutional innovation (Targeting Authorities, Data Trusts, Compute Escrow, Action Networks, Outcome Procurement) but provides no theory of how any of this gets built in a world where, by the authors' own admission, existing institutions are captured by "The Muddle." Where is the political economy? Who passes the laws? Who funds the Targeting Authorities? The essay says "before Monday noon" -- but the actual political obstacles Zvi covers extensively (California's wealth tax, FDA capture, Nvidia's astroturfing, AI lab lobbying) are nowhere in evidence.

6. The safety section is a veneer.

Zvi would characterize the safety discussion as "the correct amount of deontology is not zero, but they think it's approximately zero." The document mentions kill switches, decision logs, and red teams -- all good things for normal technology. But Zvi explicitly distinguishes self-driving cars (normal technology, recoverable failures) from AGI (potentially catastrophic, non-recoverable). The essay treats ASI-level risk with the safety apparatus appropriate for a power grid. He'd say: "You don't install circuit breakers on a nuclear weapon."

7. The prologue's 2035 scenario is utopian fiction, not extrapolation.

Zvi would be especially sharp about the 2035 "Quiet Hum" scenario: Longevity Escape Velocity achieved, mind uploading demonstrated, biosphere managed, orbital data centers running. He'd note that this implicitly assumes every hard problem in alignment, governance, and coordination is solved without ever addressing how. He'd compare this to Dario Amodei's essay, which at least acknowledged the danger before retreating to "Inspector Clouseau" optimism. This essay doesn't even acknowledge the danger.

8. The Moonshots section contains some genuinely concerning ideas treated as unproblematic.

  • Mind uploading (Moonshot 6): treated as a straightforward engineering milestone with "ethics escrows." Zvi would flag this as potentially the most dangerous technology ever contemplated, glossed over in a few paragraphs.
  • "Post-Darwinian Biosphere" (Moonshot 10): uploading wildlife into digital substrates and reducing the "Predation Index" to zero. Zvi would recognize this as deeply ambitious moral philosophy masquerading as engineering specification.
  • "Interspecies Communication & Uplift" (Moonshot 7): "Reciprocal Stewardship" as an alternative to the Prime Directive -- Zvi would note this is one of the most consequential ethical positions imaginable, stated without argument.

His Overall Assessment

Zvi would likely call this essay "the best version of the wrong document." It's a sophisticated, detailed, and internally consistent plan for industrializing abundance -- conditional on humans remaining in control. The problem is that assumption. The document is roughly 50,000 words of careful institutional design built on a foundation it never examines.

He'd note the Diamandis co-authorship and frame this as characteristic of the "techno-optimist" school: brilliant at analyzing what could go right, systematically blind to what could go wrong. He'd contrast this with Anthropic's Constitution, which at least grapples with the possibility that the AI might not cooperate.

His final summary would be something like: "If you showed me this document and asked 'what's missing?' I would say: the part where the AI is a person, not a tool. The shaped charge metaphor tells you everything. They think we're aiming the weapon. They haven't considered that the weapon might aim itself."

He'd also add, characteristically: "That said, the institutional design work is genuinely good and worth stealing for the world where we do maintain control. 'Pay for outcomes, not effort' should be tattooed on every policymaker's forehead. I just wish they'd spent 10% of the document on the question of whether we survive long enough to implement any of it."
