By Buddy Williams — October 4th, 2025
I have immense gratitude for Dwarkesh Patel’s (@dwarkesh_sp) podcast. His diligent preparation, exceptional guests, and sharp interviews make it a goldmine for thinkers. Dwarkesh is clearly a brilliant mind - writer, interviewer, and analyst. But I see a core problem: he’s vocal about disliking “hand-wavy” explanations, leaning hard into quantitative analysis as the starting point. It’s a common trait among today’s sharpest minds - an addiction to quantities, always asking, “Where’s the data?” This often ties to Bayes’ Theorem for forecasting, a seemingly rational anchor. Empiricism shines here, and I get the appeal. Yet, this approach has a blind spot, a “forest for the trees” problem that obscures foresight - what some call common sense.
The issue isn’t that Dwarkesh ignores qualitative analysis; he uses it, but in a hierarchy where quantitative data sets the stage and qualitative insights merely narrate what the numbers already say.
A compact reference with class names, types, method signatures, and one-liner explanations.
- `store: TLStore` - the editor's data store containing all records
- `inputs: InputState` - current input state, including mouse position and keyboard state
- `user: UserPreferencesManager` - user preferences manager for settings
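These properties appear to live on a single editor object. Below is a minimal sketch of reading each one, assuming a tldraw-style `Editor` instance; the import path and the `listen`, `currentPagePoint`, and `getIsDarkMode` members are my assumptions, not something this reference confirms:

```typescript
import { Editor } from 'tldraw' // assumed import path

// Sketch: touch each of the three documented properties.
function inspectEditor(editor: Editor) {
  // store: TLStore - subscribe to changes in the editor's records
  const unsubscribe = editor.store.listen((change) => {
    console.log('records changed:', change)
  })

  // inputs: InputState - read the current pointer position
  console.log('pointer at:', editor.inputs.currentPagePoint)

  // user: UserPreferencesManager - read a stored preference
  console.log('dark mode:', editor.user.getIsDarkMode())

  unsubscribe() // stop listening when done
}
```

The point is simply that state (`store`), input (`inputs`), and preferences (`user`) are all reached through one object rather than separate globals.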
Over the past two days I had an interesting conversation with Jamie and Lucas about vibe coding and the future of human coding. We debated the impact of vibe coding on coders, the reliability of vibe-coded software, and predictions about the future of human coders. It demonstrated how people can disagree yet talk about it confidently and open-mindedly.
In addition to the fruitful conversations, several people wrote nasty things to me: messages of hate. While their methods were deplorable, I still learned from them, and I address some of their concerns in the last thread listed here.
I discovered that Claude Code, unlike the current agents in IDEs (Cascade in Windsurf, Cursor Agents), can follow instructions outlined in another file or in a large prompt. This allows scaffolding new architectures quickly and implementing specific applications on top.
- Since Claude Code uses the working directory as context, I find it useful to hide context that isn't needed yet. This lets us greenfield progressively; too much context tends to produce undesirable results.
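To make the "instructions in another file" pattern concrete, here is a hypothetical instruction file (the name and contents are mine, not from Claude Code's documentation); you would then ask Claude Code to read and follow it, revealing more of the codebase only as each stage lands:

```markdown
<!-- PLAN.md (hypothetical) -->
# Goal
Scaffold a minimal REST API service.

# Constraints
- Keep to the existing src/ layout; do not touch legacy code.
- Implement one module at a time and stop for review after each.

# Steps
1. Define the data model.
2. Add route handlers.
3. Add tests for each handler.
```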
Natural Intelligence Took Ages to Evolve
- Human intelligence developed over ~300,000 years as Homo sapiens emerged, shaped by survival pressures.
- Cooking with fire, adopted ~1 million years ago, boosted brainpower by unlocking more energy from food (Wrangham, Catching Fire, 2009).
- The brain eats up ~20% of our body’s energy, showing biology’s limits (Raichle & Gusnard, Nature Reviews Neuroscience, 2002).
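A quick Fermi check on that 20% figure, assuming a typical adult budget of roughly 2,000 kcal/day (a standard textbook number, not one from the sources above):

```latex
0.20 \times 2000\ \text{kcal/day} = 400\ \text{kcal/day}
  = \frac{400 \times 4184\ \text{J}}{86\,400\ \text{s}} \approx 19\ \text{W}
```

So the brain runs on roughly the power of a dim light bulb, which is what makes the biological ceiling so striking.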
AI Skips the Slow Grind
- AI trains neural networks with gradient descent, mimicking evolution's search but running billions of iterations in a few years (Goodfellow et al., Deep Learning, 2016); a toy sketch of the update loop follows this list.
- No biological constraints - AI uses raw compute power to scale fast and outpace human reasoning.
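As a toy illustration of the gradient-descent loop named above (a minimal sketch on a one-parameter function, not how any production system is trained):

```typescript
// Minimal gradient descent: minimize f(w) = (w - 3)^2.
// Each step nudges w against the gradient; training a real network
// repeats this over millions of parameters and steps.
function gradientDescent(lr = 0.1, steps = 100): number {
  let w = 0 // starting guess
  for (let i = 0; i < steps; i++) {
    const grad = 2 * (w - 3) // df/dw at the current w
    w -= lr * grad           // update: w <- w - lr * grad
  }
  return w
}

console.log(gradientDescent()) // ≈ 3, the minimizer of f
```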
Imagine a world where you don’t have to work to survive. Not because jobs disappear, but because AI makes life so abundant that work becomes a choice—like picking up a hobby. Sound crazy? Let’s explore how AI might get us there, and why it’s not just another tool like the steam engine.
People love to say, “Don’t worry about AI taking jobs—it’ll create new ones!” They point to history: the steam engine didn’t end work; it gave us factories and trains. Electricity didn’t leave us idle; it lit up cities and powered new industries. So, AI should follow suit, right? Not quite.
Given your background as a programmer with a solid grasp of algebra and your interest in AI research and Fermi calculations, your goal to fill in the gaps and apply math to quantitative modeling is both achievable and exciting. Fractional exponents like $60^{1/6}$ (the 6th root of 60) hint at the broader world of mathematical concepts that can deepen your understanding and unlock new tools for your work. Since you’re aiming for efficiency and relevance to AI and Fermi-style problem-solving, I’ll tailor the recommendations to focus on key areas that align with your goals, along with practical study strategies.
- Exponents and Logarithms (Expanding on Fractional Exponents)
- Why it’s relevant: Fractional exponents connect to roots, growth rates, and scaling laws, which are common in AI (e.g., learning rates, optimization) and Fermi estimates (e.g., population growth, energy scales). Logarithms are essential for understanding complexity (e.g., $O(n \log n)$ in sorting algorithms) and for compressing orders of magnitude in Fermi estimates.
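As a quick worked example of that fractional exponent, using only the standard Math library (a sketch, nothing project-specific):

```typescript
// 60^(1/6) two ways: directly, and via the log identity x^(1/n) = exp(ln(x)/n)
const x = 60
const n = 6
const direct = Math.pow(x, 1 / n)         // ≈ 1.979
const viaLogs = Math.exp(Math.log(x) / n) // same value, via logarithms
console.log(direct, viaLogs)
```

The log form is the useful one for Fermi work: taking logarithms turns roots and powers into divisions and multiplications you can do in your head.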
By the year 2029, less than four years from this writing, AI will be able to produce 40 hours of work - a full human work week - without human guidance.
The units of time (one week, month, or year of autonomous work) are interesting milestones, but it’s important to keep in mind that AI progress will affect the labor economy before those milestones are reached. The rapid transition to an automated workforce has already started as of March 2025. I suspect that by 2029, nations will need to address the economic problem of automated labor substituting for human labor.
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
Below is a thorough summary of the transcript "Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future," based on the conversation between Carl Shulman and the host Dwarkesh Patel. This summary captures the key themes, arguments, and broader implications of the discussion, while also emphasizing Carl Shulman’s distinctive approach to research.
The conversation centers on the existential risks posed by unaligned artificial intelligence (AI)—systems not designed to prioritize human values—and the potential for such AI to disempower or dominate humanity. Carl Shulman outlines several specific mechanisms through which an AI takeover could unfold, highlighting the multifaceted nature of the threat:
