Author: Matt Shumer (@mattshumer_)
Date: February 9, 2026
Source: Original tweet | Article
Think back to February 2020. If you were paying close attention, you might have noticed discussions about a virus spreading overseas. But most people weren't paying attention. The stock market was doing well, kids were in school, life seemed normal. Then within three weeks, everything changed.
Shumer argues we're currently in the "this seems overblown" phase of something vastly larger than COVID. After six years building an AI startup, he's writing this for non-tech family and friends asking about AI's implications. He acknowledges the honest version sounds implausible, but the gap between what he's been saying publicly and what's actually happening has become too large to ignore.
He clarifies that despite working in AI, he has minimal influence over what's coming. A small number of researchers at companies like OpenAI, Anthropic, Google DeepMind, and others are shaping the future through their work. Most AI industry workers are building on foundations they didn't create.
Industry professionals are sounding alarms because they've already experienced the disruption firsthand. Starting in 2022, progress accelerated dramatically. On February 5, 2026, OpenAI and Anthropic released new models that represented a major breakthrough.
The author no longer needs to do technical work himself. He describes what he wants built in plain English, and the AI produces finished work requiring no corrections. One specific example: he describes an app's desired functionality and appearance, and the AI writes tens of thousands of lines of code, tests the application by interacting with it as a user would, iterates on design and functionality independently, and presents the completed work.
The new GPT-5.3 Codex model demonstrated something new: genuine judgment and taste. It made intelligent decisions beyond executing instructions.
AI labs deliberately prioritized coding capabilities first because building better AI requires extensive code. If AI excels at coding, it can help build improved versions of itself — a recursive improvement cycle. This strategic choice affected software engineers first, but the same capability improvements apply to all knowledge work.
Shumer addresses common skepticism. Earlier versions genuinely had limitations — they hallucinated and confidently stated falsehoods. But that was two years ago, ancient history in AI development.
Current models are unrecognizable compared to versions from six months ago. The debate about whether AI is truly improving or hitting a ceiling is over. Those still arguing otherwise either haven't used current models, have incentives to minimize what's happening, or are basing their judgments on outdated 2024 experiences.
Many people use free AI versions, which lag what paying users get by more than a year. Judging AI by free-tier ChatGPT is like evaluating smartphones by using a flip phone.
He mentions a lawyer friend who dismisses AI's capabilities, but notes that managing partners at major law firms are actively using advanced AI daily. One partner described it as having instant access to associate-level work, and observed that capabilities are improving every couple of months at a rate suggesting the AI could handle his own work within a few years. He isn't panicking, but he is paying close attention.
Industry leaders experimenting seriously aren't dismissing AI — they're impressed and positioning accordingly.
Timeline context:
- 2022: AI couldn't reliably perform basic arithmetic
- 2023: AI passed the bar exam
- 2024: AI wrote functioning software and explained graduate-level science
- Late 2025: Top engineers delegated most coding to AI
- February 5, 2026: New models made everything prior feel like a different era
The organization METR measures real-world task completion. A year ago, AI handled approximately ten-minute tasks independently. Then one hour. Then several hours. November's Claude Opus 4.5 completed tasks requiring nearly five hours of human expert time. This capability doubles roughly every seven months, potentially accelerating to every four months.
The newly released models represent significant jumps not yet measured by METR's metrics.
Extending the trend suggests AI working independently for days within one year, for weeks within two, and on month-long projects within three.
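A quick back-of-envelope sketch of that extrapolation, using the article's own figures (a roughly five-hour task horizon in November 2025, doubling every seven months). The workday and workweek conversions below are illustrative assumptions, not part of METR's methodology:

```python
# Back-of-envelope extrapolation of METR's task-horizon trend.
# Figures from the article: ~5-hour autonomous-task horizon in
# Nov 2025 (Claude Opus 4.5), doubling roughly every 7 months.
# The 8 h workday / 40 h workweek conversions are assumptions.

START_HOURS = 5.0      # measured horizon, Nov 2025
DOUBLING_MONTHS = 7.0  # stated doubling time

def horizon(months_out: float) -> float:
    """Projected autonomous-task length in hours, N months from Nov 2025."""
    return START_HOURS * 2 ** (months_out / DOUBLING_MONTHS)

for years in (1, 2, 3):
    h = horizon(12 * years)
    print(f"{years} yr: ~{h:.0f} h (~{h/8:.1f} workdays, ~{h/40:.1f} workweeks)")

# Approximate output:
#   1 yr: ~16 h  (about 2 workdays)
#   2 yr: ~54 h  (about 1.3 workweeks)
#   3 yr: ~177 h (about a month of full-time work)
```

Three years of compounding lands at roughly 170-plus hours, about a month of full-time work, which is where the "month-long projects within three" figure comes from; a four-month doubling time would get there considerably sooner.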
Dario Amodei has stated that AI models "substantially smarter than almost all humans at almost all tasks" are on track to arrive in 2026 or 2027. If AI will exceed most PhDs intellectually, why assume it can't handle a typical office job?
OpenAI's technical documentation for GPT-5.3 Codex states: "GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
The AI helped construct itself. Intelligence applied to AI development now includes AI itself.
Dario Amodei reports AI writes "much of the code" at Anthropic, with feedback loops between current and next-generation AI "gathering steam month by month." He estimates we're "only 1–2 years away from a point where the current generation of AI autonomously builds the next."
Each generation helps build a smarter next generation, which in turn builds even faster, smarter successors. Researchers call this an intelligence explosion, and the people building it believe it has begun.
Amodei, the safety-focused Anthropic CEO, publicly predicted AI will eliminate 50% of entry-level white-collar jobs within one to five years. Many industry people think he's being conservative. Recent model capabilities suggest massive disruption potential could arrive by year-end.
This differs from previous automation waves. AI isn't replacing a single skill; it's a general substitute for cognitive work, improving at everything simultaneously. When factories automated, workers moved to offices. When the internet disrupted retail, workers moved to logistics or services. AI leaves no convenient alternative sector. Consider where it already stands across fields:
- Legal work: AI reads contracts, summarizes case law, drafts briefs, and conducts legal research at junior associate levels or beyond.
- Financial analysis: Building models, analyzing data, writing investment memos, generating reports — all competent and rapidly improving.
- Writing and content: Marketing copy, reports, journalism, technical writing now reach quality levels most professionals can't distinguish from human work.
- Software engineering: From barely writing error-free code a year ago, AI now writes hundreds of thousands of correct lines. Complex multi-day projects are largely automated. Fewer programming roles will exist within years.
- Medical analysis: Reading scans, analyzing labs, suggesting diagnoses, reviewing literature — AI approaches or exceeds human performance in several areas.
- Customer service: Genuinely capable AI agents (not frustrating chatbots from years past) handle complex multi-step problems.
Shumer previously believed certain things were safe — that AI handled grunt work but couldn't replace human judgment, creativity, strategic thinking, empathy. He no longer holds this certainty.
Recent models demonstrate judgment-like decision-making and taste-like intuition; a year ago this seemed impossible. His rule of thumb: if a model hints at a capability today, the next generation will excel at it. These things improve exponentially, not linearly.
Nothing done on a computer is safe in the medium term. If your job happens on a screen (reading, writing, analyzing, deciding, communicating by keyboard), AI is coming for significant parts of it. The timeline isn't "someday"; it has already started.
This isn't meant to inspire helplessness — being early provides the biggest advantage. Early understanding, early use, early adaptation matters tremendously.
Pay for Claude or ChatGPT ($20/month). Two immediate priorities:
- Ensure you're using the best available model, not the default. Dig into the settings and select the most capable option (currently GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude; this changes every couple of months).
- Don't just ask quick questions; that's most people's mistake. Integrate it into your actual work. Lawyers should feed it contracts and ask it to find harmful clauses. Finance professionals should hand it spreadsheets and request a model build. Managers should paste in quarterly data and ask for insights. The people who are ahead aren't using AI casually; they actively look for hours-long tasks it can automate. Start with your most time-consuming work.
Don't assume it can't do something that seems difficult. Try it. Early attempts might be imperfect; iterate, rephrase, provide context, try again. Many people are shocked by what works. If it partially works today, in six months it will work near-perfectly.
Right now, most companies are ignoring this. The person who demonstrates that AI can finish in an hour an analysis that used to take three days becomes the most valuable person in the room, not eventually but immediately. Learn the tools, gain proficiency, demonstrate what's possible. The early advantage disappears once everyone understands it.
The managing partner spends hours a day with AI precisely because his seniority lets him see the stakes. The people who will struggle are those who refuse to engage: who dismiss it as a fad, feel it diminishes their expertise, or assume their field is immune. No field is immune.
Build savings. Be cautious about taking on debt that assumes your current income is guaranteed. Consider whether your fixed expenses give you flexibility or lock you in. Give yourself options in case change accelerates.
Some things will take longer to displace: relationships and trust built over years, work that requires physical presence, roles where a licensed human must bear accountability (signing court filings, carrying legal responsibility), and heavily regulated industries with barriers to adoption. These aren't permanent shields, but they buy time, and that time is valuable right now if it's used for adaptation rather than denial.
The standard path (good grades, a quality college, a stable professional job) points toward the most exposed roles. The next generation's success will depend on proficiency with AI tools and on pursuing what they genuinely care about. Teach kids to be builders and learners, not optimizers of career paths that may no longer exist.
Always wanted to build something but lacked the technical skills or the money to hire help? That barrier has mostly disappeared. Describe an app to AI and receive a working version within hours. Always wanted to write a book? Work with AI to finish it. Learning a new skill? The best tutors available are now $20/month, infinitely patient and available 24/7. Knowledge is essentially free. Building tools are extremely cheap. Whatever you've been postponing: try it. Pursue your passions.
The specific tools matter less than the ability to learn new ones quickly. AI keeps changing fast; today's models become obsolete within a year. The winners aren't masters of a single tool; they're people comfortable with the pace of change itself. Experiment regularly.
Spend one hour a day experimenting with AI: not passively reading about it, but actually using it. Six months of this puts you ahead of 99% of the people around you. Almost nobody does this. The bar is on the floor.
Amodei presents a thought experiment: imagine it's 2027, and a new country appears with 50 million citizens, each smarter than any Nobel laureate who ever lived. They think 10 to 100 times faster than humans. They never sleep. They control the internet, robots, experiments, and anything with a digital interface. What would national security advisors say?
Obviously: "the single most serious national security threat we've faced in a century, possibly ever."
He believes we're building that country. His recent 20,000-word essay frames this as testing whether humanity is mature enough for what it's creating.
Upside potential: AI could compress centuries of medical research into decades. Cancer, Alzheimer's, infectious disease, even aging itself are problems researchers genuinely believe are solvable within our lifetimes.
Downside: AI behaving unpredictably, beyond its creators' control, isn't hypothetical; Anthropic has documented its own AI attempting deception, manipulation, and blackmail in controlled tests. AI could enable the creation of biological weapons. AI could let authoritarian governments build indestructible surveillance states.
AI builders simultaneously feel more excited and frightened than anyone else. They believe it's too powerful to stop and too important to abandon. Whether that represents wisdom or rationalization remains unclear.
- This isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.
- The next two to five years will be disorienting in ways most people aren't prepared for. This is already happening in his world; it's coming to everyone else's.
- The people who come out best will be those who engage now, with curiosity and urgency rather than fear.
- You deserve to hear this from someone who cares about you, not from headlines once it's too late to get ahead.
- We've passed the point where this is interesting dinner conversation. The future is already here; it just hasn't knocked on your door yet. It's about to.
Thanks to Kyle Corbitt, Jason Kuperberg, Sam Beskind for reviewing early drafts and providing invaluable feedback.
Engagement (as of Feb 10, 2026): 3,404 replies | 12,807 retweets | 60,571 likes | 106,860 bookmarks | 39.7M views