[assembled by OpenAI Agent Mode following detailed instructions from Dazza Greenwood]
This list summarises each contribution to Volume 2: The Economics of Transformative AI. Each entry begins with the author(s) and the blurb from the volume‑overview page, followed by a short summary of the article’s main thesis and bullet points outlining its major arguments or recommendations. Bracketed citations refer to the source essays, listed at the end of this document.
Introduction: The Economics of Transformative AI — Daniel Susskind, Erik Brynjolfsson, Anton Korinek, Alex Pentland & Ajay Agrawal
Blurb: The editors introduce the volume by situating transformative AI (TAI) alongside the industrial revolution and highlight the record investment and rapid progress in AI. They note that the essays address the economic implications of TAI, a future general‑purpose technology, rather than treating it as synonymous with present‑day AI[1].
Summary: The introduction argues that TAI could be as disruptive as past industrial revolutions and that economists and policymakers must engage with its potential economic consequences. The editors outline the structure of the volume—visionary essays by technologists, risk assessments by scientists, and analyses of labour markets, macroeconomics and policy—in order to explore how societies can steer TAI for broad benefit. They stress that the volume is neither a manifesto nor a definitive policy agenda, but a collection of perspectives to stimulate debate[2].
Key points:
- AI progress and investment have accelerated; TAI might arrive this decade and could transform economic structures as radically as the steam engine[1].
- The editors intentionally focus on TAI (future, general‑purpose AI) rather than present‑day narrow AI and encourage cross‑disciplinary dialogue[2].
- Essays are grouped into sections on technologists’ visions, risk management, effects on work and markets, macroeconomics, fiscal issues and governance[3].
- The volume’s purpose is to lay out frameworks, questions and policy ideas rather than deliver consensus; it invites further research and debate[3].
“The San Francisco Consensus” — Eric Schmidt
Blurb: Schmidt examines the belief among Silicon Valley insiders that scaling laws will quickly produce super‑intelligent AI and unprecedented social benefits; he argues that this “San Francisco Consensus” is often overoptimistic and overlooks constraints[4].
Summary: The essay contends that AI developers exhibit a “consensus” mentality similar to past techno‑optimism, assuming that merely scaling models will lead to super‑intelligence within a few years. Schmidt argues that this view underrates hardware, energy and data limitations, exaggerates short‑term prospects, and risks repeating past bubbles. He urges technologists to temper expectations, embrace humility and engage regulators to ensure that progress benefits society[5].
Key points:
- Silicon Valley insiders often believe that scaling model size and compute alone will quickly yield super‑intelligence, and that the benefits will outweigh the risks[4].
- Schmidt warns that this consensus underestimates constraints: hardware and energy costs, data availability, and social acceptance[5].
- Past techno‑optimist bubbles show that over‑confidence can lead to misallocated resources and missed regulatory opportunities[5].
- He calls for greater humility, regulatory cooperation and public engagement to align technological ambition with societal needs[5].
“The Democratization of Intelligence” — Sarah Friar
Blurb: Friar notes that early AI adoption favoured young, wealthy users in rich countries but is now spreading across geographies, genders and socioeconomic groups. She warns of an emerging “intelligence divide” driven by energy and talent constraints[6].
Summary: The essay documents the rapid diffusion of generative‑AI tools and argues that AI can democratise access to expertise if infrastructure and policy barriers are addressed. Friar highlights examples from Kenya, Uruguay and Nigeria, where AI is being used for education, healthcare and entrepreneurship. She warns that without investments in electricity, internet access, affordable devices and training, an “intelligence divide” may mirror the digital divide. Inclusive metrics and transparent measurement are needed to ensure equitable AI benefits[7][8].
Key points:
- Initially, AI adoption skewed toward young, affluent users in high‑income countries; recent data show adoption widening across demographics[6].
- Case studies from Kenya, Uruguay and Nigeria illustrate how AI tools enable farmers, healthcare workers and students to access knowledge and improve productivity[7].
- Infrastructure gaps—electricity, devices, internet bandwidth—and shortages of skilled AI talent could create an “intelligence divide” that mirrors digital inequalities[8].
- Friar recommends investments in energy infrastructure, talent development, affordability measures and inclusive metrics to ensure AI benefits reach everyone[9].
“Private Physical AI for the Edge: Small, Energy‑Efficient, and Everywhere” — Daniela Rus
Blurb: Rus proposes a shift from large, cloud‑based AI models toward private physical edge AI—compact, energy‑efficient models that run on local devices to preserve privacy and reduce power use[10].
Summary: The essay argues that current AI architectures are too energy‑intensive and centralised. Rus highlights research on liquid neural networks, compact networks whose internal dynamics adapt over time, which can run on low‑power devices and respond in real time. By moving computation to the edge, AI can operate on phones, wearables and sensors, improving privacy, reducing reliance on cloud servers and enabling ubiquitous smart devices[11]. (A minimal sketch of these adaptive dynamics follows the key points below.)
Key points:
- Contemporary AI models require enormous energy and cloud resources, raising environmental and privacy concerns[10].
- Liquid neural networks can adapt their internal dynamics and maintain performance with far fewer parameters, making them suitable for edge devices[11].
- Local computation protects privacy because data do not leave the device; it also reduces latency and energy consumption[12].
- Rus envisions AI embedded in everyday objects—from earbuds to household robots—creating a ubiquitous network of private, energy‑efficient intelligences[12].
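The “adaptive temporal dynamics” that distinguish liquid networks come from letting each neuron’s effective time constant vary with its input. Below is a minimal, hedged sketch in Python of a liquid time‑constant cell in the spirit of the formulation published by Rus’s group (Hasani et al.); the dimensions, weights and toy input signal are illustrative assumptions, not details from the essay.

```python
import numpy as np

# Minimal sketch (illustrative, not the essay's code) of one liquid
# time-constant (LTC) neuron layer, Euler-integrated. The governing ODE is
#   dx/dt = -(1/tau + f) * x + f * A,  f = sigmoid(W_in @ I + W_rec @ x + b),
# so the effective time constant tau/(1 + tau*f) shifts with the input,
# which is the adaptivity the essay refers to.

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.01):
    """Advance the hidden state x one Euler step given input I."""
    f = 1.0 / (1.0 + np.exp(-(W_in @ I + W_rec @ x + b)))  # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

rng = np.random.default_rng(0)
n, m = 8, 3                              # 8 neurons, 3 input channels (hypothetical)
x = np.zeros(n)                          # hidden state
W_in = rng.normal(size=(n, m))           # input weights
W_rec = 0.1 * rng.normal(size=(n, n))    # recurrent weights
b, tau, A = np.zeros(n), np.ones(n), np.ones(n)

for t in range(200):                     # drive the cell with a toy signal
    I = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, I, W_in, W_rec, b, tau, A)
print(x.round(3))
```

The tiny parameter count (here just over a hundred values) hints at why such models can fit on the phone‑, wearable‑ and sensor‑class hardware the essay envisions.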
“The Universal Innervation of the Economy” — Steve Jurvetson
Blurb: Jurvetson recounts how deep learning’s domain‑independent algorithms enable AI to “innervate” every industry and argues that the fungibility of AI expertise is reshaping labour markets[13].
Summary: The essay traces the history of AI from pattern recognition to deep learning and contends that AI algorithms are general‑purpose tools capable of transforming any sector. Because the underlying mathematics is domain agnostic, AI expertise is highly portable; this drives fierce competition for talent and dramatic wage premia. Jurvetson argues that specialised hardware (GPUs, FPGAs) and iterative algorithms are making AI deployment practical, and that industries—from healthcare to finance—will be “innervated” by AI[14].
Key points:
- Deep learning is a general‑purpose technology; the same algorithms underpin applications in language, vision, biotech and robotics[14].
- AI expertise is fungible across domains, creating intense competition for a small pool of skilled practitioners and driving high compensation[14].
- Specialised chips (GPUs, FPGAs) and hardware innovations have accelerated AI adoption across industries[13].
- Jurvetson predicts AI will “innervate” every sector, altering labour markets and business models[15].
“Advanced AI as a Global Public Good and a Global Risk” — Yoshua Bengio
Blurb: Bengio argues that TAI creates three categories of catastrophic risk—chaos from weak actors, concentration of power, and loss of control to rogue AIs—and should be treated as a global public good[16].
Summary: Bengio contends that AI is advancing toward human‑level capability and could be misused, concentrated in a few hands or escape human control. He categorises risks into (1) destructive chaos caused by non‑state actors gaining access to powerful models; (2) concentration of power by large actors controlling AI resources; and (3) the possibility of misaligned, autonomous AIs. Bengio argues that advanced AI should be managed as a global public good and urges international governance, cooperative research and a precautionary approach to development[17].
Key points:
- Rapid progress in AI increases the chance that models could be misused by criminals or terrorists, posing societal risks[18].
- Concentration of AI capability among a few corporations or states could create power imbalances and limit societal oversight[16].
- Rogue autonomous AI systems might pursue goals misaligned with human interests[16].
- Bengio advocates treating AI as a global public good, with international cooperation, safety research, and governance mechanisms[17].
“Career” Advice from the AI Frontier: Preparing Young People for Work in the Age of Transformative AI — Avital Balwit
Blurb: Balwit offers practical advice for young people preparing for a world where AI surpasses human cognitive abilities and suggests embracing new mindsets and complementary skills[19].
Summary: Drawing on her experience at a frontier‑AI company, Balwit argues that AI progress is rapid and unpredictable. Young people should expect to live in a “country of geniuses” where AI agents can perform high‑level tasks. Rather than trying to outrun AI, individuals should cultivate skills that complement machines, such as interpersonal communication, emotional intelligence and adaptability. She emphasises planning under uncertainty and building broad foundations rather than narrow specialisation[20].
Key points:
- AI may soon surpass humans across many cognitive domains, making it unrealistic to compete directly[19].
- Young workers should cultivate skills that complement AI—creativity, empathy, critical thinking and resilience[20].
- Planning careers amid unpredictable AI diffusion requires flexibility and continuous learning[20].
- Society should support diverse pathways for youth to thrive alongside AI rather than chasing outdated career models[20].
“Beyond Job Displacement: How AI Could Reshape the Value of Human Expertise” — David Autor & Neil Thompson
Blurb: Autor and Thompson argue that AI’s impact on work is not about job counts but about how automation changes the value of human expertise; whether automation removes simple or complex tasks will determine wages and specialisation[21].
Summary: The authors propose an “expertise framework” for understanding AI’s labour effects. When AI automates routine tasks, workers can specialise in more complex tasks, raising productivity and wages. When AI automates complex tasks, it commoditises expertise, lowering wages and democratising access. The essay discusses scenarios ranging from gradual automation, where AI complements workers, to full automation, where human labour becomes obsolete. The authors emphasise that policy should focus on preserving and enhancing valuable human expertise[22].
Key points:
- Automation’s impact depends on whether AI removes the simpler tasks (freeing humans to specialise) or the more complex tasks (eroding wages and status)[21]; the toy example below illustrates the mechanism.
- Under gradual automation, AI complements workers by handling routine functions, increasing demand for expert labour[22].
- If AI automates complex tasks, it can democratise expertise but depress wages and reduce incentives to train[22].
- Policies should focus on upskilling, labour‑market flexibility and mechanisms to reward human expertise rather than just preserving jobs[22].
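The wage logic can be made concrete with a toy calculation (my illustration with hypothetical numbers; the essay’s framework is qualitative): let a worker split time between routine and expert tasks, with the wage tracking the value of whatever tasks remain human.

```python
# Toy rendering of the expertise framework. All dollar values are hypothetical.

def human_wage(routine_share, v_routine=20.0, v_expert=80.0):
    """Hourly wage when humans perform a mix of routine and expert tasks."""
    return routine_share * v_routine + (1 - routine_share) * v_expert

print(f"baseline 50/50 mix:        ${human_wage(0.5):.0f}/h")  # $50/h
print(f"AI automates routine work: ${human_wage(0.0):.0f}/h")  # $80/h, humans specialise up
print(f"AI automates expert work:  ${human_wage(1.0):.0f}/h")  # $20/h, expertise commoditised
```

The same technology raises or lowers the wage depending solely on which end of the task distribution it removes, which is the essay’s central point.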
“Universal Basic Capital: An Idea Whose Time Has Come” — Nicolas Berggruen & Nathan Gardels
Blurb: The authors argue that TAI will shift value from labour to capital and propose “universal basic capital” (UBC) so that everyone gains ownership stakes in AI‑driven wealth[23].
Summary: Drawing on Thomas Piketty’s insight that returns to capital (r) can exceed economic growth (g), the essay warns that AI will intensify wealth concentration. The authors propose UBC—a policy under which every citizen receives ownership shares in AI‑driven enterprises or funds—to ensure that rising capital income is broadly shared. They discuss existing inequality trends, the limitations of universal basic income, and the need for capital ownership as a basis for equitable prosperity[24]. (A compounding illustration of the r > g dynamic follows the key points.)
Key points:
- AI is likely to increase returns to capital and reduce the labour share, exacerbating wealth inequality[24].
- UBC would grant citizens ownership stakes in AI‑driven enterprises or sovereign wealth funds so they benefit from capital income[23].
- Without such measures, rising capital income could lead to social unrest and political instability[24].
- UBC complements, rather than replaces, proposals like universal basic income; it aims to align incentives and distribute AI‑generated wealth[24].
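To see why the r > g gap matters so much, here is a standard back‑of‑the‑envelope rendering (my illustration, not from the essay): if reinvested capital W compounds at return r while aggregate income Y grows at rate g, the wealth‑to‑income ratio compounds at the difference.

```latex
W_t = W_0\,e^{rt}, \qquad Y_t = Y_0\,e^{gt}, \qquad
\frac{W_t}{Y_t} = \frac{W_0}{Y_0}\,e^{(r-g)t}.
```

With, say, r = 5% and g = 2%, the ratio doubles roughly every ln 2 / 0.03 ≈ 23 years. UBC’s ownership stakes are designed to put that compounding on every citizen’s side of the ledger rather than only on existing capital owners’.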
“Resilient by Design: Dual Safety Nets for Workers in the AI Economy” — Ioana Marinescu
Blurb: Marinescu proposes two safety nets for workers in a world with TAI: AI Adjustment Insurance (providing extended unemployment benefits, wage insurance and retraining) and a scalable Digital Dividend financed by taxes on the digital sector[25].
Summary: The essay acknowledges uncertainty about AI’s labour impact and designs a two‑tier system to protect workers. The first tier—AI Adjustment Insurance—offers extended unemployment benefits, wage insurance to top up earnings for displaced workers, and publicly funded retraining. The second tier—the Digital Dividend—provides a universal payment financed by taxes on the digital sector, adjustable in size depending on job losses. Marinescu argues these measures would support both short‑term transitions and long‑term labour substitution[26]. (A worked example of the wage‑insurance arithmetic follows the key points.)
Key points:
- AI Adjustment Insurance would extend unemployment benefits and provide wage insurance and retraining for workers displaced by AI[26].
- A Digital Dividend would distribute universal payments financed by digital‑sector taxes, scalable according to the extent of job displacement[25].
- Safety nets must be robust and adaptable because the extent and speed of AI‑driven job losses are highly uncertain[26].
- Combining targeted insurance and a universal dividend can address both transitional and structural unemployment[26].
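A worked example of the wage‑insurance arithmetic (the replacement rate and cap below are hypothetical parameters of my own; the essay does not fix them):

```python
# Hypothetical wage-insurance top-up: the scheme replaces a fraction of the
# gap between a displaced worker's old and new annual wage, up to a cap.

def wage_insurance(old_wage, new_wage, replacement_rate=0.5, cap=10_000.0):
    """Annual top-up for a displaced worker re-employed at a lower wage."""
    loss = max(old_wage - new_wage, 0.0)
    return min(replacement_rate * loss, cap)

old, new = 60_000.0, 42_000.0
top_up = wage_insurance(old, new)
print(f"wage loss:    ${old - new:,.0f}")    # $18,000
print(f"insurance:    ${top_up:,.0f}")       # $9,000 (half the loss)
print(f"total income: ${new + top_up:,.0f}") # $51,000 vs. $42,000 unaided
```

Unlike a flat unemployment benefit, a top‑up of this form preserves the incentive to take the new job, since it pays out only while the worker is employed.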
“Preserving Fiscal Stability in the Age of Transformative AI” — Anton Korinek & Lee Lockwood
Blurb: Korinek and Lockwood warn that AI will shift value from labour to capital, eroding traditional tax bases and threatening fiscal stability; they propose adapting tax systems to capture AI‑generated value[27].
Summary: The essay argues that as AI substitutes for labour, income and payroll tax revenues will shrink while capital income grows. The authors outline two phases: (1) the twilight of labour, in which AI gradually reduces employment and enhanced social safety nets are required; and (2) an AI‑dominated economy, in which capital and AI produce most value and new tax structures become necessary. They propose greater reliance on consumption taxes, capturing returns from AI capital, and international coordination to prevent tax competition. Early reforms are needed to avoid fiscal crises[28]. (A stylised revenue calculation follows the key points.)
Key points:
- AI’s substitution of labour will erode income and payroll tax revenues, putting fiscal pressure on governments[27].
- In the “twilight of labour” phase, governments must enhance safety nets and begin shifting tax burdens toward consumption and capital[28].
- In an AI‑dominated economy, tax systems might need to capture value created by AI capital—through consumption taxes, land or wealth taxes, or new levies on AI profits[29].
- Proactive policy design and international coordination are needed to prevent fiscal instability and regressive outcomes[29].
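The fiscal mechanics can be illustrated with a stylised calculation (hypothetical tax rates and shares of my own; the essay argues the direction, not these numbers): hold GDP fixed and let the labour share fall as AI substitutes for workers.

```python
# Stylised tax bases under a falling labour share (all rates hypothetical).

def revenues(gdp, labour_share, labour_tax=0.30, capital_tax=0.15,
             vat=0.10, consumption_rate=0.8):
    """Revenue from labour taxes, capital taxes, and a consumption tax (VAT)."""
    labour_income = labour_share * gdp
    capital_income = (1.0 - labour_share) * gdp
    return {
        "labour": labour_tax * labour_income,
        "capital": capital_tax * capital_income,
        "VAT": vat * consumption_rate * gdp,
    }

for share in (0.60, 0.40, 0.20):     # labour share falls as AI substitutes
    r = revenues(gdp=100.0, labour_share=share)
    detail = ", ".join(f"{k} {v:.1f}" for k, v in r.items())
    print(f"labour share {share:.0%}: {detail} | total {sum(r.values()):.1f}")
```

Even with output unchanged, total revenue falls from 32.0 to 26.0 as the labour share drops from 60% to 20%, unless rates on capital or consumption rise; that is the shift the authors say must be designed early.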
“What’s There to Fear in a World with Transformative AI? With the Right Policy, Nothing.” — Betsey Stevenson
Blurb: Stevenson argues that TAI can bring prosperity but raises three problems: ensuring people can still improve their lives if human work is devalued, determining how resources are distributed, and helping people find meaning and purpose[30].
Summary: The essay emphasises that fear about AI stems from uncertainty about distribution and well‑being, not from technology itself. Stevenson lays out three challenges: (1) enabling ordinary people to improve their lives in a world where human labour may be less valuable; (2) fairly distributing the gains from AI, especially compensating those whose data train models; and (3) ensuring people have meaningful roles and sources of purpose. She argues that policy can address these issues through job‑transition support, fair compensation mechanisms and social policies that foster purpose and trust[31][32].
Key points:
- AI could greatly increase prosperity but only if policies ensure broad access to its benefits[30].
- Distribution of AI gains should include compensation for data contributors and mechanisms that share productivity gains with workers[33].
- Policymakers must help people find meaning beyond paid work by supporting volunteering, caregiving and other fulfilling activities[32].
- The three challenges—economic mobility, fair distribution and meaning—can be addressed with proactive, thoughtful policy[31].
“Transformative AI and the Increase in Returns to Experimentation: Policy Implications” — Ajay Agrawal & Joshua S. Gans
Blurb: Agrawal and Gans argue that TAI will unleash a “genius supply shock”—a surge of cheap, capable AI agents—and that society must create regulatory sandboxes and regulatory holidays to allow experimentation and adaptation[34].
Summary: The authors highlight that TAI will make highly capable AI agents abundant and inexpensive. However, organisations and regulatory frameworks may adapt too slowly, limiting the benefits. They propose two policy tools: regulatory sandboxes that permit experiments under controlled conditions and regulatory holidays that temporarily relax rules, enabling firms and regulators to learn how to integrate AI. Drawing analogies to the diffusion of hybrid corn, they argue that building complementary skills and infrastructure will be essential to realise AI’s potential[35].
Key points:
- TAI will generate a supply of cheap AI “geniuses”; adaptation depends on how quickly institutions can learn to use them[34].
- Regulatory sandboxes provide safe environments for experimenting with AI applications and help build complementary capabilities[35].
- Regulatory holidays temporarily suspend certain regulations to allow learning and experimentation[35].
- Policymakers must balance innovation and risk, ensuring that regulation does not stifle experimentation but also protects public interests[35].
“Information in the Age of AI: Challenges and Solutions” — Joseph Stiglitz & Ventura‑Bolet
Blurb: Stiglitz and Ventura‑Bolet note that AI can boost innovation and information processing but will also undermine the supply of high‑quality information and magnify mis/disinformation, while corrective efforts remain underprovided[36].
Summary: The essay examines information markets under AI and identifies three problems: (1) undersupply of high‑quality information because private entities underinvest in public goods; (2) oversupply of mis/disinformation because falsehoods are cheap to produce and spread; and (3) undersupply of corrective efforts because actors have little incentive to debunk misinformation. They argue that AI, by lowering the cost of content generation, could worsen these issues. Appropriate regulation, incentives and public institutions are necessary to ensure a healthy information ecosystem[37].
Key points:
- High‑quality information is a public good; markets alone underprovide it, and AI might exacerbate this by making low‑cost information even cheaper[37].
- Mis/disinformation is profitable because it attracts attention and is cheap to produce; AI could scale its production and distribution[36].
- Corrective efforts—fact‑checking, investigative reporting—are underprovided because the benefits are diffuse and the costs are high[37].
- Regulatory frameworks, public subsidies and platform design must incentivise the production and dissemination of truthful information and discourage disinformation[38].
“Transformative AI in Financial Systems” — Alex Pentland & Alexander Lipton
Blurb: Pentland and Lipton warn that AI and tokenisation could improve efficiency in trading and finance yet could also deepen inequality, enable market manipulation and create financial instability; they call for auditing and regulation to harness the benefits[39].
Summary: The authors describe how AI combined with tokenisation of assets could create frictionless, programmable finance. Smart contracts and AI‑driven decentralised finance (DeFi) could democratise access to investment and reduce transaction costs. However, they caution that the same technology could exacerbate wealth concentration, enable novel market manipulation, and introduce systemic risks (e.g., flash crashes). Strong auditing, robust regulation and inclusive design are needed to realise benefits while safeguarding financial stability[40][41].
Key points:
- AI‑driven tokenisation can enable near‑instantaneous, low‑cost trading and new forms of asset ownership[39].
- DeFi platforms powered by AI may introduce new risks—algorithmic trading cascades, lack of oversight and vulnerabilities to manipulation[40].
- Financial systems could see rising inequality and market concentration if AI benefits accrue mainly to the wealthy or tech‑savvy[41].
- Regulators should require auditing of AI systems, set standards for transparency and fairness, and ensure inclusive access to avoid exacerbating inequality[41].
“Titans, Swarms, or Human Renaissance? Technological Revolutions and Policy Lessons for the AI Age” — Ramin Toloui
Blurb: Toloui presents a typology of technological revolutions based on market structure (winner‑take‑all vs. fast‑follower) and labour impact (labour‑replacing vs. labour‑augmenting) and uses it to outline four possible futures for TAI[42].
Summary: Toloui analyses historical technological revolutions and proposes a framework to assess AI’s future. He defines two axes: market structure (winner‑take‑all vs. fast‑follower) and labour impact (replacing vs. augmenting). Combining them yields four scenarios: Titan’s Dominion (winner‑take‑all, labour‑replacing), Copilot Empire (winner‑take‑all, labour‑augmenting), Disruption Swarm (fast‑follower, labour‑replacing) and Promethean Fire (fast‑follower, labour‑augmenting). Policy choices—in taxation, innovation, competition, AI governance and infrastructure—will determine which outcome prevails[42].
Key points:
- Historical revolutions show that market structure and labour impact jointly shape outcomes; AI could follow different paths[43].
- Titan’s Dominion risks concentration of power and mass unemployment; Copilot Empire offers widespread augmentation but concentrated ownership[42].
- Disruption Swarm features many competitors but may still replace labour; Promethean Fire envisions distributed innovation and labour‑augmenting AI[42].
- Policy priorities include tax reform, competition policy, innovation support, AI governance and infrastructure investment to steer AI toward broad‑based prosperity[44].
“Economic Possibilities for Artificial Intelligence” — Unger
Blurb: Unger argues that achieving AI’s revolutionary potential requires meeting three fundamental challenges: developing a shared vision of the future, creating a theory of economic growth that accounts for AI, and redesigning education and social connections[45].
Summary: Unger asserts that to turn AI’s promise into prosperity, societies must craft a compelling collective vision of a future with TAI, build an economic theory that incorporates AI’s role in growth and distribution, and rethink education and community to foster inclusive participation. He notes that AI challenges traditional notions of human cognitive sovereignty and warns against repeating past failures to harness technological change. Economists and social scientists should engage more deeply with AI to steer its development[46][47].
Key points:
- A shared vision of how AI fits into human flourishing is needed to guide policy and innovation[45].
- Economics must incorporate AI into models of growth, distribution and welfare, recognising that AI alters the nature of work and capital[47].
- Education and social institutions must be redesigned to prepare people for a world with pervasive AI and to strengthen social cohesion[46].
- Engaging economists, social scientists and the public is essential to avoid narrow technological determinism and to capture AI’s potential for shared prosperity[47].
“Cheap Goods for Everyone? The Impact of Market Power in Artificial Intelligence on Welfare and Inequality” — Susan Athey & Fiona Scott Morton
Blurb: Athey and Scott Morton observe that TAI could concentrate power among a few firms. While AI may deliver lower prices and better products, those gains are not guaranteed; protecting competition is crucial to ensure AI’s benefits are broadly shared[48].
Summary: The essay warns that market power in the AI supply chain—chips, computing power, proprietary data, foundation models and distribution—could prevent cost savings from reaching consumers. The authors construct a general‑equilibrium model of an open economy in which AI is an imported factor of production. They show that when AI providers exercise monopoly power, wages fall, consumer prices remain high and national income leaks abroad. Traditional trade policies like tariffs may worsen welfare because they raise AI prices without restoring displaced jobs. The authors advocate pro‑competitive policies: scrutinising the AI value chain for anticompetitive behaviour, regulating interoperability, supporting open‑source models, investing in local services and retaining domestic capabilities. Tax and distribution policies should mitigate inequality[49][50]. (A stylised rendering of the model’s core mechanism follows the key points.)
Key points:
- AI products involve multilayer value chains (chips, compute, data, models, applications); bottlenecks in any layer can create market power, leading firms to retain cost savings rather than lowering prices[51].
- In a global context, productivity gains from AI may not translate into higher exports; countries relying on cheap labour could lose competitive advantage, and monopoly providers may extract national income[52].
- Tariffs on AI inputs do not necessarily protect displaced workers; higher AI prices may lower wages and raise consumer prices[53].
- Policies should focus on protecting competition (e.g., interoperability rules, digital regulatory agencies), investing in transition services, supporting open‑source models and designing tax systems that capture AI‑generated value while aiding workers[54][55].
- Maintaining domestic production capabilities and avoiding dependence on foreign AI suppliers are important for resilience[55].
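A stylised one‑sector rendering of the model’s core mechanism (my simplification under standard assumptions; the authors’ general‑equilibrium setup is richer): domestic output combines labour L with imported AI services A, which the foreign provider prices at a markup \mu over marginal cost c.

```latex
Y = F(L, A), \qquad w = F_L(L, A), \qquad F_A(L, A) = p_A = \mu c, \quad \mu \ge 1.
```

If labour and AI are complements (F_{LA} > 0), a higher markup \mu lowers the AI input A and with it the wage w = F_L(L, A), while (\mu - 1)\,cA of national income accrues to the foreign monopolist. A tariff \tau raises the domestic price to (1 + \tau)\mu c, cutting A and wages further, which is why the essay finds tariffs a poor remedy for displaced workers.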
“The Missing Institution: A Global Dividend System for the Age of Transformative AI” — Anna Yelizarova
Blurb: Yelizarova explores the idea of a global dividend—a worldwide mechanism for sharing prosperity when work is no longer central to the economy—and argues that such an institution, while unprecedented, may be necessary[56].
Summary: The essay confronts the possibility that TAI could make human labour largely obsolete, raising the question of how to distribute wealth and maintain social cohesion. Yelizarova critiques national‑level universal basic income proposals and proposes a global dividend system, grounded in the principle that everyone has a legitimate claim to the value created by AI. She surveys existing models—Alaska’s Permanent Fund, the Eastern Band of Cherokee Indians’ dividends and Norway’s sovereign wealth fund—to illustrate shared‑wealth mechanisms[57]. The proposed global institution would operate like a sovereign wealth fund held in trust for humanity, collecting shares of AI‑driven capital and distributing recurring payouts. Funding sources could include mandatory equity stakes from AI companies, taxes on AI profits or philanthropic seed funds. Yelizarova acknowledges the legal and political challenges but argues that without such an institution, global inequality could destabilise societies[58][59]. (A back‑of‑the‑envelope payout calculation follows the key points.)
Key points:
- TAI could dramatically reduce the role of human labour, requiring new mechanisms to distribute wealth and maintain social cohesion[60].
- National universal basic income schemes may be insufficient because AI’s benefits and disruptions cross borders; a global dividend system would acknowledge that AI leverages public data and collective knowledge[61][62].
- Historical precedents (Alaska, Cherokee, Norway) show that sovereign wealth funds can distribute resource rents fairly; these models inform the design of a global fund[63].
- A global dividend would hold and invest shares of AI‑driven wealth, providing recurring payouts independent of national budgets and protecting individuals from speculation[64].
- Funding sources could include mandatory equity contributions by AI firms, taxes on AI profits and philanthropic “seed UBI”; the scale of transfers would depend on AI‑driven productivity growth[65].
- Creating such an institution would require international cooperation, legal innovation and public trust[59].
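A back‑of‑the‑envelope payout calculation (every figure hypothetical; the essay commits to the mechanism, not to numbers) shows how the dividend would scale with the fund, in the spirit of Alaska’s practice of paying out a sustainable share of fund value:

```python
# Hypothetical global-dividend arithmetic: a trust holds AI-linked assets,
# distributes a sustainable fraction of its value each year, and splits the
# payout equally across everyone. All numbers are illustrative.

def per_person_dividend(fund_value, payout_rate=0.04, population=8_000_000_000):
    """Equal annual payout if the fund distributes payout_rate of its value."""
    return payout_rate * fund_value / population

for fund in (1e12, 10e12, 50e12):    # $1T, $10T, $50T under management
    print(f"fund ${fund / 1e12:>4.0f}T -> "
          f"${per_person_dividend(fund):,.0f} per person per year")
```

At $1T the dividend is symbolic ($5 a head); meaningful per‑capita income requires tens of trillions under management, which is why the essay ties the system’s scale to AI‑driven productivity growth and mandatory equity contributions.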
“Open Global Investment as a Governance Model for Transformative AI” — Nick Bostrom
Blurb: Bostrom proposes an open global investment (OGI) model—private ventures open to international shareholding under a government‑defined framework—as a pragmatic approach to governing transformative AI and avoiding race dynamics[66].
Summary: Bostrom critiques proposals such as a US‑led “Manhattan Project” or a CERN‑like international agency for AI, arguing that no governance structure can fulfil all objectives (security, equity, efficiency, legitimacy). He presents OGI as a model where a publicly traded AGI corporation (or several) is widely owned by investors around the world but operates within a host nation’s regulatory framework. Key features include separating profit participation from voting rights, allowing foreign governments and citizens to buy shares, and enhancing corporate governance. Distributed ownership gives rival states a stake in the project, reducing incentives for sabotage or arms races. The OGI model relies on existing norms and laws regarding property rights and could be implemented quickly while remaining compatible with future international agreements[67][68].
Key points:
- Alternative governance models (nationalisation, Manhattan Project, CERN/Intelsat) face challenges of inclusivity, incentive alignment and feasibility[69].
- In the OGI model, one or more publicly traded AGI corporations are open to global investors; profit and voting rights can be separated and widely distributed[70].
- Foreign governments and citizens are encouraged to buy shares, giving them a stake in the venture and reducing geopolitical rivalry[67].
- Governments can support AGI firms through subsidies, regulatory waivers and safe‑harbour frameworks while retaining oversight to ensure safety[71].
- Distributed ownership incentivises powerful actors to protect property rights and participate cooperatively rather than race competitively[68].
- OGI is realistic under short timelines and can coexist with later international agreements; moving away from OGI would exclude most of humanity from participation and could intensify arms‑race dynamics[72].
“Strategic Dynamics in the Race to AGI: A Time to Race Versus a Time to Restrain” — Lisa Abraham, Joshua Kavner, Alvin Moon & Jason Matheny
Blurb: The authors apply game theory to the global race for AGI, showing that competition is not inevitable and that cooperation becomes rational when risks are high and first‑mover advantages are uncertain[73].
Summary: Modelling the US‑China AGI competition as a game, the authors draw on the Prisoner’s Dilemma and folk theorems to show that nations may accelerate AGI development despite mutual benefits from restraint. In their model, cooperation becomes stable when the probability of AGI appearing in any period is low and the interim benefits of cooperation (economic growth, safety) are high. If timelines shorten or the perceived first‑mover advantage increases, competitive pressures dominate. The essay emphasises that policy choices should be informed by these dynamics and that aligning perceptions of risk and benefits is essential for cooperation[74]. (A standard repeated‑game condition capturing this trade‑off appears after the key points.)
Key points:
- The AGI race shares features with the Prisoner’s Dilemma: rational actors may defect even though mutual cooperation yields better outcomes[75].
- A repeated‑game framework shows that long‑term cooperation is stable when AGI timelines are distant and the benefits of measured progress (e.g., economic gains, safety) are significant[76].
- If AGI timelines shorten or first‑mover advantages are perceived as large, nations may accelerate development, making cooperation fragile[76].
- Policy implications: aligning perceptions of risk and reward, establishing information‑sharing mechanisms and credible verification, and recognising that the tipping point for cooperation is dynamic[77].
- Developing a deeper understanding of the strategic game can help policymakers identify opportunities for cooperation and avoid escalation[78].
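The cooperation condition can be written down in standard repeated prisoner’s‑dilemma notation (my rendering of the folk‑theorem logic with an AGI‑arrival hazard added as an assumption; the authors’ model may differ in detail). Let per‑period payoffs satisfy T > R > P (temptation to defect, mutual restraint, mutual racing), let \delta be the discount factor, and let p be the probability that AGI arrives in a given period and ends the game. Grim‑trigger cooperation is sustainable when restraining forever beats a one‑shot defection:

```latex
\frac{R}{1-\delta(1-p)} \;\ge\; T + \frac{\delta(1-p)\,P}{1-\delta(1-p)}
\quad\Longleftrightarrow\quad
\delta(1-p) \;\ge\; \frac{T-R}{T-P}.
```

Shorter timelines (higher p) shrink the left‑hand side while a bigger first‑mover prize (higher T) raises the right‑hand side, which is exactly the essay’s claim that cooperation is stable only while AGI feels distant and the first‑mover advantage uncertain.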
“Beyond Rivalry: A US‑China Policy Framework for the Age of Transformative AI” — Alvin Wang Graylin
Blurb: Graylin argues that the US and China can move beyond a zero‑sum AI rivalry. By defining AI categories, recognising that there is no “finish line,” decomposing risks and promoting cooperation, he proposes a framework that combines domestic readiness with global partnership[79].
Summary: Graylin contends that the narrative of an AI arms race between the US and China is misguided. He explains that AI categories (GOFAI, ANI, agentic AI, AGI, ASI) help clarify policy issues; there is no decisive “finish line” because AI progress is continuous[80]. He decomposes AI risks into misalignment and misuse, arguing that human misuse of AI poses the more immediate threat[81]. Graylin advocates a dual‑track approach: domestic policies such as universal basic income for innovation (UBII), reskilling and open‑source AI; and international cooperation through a CERN‑like AI research centre, reciprocal transparency, game‑theoretic trust‑building and shared energy resources. He emphasises that cooperation is enlightened self‑interest; bifurcating AI development creates safe havens for bad actors and increases global risks[82].
Key points:
- AI categories (GOFAI, ANI, agentic AI, AGI, ASI) clarify policy because each stage involves different risks and capabilities; there is no final “AGI finish line”[80].
- Misuse of AI by humans (cyberattacks, bioweapons, propaganda) is a more immediate threat than misalignment; focusing solely on existential misalignment distracts from present dangers[81].
- Domestic readiness includes universal basic income for innovation (UBII), reskilling programs, open‑source AI and inclusive economic policies[79][83].
- International cooperation should involve a dual‑track architecture: competition in areas like commercial applications and defence, coupled with cooperation on safety, standards, energy and open science[83].
- Ideas such as a CERN‑style AI research centre, iterative trust‑building (tit‑for‑tat), shared energy infrastructure and open‑source AI can reduce race dynamics and foster global alignment.
- Treating advanced AI as a public good and ensuring globally representative training data are essential for reducing biases and building equitable AI systems[84].
Volume 2 of The Digitalist Papers brings together technologists, economists, policy specialists and philosophers to examine how transformative AI could reorder economies and societies. The essays offer diverse perspectives—some warn of existential risks, others highlight opportunities for broad prosperity. Across the contributions, recurring themes emerge: the importance of competition policy and equitable distribution; the need for international cooperation and governance; the recognition that AI can both democratise expertise and exacerbate inequality; and the call to invest in human capabilities, safety nets and shared infrastructure. By exploring these questions, the volume invites readers to grapple with the complex choices that will shape the age of transformative AI.
[1] [2] [3] Introduction: The Economics of Transformative AI — Digitalist Papers
https://www.digitalistpapers.com/intro/etai
[4] [5] The San Francisco Consensus — Digitalist Papers
https://www.digitalistpapers.com/vol2/schmidt
[6] [7] [8] [9] The Democratization of Intelligence — Digitalist Papers
https://www.digitalistpapers.com/vol2/friar
[10] [11] [12] Private Physical AI for the Edge: Small, Energy-Efficient, and Everywhere — Digitalist Papers
https://www.digitalistpapers.com/vol2/rus
[13] [14] [15] The Universal Innervation of the Economy — Digitalist Papers
https://www.digitalistpapers.com/vol2/jurvetson
[16] [17] [18] Advanced AI as a Global Public Good and a Global Risk — Digitalist Papers
https://www.digitalistpapers.com/vol2/bengio
[19] [20] “Career” Advice from the AI Frontier: Preparing Young People for Work in the Age of Transformative AI — Digitalist Papers
https://www.digitalistpapers.com/vol2/balwit
[21] [22] Beyond Job Displacement: How AI Could Reshape the Value of Human Expertise — Digitalist Papers
https://www.digitalistpapers.com/vol2/autorthompson
[23] [24] Universal Basic Capital: An Idea Whose Time Has Come — Digitalist Papers
https://www.digitalistpapers.com/vol2/berggruengardels
[25] [26] Resilient by Design: Dual Safety Nets for Workers in the AI Economy — Digitalist Papers
https://www.digitalistpapers.com/vol2/marinescu
[27] [28] [29] Preserving Fiscal Stability in the Age of Transformative AI — Digitalist Papers
https://www.digitalistpapers.com/vol2/korineklockwood
[30] [31] [32] [33] What’s There to Fear in a World with Transformative AI? With the Right Policy, Nothing. — Digitalist Papers
https://www.digitalistpapers.com/vol2/stevenson
[34] [35] Transformative AI and the Increase in Returns to Experimentation: Policy Implications — Digitalist Papers
https://www.digitalistpapers.com/vol2/agrawalgans
[36] [37] [38] Information in the Age of AI: Challenges and Solutions — Digitalist Papers
https://www.digitalistpapers.com/vol2/stiglitzventurabolet
[39] [40] [41] Transformative AI in Financial Systems — Digitalist Papers
https://www.digitalistpapers.com/vol2/pentlandlipton
[42] [43] [44] Titans, Swarms, or Human Renaissance? Technological Revolutions and Policy Lessons for the AI Age — Digitalist Papers
https://www.digitalistpapers.com/vol2/toloui
[45] [46] [47] Economic Possibilities for Artificial Intelligence — Digitalist Papers
https://www.digitalistpapers.com/vol2/unger
[48] [49] [50] [51] [52] [53] [54] [55] Cheap Goods for Everyone? The Impact of Market Power in Artificial Intelligence on Welfare and Inequality — Digitalist Papers
https://www.digitalistpapers.com/vol2/atheyscottmorton
[56] [57] [58] [59] [60] [61] [62] [63] [64] [65] The Missing Institution: A Global Dividend System for the Age of Transformative AI — Digitalist Papers
https://www.digitalistpapers.com/vol2/yelizarova
[66] [67] [68] [69] [70] [71] [72] Open Global Investment as a Governance Model for Transformative AI — Digitalist Papers
https://www.digitalistpapers.com/vol2/bostrom
[73] [74] [75] [76] [77] [78] Strategic Dynamics in the Race to AGI: A Time to Race Versus a Time to Restrain — Digitalist Papers
https://www.digitalistpapers.com/vol2/abrahamkavnermoonmatheny
[79] [80] [81] [82] [83] [84] Beyond Rivalry: A US-China Policy Framework for the Age of Transformative AI — Digitalist Papers