April 22, 2025
Promethean Rivalry
The World-Altering Stakes of Sino-American AI Competition
Executive Summary
Just as nuclear weapons revolutionized 20th-century geopolitics, artificial intelligence (AI) is primed to transform 21st-century power dynamics—with world leaders increasingly suggesting its impact may prove even more profound. The contest between China and the United States to harness AI’s unprecedented potential has become the defining technological rivalry of a generation, echoing both the nuclear arms race and the space race in its urgency, scope, and implications for humanity.
While experts have focused on AI’s decisive role in military and economic competition between the two superpowers, they often miss China’s and the United States’ respective approaches to the profound moral and civilizational questions that hang in the balance of their race to AI supremacy. In the Cold War, both nuclear and space competitions assumed ethical, ideological, and national prestige dimensions well beyond the technologies’ direct contribution to hard economic or military power, with formative effects on the overall Soviet-American rivalry. Similarly, Sino-American AI competition today is poised to redefine conflict norms, state power, emerging bioethics, and catastrophic risks—four domains demanding deliberate attention due to their world-altering significance.
Just as in the Cold War, these symbolically charged domains can complicate each superpower’s pursuit of its broader geopolitical aims, fortify soft power where political leadership is exercised well, or elicit rare areas of superpower cooperation—depending on how each power shapes the global narrative around their technological pursuits. Unlike in the Cold War, these seemingly disparate areas are bound together by a common underlying technology, such that they cannot be treated independently of one another: Policies directed toward any one of these areas will impact the others. This report will explain how AI is driving these interrelated epochal developments, clarify the relative influence of the United States and China within these domains, and offer recommendations for the United States to steer AI’s world-changing stakes in its favor.
So far, the United States’ approach to AI’s impacts on conflict norms is the most coherent and well-articulated of the domains considered in this report. AI’s integration across the full spectrum of military operations promises to revolutionize warfare even more fundamentally than nuclear weapons. While nuclear weapons remain largely confined to deterrence applications, AI enables enhanced capabilities in everything from logistics to autonomous weapons systems. Both the United States and China view AI as potentially decisive for future military advantage. However, AI-enabled warfare raises profound ethical challenges: the difficulty of assigning responsibility for the actions of autonomous weapons under international law, the loss of human moral judgment in battlefield decisions, the psychological alienation of combatants from automated violence, and the risks of deepening military power asymmetries between technologically advanced and developing nations. The United States and China exercise greater influence on these evolving issues than other nations due to their technical capabilities and diplomatic heft, but their influence has been limited thus far by the fact that emerging AI tools can be easily adapted to battlefield applications by less technologically advanced nations. While difficult to predict, this diminished influence is likely to persist, with limited prospects for substantive cooperation between the superpowers. To sustain its position emphasizing appropriate levels of human judgment in AI-enabled warfighting in the face of domestic and international challenges, the United States will have to be resolute, proactive, and principled in communicating and adhering to its vision.
Leveraging AI’s considerable potential to enhance state power has been a major weakness of the United States’ AI strategy vis-à-vis China. To the extent that the competition between the United States and China represents an ideological struggle between democracy and autocracy, China’s edge is particularly lamentable. While China has aggressively pioneered AI-enabled authoritarian systems, democratic societies including the United States have been comparatively reactive and slow in developing democratic counterparts, lacking a clear vision of techno-democracy. China’s massive investments in techno-authoritarian tools have created an ecosystem with a large, captive market at home and, increasingly, abroad through its Digital Silk Road. Meanwhile, although the United States and other open societies have taken some steps to digitize democracies, American technology companies abroad often accommodate, if not actively support, techno-authoritarianism. Unsurprisingly, incentives do not exist for substantive Sino-American cooperation on the future of tech-enabled states, and a lack of vision from the United States relative to China on the issue has given Beijing a considerable advantage, with weighty implications for the future of individual freedoms and human rights worldwide. To compete, the United States will need to redouble its efforts to address its lack of a clear techno-democratic vision, asymmetric advantages in Beijing’s techno-authoritarian ecosystem, and American corporate support of authoritarian tech.
Emerging bioethics is the most nascent domain of thought and action in Sino-American AI competition, representing uncharted diplomatic territory in a field whose norms are conventionally shaped more by scientists than states. AI is catalyzing a revolution in capabilities to understand and manipulate genomes, with the potential to fundamentally alter the human condition. Both the United States and China seek to lead this transformation, which presents unprecedented ethical challenges around human genetic engineering, reproductive selection with eugenic potential, and military biotechnology. China’s track record of pursuing grotesque genetic research and reproductive control indicates fundamental differences in moral visions, with chilling implications for future generations.1 Despite the enormous potential for Sino-American collaboration to harness the genomic revolution for the benefit of all humanity—similar to space exploration in the Cold War—in practice the likelihood of substantive cooperation is limited, at least between governments. This misalignment of values is particularly lamentable given the United States’ and China’s first-mover advantages in shaping norms around these controversial capabilities, which, left uncoordinated, lend themselves to a race to the bottom in ethical standards. In the absence of bilateral cooperation, the United States must craft a feasible approach to these challenges resonant with American values.
Addressing evolving catastrophic risks related to AI has been the most pronounced area of bilateral Sino-American AI diplomacy in recent years, albeit one in which the United States’ focus on the issue is counterproductive: creating room for China to win over other nations to a more positive vision for AI and ceding ground to China’s more dangerous AI safety culture in the process. AI’s future integration into high-risk domains like biosecurity, cybersecurity, and nuclear systems could create new vulnerabilities that cascade into severe global crises. Emerging “agentic” AI systems able to operate autonomously also pose serious issues related to loss of control, similar to the 2017 NotPetya cyberattack that spiraled out of control and ultimately rebounded on Russia.2 As is typical in high-risk technological advancements, the hazards of progress are borne unequally between technologically advanced nations pushing the frontiers and those less equipped to cope in the case of serious failures. Nonetheless, developing economies have thus far been skeptical of the emphasis that leaders in the United States have placed on existential risks from AI—that is, concerns that theoretical future AI systems surpassing humanity in intelligence could threaten civilization’s survival. Echoing the fissure between nuclear “haves” and “have-nots” during the Cold War, some developing nations fear that onerous safety regimes may be used to inhibit their ability to fully capitalize on the technology—a narrative that the People’s Republic of China (PRC) has supported at the United States’ expense. The United States and the PRC have considerable influence to shape catastrophic risks from AI globally, but that influence will hinge on actions across multiple channels: diplomatic agreements, industry-level international forums, and domestic cultures of AI safety. Among these three, China’s deficient safety culture, despite the government’s regulations, is likely the greatest risk vector. Thus, simply outcompeting China in the diffusion of American AI may be more efficacious than pursuing limited diplomatic agreements on AI safety—particularly if many nations aspiring to use AI for development find such a focus on risks alienating. The United States must reorient its strategy accordingly.
Analysis of these four domains—conflict norms, state power, emerging bioethics, and catastrophic risks—presents a sobering reality: While their implications demand superpower cooperation, the prospects for meaningful Sino-American collaboration are highly limited. This disconnect between necessity and feasibility is particularly troubling given the era-defining consequences at stake—from automated killing systems and panopticon technostates to irreversible human genetic engineering and cascading AI-enabled catastrophes.
China’s diplomatic intransigence further raises the stakes of the AI race beyond military and economic advantage. Given these diplomatic limitations, even near-equal capabilities between the superpowers become hazardous—such a balance with an uncooperative China would likely trigger a race to the bottom on the norms surrounding some of the most consequential technological questions humanity has ever had to contend with. America’s capacity to shape these unprecedented developments hinges on extending its AI advantage over China to the greatest extent possible. And while AI safety advocates may fear that such a race for supremacy jeopardizes AI’s responsible development, the competition for AI dominance is unavoidably already underway, with no better feasible alternatives. Rather than focusing on unlikely attempts at cooperation, the United States must pursue decisive technical superiority over China, fueled by a vision for AI that advances democratic values and global development aspirations.
Domain | Known Issues | Sino-American Influence | Potential for Bilateral Cooperation |
---|---|---|---|
Conflict Norms | • Lethal autonomous weapons and international humanitarian law • Evolving standards of moral judgment in automated warfighting • Psychological impacts of gamified lethal force • Expanding global military imbalances | Medium | Low |
State Power | • Privacy and surveillance • Censorship, propaganda, and freedom of expression • Digital forms of state coercion, control, and empowerment • Algorithmic justice and “predictive policing” • Use of autonomous weapons for policing • Government transparency, accountability, participation, and feedback • Limits on tech companies’ influence over consumers and public discourse | High | Negligible |
Emerging Bioethics | • The uses and limits of human genetic engineering • New capabilities in reproductive selection and eugenic practices • Military biotechnology | High | Low |
Catastrophic Risks | • Integrating AI into high-risk domains (cybersecurity, biosecurity, weapons, nuclear command, etc.) • Export of safety cultures and regulatory regimes • Loss of control of “agentic” AI systems and concerns of hypothesized future superintelligence(s) | Medium | Medium-low |
Success could bolster American soft power and help shape AI’s trajectory toward human flourishing. Failure risks ceding the narrative to China’s authoritarian model, provoking international backlash against U.S. AI ambitions, and potentially surrendering humanity’s technological future to a brutal authoritarian regime. The stakes could hardly be higher.
To this end, the United States must:
- More aggressively expand its lead over China’s AI ecosystem in terms of compute capacity, data, talent, and institutions.
- Establish a President’s Council on Artificial Intelligence to address the most fundamental questions facing open societies in the age of AI and to craft an agenda for the ambitious use of AI to enhance American democracy.
- Remain realistic about the limited prospects for substantive progress with Chinese interlocutors on issues in AI-powered conflicts, genetic engineering norms, and catastrophic risks.
- Champion a vision for American AI internationally commensurate with the technology’s unique, historic potential to improve the human condition, focusing less on risks.
- Continue and further amplify its diplomatic efforts to establish norms on the responsible use of AI in militaries, AI in nuclear command and control, and—more discreetly—AI safety.
- Pioneer a political declaration on the responsible uses of artificial intelligence in biotechnology.
- Work with like-minded partner nations to establish a techno-democracy innovation fund to match 15 percent of the estimated annual budget of the PRC for techno-authoritarian procurement.
Introduction
In the 20th century, nothing so transformed international relations as nuclear weapons, a seismic development fueled by rapid progress in subatomic physics and the immense strategic pressures of World War II and the Cold War. Not only did the first wartime use of nuclear bombs bring history’s largest war to a near-instantaneous halt; fear of their use also largely defined the later dynamics and limits of great power competition from the Cold War to the present day. Indeed, it would be difficult to overstate the geopolitical impacts of nuclear weapons, which were so immediate and all-encompassing that they defy easy comparison with any prior development in the history of war, amounting to an epochal shift in the human condition—furnishing humanity with the novel ability to destroy itself. If the unprecedented military power of nuclear weapons defined an era, so too did the means by which the superpowers managed that power during the Cold War.
In the 21st century, the rapid development of artificial intelligence (AI) portends a historic transformation of similar—and perhaps greater—magnitude in the eyes of many of the world’s leaders. Successive American presidents, various European heads of state, Indian Prime Minister Narendra Modi, Chinese Communist Party (CCP) General Secretary Xi Jinping, and Russian President Vladimir Putin have all echoed the gravity of AI’s coming impacts—with Putin noting that whoever becomes the leader in AI development “will be the ruler of the world.”3
This focus on AI progress has understandably centered on projections of the technology’s potential for transformative military and economic advantages for whichever countries are able to best exploit its power. The world is watching as the two frontrunners in the technology, the United States and China, vie to “win” the AI race and, thereby, the superpower rivalry they find themselves in today. Decision-makers in both countries likewise see themselves as locked in a struggle for AI advantage with vast implications for their nations’ prospects vis-à-vis each other.4
But while economic and military competition undoubtedly make up the backbone of the AI rivalry between the United States and China, success in the struggle for AI advantage requires far more than simply accelerating technical capabilities beyond those of the adversary. Narrowly focusing on technical superiority not only risks compromising strategic capability development but also squanders opportunities for diplomatic leadership.
The history of the last century’s great technology competitions shows plainly that intangible, symbolic factors proved perhaps as fundamental to shaping the trajectory of the Cold War as did the technologies’ direct economic or military contributions. America’s decisive nuclear advantage over the Soviet Union proved short-lived, and the military capabilities unleashed by nuclear weapons functionally settled into a decades-long stalemate—even as the struggle for nuclear dominance continued. The revolutionary economic boost that nuclear energy was meant to unleash also proved marginal for both countries.
Nonetheless, the immense prestige conferred by nuclear capabilities became a key driving force of proliferation beyond the superpower rivalry, as national leaders felt compelled to join the exclusive nuclear club. How the United States and the Soviet Union each responded to other nations’ aspirations to use nuclear technologies for their own peaceful or military ends was a pivotal feature of Cold War diplomacy, as was their messaging about their own ambitions. The civil society movements that grew out of protests against nuclear weapons—and the unprecedented existential risks they exposed humanity to—likewise became a considerable political force that had to be contended with both in America’s domestic politics and in its foreign policy.
In the space race, such symbolic stakes were even more pronounced: As the early Cold War fades into distant memory, it can be difficult to recall just how momentous the Soviets’ successful Sputnik launch was for the rest of the world and how the United States’ successful moon landing shook the earth. Neither event conferred direct military or economic benefits to the originating country, but each became a focal point for the power and allure of opposing ideological systems. Equally significant, Soviet-American cooperation in space exploration became a rare arena of substantive collaboration during the Cold War that accelerated scientific progress and provided a highly visible symbol of hope and peaceful coordination to the world, despite fierce ideological divisions.
These soft power stakes derive primarily from nuclear and space technologies’ unique, world-altering characteristics. As political philosopher Hannah Arendt relates, both technologies are remarkable insofar as they have altered the human condition—the most basic, innate circumstances that have historically framed the existence of Homo sapiens.5 Transcending humanity’s millennia of earth-bound limitations—and the species’ inherent constraints on self-destruction—represents a difference in kind from nearly all other technologies of the past, echoing mythic aspirations that have been imagined since the beginning of recorded history but never before realized.6 Like Prometheus’s theft of fire from the gods, harnessing nuclear energy and ascending into the cosmos demonstrated god-like power—achievements for the whole of humanity but borne by the competitive drives of aptly named superpowers.
Given the momentous nature of these developments, their international reception took on an influential life of its own: variously complicating countries’ pursuit of such capabilities, as many countries’ staunch resistance to nuclear buildup did, or garnering respect and legitimacy for countries, as in the worldwide acclamation of America’s lunar landing. Given how tightly such Promethean developments are tied to perceptions of human progress, they also offer powerful opportunities—if not imperatives—for superpower cooperation.
By all accounts, AI developments on the horizon fit squarely in the same category of Promethean technologies, now fueled by a superpower rivalry similar to space and nuclear technologies in the Cold War. At its most extreme, discussion of the epochal stakes of AI focuses on esoteric questions about what the advent of superior, nonhuman intelligences would mean for the human species. But before such theoretical superintelligences are realized—if they ever are—there are clear areas of AI development with Promethean stakes that America must navigate as an integral part of the Sino-American AI competition. Unlike space or nuclear technologies, which served comparatively specific goals, AI derives its impact from its ability to catalyze changes across numerous fields already on the cusp of controversial transformations. This report aims to clarify the stakes of four of the most consequential application areas of AI development that will be shaped in large part by the Sino-American relationship, how they relate to one another, and how the United States can best approach them.
- Conflict norms: AI capabilities in targeting, decision-making, and lethal autonomous weapons systems portend seismic strategic and moral transformations in warfighting and widening military imbalances between nations, with implications rivaling, if not surpassing, those of nuclear weapons.
- State power: AI holds the potential to create novel state capacities and mechanisms of governance, some of which are already manifesting in reportage on techno-authoritarianism and techno-democracy. To the degree that the Sino-American competition represents a struggle between opposed systems of government, AI’s potential to remake state power may fundamentally alter the tools and terrain of ideological competition.
- Emerging bioethics: AI is unlocking a revolution in genomics that suggests that long-theorized abilities to manipulate humans’ genetic makeup toward specific ends may soon be viable. Developments in enhanced genetic selection and modification offer potential for profound medical advances, but they also unleash unnerving possibilities to rewire humanity’s natural evolution.
- Catastrophic risks: AI’s integration into high-risk domains like biotechnology, cybersecurity, and nuclear command and control could significantly alter the momentous risks associated with each, transforming severe existing threats to national and international stability. How such risks are managed could come to rival the prominence of nuclear safeguards during the Cold War in significance.
This list may not prove exhaustive, but as this report will explore, there is sufficient evidence to know that rapidly developing AI capabilities are already on a trajectory to radically disrupt each of these areas in the course of Sino-American AI competition.
Common Characteristics
Though seemingly disparate, these four domains share key characteristics that make them worthy of integrated, urgent consideration from American policymakers steering the United States’ approach to AI competition with China. First, AI technologies are the foundation of all four areas, and recent breakthroughs in AI research have set the stage for coming transformations in each. Despite their diverse applications, a common set of new techniques in machine learning has unlocked an explosion in the power and breadth of AI capabilities, with a long runway ahead for continued rapid expansion. These breakthroughs echo the beginnings of prior steam and electric revolutions, in which new enabling technologies were adapted to turbocharge existing tools and create entirely novel ones in a wide range of fields. The effects of these recent AI breakthroughs continue to compound, meaning that as the overall pace of AI progress continues to accelerate, advancements in seemingly unrelated areas will often feed into one another, such as how language models have shown promise in surprising nonlanguage tasks like image classification and protein fold prediction.7 As a result, transformations in all four of these consequential areas are propelled forward in tandem, surfacing momentous opportunities and concerns in seemingly disparate fields on accelerated, interrelated timelines. Societies will have to grapple with the profound topics of genetically designed babies, surveillance states, killer robots, and novel catastrophic risk vectors roughly simultaneously as interrelated phenomena.
This shared technological basis and timeline also means that the United States’ AI diplomacy directed toward any one of the areas this report identifies will have impacts on the others, as the Biden administration’s export controls on advanced semiconductors critical to AI development have already demonstrated. This reality requires sustained government engagement, as progress—while rapid compared to historical technological advances—will nonetheless be incremental and unfold over years rather than months. Practically speaking, the mechanisms needed to ensure the responsible use (or to refrain from the use) of lethal autonomous weapons may share much with those needed to develop responsible boundaries in human genome editing. The same AI companies that build the tools for enhanced techno-democracy or techno-autocracy may also be those that determine the likelihood of major catastrophes occurring from the use of AI. Distinct as the discourses around these themes are, their common technological foundation means that policymakers cannot engage them in isolation from one another, particularly in the context of the U.S.-China relationship. This also means that these areas cannot be fully separated from one another, whether in the eyes of these countries’ citizenries or in the imagination of the international community.
Additionally, all four areas are likely to require or invite heavy state involvement—if they are not already directly fueled by it. Developments in autonomous weapons and AI-powered advancements in state power both represent core interests for the United States and China and are therefore likely to be driven, even if indirectly, by them. The Chinese government is by far the biggest customer for techno-authoritarian tools and largely sets the agenda for their future development. There is no obvious analogue for the development of techno-democracy of similar scale, but to the extent that AI tools will find democratic expression, the United States holds unique potential and incentives to lead the charge. Both the United States and China are at the forefront of developing AI-enabled militaries and autonomous weapons capabilities—and are explicitly in competition with one another for advantage in that development. The future of genetic engineering may be actualized by private or academic labs, but both American and Chinese governments will need to define the boundaries of that progress and have already demonstrated a willingness to invest large state resources to direct national biotech efforts. The Human Genome Project represents one of the largest scientific megaprojects led by the U.S. government, and the People’s Republic of China (PRC) has pledged to invest the resources necessary to lead the world in biotechnology by 2035.8 Finally, to the extent that AI technologies present new risks of catastrophes—or exacerbate existing ones—governments bear the ultimate responsibility to mitigate and respond to such threats, necessarily requiring deep government involvement and, in some cases, international cooperation. How the two governments will each shape these areas will likely differ, not least due to their distinct approaches to technology policy, including differing stances on how public and private sectors respectively should lead innovation, and how regulations are developed and applied. But in all cases, the role of the government is likely to be significant, and perhaps profound.
Beyond these existing reasons for both the United States and China to be heavily involved in AI-powered revolutions in conflict norms, state power, emerging bioethics, and catastrophic risks, it is not difficult to imagine circumstances under which either or both superpowers would mobilize massive, moonshot-level state resources toward a specific goal of direct relevance to any of these areas. If either government came to believe that specific milestones on the horizon would confer outsized strategic advantages or national prestige to whichever country first realized them, they would likely initiate large-scale efforts akin to the Apollo Program or the Manhattan Project to attempt to realize those objectives first. Indeed, the Trump administration is already considering an initiative similar to the Manhattan Project, and the announcement of the $500 billion Stargate Project AI buildout at the White House on the second day of the administration points to a similar trajectory.9
Astronaut Buzz Aldrin stands on the lunar surface facing the American flag during Apollo 11’s extravehicular activity—a mission that fulfilled President John F. Kennedy’s ambitions for a crewed lunar landing. The United States and China may initiate large-scale efforts akin to the Apollo Program for AI technology if either believes it could confer substantial strategic advantages.
NASA via Unsplash
Finally, much like nuclear weapons in the Cold War, the AI-powered developments in each of these areas pose universally relevant questions to humanity, from the moral complexities of developing more powerful state systems to the momentous questions around irreversibly altering the genetic fabric of humanity. This profound ethical significance is important for two reasons. First, given the foundational role that the United States has played in the birth and development of AI technologies, the nation arguably maintains a moral obligation to the world to help guide, to the extent possible, the technology toward human flourishing in accord with American values. More pragmatically, regardless of how far that obligation may or may not extend, other nations’ perceptions of how the United States and China each manage the momentous questions that confront them matter tremendously as they pioneer new frontiers in the world’s most powerful new technology.
Just as Soviet and American approaches to nuclear and space technologies became central aspects of their global soft power during the Cold War, so also will perceptions of U.S. and Chinese approaches to AI in war, state power, human genetic manipulation, and catastrophic risks shape their prospects to win influence with other nations in today’s struggle for geopolitical advantage. How American and Chinese governments choose to direct AI progress holds the potential to both inspire and alienate would-be supporters of each superpower’s broader strategic agenda. And in some cases, how the two nations cooperate—or fail to cooperate—will have decisive implications for the security, prosperity, and norms available to most of the world’s nations.
Scope, Methods, and Findings
This report aims to contextualize the world-altering stakes of Sino-American AI rivalry across four key domains: conflict norms, state power, emerging bioethics, and catastrophic risks. For each domain, this report will review its strategic relevance in the Sino-American rivalry, analyze how China and the United States are influencing its trajectory, and assess the relative likelihood and desirability of Sino-American cooperation. Drawing on this analysis, the report proposes actionable recommendations for the U.S. government to more strategically manage its approach to AI in these high-consequence domains, grounded in a realistic view of China’s ambitions and incentives.
While this report aims to move beyond the usual generalities about the magnitude of AI’s geopolitical stakes and offer greater specificity, there are limits to how granular a picture it can offer. The pace of progress for both AI generally and each domain specifically is ultimately unpredictable. Additionally, while this report assumes that the economic dimensions of Sino-American AI competition will be tremendously important, it does not explore the theme directly, given that it has been dealt with more extensively elsewhere and requires an analysis more attuned to the broader literature on trade and development in China and the United States.10 Instead, this report focuses more narrowly on the understudied dimensions of AI soft power that derive from AI’s novel, momentous disruptions in highly controversial areas of technological advancement—the technology’s Promethean stakes.
The report does not aim to evaluate the Promethean stakes of the four domains covered from any particular moral perspective. Instead, it provides a descriptive assessment while recognizing that the ethical implications of AI advancements in these domains are so profound that they will inevitably shape foreign policy decisions. As Reinhold Niebuhr observed of the necessity of America’s nuclear weapons buildup, hard moral paradoxes are inescapable in a nuclear world, “but neither is there a viable solution which disregards the moral factors,” as policies of such weighty consequence must be justified to the American public and the broader world.11 Similar dynamics may characterize the difficult decisions ahead to manage the far-reaching effects of AI on the norms of conflict, state power, emerging bioethics, and catastrophic risks. The goal of this analysis is only to clarify these moral complexities and their relation to the ongoing Sino-American AI competition—not to advocate for a specific ethical stance.
While these issues are rapidly evolving and any analysis must be provisional, clarifying the stakes of Sino-American AI rivalry in these four domains is crucial for effective U.S. policy. So also is clarifying the likelihood of productive coordination with China, constrained as it is. Given the highly limited prospects for meaningful cooperation with China, the United States must pursue a two-pronged, mutually reinforcing strategy: aggressively expanding its technical AI advantages while developing a compelling vision of AI leadership for the global community. Such a vision must be ambitious, responsible, and mutually beneficial—both sober-minded about the world-altering power of the technologies it seeks to unleash, and responsive to the aspirations of the broader global community.
1. Mei Fong, “Before the Claims of Crispr Babies, There Was China’s One-Child Policy,” The New York Times, November 28, 2018, https://www.nytimes.com/2018/11/28/opinion/china-crispr-babies.html.
2. Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
3. “Putin: Leader in Artificial Intelligence Will Rule World,” Associated Press, September 1, 2017, https://apnews.com/article/bb5628f2a7424a10b3e38b07f4eb90d4. During the October 31, 2018, meeting of the Politburo of the Central Committee of the Chinese Communist Party, General Secretary Xi Jinping emphasized that “accelerating the development of a new generation of artificial intelligence (AI) is related to the strategic issue of whether China can seize the opportunities for a new round of scientific and technological revolution and industrial transformation.” Rogier Creemers and Elsa Kania, “Translation: Xi Jinping Calls for ‘Healthy Development’ of AI,” DigiChina (blog), Stanford Cyber Policy Center, November 5, 2018, https://digichina.stanford.edu/work/xi-jinping-calls-for-healthy-development-of-ai-translation/. Addressing the Karmayogi Saptah program at Ambedkar International Center in New Delhi on October 20, 2024, Indian Prime Minister Narendra Modi emphasized that if India successfully uses AI to advance the progress of aspirational India, it can lead to transformational changes. “Transformational Changes Are Possible with the Right Use of Artificial Intelligence: PM Modi,” DD News, October 20, 2024, https://ddnews.gov.in/transformational-changes-are-possible-with-the-right-use-of-artificial-intelligence-pm-modi/. President Donald Trump has stated, “Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States.” “Artificial Intelligence for the American People,” Trump White House Archives, accessed October 24, 2024, https://trumpwhitehouse.archives.gov/ai/. Former United Kingdom Prime Minister Rishi Sunak stated in a speech on October 26, 2023, “I genuinely believe that technologies like AI will bring a transformation as far-reaching as the industrial revolution, the coming of electricity or the birth of the internet.” “Prime Minister’s Speech on AI: 26 October 2023,” GOV.UK, October 26, 2023, https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023. President of the European Commission Ursula von der Leyen stated during the World Economic Forum Annual Meeting in 2024 that “our future competitiveness depends on AI adoption in our daily businesses, and Europe must up its game and show the way to responsible use of AI. That is AI that enhances human capabilities, improves productivity and serves society.” “From Sam Altman to António Guterres: Here’s What 10 Leaders Said About AI at Davos 2024,” World Economic Forum, January 23, 2024, https://www.weforum.org/stories/2024/01/what-leaders-said-about-ai-at-davos-2024/.
4. National Security Commission on Artificial Intelligence, Final Report (2021), https://assets.foleon.com/eu-central-1/de-uploads-7e3kk3/48187/nscai_full_report_digital.04d6b124173c.pdf; Jacob Stokes, Alexander Sullivan, and Noah Greene, U.S.-China Competition and Military AI (Center for a New American Security, July 25, 2023), https://www.cnas.org/publications/reports/u-s-china-competition-and-military-ai.
5. Hannah Arendt, The Human Condition (Chicago: University of Chicago Press, 1958), prologue, 1–6.
6. See Adrienne Mayor, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology (Princeton, NJ: Princeton University Press, 2020).
7. Kevin Lu et al., “Pretrained Transformers as Universal Computation Engines,” arXiv (June 30, 2021), https://doi.org/10.48550/arXiv.2103.05247.
8. PRC State Council, “国务院关于印发《中国制造2025》的通知 [Notice of the State Council on the Publication of ‘Made in China 2025’],” translated by the Center for Security and Emerging Technology, March 8, 2022, 1–32, https://cset.georgetown.edu/wp-content/uploads/t0432_made_in_china_2025_EN.pdf; Central Commission for Cybersecurity and Informatization, “‘十四五’国家信息化规划 [14th Five-Year Plan for National Informatization],” translated by Rogier Creemers et al. (DigiChina, Stanford Cyber Policy Center, January 24, 2022), 1–59, https://digichina.stanford.edu/work/translation-14th-five-year-plan-for-national-informatization-dec-2021/.
9. Mohar Chatterjee, Anthony Adragna, and Gabby Miller, “What DeepSeek Means for Tech Policy—and Doesn’t,” PoliticoPro Morning Tech, accessed January 29, 2025, https://subscriber.politicopro.com/newsletter/2025/01/whatdeepseek-means-for-tech-policy-and-doesnt-00200892; Derek Robertson, “Why Trump’s AI Plan Just Caused a Billionaire Slapfight,” Politico, January 22, 2025, https://www.politico.com/newsletters/digital-future-daily/2025/01/22/why-trumps-ai-plan-just-caused-a-billionaire-slapfight-00200049.
10. Jason Furman and Robert Seamans, “AI and the Economy,” Innovation Policy and the Economy 19 (January 1, 2019): 161–91, https://doi.org/10.1086/699936.
11. Reinhold Niebuhr, The Irony of American History, ed. Andrew J. Bacevich (Chicago: University of Chicago Press, 2008), 40, https://press.uchicago.edu/ucp/books/book/chicago/I/bo5864609.html.