January 20, 2025
Promote and Protect America’s AI Advantage
- The Trump administration should ensure that breakthrough capabilities in artificial intelligence (AI) are developed in the United States and prevent China from closing the capability gap.
- The Trump administration should focus on promoting domestic innovation by cutting red tape and advancing technical research, while protecting U.S. advances through enhanced security for datacenters and high-risk models.
- To promote America’s AI lead, the administration should streamline regulation and fast-track infrastructure development to enable continued AI dominance.
- To protect U.S. AI innovation, the administration should work with the private sector to enhance security protocols for infrastructure related to frontier AI development, ensure that security and oversight are extended internationally, and develop mechanisms to prevent adversaries from exploiting the most powerful AI models.
AI stands at the center of great power competition, with the potential to turbocharge economic growth, military capabilities, and technological leadership. As leading researchers project the possibility of human-level AI within the next decade, ensuring these breakthrough capabilities are developed in the United States is critical both for U.S. technology superiority and for managing potential risks to national security.1 While the United States currently leads in AI development, China is working actively to narrow the gap. Export controls on AI chips and restrictions on outbound investments are important but cannot alone preserve America’s edge. The Trump administration must take decisive action on two fronts: promoting domestic AI innovation by cutting red tape and advancing research, and protecting U.S. advances from foreign threats through enhanced security for datacenters and high-risk models.
To promote America’s AI lead, the next administration should:
Streamline regulation and fast-track infrastructure development. Left unmet, the unprecedented energy demands of AI datacenters could inhibit U.S. AI leadership. Within five years, training a leading AI model is projected to consume around five gigawatts of power, equivalent to roughly five large-scale nuclear reactors.2 Datacenters already face long delays connecting to the grid due to power availability constraints. Lengthy and complex permitting processes are creating significant delays in the construction of new energy facilities and transmission lines.3 To ensure a reliable energy supply, leading AI developers are pursuing new technologies, including small modular nuclear reactors.4 However, these initiatives alone cannot fully meet projected AI energy demand.
Faced with domestic constraints, U.S. firms increasingly are looking to construct datacenters overseas in nations such as the United Arab Emirates and Saudi Arabia that offer abundant, cheap energy and streamlined regulation.5 While some international deployment of AI infrastructure is necessary for global operations, enabling other jurisdictions to dominate AI datacenter development risks undermining America’s technological leadership. The January 2025 Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure begins to address this by making federal sites available for AI datacenter buildout.6 But more will be required. To maintain U.S. AI leadership, at a minimum, the U.S. government should streamline permitting to allow domestic energy to meet AI datacenter needs.
Advance AI science to enable reliable deployment at scale. Insufficient investment in technical solutions for AI safety and security could undermine America’s technological leadership. Just as nuclear incidents in the 1970s and ’80s eroded public confidence in nuclear energy deployment, significant AI failures could severely limit adoption of this transformative technology.7 Achieving world-leading AI safety, security, and reliability is essential for deploying AI effectively across critical domains such as national security and government operations.
The administration should champion ambitious technical initiatives to enable the safe deployment of increasingly powerful AI systems. This requires coordinating with industry to advance security and safety innovation throughout the AI stack, from hardware-enabled mechanisms for increased security to robust model safeguards. Key government offices such as the AI Safety Institute (AISI) and the Defense Advanced Research Projects Agency (DARPA) should leverage their technical expertise to accelerate these efforts.8
To protect U.S. AI innovation from foreign threats, the next administration should:
Bolster the security of AI datacenters. While promoting domestic AI infrastructure is crucial, these efforts will prove futile if adversaries can simply steal sensitive intellectual property such as AI model weights. The frontier of AI development requires massive investments—hundreds of millions of dollars in chips, computing power, and specialized talent—yet these investments can be circumvented through theft of trained models and proprietary technology. As AI capabilities advance and training costs at the technological frontier escalate, nation-state adversaries face mounting incentives to steal these capabilities. This threat is especially acute from China, which has limited access to the export-controlled AI chips needed to train the most advanced models. Improved security standards therefore must accompany America’s buildout of AI datacenters.
To that end, the U.S. government should work with the private sector to establish enhanced security protocols for infrastructure related to frontier AI development. The recent Chinese “Salt Typhoon” hack of major U.S. telecommunications companies underscores the challenge of protecting the most sensitive critical infrastructure against state-backed attackers, and the need for stronger public-private collaboration to protect against such sophisticated threats.9 A coordinated response—leveraging the expertise of the National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), the National Institute of Standards and Technology (NIST), and the AISI—is essential to develop, implement, and verify enhanced security protocols for U.S. frontier AI facilities.
Extend security requirements internationally. This security framework must extend beyond U.S. borders. The flow of cutting-edge American chips and intellectual property to U.S. allies and partners should come with comparable security standards. As proposed through the Department of Commerce’s Framework for Artificial Intelligence Diffusion, the Validated End User program for datacenters offers a potential mechanism for this alignment, enabling the United States to mandate specific security and oversight protocols as prerequisites for large exports of advanced AI chips to overseas datacenters.10 If implemented correctly, this program could prevent U.S. technological advantages from proliferating to China, while still enabling American technology to underpin the global AI ecosystem.
Monitor and manage risks at the frontier, with authority to act when needed. As AI capabilities rapidly advance, the U.S. government must develop mechanisms to prevent adversaries from exploiting the most powerful AI models. There already is evidence of China exploring the use of open-source U.S. AI models in military applications.11 While today’s open-source models do not offer meaningful advantages over China’s domestic capabilities, a “run faster” agenda for U.S. AI leadership would aim to widen the gap with China. Future capabilities emerging at AI’s technological frontier could pose substantially greater risks. Failing to protect such capabilities would undermine U.S. national security and squander hard-won gains.
The Trump administration should authorize and fund the AISI while directing it to prioritize critical national security threats, including in the chemical, biological, radiological, nuclear, and cyber domains. Working alongside national security agencies and private sector partners, the AISI should actively monitor capabilities and risks emerging at the frontier of AI development. This oversight should prioritize reporting and capability testing for the most advanced, compute-intensive models, minimizing impact on the broader AI ecosystem.
When technical safety measures prove insufficient against serious risks from next-generation AI systems, the government needs explicit authority to prevent these models from reaching adversaries who could weaponize them against the United States and its allies. Establishing this authority should be a priority in the administration’s first 100 days.
A well-scoped federal approach would help prevent fragmented state-level regulation that would increase regulatory burden on AI developers. It also would help maintain U.S. leadership in global AI standards setting, avoiding scenarios where other jurisdictions like the European Union dictate terms unfavorable to U.S. interests.
A vibrant open-source technology ecosystem remains vital to America’s technological leadership, driving innovation, security, and adoption. Any oversight and risk management framework must be carefully targeted, focusing exclusively on models most likely to present significant national security risks. This balanced approach would preserve the benefits of open-source development while safeguarding critical national security interests.
- Oversight of AI: Insiders’ Perspective: Hearing Before the Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology and the Law, 118th Cong. (2024) (statement of Helen Toner, Director of Strategy and Foundational Research Grants, Center for Security and Emerging Technology, Georgetown University), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives. ↩
- Shirin Ghaffary, “OpenAI Pitched White House on Unprecedented Data Center Buildout,” Bloomberg, September 24, 2024, https://www.bloomberg.com/news/articles/2024-09-24/openai-pitched-white-house-on-unprecedented-data-center-buildout; Tim Fist and Arnab Datta, How to Build the Future of AI in the United States (Institute for Progress, October 23, 2024), https://ifp.org/future-of-ai-compute/. ↩
- Fist and Datta, How to Build the Future of AI in the United States; Janet Egan et al., “Response to Request For Comment: ‘Bolstering Data Center Growth, Resilience, and Security,’” Center for a New American Security (CNAS), November 14, 2024, https://www.cnas.org/publications/commentary/response-to-request-for-comment-bolstering-data-center-growth-resilience-and-security; Zachary Skidmore, “Meta Nuclear Powered AI Data Center Scuppered by Discovery of Rare Bee Species,” Data Center Dynamics, November 4, 2024, https://www.datacenterdynamics.com/en/news/meta-nuclear-powered-ai-data-center-scuppered-by-discovery-of-rare-bee-species/. ↩
- Skidmore, “Meta Nuclear Powered AI Data Center Scuppered.” ↩
- “Microsoft, UAE’s AI Firm G42 to Set Up Two New Centres in Abu Dhabi,” Reuters, September 17, 2024, https://www.reuters.com/technology/microsoft-uaes-ai-firm-g42-set-up-two-new-centres-abu-dhabi-2024-09-17/; Matthew Martin, “Chip Startup Groq Backs Saudi AI Ambitions with Aramco Deal,” Bloomberg, September 16, 2024, https://www.bloomberg.com/news/articles/2024-09-16/chip-startup-groq-backs-saudi-ai-ambitions-with-aramco-deal; and “Aramco Digital and Groq Announce Progress in Building the World’s Largest Inferencing Data Center in Saudi Arabia Following LEAP MOU Signing,” Groq, September 12, 2024, https://groq.com/news_press/aramco-digital-and-groq-announce-progress-in-building-the-worlds-largest-inferencing-data-center-in-saudi-arabia-following-leap-mou-signing/. ↩
- “Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure,” The White House, January 14, 2025, https://www.whitehouse.gov/briefing-room/presidential-actions/2025/01/14/executive-order-on-advancing-united-states-leadership-in-artificial-intelligence-infrastructure/. ↩
- Matthew Fuhrmann, “Splitting Atoms: Why Do Countries Build Nuclear Power Plants?” International Interactions 38 (2011): 29–57, https://www.tandfonline.com/doi/abs/10.1080/03050629.2012.640209. ↩
- Onni Aarne, Tim Fist, and Caleb Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing (CNAS, January 8, 2024), https://www.cnas.org/publications/reports/secure-governable-chips. ↩
- John Sakellariadis, “Up to 80 Telcos Likely Hit by Sweeping Chinese Hack,” PoliticoPro, November 22, 2024, https://subscriber.politicopro.com/article/2024/11/up-to-80-telcos-likely-hit-by-sweeping-chinese-hack-00191304. ↩
- “Biden-Harris Administration Announces Regulatory Framework for the Responsible Diffusion of Advanced Artificial Intelligence Technology,” Bureau of Industry and Security, January 13, 2025, https://www.bis.gov/press-release/biden-harris-administration-announces-regulatory-framework-responsible-diffusion; “Commerce Updates Validated End User (VEU) Program for Eligible Data Centers to Bolster U.S. National Security, Promote Export Control Compliance,” Bureau of Industry and Security, September 30, 2024, https://www.bis.gov/press-release/commerce-updates-validated-end-user-veu-program-eligible-data-centers-bolster-us. ↩
- James Pomfret and Jessie Pang, “Exclusive: Chinese Researchers Develop AI Model for Military Use on Back of Meta's Llama,” Reuters, November 1, 2024, https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/. ↩