October 25, 2024

Noteworthy: “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence”

In this edition of Noteworthy, researchers from across CNAS dissect President Biden’s recently released Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.

Experts offer in-line comments on the most notable passages, examining the administration's attempt to balance the pursuit of advanced artificial intelligence technology with national security and human rights. Learn more about CNAS's work on artificial intelligence.

To organize interviews with CNAS experts, contact [email protected].

Section 1.  Policy.  (a)  This memorandum fulfills the directive set forth in subsection 4.8 of Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).  This memorandum provides further direction on appropriately harnessing artificial intelligence (AI) models and AI-enabled technologies in the United States Government, especially in the context of national security systems (NSS), while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities.  A classified annex to this memorandum addresses additional sensitive national security issues, including countering adversary use of AI that poses risks to United States national security.

     (b)  United States national security institutions have historically triumphed during eras of technological transition.  To meet changing times, they developed new capabilities, from submarines and aircraft to space systems and cyber tools.  To gain a decisive edge and protect national security, they pioneered technologies such as radar, the Global Positioning System, and nuclear propulsion, and unleashed these hard-won breakthroughs on the battlefield.  With each paradigm shift, they also developed new systems for tracking and countering adversaries’ attempts to wield cutting-edge technology for their own advantage.

     (c)  AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.  AI, if used appropriately and for its intended purpose, can offer great benefits.

The emphasis on the "responsible application" of AI is subtle but essential. It's not enough for the United States to simply lead in applying the most powerful and cutting-edge AI systems across the national security enterprise; America has an opportunity—and a responsibility—to offer the world a model for how to lead in a way that is consistent with our democratic values and commitment to civil liberties, international humanitarian law, and the law of armed conflict.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

If misused, AI could threaten United States national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order. 

China’s Digital Silk Road facilitates the export of its AI technologies, data infrastructure, and surveillance tools to emerging markets, often strengthening authoritarian regimes. Unlike the United States, China can undercut competition with artificially low-cost solutions, leaving nations at risk of falling under its influence.

To effectively counter this strategy, the United States should first promote strong governance standards, emphasizing the importance of safe, reliable AI that aligns with democratic values. Countries across the Global South are eager for training and education on cybersecurity and AI applications, and U.S. companies, alongside diplomatic channels, should spearhead these initiatives.

But it’s not just about the message—it’s about delivering results. It’s crucial for the United States to maintain a relentless focus on innovation. By delivering AI solutions that are not only cutting-edge but demonstrably superior in quality, the United States can negate China’s low-cost advantage. Only by doing both can the United States offer a compelling alternative that supports democratic governance and meets the technological and development needs of emerging markets.

— Ruby Scanlon | Research Assistant, Technology and National Security Program

Harmful outcomes could occur even without malicious intent if AI systems and processes lack sufficient protections.    

This line acknowledging the possibility of accidental or unintentional harm is significant. Much of the memorandum is about the risk of falling behind competitors and the need to speed U.S. government adoption of AI, but AI itself carries risks if not used properly.

— Paul Scharre | Executive Vice President, Director of Studies

     (d)  Recent innovations have spurred not only an increase in AI use throughout society, but also a paradigm shift within the AI field — one that has occurred mostly outside of Government.

As the point above notes, the United States government has a strong history of taking advantage of technological innovations, and that history is critical to remember for AI. Most of the technologies mentioned (radar, GPS, and nuclear propulsion) were government funded or guided, but with AI this is clearly not the case. This paradigm shift will have to be matched by a shift in the way the United States government engages with the technology.

— Michael Depp | Research Associate, AI Safety and Stability Project

This era of AI development and deployment rests atop unprecedented aggregations of specialized computational power, as well as deep scientific and engineering expertise, much of which is concentrated in the private sector.  This trend is most evident with the rise of large language models, but it extends to a broader class of increasingly general-purpose and computationally intensive systems.  The United States Government must urgently consider how this current AI paradigm specifically could transform the national security mission.

More so than with previous tech-driven paradigm shifts for national security, it is private companies—not the government—driving the overwhelming share of decisions concerning investment, technology development, and governance. This isn't the Manhattan Project, with a few government labs driving breakthrough research. It's not the defense industrial base, where government is the ultimate client for a few large firms. For frontier AI models, instead, we have several private companies pushing the frontier, governing themselves, and selling a product with significant national security implications mostly to other private companies (and foreign governments). This will require a new model of cogovernance, cooperation, and information-sharing between Washington and leading AI companies to ensure responsible development and adoption in the years ahead.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

     (e)  Predicting technological change with certainty is impossible, but the foundational drivers that have underpinned recent AI progress show little sign of abating.  These factors include compounding algorithmic improvements, increasingly efficient computational hardware, a growing willingness in industry to invest substantially in research and development, and the expansion of training data sets.  AI under the current paradigm may continue to become more powerful and general-purpose.  Developing and effectively using these systems requires an evolving array of resources, infrastructure, competencies, and workflows that in many cases differ from what was required to harness prior technologies, including previous paradigms of AI.

This point is critical. Over the last decade, these trends have driven deep learning systems from incipient proofs of concept in language and image generation to widely used systems like ChatGPT. These trends can likely continue, such that we’ll see frontier models trained with at least 10,000x the compute of GPT-4 by 2030. This memorandum appropriately recognizes that it would be reckless to fail to prepare for dramatic advancements in national security–relevant AI capabilities—even in areas where current models have so far proved underwhelming. While last year’s AI executive order included many one-time reports and initiatives, this memorandum takes a more sustained approach. For instance, it directs the Department of Commerce to "establish an enduring capability to lead voluntary unclassified pre-deployment safety testing of frontier AI models on behalf of the United States Government" (through NIST's AI Safety Institute).

— Caleb Withers | Research Associate, Technology and National Security Program
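For a rough sense of the scaling arithmetic behind that 10,000x figure, consider a back-of-envelope sketch (the growth rate is an assumed extrapolation of recent industry trends, not a number from the memorandum). If frontier training compute continues growing at roughly 4x to 5x per year, then relative to GPT-4 (trained circa 2023):

\[
\frac{C_{2030}}{C_{\text{GPT-4}}} \approx g^{\,2030-2023} =
\begin{cases}
4^{7} \approx 1.6\times10^{4}, & g = 4 \\
5^{7} \approx 7.8\times10^{4}, & g = 5
\end{cases}
\]

Either rate clears the 10,000x threshold, while a slower 3x per year yields only about 2,200x, illustrating how sensitive such projections are to the assumed trend.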

This is an important acknowledgment by the U.S. government of the current trend of scaling toward larger, more computationally intensive, general-purpose models. A massive paradigm shift is underway in the field of AI, with enormous geopolitical implications.

— Paul Scharre | Executive Vice President, Director of Studies

     (f)  If the United States Government does not act with responsible speed and in partnership with industry, civil society, and academia to make use of AI capabilities in service of the national security mission — and to ensure the safety, security, and trustworthiness of American AI innovation writ large — it risks losing ground to strategic competitors.  Ceding the United States’ technological edge would not only greatly harm American national security, but it would also undermine United States foreign policy objectives and erode safety, human rights, and democratic norms worldwide.

As the capabilities of general-purpose AI models continue to advance, so too will their transformative potential to support U.S. national security: not least through increasingly sophisticated abilities to analyze data at speed and scale for cyber, surveillance, and military and intelligence operations. But given the high stakes in such contexts, full-throated transformation of the U.S. national security enterprise will hinge on the extent to which systems are trustworthy, robust, reliable, and explainable—and significant challenges remain in each of these areas. Compounding these challenges, AI models used in national security contexts will be prime targets for adversarial attacks. In short, the national security enterprise is at a critical juncture: it is uniquely poised to reap the benefits of frontier models, but it also has an understandably low tolerance for their current shortcomings. The memorandum rightly emphasizes that the national security enterprise cannot be a passive consumer of the AI frontier: it must own its important role in driving U.S. leadership in safe, secure, and trustworthy capabilities.

— Caleb Withers | Research Associate, Technology and National Security Program

     (g)  Establishing national security leadership in AI will require making deliberate and meaningful changes to aspects of the United States Government’s strategies, capabilities, infrastructure, governance, and organization.  AI is likely to affect almost all domains with national security significance, and its use cannot be relegated to a single institutional silo.  The increasing generality of AI means that many functions that to date have been served by individual bespoke tools may, going forward, be better fulfilled by systems that, at least in part, rely on a shared, multi-purpose AI capability.  Such integration will only succeed if paired with appropriately redesigned United States Government organizational and informational infrastructure.

The administration is right to take an expansive view of AI's implications across the national security enterprise and call for a cross-cutting, integrated approach. However, it is one thing to write this in a memorandum; it is quite another to implement it across agencies that often resist reform. This will require a sustained push from the very top over many years. Whether this memorandum realizes its ambition will depend on whether the next administration picks up the baton and makes it a priority. It should.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

     (h)  In this effort, the United States Government must also protect human rights, civil rights, civil liberties, privacy, and safety, and lay the groundwork for a stable and responsible international AI governance landscape.  Throughout its history, the United States has been a global leader in shaping the design, development, and use of new technologies not only to advance national security, but also to protect and promote democratic values.  The United States Government must develop safeguards for its use of AI tools, and take an active role in steering global AI norms and institutions.  The AI frontier is moving quickly, and the United States Government must stay attuned to ongoing technical developments without losing focus on its guiding principles.

The national and economic security case for U.S. leadership in emerging technologies is well recognized. However, the ethical implications of technological advancements are less discussed but equally important.

Emerging technologies could confer new capabilities to shape or amplify particular ethical norms. Despite their positive potential, technologies like AI, quantum, and biotech can also exacerbate global socioeconomic divides, breach individuals' privacy and security, and drive job displacement. The first country to develop and deploy these powerful new tools will gain the upper hand in determining whether they are used to promote and protect democratic values or to undermine them.

The Chinese Communist Party's surveillance system in Xinjiang offers a glimpse of what the world could look like should China achieve its quest to become the global technology superpower. Through its Belt and Road Initiative, China is already exporting surveillance technology to countries around the world and establishing various training programs to influence the ways in which they are employed. The United States cannot wait any longer to reassert its technology leadership and foster the development of a tech ecosystem that advances democratic principles.

— Sam Howell | Associate Fellow, Technology and National Security Program

     (i)  This memorandum aims to catalyze needed change in how the United States Government approaches AI national security policy.  In line with Executive Order 14110, it directs actions to strengthen and protect the United States AI ecosystem; improve the safety, security, and trustworthiness of AI systems developed and used in the United States; enhance the United States Government’s appropriate, responsible, and effective adoption of AI in service of the national security mission; and minimize the misuse of AI worldwide.

Sec. 2.  Objectives.  It is the policy of the United States Government that the following three objectives will guide its activities with respect to AI and national security.

     (a)  First, the United States must lead the world’s development of safe, secure, and trustworthy AI.  To that end, the United States Government must — in partnership with industry, civil society, and academia — promote and secure the foundational capabilities across the United States that power AI development.  The United States Government cannot take the unmatched vibrancy and innovativeness of the United States AI ecosystem for granted; it must proactively strengthen it, ensuring that the United States remains the most attractive destination for global talent and home to the world’s most sophisticated computational facilities.  The United States Government must also provide appropriate safety and security guidance to AI developers and users, and rigorously assess and help mitigate the risks that AI systems could pose.

This is welcome. It is often easier for an administration to take protective measures that restrict, for instance, China's access to advanced AI chips. But those steps alone are insufficient to ensure U.S. AI leadership. Efforts on the "promote" side are vital, but they are often much harder. For example, steps to make new investments in research and development (R&D), reform our immigration system, and modernize our national security institutions usually require action from Congress.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

To date, the United States government has taken various actions to maintain its lead over China in AI compute infrastructure, such as controls on chip exports and outbound investment. However, this exclusionary approach alone will not suffice if the United States fails to enable its own AI infrastructure buildout. Bottlenecks at home are already emerging, threatening U.S. AI leadership. Delays in domestic energy permitting and availability have already spurred greater investment in overseas data centers, including in the United Arab Emirates and Saudi Arabia, where ample cheap energy, sovereign wealth backing, and minimal regulatory barriers offer a stark contrast to the United States. A failure to address this could limit the ability of U.S. companies to stay at the cutting edge of AI development and result in more advanced AI models being trained outside the United States.

— Janet Egan | Senior Fellow, Technology and National Security Program

     (b)  Second, the United States Government must harness powerful AI, with appropriate safeguards, to achieve national security objectives.  Emerging AI capabilities, including increasingly general-purpose models, offer profound opportunities for enhancing national security, but employing these systems effectively will require significant technical, organizational, and policy changes.  The United States must understand AI’s limitations as it harnesses the technology’s benefits, and any use of AI must respect democratic values with regard to transparency, human rights, civil rights, civil liberties, privacy, and safety.

     (c)  Third, the United States Government must continue cultivating a stable and responsible framework to advance international AI governance that fosters safe, secure, and trustworthy AI development and use; manages AI risks; realizes democratic values; respects human rights, civil rights, civil liberties, and privacy; and promotes worldwide benefits from AI.  It must do so in collaboration with a wide range of allies and partners.  Success for the United States in the age of AI will be measured not only by the preeminence of United States technology and innovation, but also by the United States’ leadership in developing effective global norms and engaging in institutions rooted in international law, human rights, civil rights, and democratic values.

The commitment to advancing international AI governance, and particularly to promoting worldwide benefits from AI, is an important and encouraging sign. It remains to be seen, however, to what extent the U.S. government prioritizes multilateralism as it shapes the global AI landscape. Recent U.S. actions to control which countries can import advanced chips have raised concerns in several countries that the United States may seek to unilaterally dictate who gets access to advanced AI and who does not. This, in turn, has fueled a drive for "sovereign AI" capabilities around the world, as countries seek to lessen their dependence on U.S. technology. (Learn more about this trend in a recent episode of CNAS's Derisky Business podcast.) To pursue global leadership on AI, the United States government must credibly demonstrate that customers around the world can trust U.S. technology companies and depend on the U.S. government not to cavalierly overuse export controls and other defensive tools to hoard AI capabilities.

— Geoffrey Gertz | Senior Fellow, Energy, Economics & Security Program

Sec. 3.  Promoting and Securing the United States’ Foundational AI Capabilities.  (a)  To preserve and expand United States advantages in AI, it is the policy of the United States Government to promote progress, innovation, and competition in domestic AI development; protect the United States AI ecosystem against foreign intelligence threats; and manage risks to AI safety, security, and trustworthiness.  Leadership in responsible AI development benefits United States national security by enabling applications directly relevant to the national security mission, unlocking economic growth, and avoiding strategic surprise.  United States technological leadership also confers global benefits by enabling like-minded entities to collectively mitigate the risks of AI misuse and accidents, prevent the unchecked spread of digital authoritarianism, and prioritize vital research.

AI is significantly lowering the barriers for hacktivists, cybercriminals, and state-backed actors to launch cyberattacks. With dark web access to generative AI tools, criminals can quickly produce large volumes of high-quality malware and phishing schemes, fueling the rise of cybercrime-as-a-service and ransomware-as-a-service. State-backed groups, including those from North Korea, China, and Russia, are leveraging AI for more sophisticated operations. Recently, Microsoft reported that North Korean groups used large language models for spearphishing campaigns, while Chinese-linked actors have employed AI to spread disinformation in Taiwan to erode democratic institutions. Similarly, Russian cyber actors have used AI for information warfare, particularly targeting Ukraine and the United States ahead of the 2024 presidential election.

However, AI also has the potential to enhance cybersecurity by improving training, speeding up responses, and automating threat detection. For instance, South Korea updated its National Cybersecurity Strategy in 2024 to include the integration of AI systems to strengthen threat identification. To harness AI's benefits while mitigating its risks, countries must invest in cybersecurity and adopt coordinated strategies. A recent CNAS event examined how this can be accomplished in the Indo-Pacific, a region facing rapid digitalization and increasingly aggressive attacks from state-backed actors.

— Morgan Peirce | Research Assistant, Technology and National Security Program

     3.1.  Promoting Progress, Innovation, and Competition in United States AI Development.  (a)  The United States’ competitive edge in AI development will be at risk absent concerted United States Government efforts to promote and secure domestic AI progress, innovation, and competition.  Although the United States has benefited from a head start in AI, competitors are working hard to catch up, have identified AI as a top strategic priority, and may soon devote resources to research and development that United States AI developers cannot match without appropriately supportive Government policies and action.  It is therefore the policy of the United States Government to enhance innovation and competition by bolstering key drivers of AI progress, such as technical talent and computational power.

This is perhaps being charitable to the United States. Because many highly capable AI models are currently being released open source, Chinese AI developers are not far behind top U.S. labs. If the best U.S. labs are only 18 to 24 months ahead of Chinese competitors, but the Pentagon is 5 to 10 years behind the private sector in AI adoption, then the U.S. lead doesn’t amount to much.

— Paul Scharre | Executive Vice President, Director of Studies

     (b)  It is the policy of the United States Government that advancing the lawful ability of noncitizens highly skilled in AI and related fields to enter and work in the United States constitutes a national security priority.  Today, the unparalleled United States AI industry rests in substantial part on the insights of brilliant scientists, engineers, and entrepreneurs who moved to the United States in pursuit of academic, social, and economic opportunity.  Preserving and expanding United States talent advantages requires developing talent at home and continuing to attract and retain top international minds.

Various foreign talent provisions were included in last year's AI executive order, but this memorandum goes a step further by presenting the expansion of the technically skilled foreign workforce in the United States as a national security priority. This is a bold statement given the infamously divisive political dynamics of immigration policy, especially during an election season. In contrast, building a domestic workforce is mentioned only at the end of the paragraph, and no specific provisions elaborate on it.

This reflects the urgency of that national security imperative in the eyes of the administration: building a skilled domestic workforce takes a long time, and existing international talent should be promptly leveraged to advance U.S. AI leadership in the meantime—especially as the talent pool is scarce and the global competition for it is fierce. Still, a disproportionate emphasis on the foreign relative to the domestic workforce is likely to generate opposition—from both sides of the aisle.

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

Being a magnet for global talent is an asymmetric advantage that the United States has over China. The U.S. government’s current self-imposed barriers to high-skilled immigration are the United States’ biggest own-goal in the global AI competition. Easing the ability of top foreign scientists to study and work in the United States is one of the most important things the U.S. can do to maintain its AI leadership position.

— Paul Scharre | Executive Vice President, Director of Studies

     (c)  Consistent with these goals:

(i)    On an ongoing basis, the Department of State, the Department of Defense (DOD), and the Department of Homeland Security (DHS) shall each use all available legal authorities to assist in attracting and rapidly bringing to the United States individuals with relevant technical expertise who would improve United States competitiveness in AI and related fields, such as semiconductor design and production.

The tensions between domestic and foreign workforce prioritization become more apparent when semiconductors come into the fold. The Biden administration has touted the CHIPS Act as a historic opportunity to rebuild the U.S. manufacturing workforce and create thousands of “good jobs.” It's unclear how the administration balances its domestic semiconductor workforce development efforts with this memorandum's push to rapidly bring in foreign talent to support U.S. leadership in AI. For example, will jobs that don't require advanced degrees be reserved for U.S. workers?

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

These activities shall include all appropriate vetting of these individuals and shall be consistent with all appropriate risk mitigation measures.  This tasking is consistent with and additive to the taskings on attracting AI talent in section 5 of Executive Order 14110.

The urgency to attract high-skilled foreign nationals is here tempered with security concerns, but the details of what constitutes "appropriate vetting" are left unresolved. For example, will individuals be vetted based on affiliations with foreign countries of concern and/or entities of concern? How will the vetting inform whether the foreign national is allowed to enter the United States and under what conditions? How will the security risks be weighed against the need to secure AI leadership? Will the same vetting be applied to workers in the private sector and to students at universities and research centers? The details are important, especially given increased awareness of the diverse licit and illicit means foreign adversaries employ to exploit U.S. intellectual property and technology (as outlined in this memorandum's next section). Any decisions here might shape ongoing discussions regarding research security and technology protection policies for other strategic fields like semiconductors and quantum technologies (e.g., NSPM-33 on research security, NSM-10 on quantum computing and other technologies, and "deemed export" controls for semiconductors and quantum).

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

(ii)   Within 180 days of the date of this memorandum, the Chair of the Council of Economic Advisers shall prepare an analysis of the AI talent market in the United States and overseas, to the extent that reliable data is available.

Future administrations would be wise to expand this request to other emerging technology areas as well. The United States has well-documented talent gaps in quantum technology, for example, but lacks a detailed understanding of which specific skills and backgrounds are needed to continue moving the technology forward. Understanding the talent market in other countries could also open new opportunities for mutually beneficial exchange programs and research collaborations.

— Sam Howell | Associate Fellow, Technology and National Security Program

The talent market for AI is intertwined with those of other strategic technology fields, many of which are experiencing STEM talent shortages. An analysis of the talent market should incorporate multiple technology fields and identify the knowledge, skills, and abilities (KSAs) that are unique to each field as well as shared across them. A comprehensive understanding of these KSAs could support the fine-tuning of training and reskilling programs and, ultimately, a robust technical workforce that could more easily switch jobs as hiring cycles fluctuate.

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

(iii)  Within 180 days of the date of this memorandum, the Assistant to the President for Economic Policy and Director of the National Economic Council shall coordinate an economic assessment of the relative competitive advantage of the United States private sector AI ecosystem, the key sources of the United States private sector’s competitive advantage, and possible risks to that position, and shall recommend policies to mitigate them.  The assessment could include areas including (1) the design, manufacture, and packaging of chips critical in AI-related activities; (2) the availability of capital; (3) the availability of workers highly skilled in AI-related fields; (4) computational resources and the associated electricity requirements; and (5) technological platforms or institutions with the requisite scale of capital and data resources for frontier AI model development, as well as possible other factors.

It is worth noting that meeting any deadlines past January 20, 2025—including this one—will depend on the next administration choosing to take up the baton.

This is a welcome step but speaks to a broader gap in U.S. technology policy—the lack of a net assessment capability for critical and emerging technologies. We often discover we're behind in a key technology, from 5G to advanced chip manufacturing, when it's too late. We need a capability that can fuse the intelligence community's assessment of tech trends abroad with analysis of domestic tech trends to identify gaps and recommend solutions. I wrote about this challenge earlier this year.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

(iv)   Within 90 days of the date of this memorandum, the Assistant to the President for National Security Affairs (APNSA) shall convene appropriate executive departments and agencies (agencies) to explore actions for prioritizing and streamlining administrative processing operations for all visa applicants working with sensitive technologies.  Doing so shall assist with streamlined processing of highly skilled applicants in AI and other critical and emerging technologies.  This effort shall explore options for ensuring the adequate resourcing of such operations and narrowing the criteria that trigger secure advisory opinion requests for such applicants, as consistent with national security objectives.

Foreign workforce considerations are here expanded beyond AI to include all "sensitive technologies," a term that isn't defined but could include quantum, biotech, and clean energy technologies, based on previous statements from the Biden administration. Although it only calls for "exploratory" meetings, this provision could lead to a more comprehensive plan to reduce the bureaucratic burden and long wait times facing visa applicants across strategic technology fields, rather than tackling one field at a time. Some challenges might include defining what counts as a "sensitive" technology, determining which specific roles or skills will be used to classify visa applicants into technology fields and processing priorities, and deciding how this prioritization might be conditioned by applicants' affiliations with countries of concern.

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

     3.2.  Protecting United States AI from Foreign Intelligence Threats.  (a)  In addition to pursuing industrial strategies that support their respective AI industries, foreign states almost certainly aim to obtain and repurpose the fruits of AI innovation in the United States to serve their national security goals.  Historically, such competitors have employed techniques including research collaborations, investment schemes, insider threats, and advanced cyber espionage to collect and exploit United States scientific insights.  It is the policy of the United States Government to protect United States industry, civil society, and academic AI intellectual property and related infrastructure from foreign intelligence threats to maintain a lead in foundational capabilities and, as necessary, to provide appropriate Government assistance to relevant non-government entities.

The PRC has made the exploitation of American science and technology a staple of its strategy to surpass the United States technologically, and by all accounts has been highly effective in finding ways to siphon American technologies for its advantage. The challenge is especially acute with frontier AI models, which take massive amounts of time, energy, and capital to build, but which can be easily stored, transported, and stolen.

— Bill Drexel | Fellow, Technology and National Security Program

     (b)  Consistent with these goals:

(i)   Within 90 days of the date of this memorandum, the National Security Council (NSC) staff and the Office of the Director of National Intelligence (ODNI) shall review the President’s Intelligence Priorities and the National Intelligence Priorities Framework consistent with National Security Memorandum 12 of July 12, 2022 (The President’s Intelligence Priorities), and make recommendations to ensure that such priorities improve identification and assessment of foreign intelligence threats to the United States AI ecosystem and closely related enabling sectors, such as those involved in semiconductor design and production.

China has engaged in industrial espionage in the semiconductor industry for decades, but there’s no doubt that the country’s efforts to acquire sensitive IP and trade secrets have intensified in the wake of U.S. export controls on advanced chips and machinery.

Historically, commercial interests largely drove Chinese espionage in the semiconductor industry. Commercial interests are certainly still relevant, but Chinese espionage today is probably driven more by the country’s broader strategic objectives, particularly its stated desire to become a self-reliant and globally dominant science and technology superpower. China views U.S. and allied export controls—which limit its access to the most advanced chip design, fabrication, materials, chemicals, and manufacturing equipment—not just as an economic challenge but as a direct threat to its national security and geopolitical influence. Industrial espionage is thus one strategy China employs to mitigate the effects of export controls and ensure its own technological superiority.

We’ve seen a marked increase in the number of IP theft lawsuits brought by U.S. companies against Chinese entities since the United States unveiled its new export control regime. Anecdotally, multiple major semiconductor companies have also reported unprecedented levels of attempted IP theft, compelling significant increases in their spending on security measures.

— Sam Howell | Associate Fellow, Technology and National Security Program

(ii)  Within 180 days of the date of this memorandum, and on an ongoing basis thereafter, ODNI, in coordination with DOD, the Department of Justice (DOJ), Commerce, DOE, DHS, and other IC elements as appropriate, shall identify critical nodes in the AI supply chain, and develop a list of the most plausible avenues through which these nodes could be disrupted or compromised by foreign actors.  On an ongoing basis, these agencies shall take all steps, as appropriate and consistent with applicable law, to reduce such risks.

This receives comparatively little attention relative to preventing the proliferation of semiconductor design and production technology, largely because the AI supply chain runs through U.S. allies. However, the recent Israeli supply chain attacks and fears of weather-related production stoppages indicate that we need to think more deeply about ensuring the AI spigot remains open and safe.

— Michael Depp | Research Associate, AI Safety and Stability Project

     (c)  Foreign actors may also seek to obtain United States intellectual property through gray-zone methods, such as technology transfer and data localization requirements.  AI-related intellectual property often includes critical technical artifacts (CTAs) that would substantially lower the costs of recreating, attaining, or using powerful AI capabilities.  The United States Government must guard against these risks.

The United States' lead in AI compute will be rendered moot if adversaries can bypass the need for costly training on large-scale compute clusters by simply stealing the AI models themselves. This can be done by gaining access to the model weights—the digital parameters that encode the core intelligence of the model. While data centers seek to maintain strong security as a core part of their business, they are unlikely to be equipped to fully fend off sophisticated state-based actors without greater engagement from the national security community.

It's also encouraging to see the U.S. government expand the focus beyond just model weights to critical technical artifacts (CTAs), which would also include other sensitive IP like AI algorithms.

— Janet Egan | Senior Fellow, Technology and National Security Program
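To illustrate why stolen weights are such an attractive shortcut (illustrative arithmetic, not figures from the memorandum): the product of months of training on tens of thousands of chips compresses into a file of very manageable size. For a model on the scale of today's largest open-weight releases, stored at 16-bit precision:

\[
4.05\times10^{11}\ \text{parameters} \times 2\ \frac{\text{bytes}}{\text{parameter}} \approx 8.1\times10^{11}\ \text{bytes} \approx 0.8\ \text{TB}
\]

Hundreds of millions of dollars of training compute thus distill into less than a terabyte, small enough to fit on a single commodity drive and to exfiltrate over an ordinary network connection, which is why securing model weights calls for the kind of national security engagement Egan describes.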

     (c)  Commerce, acting through the AI Safety Institute (AISI) within the National Institute of Standards and Technology (NIST), shall serve as the primary United States Government point of contact with private sector AI developers to facilitate voluntary pre- and post-public deployment testing for safety, security, and trustworthiness of frontier AI models.  In coordination with relevant agencies as appropriate, Commerce shall establish an enduring capability to lead voluntary unclassified pre-deployment safety testing of frontier AI models on behalf of the United States Government, including assessments of risks relating to cybersecurity, biosecurity, chemical weapons, system autonomy, and other risks as appropriate (not including nuclear risk, the assessment of which shall be led by DOE).  Voluntary unclassified safety testing shall also, as appropriate, address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups.  Other agencies, as identified in subsection 3.3(f) of this section, shall establish enduring capabilities to perform complementary voluntary classified testing in appropriate areas of expertise.  The directives set forth in this subsection are consistent with broader taskings on AI safety in section 4 of Executive Order 14110, and provide additional clarity on agencies’ respective roles and responsibilities.

The focus on privacy and human rights here is welcome, but it also serves as a reminder that the lack of a comprehensive federal privacy law continues to undermine broader U.S. efforts at regulating AI and the digital economy. Voluntary safety testing of privacy risks associated with frontier AI models would be more effective if it could build on the baseline protections of a comprehensive federal privacy law. Congress should get to work.

— Geoffrey Gertz | Senior Fellow, Energy, Economics & Security Program

(iii)  Within 180 days of the date of this memorandum, AISI, in consultation with other agencies as appropriate, shall develop or recommend benchmarks or other methods for assessing AI systems’ capabilities and limitations in science, mathematics, code generation, and general reasoning, as well as other categories of activity that AISI deems relevant to assessing general-purpose capabilities likely to have a bearing on national security and public safety.

It's encouraging to see that AISI has been given the mandate to look at more general, cross-cutting capabilities. Relative to humans, a key struggle for current frontier AI models is staying "on track": continuing to do useful work toward a high-level goal over long periods of time, rather than losing coherence or retreading unproductive territory. Progress in overcoming this shortcoming could prove transformative across various dual-use domains, from autonomous hacking to chemical, biological, radiological, and nuclear (CBRN) experimentation to research and development in military technologies more generally.

— Caleb Withers | Research Associate, Technology and National Security Program

(v)    Within 270 days of the date of this memorandum, and at least annually thereafter, AISI shall submit to the President, through the APNSA, and provide to other interagency counterparts as appropriate, at minimum one report that shall include the following:

(A)  A summary of findings from AI safety assessments of frontier AI models that have been conducted by or shared with AISI;

(B)  A summary of whether AISI deemed risk mitigation necessary to resolve any issues identified in the assessments, along with conclusions regarding any mitigations’ efficacy; and

(C)  A summary of the adequacy of the science-based tools and methods used to inform such assessments.

     (f)  Consistent with these goals, other agencies specified below shall take the following actions, in coordination with Commerce, acting through AISI within NIST, to provide classified sector-specific evaluations of current and near-future AI systems for cyber, nuclear, and radiological risks:

Classified evaluations will be an important complement to AISI's unclassified efforts, because they are necessary to understand the full extent of risks from frontier AI models in the hands of U.S. adversaries. The capabilities of frontier AI models can often be significantly enhanced by drawing on high-quality, task-specific data—much of which resides within governments for national security-relevant domains. More broadly, when it comes to determining if frontier AI systems pose new, meaningful threats, unclassified assessments will have obvious limitations for risks such as nuclear proliferation.

With this said, the memorandum rightfully acknowledges a need for caution in these sensitive domains: these efforts will require care to keep the classified information involved secure, and to prevent other countries from misconstruing evaluative efforts as offensive ones.

— Caleb Withers | Research Associate, Technology and National Security Program

(i)   DOD, Commerce, DOE, DHS, ODNI, NSF, NSA, and the National Geospatial-Intelligence Agency (NGA) shall, as appropriate and consistent with applicable law, prioritize research on AI safety and trustworthiness.  As appropriate and consistent with existing authorities, they shall pursue partnerships as appropriate with leading public sector, industry, civil society, academic, and other institutions with expertise in these domains, with the objective of accelerating technical and socio-technical progress in AI safety and trustworthiness.  This work may include research on interpretability, formal methods, privacy enhancing technologies, techniques to address risks to civil liberties and human rights, human-AI interaction, and/or the socio-technical effects of detecting and labeling synthetic and authentic content (for example, to address the malicious use of AI to generate misleading videos or images, including those of a strategically damaging or non-consensual intimate nature, of political or public figures).

The sociotechnical aspects of AI safety and trustworthiness often get short shrift in favor of technical issues, but the history of automation suggests that these issues are a major source of accidents. Emphasizing this point was a major theme of our report, Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security.

— Bill Drexel | Fellow, Technology and National Security Program

(ii)  Within 120 days of the date of this memorandum, the Department of State, DOD, DOJ, DOE, DHS, and IC elements shall each, in consultation with the Office of Management and Budget (OMB), identify education and training opportunities to increase the AI competencies of their respective workforces, via initiatives which may include training and skills-based hiring.

     (d)  To accelerate the use of AI in service of its national security mission, the United States Government needs coordinated and effective acquisition and procurement systems.  This will require an enhanced capacity to assess, define, and articulate AI-related requirements for national security purposes, as well as improved accessibility for AI companies that lack significant prior experience working with the United States Government.

The need to streamline government acquisition and procurement systems applies to multiple emerging technology fields, many of which are led by nontraditional vendors. Insights from the Department of Defense–Office of the Director of National Intelligence–Office of Management and Budget (DOD-ODNI-OMB) working group called upon here might therefore be more broadly applicable to support increased collaboration between the government and commercial technology companies of all sizes and across strategic technology fields.

— Constanza M. Vidal Bustamante | Fellow, Technology and National Security Program

(i)    DOD and the IC shall, in consultation with DOJ as appropriate, review their respective legal, policy, civil liberties, privacy, and compliance frameworks, including international legal obligations, and, as appropriate and consistent with applicable law, seek to develop or revise policies and procedures to enable the effective and responsible use of AI, accounting for the following:

(A)  Issues raised by the acquisition, use, retention, dissemination, and disposal of models trained on datasets that include personal information traceable to specific United States persons, publicly available information, commercially available information, and intellectual property, consistent with section 9 of Executive Order 14110;

(B)  Guidance that shall be developed by DOJ, in consultation with DOD and ODNI, regarding constitutional considerations raised by the IC’s acquisition and use of AI;

(C)  Challenges associated with classification and compartmentalization;

(D)  Algorithmic bias, inconsistent performance, inaccurate outputs, and other known AI failure modes;

(E)  Threats to analytic integrity when employing AI tools;

(F)  Risks posed by a lack of safeguards that protect human rights, civil rights, civil liberties, privacy, and other democratic values, as addressed in further detail in subsection 4.2 of this section;

(G)  Barriers to sharing AI models and related insights with allies and partners; and

(H)  Potential inconsistencies between AI use and the implementation of international legal obligations and commitments.

In April, the U.S. AI Safety Institute and UK AI Safety Institute signed a joint memorandum of understanding agreeing to work together to develop testing protocols for AI models and complete at least one joint testing exercise.

In many ways, this kind of joint collaboration is only the tip of the spear for how the United States and its allies should be combining their efforts. There remain major regulatory hurdles that prevent the United States from sharing some AI-enabled systems with allies and partners directly. As a starting point for accomplishing this provision of the AI National Security Memorandum, American officials could begin by further exempting some of the United States' European allies from existing regulations that prevent technology sharing. Furthermore, engaging in technology-sharing activities inside the NATO alliance could act as an efficient way for the United States to double its efforts without spreading itself too thin.

— Noah Greene | Research Assistant, AI Safety and Stability Project

     (h)  The United States’ network of allies and partners confers significant advantages over competitors.  Consistent with the 2022 National Security Strategy or any successor strategies, the United States Government must invest in and proactively enable the co-development and co-deployment of AI capabilities with select allies and partners.

Investment in and coproduction of emerging technology inside the NATO alliance have become popular areas of discussion in recent years. Yet the brass tacks of defense industry collaboration between the United States and individual European states remain fairly fractured. Codevelopment and codeployment can have the added benefit of enhancing interoperable capabilities when desired, while also distributing the cost burden. However, one of the major drawbacks is the drawn-out timeline for collaborating on and deploying new systems when one country is involved, let alone two or more. A potential solution is focusing on the codevelopment of cheaper, less exquisite systems inside the transatlantic alliance. Regardless, there is an inherent level of risk that the U.S. government is going to have to live with if it plans to move as fast as it says it will.

— Noah Greene | Research Assistant, AI Safety and Stability Project

(i)  On an ongoing basis, DOD and ODNI shall issue or revise relevant guidance to improve consolidation and interoperability across AI functions on NSS.  This guidance shall seek to ensure that the United States Government can coordinate and share AI-related resources effectively, as appropriate and consistent with applicable law.

Promoting interoperability will of course improve collaboration, information sharing, and resource sharing across the interagency as AI adoption increases. However, the government will need to balance standards for interoperability against the risk of entrenching those standards and locking federal procurement into particular AI models and developers. This matters for ensuring not only competitiveness within the private AI ecosystem, but also the government's ability to adopt improved capabilities as they arise.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

(i)  Heads of covered agencies shall, consistent with their authorities, monitor, assess, and mitigate risks directly tied to their agency’s development and use of AI.  Such risks may result from reliance on AI outputs to inform, influence, decide, or execute agency decisions or actions, when used in a defense, intelligence, or law enforcement context, and may impact human rights, civil rights, civil liberties, privacy, safety, national security, and democratic values.  These risks from the use of AI include the following:

(A)  Risks to physical safety:  AI use may pose unintended risks to human life or property.

(B)  Privacy harms:  AI design, development, and operation may result in harm, embarrassment, unfairness, and prejudice to individuals.

(C)  Discrimination and bias:  AI use may lead to unlawful discrimination and harmful bias, resulting in, for instance, inappropriate surveillance and profiling, among other harms.

The national security dialogue on AI discrimination and bias must extend beyond U.S. borders. As Chinese-style smart cities powered by Huawei and ZTE expand across the Global South, the United States needs to spotlight the risks tied to these AI-driven urban technologies. While they offer modern conveniences, these systems often come with heavy surveillance and data privacy concerns that can reinforce authoritarian practices and infringe on individual freedoms. By spearheading discussions on ethical AI and democratic values, the United States can guide emerging markets in adopting technology that boosts efficiency while ensuring transparency and accountability in implementation.

— Ruby Scanlon | Research Assistant, Technology and National Security Program

(i)  Within 120 days of the date of this memorandum, the Department of State, in coordination with DOD, Commerce, DHS, the United States Mission to the United Nations (USUN), and the United States Agency for International Development (USAID), shall produce a strategy for the advancement of international AI governance norms in line with safe, secure, and trustworthy AI, and democratic values, including human rights, civil rights, civil liberties, and privacy.  This strategy shall cover bilateral and multilateral engagement and relations with allies and partners.  It shall also include guidance on engaging with competitors, and it shall outline an approach to working in international institutions such as the United Nations and the Group of 7 (G7), as well as technical organizations.  The strategy shall:

The explicit inclusion of guidance on engaging with competitors tracks well with one of the trickier questions facing U.S. diplomacy on AI: namely, what to do about China. AI has been an area of rare dialogue between the United States and China, but tangible actions from those discussions have so far not materialized. This will be a space to continue to watch.

— Bill Drexel | Fellow, Technology and National Security Program

(A)  Develop and promote internationally shared definitions, norms, expectations, and standards, consistent with United States policy and existing efforts, which will promote safe, secure, and trustworthy AI development and use around the world.  These norms shall be as consistent as possible with United States domestic AI governance (including Executive Order 14110 and OMB Memorandum M-24-10), the International Code of Conduct for Organizations Developing Advanced AI Systems released by the G7 in October 2023, the Organization for Economic Cooperation and Development Principles on AI, United Nations General Assembly Resolution A/78/L.49, and other United States-supported relevant international frameworks (such as the Political Declaration on Responsible Military Use of AI and Autonomy) and instruments.  By discouraging misuse and encouraging appropriate safeguards, these norms and standards shall aim to reduce the likelihood of AI causing harm or having adverse impacts on human rights, democracy, or the rule of law.

The United States isn't working in a vacuum: if it wants to help build effective norms that other countries follow, this focus on synergizing with existing initiatives that already have buy-in, and on maintaining consistency across its own documents, is essential.

— Michael Depp | Research Associate, AI Safety and Stability Project

Authors

  • Paul Scharre

    Executive Vice President and Director of Studies

  • Vivek Chilukuri

    Senior Fellow and Director, Technology and National Security Program

  • Geoffrey Gertz

    Senior Fellow, Energy, Economics & Security Program

  • Bill Drexel

    Fellow, Technology and National Security Program

  • Janet Egan

    Senior Fellow, Technology and National Security Program

  • Constanza M. Vidal Bustamante

    Fellow, Technology and National Security Program

  • Sam Howell

    Associate Fellow, Technology and National Security Program

  • Michael Depp

    Research Associate, AI Safety and Stability Project

  • Caleb Withers

    Research Associate, Technology and National Security Program

  • Noah Greene

    Research Assistant, AI Safety and Stability Project

  • Ruby Scanlon

    Research Assistant, Technology and National Security Program

  • Morgan Peirce

    Research Assistant, Technology and National Security Program