November 02, 2023

NOTEWORTHY: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

CNAS experts unpack the recently released executive order from the White House that establishes new guidelines and goals for the responsible use of artificial intelligence in the United States. The EO aims to protect the safety and privacy of Americans while maintaining the country's competitive advantage in the global race for AI leadership.

The following are selected excerpts. Read the full executive order here.


By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. Purpose. Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.

My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.

In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change. They are the reasons we will succeed again in this moment. We are more than capable of harnessing AI for justice, security, and opportunity for all.

Sec. 2. Policy and Principles. It is the policy of my Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities. When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:

There is virtue in a principle-based regulatory approach to a technology as fast-moving as AI. Regulating around specific technical standards and thresholds risks becoming outdated as technology evolves. The downside, of course, is ambiguity. The majority of these principles also emphasize security, safety, rights, privacy, and responsibility. This is the Administration acknowledging that long-term AI development requires public trust at a moment of general techno-pessimism. At the same time, the Administration appears to be signaling abroad that respect for democratic values is foundational to America's model for AI, offering a contrast to China's model of mass surveillance and social control.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

(b) Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges. This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators. Across the Federal Government, my Administration will support programs to provide Americans the skills they need for the age of AI and attract the world’s AI talent to our shores — not just to study, but to stay — so that the companies and technologies of the future are made in America. The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation. Doing so requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and it requires supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs.

While much of the EO is focused on the need to tackle AI risks, the Administration still tries to strike a careful balance between promoting innovation and protecting individuals and society. With the emphasis on attracting AI talent to government and American universities, it clearly recognizes the need for experts who can support AI regulation and innovation across a host of distinct sectors.

— Josh Wallin | Fellow, Defense Program

The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

Highly capable, general-purpose AI models such as GPT-4 have drawn increased attention from policymakers. The Executive Order refers to these as "dual-use foundation models" and defines the term.

Foundation models can be expensive to train, using thousands of advanced chips and costing tens of millions of dollars. Once they have been trained, however, they can often be easily modified using minimal training costing a few hundred dollars. Companies often train their models to refuse dangerous tasks, but these safeguards can be easily trained away, hence the inclusion of this clause.

— Paul Scharre | Executive Vice President and Director of Studies

(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.

The definition highlights three potential national security risks, all of which are somewhat speculative. Nascent abilities seen in state-of-the-art models suggest that future systems could have these capabilities.

— Paul Scharre | Executive Vice President and Director of Studies

Sec. 4. Ensuring the Safety and Security of AI Technology.

4.1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security. (a) Within 270 days of the date of this order, to help ensure the development of safe, secure, and trustworthy AI systems, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall:

(i) Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:

(A) developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI;

(B) developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and

(C) launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

In practice, disagreement about the appropriate strength and scope of AI regulation often stems from differing views on how soon these systems may pose acute national security threats. In recent years, there have been steady and remarkable improvements in foundation model capabilities. However, it is difficult to predict exactly when new qualitative capabilities will emerge.

This section promotes the development of best practices, information-sharing, and U.S. government capabilities in AI evaluations and red-teaming, especially around the most severe potential risks. If executed well, this will help ensure a responsive, empirical, and proportionate approach to future dual-use capabilities.

— Caleb Withers | Research Assistant, Technology and National Security Program

(i) Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:

(A) any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;

(B) the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and

(C) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security. Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and

(ii) Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

The EO includes a notification requirement for companies training dual-use foundation models: companies must report the results of their red-team tests of the model to the government. Left unaddressed in the EO is what would happen if the government judged that a company's safety measures were inadequate. At present, the EO does not establish a requirement for companies to get regulatory approval before deployment.

— Paul Scharre | Executive Vice President and Director of Studies

(b) The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section. Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

One criticism of setting technical thresholds for evaluating models is that the field of AI is moving so quickly that any computation-based threshold for models would have to change over time. The EO anticipates this and tasks the Secretary of Commerce with updating these technical thresholds as needed.

— Paul Scharre | Executive Vice President and Director of Studies

(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations,

"Integer or floating-point operations" is a measure of the amount of computation used to train a model. Administration officials have said that this threshold of 10^26 operations was set with the aim of capturing future systems but not existing ones, which are believed to fall under this threshold. Some AI labs have not shared the technical details of their models publicly, but leaked information suggests GPT-4 was trained on around 2x10^25 operations, or roughly 5X less than the EO threshold. The amount of computation used to train large models has been doubling roughly every 10 months, so that threshold could be met soon.

— Paul Scharre | Executive Vice President and Director of Studies
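
For readers who want to check this arithmetic, a minimal sketch in Python follows. The GPT-4 estimate and the 10-month doubling time are the figures cited in the comment above, not official numbers.

```python
import math

# Back-of-the-envelope sketch of the arithmetic above. The GPT-4 estimate and the
# doubling time are the figures cited in the commentary, not official numbers.
eo_threshold_ops = 1e26      # EO reporting threshold (training operations)
gpt4_estimate_ops = 2e25     # reported estimate of GPT-4's training compute
doubling_months = 10         # assumed doubling time for frontier training compute

ratio = eo_threshold_ops / gpt4_estimate_ops
months_to_cross = doubling_months * math.log2(ratio)

print(f"GPT-4 estimate sits ~{ratio:.0f}x below the threshold")
print(f"At that trend, frontier runs would cross it in ~{months_to_cross:.0f} months")
```

Under those assumptions, frontier training runs would cross the 10^26 threshold in roughly two years.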

or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

This definition attempts to capture more "narrow" (application-specific) models in the biological domain, rather than the general-purpose models targeted by the prior definition. This makes a lot of sense: narrow bio models are a pretty clear way for someone to cause a lot of harm with AI in the near-term. This often-cited paper from last year shows one way in which this can be done: simply take an AI system trained to find therapeutic drugs, and "run it in reverse."

— Tim Fist | Fellow, AI Safety and Stability

(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

This definition captures the large AI and high-tech dedicated computing facilities that will be useful for training future generations of powerful foundation models. This computing capacity corresponds to around 50,000 state-of-the-art AI chips today: enough chips to train a model with the computation requirements from the definition above within ~40 days, assuming the chips were sitting at 30% utilization (typical for this kind of workload). This is around 3 times more computing capacity than what was likely used to train today's most powerful foundation model, GPT-4. Note that it's still possible to train a cutting-edge foundation model using a data center with computing capacity under this threshold; it'd just take longer.

— Tim Fist | Fellow, AI Safety and Stability
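
A rough sanity check of those figures, assuming 30 percent utilization and a notional 2x10^15 operations-per-second chip (the per-chip figure is an illustrative assumption, not part of the EO):

```python
# Illustrative arithmetic for the cluster threshold. Only the 1e20 and 1e26 figures
# come from the EO; utilization and per-chip throughput are assumptions.
cluster_peak_ops_per_sec = 1e20   # Sec. 4.2(b)(ii) cluster threshold
utilization = 0.30                # assumed effective utilization during training
training_run_ops = 1e26           # Sec. 4.2(b)(i) training-run threshold

days = training_run_ops / (cluster_peak_ops_per_sec * utilization) / 86_400
print(f"~{days:.0f} days to complete a 1e26-operation run on a threshold cluster")

chip_peak_ops_per_sec = 2e15      # notional state-of-the-art AI chip
chips = cluster_peak_ops_per_sec / chip_peak_ops_per_sec
print(f"~{chips:,.0f} such chips to reach the 1e20 ops/sec cluster threshold")
```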

(c) Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:

(i) Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”). Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.

U.S. firms currently have at least 70% market share in IaaS ("infrastructure as a service", commonly known as "cloud computing"). This means that many foreign AI developers use cloud services offered by U.S. firms, which have data centers all over the world. These measures aim to gain visibility into whether these services are being used by foreign actors to train dual-use foundation models.

— Tim Fist | Fellow, AI Safety and Stability

(iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.

Note the "and" here: cloud reporting requirements only kick in when the model being trained is big enough, and only then if the data center being used to train the model is above the same computing capacity threshold used earlier. This basically means any foreign actor trying to avoid this reporting requirement has the option of training an equally powerful model, but just waiting longer to train it.

— Tim Fist | Fellow, AI Safety and Stability
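
The conjunction can be made explicit with a small illustrative sketch; the function and variable names are hypothetical, and only the interim thresholds come from the EO.

```python
# Interim trigger from Sec. 4.2(c)(iii): BOTH conditions must hold (a logical AND).
# The function and variable names here are illustrative, not taken from the EO.
def cloud_reporting_triggered(training_ops: float, cluster_peak_ops_per_sec: float) -> bool:
    big_model = training_ops > 1e26                 # size of the training run
    big_cluster = cluster_peak_ops_per_sec > 1e20   # capacity of the cluster used
    return big_model and big_cluster

# The same large run on a sub-threshold cluster (trained over a longer period)
# does not meet the interim criteria:
print(cloud_reporting_triggered(2e26, 5e19))   # False
print(cloud_reporting_triggered(2e26, 2e20))   # True
```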

4.3. Managing AI in Critical Infrastructure and in Cybersecurity.

(a) To ensure the protection of critical infrastructure, the following actions shall be taken:

(i) Within 90 days of the date of this order, and at least annually thereafter, the head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities. Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments.

This, combined with the other methods listed elsewhere aimed at preventing dangerous model proliferation, is a good template for how to address specific AI risks. There should be a focus both on the AI itself and on preparing other mitigation methods against these future threats.

— Michael Depp | Research Associate, AI Safety and Stability Project

4.4. Reducing Risks at the Intersection of AI and CBRN Threats. (a) To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats—with a particular focus on biological weapons—the following actions shall be taken:

Biological security has been a major focus of concern for industry and government leaders as perhaps the first area where AI could dramatically alter the risks of a major catastrophe. In a Senate Judiciary Committee hearing in July 2023, Dario Amodei, head of the leading AI lab Anthropic, suggested that “a straightforward extrapolation of today’s [AI] systems” would mean that within two to three years, AI tools will be able to dramatically accelerate bioweapons efforts. Whether that warning proves accurate remains to be seen, but this executive order suggests that the government is at least taking the threat very seriously.

— Bill Drexel | Associate Fellow, Technology and National Security Program

(ii) Within 120 days of the date of this order, the Secretary of Defense, in consultation with the Assistant to the President for National Security Affairs and the Director of OSTP, shall enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct — and submit to the Secretary of Defense, the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, the Director of OSTP, and the Chair of the Chief Data Officer Council — a study that:

(A) assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;

(B) considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies, that the United States Government hosts, generates, funds the creation of, or otherwise owns, for the training of generative AI models, and makes recommendations on how to mitigate the risks related to the use of these data and datasets;

(C) assesses the ways in which AI applied to biology can be used to reduce biosecurity risks, including recommendations on opportunities to coordinate data and high-performance computing resources; and

(D) considers additional concerns and opportunities at the intersection of AI and synthetic biology that the Secretary of Defense deems appropriate.

(b) To reduce the risk of misuse of synthetic nucleic acids, which could be substantially increased by AI’s capabilities in this area, and improve biosecurity measures for the nucleic acid synthesis industry, the following actions shall be taken:

Genomic data is a growing area of concern for the future of the bioeconomy, particularly with China taking an aggressive, industrial approach to harvesting bio data around the world for use in, among other things, People's Liberation Army (PLA) research. Some are calling it a DNA arms race.

— Bill Drexel | Associate Fellow, Technology and National Security Program

(i) Within 180 days of the date of this order, the Director of OSTP, in consultation with the Secretary of State, the Secretary of Defense, the Attorney General, the Secretary of Commerce, the Secretary of Health and Human Services (HHS), the Secretary of Energy, the Secretary of Homeland Security, the Director of National Intelligence, and the heads of other relevant agencies as the Director of OSTP may deem appropriate, shall establish a framework, incorporating, as appropriate, existing United States Government guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives. As part of this framework, the Director of OSTP shall:

(A) establish criteria and mechanisms for ongoing identification of biological sequences that could be used in a manner that would pose a risk to the national security of the United States; and

(B) determine standardized methodologies and tools for conducting and verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in subsection 4.4(b)(i)(A) of this section, and processes for the reporting of concerning activity to enforcement entities.

Getting more synthesis companies to screen orders for hazardous requests has been a longstanding priority of the biosecurity community, independent of recent developments in AI. At present, around four-fifths of such companies have some sort of screening mechanism, leaving open a range of loopholes that bad actors could exploit to make a bioweapon. Some fear that new AI tools will exacerbate this issue, with at least one study showing a large language model (LLM) advising users seeking to create a pandemic virus on which DNA synthesis companies are unlikely to screen orders. This directive is a step in the right direction and will be incentivized by the provision below stating that, where applicable, government funding of life-sciences research will require the use of providers that employ the proposed new screening mechanisms. Still, biosecurity experts would like to see greater adoption of such measures going forward, including internationally.

— Bill Drexel | Associate Fellow, Technology and National Security Program

4.6. Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights.

When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:

Releasing model weights publicly, as some companies have done, is a boon to start-ups and academics because it levels the playing field by eliminating the need to spend tens of millions of dollars to train a large foundation model. It also levels the playing field with China, effectively negating U.S. chip export controls. Chinese AI labs don't need advanced chips to train their own foundation models if they can simply download trained models from the internet. Export controls on the most advanced dual-use foundation models to prevent companies from publicly releasing the model weights would reduce the risks of misuse and help preserve America's competitive advantage. Restricting the release of model weights is a contentious idea, but one the Administration is signaling it will need to grapple with.

— Paul Scharre | Executive Vice President and Director of Studies

(a) solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:

(i) risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;

(ii) benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and

(iii) potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and

(b) based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.

To date, it has proven relatively trivial to undo built-in safeguards when a foundation model’s weights (the raw numerical parameters learned during training) are released for download (rather than, for example, through an online interface or API). While releasing weights can also have advantages, such as allowing models to be further researched, tuned, and run on users’ own hardware, developers of downloadable models have not substantively engaged with the question of when dual-use capabilities might be of sufficient concern that harms exceed the relevant benefits. It’s promising to see that key relevant considerations have been articulated here. Hopefully, the administration's solicitation of input will go some way toward building consensus on this issue, and otherwise surface which disagreements have stakes high enough that government intervention may be needed.

— Caleb Withers | Research Assistant, Technology and National Security Program

4.8. Directing the Development of a National Security Memorandum. To develop a coordinated executive branch approach to managing AI’s security risks, the Assistant to the President for National Security Affairs and the Assistant to the President and Deputy Chief of Staff for Policy shall oversee an interagency process with the purpose of, within 270 days of the date of this order, developing and submitting a proposed National Security Memorandum on AI to the President. The memorandum shall address the governance of AI used as a component of a national security system or for military and intelligence purposes. The memorandum shall take into account current efforts to govern the development and use of AI for national security systems. The memorandum shall outline actions for the Department of Defense, the Department of State, other relevant agencies, and the Intelligence Community to address the national security risks and potential benefits posed by AI.

This EO conspicuously leaves out much discussion of military and intelligence uses of AI, opting instead to develop these in the forthcoming National Security Memorandum (or through vehicles like the updated Political Declaration on Responsible Military Use of AI and Autonomy). It remains to be seen whether these will cause significant policy shifts for the Department of Defense, which has been grappling with responsible AI development for several years and has established guidance such as the Responsible AI Strategy & Implementation Pathway.

— Josh Wallin | Fellow, Defense Program

Sec. 5. Promoting Innovation and Competition.

The executive order aims to keep the U.S. at the forefront of AI innovation, while doing so responsibly and safely. Attracting global talent is a key opportunity to do so, helping us more fully realize our AI aspirations while depriving our strategic competitors of that talent. Despite the mention of “attracting” talent, most leading AI researchers already want to work in the U.S.; we just have to let them. It’s heartening to see the administration working creatively and comprehensively toward this goal within the bounds of current legislation; the ball is now squarely in Congress’s court to further these efforts and cement U.S. leadership in the global competition for AI talent.

— Caleb Withers | Research Assistant, Technology and National Security Program

5.1. Attracting AI Talent to the United States. (a) Within 90 days of the date of this order, to attract and retain talent in AI and other critical and emerging technologies in the United States economy, the Secretary of State and the Secretary of Homeland Security shall take appropriate steps to:

The language used here—“AI and other critical and emerging technologies”—drives home an important point: the United States' shortage of STEM talent affects more than just the AI industry. STEM talent gaps impact U.S. competitiveness in every technology area, from AI to semiconductors, quantum, critical minerals, and more.

The semiconductor industry suffers gaps across every major talent group required to operate a fab. The United States is expected to face a shortfall of hundreds of thousands of workers by 2030, and major U.S. semiconductor companies already struggle to find enough operators and technicians to keep foundries running.

In the quantum sector, active job postings outnumber graduates ready to fill those positions by three to one, and more than half of U.S.-based quantum computing companies are actively hiring. Filling these vacancies will be difficult—most existing quantum-relevant graduates reside outside the United States, and developing new expertise takes up to 10 years of postsecondary education.

The critical minerals industry is similarly ill-prepared to meet the demands of U.S.-China technology competition. In 2022, 86 percent of surveyed mining executives reported recruiting and retention challenges, and 71 percent indicated that talent shortages prevented them from delivering on strategic targets. Further, the pipeline of candidates to replace retiring workers is meager—the United States awarded just 327 mining and mineral engineering degrees in 2020.

In short, the United States lacks a robust STEM-capable workforce, and the consequences extend beyond AI. Targeted, high-skilled immigration reform is the fastest remedy to this problem. The EO on AI is a strong first step toward attracting and retaining the best international talent, but the effort cannot stop there. Congress should build on the EO with a series of measures to ease barriers to entry for high-skilled workers, such as raising the H-1B visa cap for STEM experts.

— Sam Howell | Research Associate, Technology and National Security Program

5.2. Promoting Innovation. (a) To develop and strengthen public-private partnerships for advancing innovation, commercialization, and risk-mitigation methods for AI, and to help promote safe, responsible, fair, privacy-protecting, and trustworthy AI systems, the Director of NSF shall take the following steps:

(i) Within 90 days of the date of this order, in coordination with the heads of agencies that the Director of NSF deems appropriate, launch a pilot program implementing the National AI Research Resource (NAIRR), consistent with past recommendations of the NAIRR Task Force. The program shall pursue the infrastructure, governance mechanisms, and user interfaces to pilot an initial integration of distributed computational, data, model, and training resources to be made available to the research community in support of AI-related research and development. The Director of NSF shall identify Federal and private sector computational, data, software, and training resources appropriate for inclusion in the NAIRR pilot program. To assist with such work, within 45 days of the date of this order, the heads of agencies whom the Director of NSF identifies for coordination pursuant to this subsection shall each submit to the Director of NSF a report identifying the agency resources that could be developed and integrated into such a pilot program. These reports shall include a description of such resources, including their current status and availability; their format, structure, or technical specifications; associated agency expertise that will be provided; and the benefits and risks associated with their inclusion in the NAIRR pilot program. The heads of independent regulatory agencies are encouraged to take similar steps, as they deem appropriate.

This provision flies under the radar, but it could lay the groundwork for dramatically lowering barriers to access for compute, which is both expensive and essential for training advanced AI models. The idea here is to help researchers, nonprofits, and small businesses benefit from AI even when they don't have the resources to purchase large-scale advanced computing outright. If the pilot is successful, it could provide other countries a model for how to break down the high walls of compute to allow more people to enjoy the benefits of advanced AI.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

(iii) within 270 days of the date of this order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes later, consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.

This is an important point for AI training writ large that is flying under the radar. With so many cases working their way through the courts and little guidance released by the Copyright Office, changes to this dynamic could have large ramifications for how future LLMs are trained.

— Michael Depp | Research Associate, AI Safety and Stability Project

(i) issue a public report describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans;

(ii) develop tools that facilitate building foundation models useful for basic and applied science, including models that streamline permitting and environmental reviews while improving environmental and social outcomes;

(iii) collaborate, as appropriate, with private sector organizations and members of academia to support development of AI tools to mitigate climate change risks;

(iv) take steps to expand partnerships with industry, academia, other agencies, and international allies and partners to utilize the Department of Energy’s computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security, including partnerships that increase community preparedness for climate-related risks, enable clean-energy deployment (including addressing delays in permitting reviews), and enhance grid reliability and resilience; and

(v) establish an office to coordinate development of AI and other critical and emerging technologies across Department of Energy programs and the 17 National Laboratories.

While many of the EO's references to the intersection of AI and bio err on the side of bolstering biosecurity, this section provides for a more positive application of AI to amplify the benefits of emerging biotechnologies toward climate action. The AI/bio nexus holds grave potential for misuse—but we must also recognize and invest in its potential for improved climate conditions as a fundamental facet of national security.

— Hannah Kelley | Research Associate, Technology and National Security

10.2. Increasing AI Talent in Government. (a) Within 45 days of the date of this order, to plan a national surge in AI talent in the Federal Government, the Director of OSTP and the Director of OMB, in consultation with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Assistant to the President and Domestic Policy Advisor, and the Assistant to the President and Director of the Gender Policy Council, shall identify priority mission areas for increased Federal Government AI talent, the types of talent that are highest priority to recruit and develop to ensure adequate implementation of this order and use of relevant enforcement and regulatory authorities to address AI risks, and accelerated hiring pathways.

It’s heartening to see the administration willing to move with gusto and agility to scale up AI talent within government. This will be crucial for ensuring that upcoming efforts appropriately grapple with the technical realities of this rapidly evolving domain, and can go toe-to-toe with the biggest labs where necessary.

— Caleb Withers | Research Assistant, Technology and National Security Program

(d) To meet the critical hiring need for qualified personnel to execute the initiatives in this order, and to improve Federal hiring practices for AI talent, the Director of OPM, in consultation with the Director of OMB, shall:

(i) within 60 days of the date of this order, conduct an evidence-based review on the need for hiring and workplace flexibility, including Federal Government-wide direct-hire authority for AI and related data-science and technical roles, and, where the Director of OPM finds such authority is appropriate, grant it; this review shall include the following job series at all General Schedule (GS) levels: IT Specialist (2210), Computer Scientist (1550), Computer Engineer (0854), and Program Analyst (0343) focused on AI, and any subsequently developed job series derived from these job series;

(ii) within 60 days of the date of this order, consider authorizing the use of excepted service appointments under 5 C.F.R. 213.3102(i)(3) to address the need for hiring additional staff to implement directives of this order;

(iii) within 90 days of the date of this order, coordinate a pooled-hiring action informed by subject-matter experts and using skills-based assessments to support the recruitment of AI talent across agencies;

(iv) within 120 days of the date of this order, as appropriate and permitted by law, issue guidance for agency application of existing pay flexibilities or incentive pay programs for AI, AI-enabling, and other key technical positions to facilitate appropriate use of current pay incentives;

Increasing the pay of skilled workers in government is certainly necessary to attract talent, but we should be clear that additional incentives and pathways will be needed, since there is very little chance that any government can compete with private-sector salaries. This is a good component, but it is highly dependent on the other subcomponents working out as well.

— Michael Depp | Research Associate, AI Safety and Stability Project

Sec. 11. Strengthening American Leadership Abroad. (a) To strengthen United States leadership of global efforts to unlock AI’s potential and meet its challenges, the Secretary of State, in coordination with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Director of OSTP, and the heads of other relevant agencies as appropriate, shall:

(i) lead efforts outside of military and intelligence areas to expand engagements with international allies and partners in relevant bilateral, multilateral, and multi-stakeholder fora to advance those allies’ and partners’ understanding of existing and planned AI-related guidance and policies of the United States, as well as to enhance international collaboration; and

Section 11 very specifically leaves out military and intelligence areas for international engagement. With the update to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which now has over 30 other state signatories, it appears that the Administration will seek engagement via this process, rather than an effort born of the EO.

— Josh Wallin | Fellow, Defense Program

(ii) lead efforts to establish a strong international framework for managing the risks and harnessing the benefits of AI, including by encouraging international allies and partners to support voluntary commitments similar to those that United States companies have made in pursuit of these objectives and coordinating the activities directed by subsections (b), (c), (d), and (e) of this section, and to develop common regulatory and other accountability principles for foreign nations, including to manage the risk that AI systems pose.

This is proving to be a thorny subject. What will this framework look like? Where will it be housed? What will it cover? Who will be involved? It is good to see the United States commit to leading these efforts even without specifying what the framework will look like. This is going to be a long effort and will likely change shape many times in the next few years. An early commitment to the ideal, one with staying power, will be necessary for success.

— Michael Depp | Research Associate, AI Safety and Stability Project

(b) To advance responsible global technical standards for AI development and use outside of military and intelligence areas, the Secretary of Commerce, in coordination with the Secretary of State and the heads of other relevant agencies as appropriate, shall lead preparations for a coordinated effort with key international allies and partners and with standards development organizations, to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.

In particular, the Secretary of Commerce shall:

(i) within 270 days of the date of this order, establish a plan for global engagement on promoting and developing AI standards, with lines of effort that may include:

(A) AI nomenclature and terminology;

(B) best practices regarding data capture, processing, protection, privacy, confidentiality, handling, and analysis;

(C) trustworthiness, verification, and assurance of AI systems; and

(D) AI risk management;

Relying on technical standard-setting as an arena for international cooperation is a time-honored tradition, so it is no surprise this is included. Defining terms and best practices can prove subtly political, however, and I think global engagement on these issues will be harder than it appears.

— Michael Depp | Research Associate, AI Safety and Stability Project

(i) The Secretary of State and the Administrator of the United States Agency for International Development, in coordination with the Secretary of Commerce, acting through the director of NIST, shall publish an AI in Global Development Playbook that incorporates the AI Risk Management Framework’s principles, guidelines, and best practices into the social, technical, economic, governance, human rights, and security conditions of contexts beyond United States borders. As part of this work, the Secretary of State and the Administrator of the United States Agency for International Development shall draw on lessons learned from programmatic uses of AI in global development.

This is a step in the right direction, but there is a long way to go in helping other countries capitalize on the benefits that AI can provide. China is already trying to roll out AI's benefits through its Belt and Road Initiative, which could have considerable downstream effects on future norms for the technology. The U.S. should seek to be far more ambitious in diffusing the benefits of AI to the Global South in a safe, responsible manner.

— Bill Drexel | Associate Fellow, Technology and National Security Program

Sec. 12. Implementation. (a) There is established, within the Executive Office of the President, the White House Artificial Intelligence Council (White House AI Council). The function of the White House AI Council is to coordinate the activities of agencies across the Federal Government to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies, including policies set forth in this order.

To make progress in government, it helps a lot to have top cover from senior leadership. Although the executive order gives AI-related work a presidential imprimatur—hopefully spurring the bureaucracy to act—this will inevitably fade with time. Establishing this high-level AI Council at the White House, composed of secretary- and director-level members across the interagency, should renew that senior-level top cover and help keep AI policy near the top of the agenda.

— Vivek Chilukuri | Senior Fellow and Director, Technology and National Security Program

Authors

  • Paul Scharre

    Executive Vice President and Director of Studies


  • Vivek Chilukuri

    Senior Fellow and Director, Technology and National Security Program


  • Tim Fist

    Senior Adjunct Fellow, Technology and National Security Program


  • Bill Drexel

    Fellow, Technology and National Security Program


  • Josh Wallin

    Fellow, Defense Program


  • Hannah Kelley

    Former Research Associate, Technology and National Security Program


  • Sam Howell

    Associate Fellow, Technology and National Security Program


  • Michael Depp

    Research Associate, AI Safety and Stability Project


  • Caleb Withers

    Research Associate, Technology and National Security Program
