July 24, 2023

NOTEWORTHY: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI

In this edition of Noteworthy, researchers from the AI Safety and Stability team comment on the announcement from the White House on securing voluntary commitments on safety, security, and trust from leading AI companies.

Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI.


The voluntary commitments that leading AI companies have adopted and the White House has secured are a valuable first step. But that step must lead towards future binding regulations.

— Paul Scharre | Executive Vice President and Director of Studies, CNAS

The Biden-Harris Administration will continue to take decisive action by developing an Executive Order and pursuing bipartisan legislation to keep Americans safe.

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety.


This is a true statement. From the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, to the AI Bill of Rights, and now these commitments, there has been a flurry of activity from the administration on AI safety and risk. Unfortunately, it has mostly taken the form of nonbinding statements or voluntary frameworks. Moving beyond this will require congressional action, as well as international agreement that goes beyond principles.

— Michael Depp | Research Associate, AI Safety and Stability Project

As part of this commitment, President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.


Although only seven companies appear on this list, they capture most of the actors with the expertise and supercomputing infrastructure necessary to build systems at the current frontier of capabilities.

This means that securing voluntary commitments from these actors is a meaningful development.

— Tim Fist | Fellow, Technology and National Security Program

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.


These voluntary commitments should be applauded. To make them more effective, I'd also like to see them paired with mechanisms through which companies could prove that they've adequately satisfied particular commitments. Especially in the security domain, there's a big difference between doing something on paper, and doing something to a sufficient level to adequately mitigate key risks.

— Tim Fist | Fellow, Technology and National Security Program

As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.

There is much more work underway. The Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.


The New York Times suggests that this executive order will focus on new AI-related export controls. What should we expect in this space?

First are export controls on frontier models themselves. If a model can be used to threaten national security, there should probably be guardrails on a developer's ability to widely distribute it. Second is closing the gaps in the current system of controls on semiconductors, including addressing the smuggling of high-end chips, and figuring out what to do about cloud computing. We suggest some solutions in this recent piece.

— Tim Fist | Fellow, Technology and National Security Program

Today, these seven leading AI companies are committing to:

Ensuring Products are Safe Before Introducing Them to the Public

  • The companies commit to internal and external security testing of their AI systems before their release.

In an accompanying document, the White House clarifies that where these commitments mention particular AI systems, they are referring to "generative models that are overall more powerful than the current industry frontier." This is an important category of AI systems to regulate, as argued in a recent white paper on frontier AI regulation, which features two authors from CNAS.

However, we should keep in mind the implications of this scoping: These commitments do not apply to any released AI systems that are less capable than, or as capable as, today's best general-purpose AI system, OpenAI's GPT-4. It also seems they will not apply to narrower-purpose systems that do not display the breadth of capabilities that GPT-4 does. Regulating these narrow systems will also be important, but securing voluntary commitments from the relevant companies would be near impossible, as the range of actors who can produce narrow systems is much larger than for frontier systems.

— Tim Fist | Fellow, Technology and National Security Program

  • This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.

Something to keep in mind here is that all of the risks listed are intersectional and will require very broad expertise to red team effectively. These are not primarily AI problems, but existing issues to which AI is being added, so existing work in those fields will be beneficial here.

— Michael Depp | Research Associate, AI Safety and Stability Project

AI systems are advancing rapidly in domains like biosecurity and cybersecurity, where they could pose safety and security threats.

Google DeepMind’s AlphaFold 2 highlights the potential of large deep learning models to rival human capabilities—predicting protein structures with near-experimental accuracy. But scientific capabilities like this risk being used for harm. Researchers have, for example, "inverted" a generative AI tool for discovering therapeutic molecules to instead find thousands of potential chemical weapons, both known and novel. Researchers have also shown that a GPT-4-powered system could design, plan, and execute complex scientific experiments, including some that involved synthesizing chemical weapons.

Similarly, leading AI labs used the vast amount of computer code available online to train models that are fine-tuned for coding capabilities. These capabilities can help automate the identification of vulnerabilities in code and computer systems.

There is currently no straightforward way to identify the full extent of dangerous capabilities associated with AI systems like these, so it’s great to see that the voluntary commitments emphasize the role of third-party testing and red teaming, as well as information-sharing about best practices.

— Caleb Withers | Research Assistant, Technology and National Security Program

  • The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First

  • The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

You can think of "model weights" as simply a software file, containing a list of billions to trillions of numbers representing connections between artificial neurons in an AI system's "brain." Though their inner workings are inscrutable, model weights contain the capabilities (e.g., the ability to do math and write essays) of the AI system. As the commitments listed here imply, model weights are therefore important to protect if the AI system has dangerous capabilities.

From a purely financial perspective, companies also obviously have an incentive to protect model weights: it takes tens of millions of dollars to train a frontier AI system, and these systems have huge commercial value.

— Tim Fist | Fellow, Technology and National Security Program
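To make the "file of numbers" framing concrete, here is a minimal sketch (using PyTorch, with a tiny stand-in network rather than any real frontier model): a model's weights are simply named arrays of parameters that can be counted, saved to a single file, and reloaded by anyone who obtains that file.

```python
# Minimal sketch: "model weights" are just large arrays of numbers in a file.
# The tiny two-layer network here is a stand-in, not a real frontier model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

state = model.state_dict()                        # mapping of layer names -> weight tensors
n_params = sum(t.numel() for t in state.values())
print(f"{n_params:,} parameters")                 # frontier systems have billions to trillions

torch.save(state, "model_weights.pt")             # the file that cybersecurity safeguards protect
restored = torch.load("model_weights.pt")         # anyone holding the file can rebuild the model
model.load_state_dict(restored)
```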

  • These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.

This is the only reference to open-source AI in the announcement, but this is a thorny problem facing regulators. If open-sourcing of frontier AI systems is permitted, many of the security and safety commitments mentioned here become much less valuable.

Even if an AI developer puts in place safeguards to try to suppress dangerous behavior (e.g., assisting with building a bomb), if that AI system is then open-sourced, someone with access to the system can use a process called "fine-tuning" to re-elicit the behavior.

— Tim Fist | Fellow, Technology and National Security Program
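For readers unfamiliar with fine-tuning, the generic sketch below illustrates the underlying mechanism: anyone who holds a model's weights can keep training them on data of their own choosing. The small open model name and placeholder texts are illustrative assumptions, and the example shows only an ordinary language-modeling update, not any method for altering safeguards.

```python
# Generic fine-tuning sketch with Hugging Face Transformers: whoever has the
# weights can continue updating them on arbitrary data. The model name and
# texts are placeholders chosen purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"                     # small open model used as a stand-in
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = ["example training document 1", "example training document 2"]
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for text in texts:
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss   # standard language-modeling loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("finetuned-model")             # the saved weights now reflect the new data
```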

Given China's history of intellectual property theft and industrial espionage, highlighting the need for robust cybersecurity and insider threat safeguards is essential. Newer frontier labs especially may not fully recognize the threat posed by Chinese efforts to steal models, particularly given the emphasis that the Chinese Communist Party places on AI for its future. But the risks that stem from advanced AI development are far more acute in China, and the prospect of China stealing advanced AI models from open societies is all too real. These companies should anticipate the kinds of elaborate technology-theft attempts that have proven successful in high-security environments before.

— Bill Drexel | Associate Fellow, Technology and National Security Program

This line about model weights being released "only when intended and when security risks are considered" is a gaping hole in the voluntary commitments. Once the model weights are released, other, less responsible actors can easily modify the model to strip away its safeguards. After Meta released its language model Llama 2 last week, it took only one day for an uncensored version of one of the Llama 2 variants to appear online.

— Paul Scharre | Executive Vice President and Director of Studies, CNAS

  • The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released, and a robust reporting mechanism enables them to be found and fixed quickly.

This is generally a great idea. However, depending on how the reporting is done, it can prove problematic. For models behind an API, this will do nothing but good, because the company responsible can make the necessary changes and the fix reaches everyone who uses the model. For open-source models like Llama 2, where individual users are responsible for patching their own copies of the model, reporting done improperly can simply alert bad actors to system vulnerabilities.

— Michael Depp | Research Associate, AI Safety and Stability Project

Earning the Public’s Trust

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.

Watermarking refers to processes where AI outputs are tagged in various ways to mark them as AI-generated. In an accompanying document, the White House indicates that these commitments apply only to "audio or visual content" (not text). This is a missed opportunity to incentivize exploration of text-based watermarking, given that today's most powerful systems are generally text-based. However, while watermarking has an important role to play, it will not be a panacea for dealing with sophisticated actors looking to avoid detection. Watermarking techniques can generally be defeated through deliberate modification of AI-generated content (for example, paraphrasing AI-generated text or making additional edits to AI-generated images).

— Caleb Withers | Research Assistant, Technology and National Security Program
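To illustrate why watermarks can be fragile, here is a deliberately toy sketch (assumed least-significant-bit tagging on a synthetic image, not any scheme the companies have committed to): the mark survives exact copies but is destroyed by light edits, which is the fragility described above. Production approaches are statistical or metadata-based and more robust, but face a version of the same problem.

```python
# Toy watermark sketch: tag the least significant bit of every pixel, then show
# that small edits erase the tag. Purely illustrative; not a production scheme.
import numpy as np

def embed(image: np.ndarray, tag: int = 1) -> np.ndarray:
    """Set the lowest bit of every pixel value to a known tag bit."""
    return (image & ~np.uint8(1)) | np.uint8(tag)

def detect(image: np.ndarray, tag: int = 1) -> bool:
    """Report whether most pixels still carry the tag bit."""
    return bool(np.mean((image & 1) == tag) > 0.9)

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in "AI-generated" image
marked = embed(img)
print(detect(marked))        # True: the watermark survives exact copies

noise = np.random.randint(-2, 3, marked.shape)                 # light, imperceptible edits
edited = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(detect(edited))        # Usually False: even minor modification destroys this naive mark
```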

Watermarking AI-generated media is a vital step, but only one component of a set of technical and policy solutions that are needed to deal with the challenge of AI-generated media that are quickly becoming indistinguishable from reality.

— Paul Scharre | Executive Vice President and Director of Studies, CNAS

  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
  • The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.

As we advance this agenda at home, the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI.


The United States has taken the leading role in coordinating international cooperation on AI. The U.S. is well-positioned to do so, since it is home to the world’s leading AI labs and technology companies. In last week’s announcement, the White House appears to be reaffirming recent calls for large-scale international collaboration on AI through intergovernmental organizations (IGOs). It is seeking to use the G-7 as a collaborative body, in parallel with the United Nations, and will continue bilateral efforts outside of IGOs.

While working through IGOs may be helpful for socializing certain AI issues (given the talk-shop nature of these institutions), it is not helpful for developing significant international standards for AI development, testing, evaluation, and implementation. IGOs are slow-moving bodies by nature; the lack of progress of the UN’s Group of Governmental Experts on Lethal Autonomous Weapons is one example of this. As has been highlighted, these types of institutions will struggle to keep up with the staggering pace of AI advancement. There is also a resource dilemma: the U.S. diplomats working on AI issues at the IGO level may very well be better served by investigating areas for bilateral cooperation, such as U.S.-China or U.S.-U.K. norms development. While the White House is not calling for an IGO responsible for regulating AI, as others have, it is important to keep these policy criticisms in mind, as calls for such an institution may grow in the future.

— Noah Greene | Project Assistant, AI Safety and Stability Project

The United States has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.


These efforts are very important, especially as the United States tries to become a leader in AI regulation and safety. It is conspicuous that, despite all of these conversations, we have yet to see others follow the U.S. lead and issue substantially similar commitments.

— Michael Depp | Research Associate, AI Safety and Stability Project

The United States seeks to ensure that these commitments support and complement Japan’s leadership of the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI. We also are discussing AI with the UN and Member States in various UN fora.

There has been quite a lot of discussion in recent months about multilateral actions to address the emerging risks of advanced AI systems. The big question now is what types of risks should be addressed by which sorts of bodies. Multilateral agencies are often slow, blunt instruments, and countries will differ considerably on what sorts of controls they might find appropriate, onerous, or unacceptable. Some issues will thus inevitably be better addressed at the national level. The divergence in values between the U.S. and China—representing the world's most advanced AI ecosystems—presents an additional challenge to multilateral cooperation. Historically, near misses are often essential to garnering the political will necessary to constrain particularly dangerous technological capabilities.

— Bill Drexel | Associate Fellow, Technology and National Security Program

Today’s announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.

  • Earlier this month, Vice President Harris convened consumer protection, labor, and civil rights leaders to discuss risks related to AI and reaffirm the Biden-Harris Administration’s commitment to protecting the American public from harm and discrimination.
  • Last month, President Biden met with top experts and researchers in San Francisco as part of his commitment to seizing the opportunities and managing the risks posed by AI, building on the President’s ongoing engagement with leading AI experts.
  • In May, the President and Vice President convened the CEOs of four American companies at the forefront of AI innovation—Google, Anthropic, Microsoft, and OpenAI—to underscore their responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. At the companies’ request, the White House hosted a subsequent meeting focused on cybersecurity threats and best practices.
  • The Biden-Harris Administration published a landmark Blueprint for an AI Bill of Rights to safeguard Americans’ rights and safety, and U.S. government agencies have ramped up their efforts to protect Americans from the risks posed by AI, including through preventing algorithmic bias in home valuation and leveraging existing enforcement authorities to protect people from unlawful bias, discrimination, and other harmful outcomes.
  • President Biden signed an Executive Order that directs federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination.
  • Earlier this year, the National Science Foundation announced a $140 million investment to establish seven new National AI Research Institutes, bringing the total to 25 institutions across the country.
  • The Biden-Harris Administration has also released a National AI R&D Strategic Plan to advance responsible AI.
  • The Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.


Authors

  • Paul Scharre

    Executive Vice President and Director of Studies

    Paul Scharre is the executive vice president and director of studies at the Center for a New American Security (CNAS). He is the award-winning author of Four Battlegrounds: Po...

  • Tim Fist

    Senior Adjunct Fellow, Technology and National Security Program

    Tim Fist is a Senior Adjunct Fellow with the Technology and National Security Program at CNAS. His work focuses on the governance of artificial intelligence using compute/comp...

  • Bill Drexel

    Fellow, Technology and National Security Program

    Bill Drexel is a fellow with the Technology and National Security Program at the Center for a New American Security (CNAS). His work focuses on Sino-American competition, arti...

  • Michael Depp

    Research Associate, AI Safety and Stability Project

    Michael Depp is a research associate for the Artificial Intelligence Safety and Stability initiative at the Center for a New American Security (CNAS). His research focuses on ...

  • Caleb Withers

    Research Associate, Technology and National Security Program

    Caleb Withers is a research associate for the Technology and National Security Program at the Center for a New American Security (CNAS), supporting the Center’s initiative on ...

  • Noah Greene

    Research Assistant, AI Safety and Stability Project

    Noah Greene is a research assistant for the Artificial Intelligence Safety and Stability Project at the Center for a New American Security (CNAS). In this role, Greene works w...