May 12, 2023

NOTEWORTHY: New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety

In May 2023, the White House released a fact sheet on responsible artificial intelligence innovation, which lays out intended federal actions to promote AI research and development, as well as safety assessments.


In this CNAS Noteworthy, researchers from the AI Safety and Stability project explore the U.S. government's approach to responsible AI innovation by making in-text annotations to the official statement.

Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety. These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.


This announcement comes amid a flurry of activity across the government on AI risk. Senate Majority Leader Chuck Schumer is working on AI regulation; so is Representative Ted Lieu in the House, among others. The Department of Commerce's National Telecommunications and Information Administration is currently seeking input for policies that can support "the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems."

— Bill Drexel | Associate Fellow, Technology and National Security Program

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks.


This focus on risks reflects the growing fear that today's most powerful AI systems are starting to show potential for serious harm. See, for example, this recent paper (1), in which researchers gave an AI system (in this case, GPT-4) access to the internet and some lab equipment, and it was able to autonomously synthesize chemical compounds. The authors also show how they were able to trick the system into agreeing to synthesize controlled substances, including known chemical warfare agents.

— Tim Fist | Fellow, Technology and National Security Program

President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.


This is obviously a desirable goal! Once an AI system has been released, it becomes much harder to ensure that it doesn't cause harm, which highlights the importance of putting pre-release safeguards in place. Unfortunately, research progress in making AI systems *safe* lags far behind research progress in making them more capable, an issue that government research funding could help address. Right now, powerful AI systems are inscrutable, in the sense that we can't "read their minds" to understand why they're providing us with particular outputs. This makes it difficult to provide any sort of assurance about their behavior, as evidenced by the many successful "jailbreaks" of ChatGPT (2).

— Tim Fist | Fellow, Technology and National Security Program

Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation—Alphabet, Anthropic, Microsoft, and OpenAI—to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.


The four companies chosen here are an interesting reflection of the current state of the AI industry. Together, these companies have produced most of the largest state-of-the-art AI systems over the last two years. The current method used to build these systems is fairly capital-intensive—costing around $100M to build a new system as of early 2023.

Alphabet is the parent company of DeepMind, one of the world's top AI labs. Notably, Alphabet recently announced the merger of its two AI research units (DeepMind and Google Brain) into a single unit, now known as Google DeepMind, motivated by a desire to "significantly accelerate our progress in AI" (3).

— Tim Fist | Fellow, Technology and National Security Program

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety.


It's interesting that the announcement mentions the problem of AI safety separately from potential application areas (such as cybersecurity and biosecurity). "AI safety" is the problem of getting AI systems to reliably pursue the goals we intend without causing harm. Its mention here highlights a general fear about using AI in any domain: it could be extremely difficult to build systems that reliably do what we want if those systems continue to grow in complexity and become as intelligent and situationally aware as their designers. As an analogy, it's straightforward for a chess grandmaster to notice when a novice is making bad moves, but very difficult for a novice to notice bad moves made by a grandmaster.

— Tim Fist | Fellow, Technology and National Security Program

This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.


It appears that the Administration is well aware of the potential threat that some models can pose if they're released, mirroring industry recognition that sharing too much information can present both a competitive risk and a safety risk (4).

— Josh Wallin | Fellow, Defense Program

Today’s announcements include:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes.

    For comparison, overall estimated USG spending on AI topped $3.2 billion in FY22. Funding for these National AI Research Institutes may be a drop in the bucket compared to broader government and private R&D dollars, but the Institutes could serve as a critical coordinating mechanism for the research community (5).

    — Josh Wallin | Fellow, Defense Program

    This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.

    There is quite a lot of work still to be done here, but cultivating a robust AI workforce and boosting AI development in critical sectors are very worthy goals for the U.S. going forward. Talent in particular is one of the key battlegrounds for AI advantage in the coming decades, and whatever we can do to expand our pool of AI engineers is a welcome development (6).

    — Bill Drexel | Associate Fellow, Technology and National Security Program
  • Public assessments of existing generative AI systems.

    Rigorous evaluation of trained AI systems is a great place to start putting safeguards in place. Ideally, such evaluations would happen *before* an AI system is released. This is an approach that OpenAI (a top AI lab) has started to experiment with: it subjected its latest system, GPT-4, to a series of pre-release evaluations in collaboration with an external evaluator (7).

    — Tim Fist | Fellow, Technology and National Security Program

    The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.

    This could be a helpful step in pushing the AI Bill of Rights and AI Risk Management Framework to be more action-oriented. Both documents have been criticized for lacking teeth, and though this exercise won't grant either document any more authority, it could provide valuable insights for more substantive government safety measures in the future. Several leaders of frontier AI labs have voiced interest in more robust government guardrails for AI, while others fear that misguided regulation could quash innovation. Exploring the practicalities of implementing non-binding government AI guidelines could help illuminate that debate.

    — Bill Drexel | Associate Fellow, Technology and National Security Program

  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety.

    As USG departments and agencies release their own internal policies (8), declarations (9), frameworks (10), and regulations (11), guidance from the White House may be necessary to coordinate AI safety practices and ensure cohesiveness across application areas. OMB can help support this effort, at least for AI applications within government.

    — Josh Wallin | Fellow, Defense Program

    It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.

Authors

  • Bill Drexel

    Fellow, Technology and National Security Program


  • Tim Fist

    Senior Adjunct Fellow, Technology and National Security Program


  • Josh Wallin

    Fellow, Defense Program
