July 06, 2023

Frontier AI Regulation: Managing Emerging Risks to Public Safety

Responsible AI innovation can provide extraordinary benefits to society, such as delivering medical and legal services to more people at lower cost, enabling scalable personalized education, and contributing solutions to pressing global challenges like climate change and pandemic prevention. However, guardrails are necessary to prevent the pursuit of innovation from imposing excessive negative externalities on society. There is increasing recognition that government oversight is needed to ensure AI development is carried out responsibly; we hope to contribute to this conversation by exploring regulatory approaches to this end.

We think that it is important to begin taking practical steps to regulate frontier AI today, and that the ideas discussed in this paper are a step in that direction.

In this paper, we focus specifically on the regulation of frontier AI models, which we define as highly capable foundation models that could have dangerous capabilities sufficient to pose severe risks to public safety and global security. Examples of such dangerous capabilities include designing new biochemical weapons, producing highly persuasive personalized disinformation, and evading human control.

This article was originally published on arXiv by Markus Anderljung, Joslyn Barnhart, Anton Korinek, Jade Leung, Cullen O'Keefe, Jess Whittlestone, Shahar Avin, Miles Brundage, Justin Bullock, Duncan Cass-Beggs, Ben Chang, Tantum Collins, Tim Fist, Gillian Hadfield, Alan Hayes, Lewis Ho, Sara Hooker, Eric Horvitz, Noam Kolt, Jonas Schuett, Yonadav Shavit, Divya Siddarth, Robert Trager, and Kevin Wolf.
