March 05, 2024

What an American Approach to AI Regulation Should Look Like

As the world grapples with how to regulate artificial intelligence, Washington faces a unique dilemma: how can it secure America’s position as the global AI leader while guarding against AI’s possible risks? Although any country seeking to regulate AI must balance regulation and innovation, this task is especially hard for the United States because we have more to lose. The United Kingdom, European Union, and China all have formidable AI companies, but U.S. firms dominate the field, propelled by our uniquely open innovation ecosystem. This dominance was on display recently, when OpenAI released Sora, a powerful new text-to-video platform, and Google introduced Gemini 1.5, its next-generation AI model that can absorb requests more than 30 times the size of what its predecessor could handle.

If these trends continue, and AI proves the game-changer that many expect, surrendering U.S. leadership is not an option. But as the recent Senate hearing with social media executives reminds us, neither is leaving another powerful technology completely unregulated.

Lawmakers should recognize that whatever laws they pass must have the foresight and flexibility to endure as AI evolves.

So far, the EU and China have raced ahead on AI regulation, but they have different objectives in mind. The EU’s recent AI Act prioritizes minimizing social harms—like AI-powered discrimination in hiring—through a comprehensive, “risk-based” approach. China’s AI regulations, unsurprisingly, focus on reasserting state control over information. Neither approach will favor AI innovation (as some EU member states have already groused). Washington’s challenge is to develop a uniquely American approach to AI regulation that secures our leadership and protects our people—and the world—from the technology’s potential dangers.

Although the Biden Administration’s AI executive order was a valuable first step, there are limits to what the executive branch can do on its own. Only Congress can provide America with an enduring legal framework to govern this transformative technology. As lawmakers weigh their options, they must balance an array of competing priorities: the need to ensure an open and competitive AI ecosystem, manage safety risks, control the proliferation of potentially harmful AI systems, and stay ahead of China. To accomplish these goals, the United States will need a flexible and adaptive regulatory framework to keep pace with a rapidly evolving technology.

Read the full article from Time.
