November 07, 2024

The United States Must Win The Global Open Source AI Race

On Nov. 1, Reuters reported that Chinese researchers, including ones affiliated with the People’s Liberation Army (PLA), used one of Meta’s Llama models for military purposes last year. The news prompted a swift and forceful reaction from many, including U.S. policymakers, who argued for further restrictions on open source AI. Michael McCaul, Chairman of the House Foreign Affairs Committee, said the recently proposed ENFORCE Act–a bill that could effectively prohibit American AI developers from releasing open-weight models–was necessary to “keep American AI out of China’s hands.”

Unlike models such as OpenAI’s ChatGPT or Anthropic’s Claude, the Llama family of language models is “open-weight,” meaning that the weights–the numbers that define a model’s functionality–are available for anyone to download for free online. Other well-known open-weight models include those from Mistral, based in France, and the Falcon family, developed in the United Arab Emirates. For years, debate has raged over the strategic benefits and risks of two AI ecosystems: one based primarily on proprietary, closed-source AI systems and one supportive of open source.

As AI becomes increasingly integrated into the world’s digital infrastructure, the importance of open source AI will grow with it: open models are likely to be a key building block driving AI’s global diffusion and adoption.

While public access to open-weight models does involve a real tradeoff among control, security, and innovation – as the Llama example underscores – the story is more complicated. Critics of open source models fail to recognize the key role these models will play in advancing U.S. security interests over the long term. Rather than focusing on the risks of open source AI, policymakers should ask whether the world will rely on U.S.-developed AI – or on increasingly capable open source models from China.

The Risks of Open Source AI

Those who are more skeptical of open source AI argue that the best way to mitigate its negative impacts and security risks is to develop new regulations and restrict its global distribution. Threat actors can modify open models to remove critical safety features, creating new security risks. Moreover, because open models can run on anyone’s hardware, the original developer cannot monitor their usage for dangerous or harmful applications in the way that closed model providers can (at least in theory). It is for this reason that closed-source AI companies are investing vast resources to prevent the theft or export of their model weights.

Read the full article on Just Security.
