November 07, 2024
Sovereign AI in a Hybrid World: National Strategies and Policy Responses
OpenAI’s release of ChatGPT in November 2022 triggered an ongoing global race to develop and deploy generative artificial intelligence (AI) products. ChatGPT also raised concerns among governments that the sudden, widespread availability of OpenAI’s AI chatbot, along with the subsequent launch of several competing large language model (LLM) services, could significantly affect their economic development, national security, and societal fabric.
Some governments, most prominently the European Union, took quick action to address the risks of LLMs and other forms of generative AI through new regulations. Many have also responded to the emergence of the technology by investing in and building hardware and software alongside, and sometimes in competition with, private-sector entities. Whether in addition to or instead of regulating AI to help ensure its responsible development and deployment, these governments are acting as market participants rather than regulators, pursuing strategic goals such as promoting their nations’ economic competitiveness through the technology.
Going forward, the U.S. government will need to continue working with allies and partners as it attempts to mitigate the risks of international AI diffusion, enhance the technology’s benefits, and bolster U.S. leadership in AI.
Taiwan, for example, launched a $7.4 million project in 2023 to develop and deploy a homegrown LLM called the Trustworthy AI Dialogue Engine, or TAIDE. Built by enhancing Meta’s open Llama models with Taiwanese government and media data, TAIDE was intended in part to counter the influence of Chinese AI chatbots, which Chinese law requires to adhere to “core socialist values,” and to protect Taiwan by offering a domestic AI alternative more aligned with Taiwanese culture and facts.
Model development, however, is just one of many government AI strategies. Earlier this year, for example, the French government invested approximately $44 million to retrofit the Jean Zay supercomputer near Paris with about 1,500 new AI chips. Jean Zay, which is owned by the French government, had already been used by Hugging Face (an American AI company) and other organizations to train BLOOM, an open-source, multilingual LLM. The upgrade is part of a broader French government strategy to develop domestic AI computing accessible to French researchers, start-ups, and companies.
Read the full article on Lawfare.