September 16, 2024
Regulating AI Is Easier Than You Think
Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template for doing just that.
In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy while slowing nuclear weapons proliferation by controlling access to the raw materials, namely weapons-grade uranium and plutonium, that underpin those weapons. The risk has been managed through international institutions such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world's electricity, and only nine countries possess nuclear weapons.
Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips that are needed to train the world's most advanced AI models. Business leaders and even U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.
The U.S. can work with other nations to build on this foundation and put in place a structure to govern computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, training AI models, and the trained models that are the result of this production cycle.
The most advanced AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers where they churn on data for months to train the most capable AI models. These advanced chips are difficult to produce, the supply chain is tightly controlled, and large numbers of them are needed to train AI models.
Read the full article from TIME.