May 24, 2024
Tort Law and Frontier AI Governance
The development and deployment of highly capable, general-purpose frontier AI systems—such as GPT-4, Gemini, Llama 3, Claude 3, and beyond—will likely produce major societal benefits across many fields. As these systems grow more powerful, however, they are also likely to pose serious risks to public welfare, individual rights, and national security. Fortunately, frontier AI companies can take precautionary measures to mitigate these risks, such as conducting evaluations for dangerous capabilities and installing safeguards against misuse. Several companies have started to employ such measures, and industry best practices for safety are emerging.
It would be unwise, however, to rely entirely on industry and corporate self-regulation to promote the safety and security of frontier AI systems. Some frontier AI companies might employ insufficiently rigorous precautions, or refrain from taking significant safety measures altogether. Other companies might fail to invest the time and resources necessary to keep their safety practices up to date with the rapid pace at which AI capabilities are advancing. Given competitive pressures, moreover, the irresponsible practices of one frontier AI company might have a contagion effect, weakening other companies’ incentives to proceed responsibly as well.
Read the entire article from Lawfare.