August 21, 2024
Regulating Artificial Intelligence Must Not Undermine NIST’s Integrity
The United States is the global leader in the development of AI and is well-positioned to influence AI’s future trajectories. Decisions made today in the US will have a long-lasting impact, both domestically and globally, on how we build, use, and experience AI. However, recent legislative proposals and executive actions on AI risk entangling the National Institute of Standards and Technology (NIST) in politically charged decisions, potentially calling the organization’s neutrality into question.
This is an outcome that must be prevented. NIST plays a key role in supporting American scientific and economic leadership in AI, and a strong, respected, and politically neutral NIST is a critical component for supporting America’s leadership in technological development and innovation.
A strong NIST will continue to help build standards that are adopted globally and lay the foundation for further American AI innovation and dissemination.
For over a century, NIST has helped advance American commerce, innovation, and global technological leadership. NIST’s experts have developed groundbreaking standards, techniques, tools, and evaluations that have pushed the frontier of measurement science. Today, almost every product or service we interact with has been shaped by the “technology, measurement, and standards” provided by NIST. More recently, in the context of ongoing global AI competition, NIST has also been active in developing important standards for AI-based systems.
Key to this success has always been NIST’s ability to keep politics away from science, remain neutral, and focus on what it does best: measurement science. Now, in the name of AI safety, many emerging proposals would task NIST with conducting evaluations of AI-based systems themselves. These risks are further compounded by the introduction of an increasingly politicized AI Safety Institute (AISI). Though these concerns might seem trivial, the long-term implications are significant.
Read the full article at Tech Policy Press.