March 10, 2025
The United States Must Avoid AI’s Chernobyl Moment
In January, U.S. President Donald Trump tasked his advisors with developing an AI Action Plan by July 2025, a roadmap intended to “sustain and enhance America’s AI dominance.” This call to action mirrors the early days of nuclear energy: a transformative technology with world-changing potential but also grave risks. Much as the nuclear industry was derailed by public backlash following disasters such as Three Mile Island and Chernobyl, AI could face a similar crisis of confidence unless policymakers take proactive steps to prevent a large-scale incident.
A single large-scale AI disaster—be it in cybersecurity, critical infrastructure, or biotechnology—could undermine public trust, stall innovation, and leave the United States trailing global competitors. Recent reports indicate plans to cut the government’s AI capacity by dismantling the AI Safety Institute. But this would be a self-inflicted wound—not only for safety, but for progress. If Washington fails to anticipate and mitigate major AI risks, the United States risks falling behind in the fallout from what could become AI’s Chernobyl moment.
The United States cannot let speculative fears trigger heavy-handed regulations that would cripple U.S. AI innovation.
For many Americans, AI’s transformative promise today echoes the optimism around nuclear power in the early 1970s, when more than half of the public supported its expansion. Yet the 1979 accident at Three Mile Island, a partial reactor meltdown, shattered that optimism: by 1984, nearly two-thirds of Americans opposed the expansion of nuclear energy. Statistical analysis suggests that the Three Mile Island incident was associated with a 72 percent decline in nuclear reactor construction globally. After the deadlier 1986 Chernobyl disaster, countries were more than 90 percent less likely to build nuclear power plants than they had been before the accident.
Just as many nations were envisioning a nuclear renaissance, the 2011 Fukushima disaster in Japan triggered renewed public skepticism and policy reversals. Fukushima, the only nuclear disaster besides Chernobyl ever to reach the highest classification on the International Nuclear and Radiological Event Scale, caused public support for nuclear energy to plummet around the world. The Japanese government halted all plans for new nuclear reactors, and Germany shut down all 17 of its nuclear power plants, ultimately deepening its dependence on Russian fossil fuels and compromising both its energy security and its climate goals. The world is still paying the opportunity cost today: limited access to clean, reliable nuclear power remains a critical bottleneck for AI development and other energy-intensive innovations.
Read the full article on Just Security.