August 14, 2023

How to Prevent an AI Catastrophe

Society Must Get Ready for Very Powerful Artificial Intelligence

In April 2023, a group of academics at Carnegie Mellon University set out to test the chemistry powers of artificial intelligence. To do so, they connected an AI system to a hypothetical laboratory. Then they asked it to produce various substances. With just two words of guidance—“synthesize ibuprofen”—the chemists got the system to identify the steps necessary for laboratory machines to manufacture the painkiller. The AI, as it turned out, knew both the recipe for ibuprofen and how to produce it.

How dangerous is AI? The honest and scary answer is that no one knows.

Unfortunately, the researchers quickly discovered that their AI tool would synthesize chemicals far more dangerous than Advil. The program was happy to craft instructions for producing a World War I–era chemical weapon and a common date-rape drug. It almost agreed to synthesize sarin, the notoriously lethal nerve gas, until it Googled the compound’s dark history. The researchers found this safeguard to be cold comfort. “The search function,” they wrote, “can be easily manipulated by altering the terminology.” AI, the chemists concluded, can make devastating weapons.

Read the full article from Foreign Affairs.
