February 24, 2020

AI Deception: When Your Artificial Intelligence Learns to Lie

In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing (or, more accurately, classifying) something incorrectly. Self-driving cars fooled into “thinking” stop signs are speed limit signs, pandas identified as gibbons, and voice assistants hijacked by inaudible acoustic commands: these are the examples that populate the narrative around AI deception. One can also point to the use of AI to manipulate a person’s perceptions and beliefs through “deepfake” video, audio, and images. Major AI conferences are addressing the subject of AI deception more frequently as well. And yet much of the literature and work on this topic concerns how to fool AI and how to defend against such fooling through detection mechanisms.
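
The panda-to-gibbon example comes from gradient-based adversarial attacks on image classifiers. As a rough illustration of how such an attack works, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the function name and parameters (`model`, `image`, `label`, `epsilon`) are placeholders chosen for illustration, not code from any of the systems mentioned above:

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases
# the classifier's loss, yielding a perturbation that is nearly invisible
# to a human but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,    # shape (N, C, H, W), values in [0, 1]
                 label: torch.Tensor,    # shape (N,), true class indices
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` perturbed to raise the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # image.grad now holds d(loss)/d(pixel); step along its sign.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a small `epsilon` is often enough to change the predicted class while leaving the image visually unchanged, which is what makes these attacks a useful mental model for AI “deception.”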

I’d like to draw our attention to a different, less examined problem: understanding the breadth of what “AI deception” looks like, and what happens when the deception stems not from a human’s intent but from the AI agent’s own learned behavior. These may seem like far-off concerns, as AI is still relatively narrow in scope and can be rather stupid in some ways. To have some analogue of an “intent” to deceive would be a large step for today’s systems. However, if we are to get ahead of the curve on AI deception, we need a robust understanding of all the ways AI could deceive. We require some conceptual framework, or spectrum, of the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses.

Read the full article from IEEE Spectrum.

Learn more about the Artificial Intelligence and International Stability Project.
