January 23, 2024

The EU AI Act could hurt military innovation in Europe

The new year has brought more news about the European Union’s (EU) Artificial Intelligence (AI) Act. The final text of the EU AI Act was solidified in January of this year, making it the first broad regulation governing AI development and use. In 2023, Věra Jourová, Vice President of the European Commission for Values and Transparency, argued that the regulation will “intervene only where necessary, allowing AI to flourish.” The law intentionally avoids directly regulating the militaries of member states. Yet while the EU has thus far avoided creating an AI governance tool that constrains EU militaries directly, the dual-use nature of the technology means the EU AI Act will nonetheless affect military innovation among member states as systems are developed.

The AI Act, and similar legislation like it, could have negative downstream effects on the ability of companies to develop highly useful systems that will enable future military capabilities.

Under the law, AI systems are categorised into four groups. In the bottom tier, systems viewed as posing minimal or no risk must simply comply with existing regulation. Second are systems viewed as posing limited risk or transparency risk; because these products pose a noticeable threat to European fundamental rights, they face additional transparency obligations. Third, high-risk systems are those with the ability to “negatively affect safety or fundamental rights.” AI in this category must undergo screening before being placed on the market and further evaluation throughout the system’s lifecycle. Finally, there is a category of unacceptable risk, covering systems that threaten humans and their livelihoods in various ways and are therefore banned. Examples include systems capable of behavioural manipulation, emotion recognition, and biometric categorisation.

A finalised version of the EU AI Act attempts to carve out an exemption for member state militaries. As the European Commission has noted, the law “does not apply to AI systems that are exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.” This is easier said than done.

Read the full article from Encompass Europe.
