November 26, 2024
Guidance for the 2025 AI Action Summit in Paris
In September 2024, the French government, in collaboration with civil society partners, invited technical and policy experts to share their views on emerging technology issues relevant to the agenda of the 2025 AI Action Summit in Paris. The AI Action Summit is the third in the series of international AI summits, following the inaugural AI Safety Summit hosted by the United Kingdom in 2023, which produced the Bletchley Declaration, and the second gathering hosted by the South Korean government in Seoul in 2024. CNAS experts Michael Depp, Janet Egan, Noah Greene, and Caleb Withers submitted comments on how the 2025 AI Action Summit can advance trust in AI and global cooperation to bolster AI safety and security. A summary of their response can be found below.
Summary of full response
To date, the international community has discussed AI in a variety of venues, from the G7 Hiroshima AI Process to the two previous AI summits to the Responsible AI in the Military Domain (REAIM) summits. These gatherings have produced no shortage of statements and declarations affirming commitment to the responsible development and deployment of AI. The 2025 French AI Action Summit is an opportunity to operationalize these statements with concrete outcomes. Below are several recommended outcomes to improve AI trust and governance:
- First, furthering intergovernmental collaboration and investment to advance model evaluations and to align on thresholds for when such evaluations are necessary before deployment. The evaluation ecosystem for frontier models is still maturing. Approaches to AI evaluation vary widely across AI developers and third-party evaluators, researchers are continually discovering new techniques for better eliciting capabilities, and evaluation results can be sensitive to minor adjustments in methods. The practical takeaways from those results are often unclear, tenuous, or underexplored. Producing actionable, rigorous scientific insights will require concerted effort and collaboration that goes beyond easier but superficial approaches.
- Second, building greater consensus around definitions for training compute thresholds and encouraging their adoption by governments as a tool for targeted oversight. Compute thresholds narrow the set of AI models that warrant further evaluation while reducing the regulatory burden on most AI developers. Much of the debate about near- to medium-term policy responses stems from uncertainty and disagreement about which capabilities may be on the horizon. Predefined thresholds, paired with risk mitigations, operationalized evaluations, and appropriate oversight and transparency, can help sidestep some of this disagreement while still supporting preparedness. Training compute is a useful metric for targeting oversight of frontier risks: scaling up compute has been a major driver of AI advancement, and new capabilities, along with the novel risks they carry, have often emerged first in larger training runs. (A simple illustration of how such a threshold might be applied follows this list.)
- Third, advancing the science of technical mechanisms for privacy-preserving verification of model and compute workload characteristics, given their promise for supporting international governance. Verification is a key component of any international agreement: without it, states have weaker incentives to collaborate on safety and risk management, or to follow through on their commitments. In the context of agreements on AI governance, governments and companies should be able to make verifiable claims about how compute resources are allocated, how AI models are trained, and the characteristics of those models. However, companies and governments are understandably hesitant to provide unrestricted or intrusive access to their models and AI workloads, as this could disclose sensitive capabilities and compromise privacy, security, or intellectual property. (A minimal sketch of one such verification building block appears after this list.)
- Fourth, leveraging domain-specific international bodies where possible (for instance, the International Civil Aviation Organization and the World Health Organization within their respective remits), allowing organizations such as the United Nations to focus their efforts on cross-cutting issues. Effective AI governance will require a network of international organizations with overlapping jurisdictions working together to ensure safe AI across a multitude of scenarios. The ideal model would let existing bodies with specific mandates handle AI governance within their remits. It is essential that these bodies consult and include both scientists and policymakers. Without the former, technically infeasible policies could be proposed, or significant technological advances could be recognized too late in the process. Without the latter, policies may emerge that ignore international or domestic political realities. Each organization should include civil society groups and academics as representatives beyond any national affiliation, while also striving to ensure that countries' scientific and policy communities are represented within national delegations.
- Finally, avoiding the trap of negotiating a single consensus statement and instead focusing on practical outcomes, such as an online exchange platform for international AI experts, mechanisms for sharing data and compute infrastructure internationally, revised agreements from other multilateral fora such as the United Nations, and a public list of undesired AI uses and international priorities. AI summits should serve a dual purpose: (1) advancing each participating country's understanding of AI policy, and (2) building consensus on specific, verifiable policy actions for each state to take. Participating diplomats could work to build consensus on the feasibility of international compute thresholds, red lines for AI use, mechanisms for countering the proliferation of synthetic content, interoperability standards for integrating AI into critical infrastructure, and other concerns raised by states. For the time being, focusing on tangible, verifiable outcomes like these is more productive than issuing high-level, unenforceable political statements.
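To make the compute-threshold recommendation concrete, the sketch below flags a hypothetical training run against an oversight threshold. It relies on the widely cited rule of thumb that training compute is roughly 6 FLOPs per parameter per training token; the threshold value, function names, and example figures are illustrative assumptions, not values proposed in the comments.

```python
# Minimal sketch: applying a training compute threshold, assuming the common
# approximation that training compute ~= 6 * N parameters * D training tokens.
# The threshold and example values below are illustrative only.

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute via the ~6 * N * D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical oversight threshold, similar in magnitude to figures governments
# have used (e.g., 1e26 operations in the 2023 U.S. executive order on AI,
# 1e25 FLOPs in the EU AI Act).
OVERSIGHT_THRESHOLD_FLOPS = 1e26

def requires_pre_deployment_evaluation(n_parameters: float, n_training_tokens: float) -> bool:
    """Flag a training run for further evaluation only if it crosses the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= OVERSIGHT_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    print(f"Estimated compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print("Requires evaluation:", requires_pre_deployment_evaluation(params, tokens))
```

Because the threshold binds only on the very largest training runs, a check like this illustrates how most developers would fall outside the oversight regime entirely.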
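The verification recommendation can likewise be illustrated with one simple building block: a salted hash commitment to model weights. A developer can commit to a specific artifact without disclosing it, and an auditor granted supervised access can later confirm that what they inspected matches the commitment. This is a sketch of the general idea only; the research agenda the comments describe involves far richer mechanisms, such as hardware attestation and cryptographic proofs.

```python
# Minimal sketch of a salted hash commitment over model weights. Publishing
# the digest reveals nothing about the weights; an auditor who is later shown
# the same bytes (and salt) can confirm they match the published commitment.

import hashlib
import os

def commit_to_artifact(artifact: bytes, salt: bytes) -> str:
    """Produce a digest that can be published without revealing the artifact."""
    return hashlib.sha256(salt + artifact).hexdigest()

def verify_commitment(artifact: bytes, salt: bytes, published_digest: str) -> bool:
    """Recompute the digest over the artifact the auditor was shown."""
    return hashlib.sha256(salt + artifact).hexdigest() == published_digest

if __name__ == "__main__":
    weights = b"...serialized model weights..."  # stand-in for a real checkpoint
    salt = os.urandom(16)                        # random salt hides the commitment's preimage
    digest = commit_to_artifact(weights, salt)
    print("Published commitment:", digest)
    # Later, under supervised access, the auditor checks the same bytes and salt.
    print("Auditor check passes:", verify_commitment(weights, salt, digest))
```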
Download the full comments