December 11, 2024

Technology to Secure the AI Chip Supply Chain: A Working Paper

Toward a Better Balance of Competitiveness, Security, and Privacy

Executive Summary

Advanced artificial intelligence (AI) systems, built and deployed with specialized chips, show vast potential to drive economic growth and scientific progress. As this potential has grown, so has debate among U.S. policymakers about how best to limit emerging risks. In some cases, this concern has driven significant policy shifts, most notably through sweeping export controls on AI chips and semiconductor manufacturing equipment sold to China.

However, AI-focused chip export controls are challenging to target well. Since chip exporters and officials at the U.S. Department of Commerce currently have no reliable means of understanding who is in possession of AI chips after they have been exported, today’s controls are applied in a blanket fashion, without regard to end use or end user. Furthermore, because AI chips and AI algorithms improve over time, the quantity and quality of AI hardware required to develop a model with a particular set of dangerous capabilities will decrease over time. This means that to fulfill their goals of limiting access to specific capabilities, AI export controls must steadily grow in scope, becoming ever more burdensome on exporters and end users.

Today’s controls are also difficult to enforce using the current process. Enforcement relies on exporters checking buyers against an official roster of blacklisted organizations maintained by the Bureau of Industry and Security within the U.S. Department of Commerce. Evading this process is straightforward: shell companies can typically be set up online for a few thousand dollars in a matter of hours or days, whereas it can take years of investigation to uncover a shell company’s illicit activities and add them to the list.

At the same time, in the absence of export controls, ensuring that advanced AI technologies are not used for malicious purposes by state and nonstate adversaries could require an intrusive surveillance regime with deleterious consequences for U.S. economic competitiveness and the preservation of democratic values. As policymakers consider how to balance security, competitiveness, and a commitment to democratic values, there is growing interest in technological solutions that can strike a better trade-off between these objectives and keep pace with fast AI progress and the rapidly evolving security landscape. Hardware-enabled mechanisms (HEMs)—mechanisms built into data center AI hardware to serve specific security and governance objectives—have especially attracted interest as a promising new tool.

Variants of HEMs are already widely used in defense products and in commercial contexts: On Apple’s iPhone, HEMs ensure that unauthorized applications cannot be installed. Google uses an HEM-based solution to remotely verify that chips running in its data centers have not been compromised. Many video games use a hardware device called a “trusted platform module” as an HEM-based approach to prevent in-game cheating. In the commercial AI space, HEMs are used to distribute training between different users while preserving the privacy of code and data.

Well-designed HEMs for AI hardware could help detect and deter AI chip smuggling into China; allow more surgical applications of export restrictions, reducing the risk of a de-Americanization of chip supply chains; and create privacy-preserving, trustworthy, and commercially viable solutions to governance and security issues.

However, the category of HEMs covers a broad design space, with both desirable and undesirable possibilities. For example, the Apple Secure Enclave security module (found in iPhones and MacBooks) is a highly reliable HEM that prioritizes customer security and privacy. This module ensures that only legitimate firmware and operating systems can be used on the device, which in turn helps deter theft and prevent unauthorized applications from being installed. On the other hand, the National Security Agency’s infamous Clipper chip carried severe privacy and security vulnerabilities. While purportedly designed to increase security, the device contained a built-in back door that allowed government officials to access private data.

To advance research, development, and debate around the potential of HEMs to advance AI safety and governance, the authors recommend that U.S. policymakers:

  • Accelerate HEM and hardware security research and development (R&D) through direct funding and public-private partnerships. The National Semiconductor Technology Center (NSTC), relevant Defense Advanced Research Projects Agency (DARPA) projects, the Department of Defense’s Microelectronics Commons, and the National Institute of Standards and Technology (NIST) could serve as key funders and facilitators of public-private coordination to advance HEM development.
  • Create commercial incentives for industry HEM R&D through conditional export licensing. The Department of Commerce should incentivize and derisk industry HEM R&D by defining a set of hardware security and governance features that would prevent new restrictions from applying to exported hardware, if those features were installed.
  • Further develop AI hardware security standards to incentivize and harmonize security features across industry. NIST, in collaboration with industry, the NSTC, and standard-setting bodies like the Institute of Electrical and Electronics Engineers AI Standards Committee and the International Organization for Standardization, should build on existing technical standards to further improve data center AI hardware security, with input from leading semiconductor and security firms.

Introduction

Advanced artificial intelligence (AI) systems, built and deployed with specialized chips, show vast potential to drive economic growth and scientific progress. However, U.S. policymakers are increasingly concerned about the dual-use potential of AI capabilities. Irresponsible actors could use advanced AI systems to support cyberattacks, biological weapons design, and mass surveillance. Securing the supply chain for AI chips is therefore vital for mitigating risks to U.S. national security.

This logic has spurred moves to restrict foreign actors, especially China, from accessing American AI technology. In October 2022, the U.S. Department of Commerce imposed sweeping export restrictions on AI chips and associated hardware to China. These were tightened in October 2023 to address perceived shortcomings but added greater burdens on U.S. firms, including requiring permission to export certain consumer graphics processing units (GPUs) and extending an export license requirement to dozens of additional countries suspected of diverting AI chips to China. Recently, the controls were expanded again, this time affecting all chips using advanced high-bandwidth memory. Now, concerns about China’s rapid catch-up to U.S. AI capabilities are prompting U.S. policymakers to consider further export restrictions, such as:

  • Further expanding the use of the Foreign Direct Product Rule, a sweeping regulation to prevent companies abroad from selling products using American components, tools, and software to Chinese chipmaking companies
  • Limiting China’s access to U.S. cloud computing services through the bipartisan Remote Access Security Act
  • Further expanding the applicable scope of “deemed exports,” which restrict the transfer of technology or source code to foreign nationals who are in the United States

Beyond these measures, the coming months and years will very likely bring even more expansions of semiconductor export controls—encompassing new product categories and further tightening existing restrictions. These expansions will likely be based on sound national security reasoning. However, the United States’ fundamental approach to semiconductor export controls has flaws.

First, the controls are hard to enforce. Smuggling of controlled technology typically starts with a legitimate shipment to an approved buyer, after which the items enter a complex network of transactions, eventually leading to re-export to a prohibited end user or country. For example, in one reported smuggling case of 2,400 AI chips, the smuggler set up a shell company in Malaysia, ordered the chips from a reseller, and installed them in a local data center to fool inspectors before shipping them to China. The enforcement process for catching such cases relies on the Entity List, an official roster of blacklisted organizations maintained by the Bureau of Industry and Security (BIS) within the U.S. Department of Commerce. Exporters use the Entity List to determine U.S. government–approved foreign purchasers. However, evading this process is straightforward for actors who are comfortable with breaking the law; shell companies can typically be set up online for a few thousand dollars in a matter of hours or days, whereas it can take years of investigation to uncover a shell company’s illicit activities and add them to the list.

Second, the controls are hard to target. Because chip exporters and Commerce Department officials currently have no reliable means of understanding who is in possession of AI chips after they have been exported, or how they are being used, the controls are applied in a blanket fashion. Licensing requirements apply to all shipments of AI chips above certain performance levels if they contain American components or were built with American tooling or software. This creates administrative burden for companies and the BIS while incentivizing foreign firms to remove American firms from their supply chains, hurting the United States’ longer-term supply chain leverage. It also results in an ecosystem where many benign users of American AI chips are captured by the controls. Compounding these issues, because AI chips and AI algorithms are improving rapidly, the quantity and quality of AI hardware required to develop a model with a particular set of concerning capabilities is decreasing rapidly. This means that to fulfill their goals of restricting access to specific capabilities, AI export controls must grow in scope over time, becoming ever more burdensome to U.S. firms.

Furthermore, even if U.S. export control policy is effective at its longer-term goals, there remains the broader challenge of AI governance: how to ensure that increasingly capable AI systems are not used to cause great harm. Advanced AI models at the frontier of AI research and development acquire new capabilities in ways that are difficult to predict. This complicates efforts to make them reliably safe even as they proliferate rapidly to illicit actors. Wide, unchecked availability of advanced AI models could destabilize international relations, for instance, by lowering the barrier for nonstate actors to wield them for malicious purposes. Adequately mitigating these risks could require a privacy-infringing system of monitoring and regulation that would create barriers to benign actors using AI systems to solve important societal problems.

Leaders in government and industry, however, do not have to settle for blanket export controls on the one hand, or privacy-invading oversight regimes on the other. A third path is possible: mechanisms leveraging modern hardware security technologies, built into data center AI hardware, could meet specific security and governance objectives in a targeted fashion without meaningfully compromising user privacy or security. A previous report from the Center for a New American Security provided a detailed analysis of policy opportunities and technical challenges for implementing such mechanisms. This working paper summarizes this previous work and outlines a set of recommendations for policymakers seeking to mature hardware-enabled mechanisms (HEMs) into a useful governance tool.

Hardware-Enabled Mechanisms

The problems described here are partly the result of narrow policy options bounded by the existing technologies on today’s AI hardware. Washington’s decision to impose a blanket export ban on advanced AI chips to China reflects the reality that there is currently no widely deployed technical solution to prevent an unauthorized actor from using an AI chip once it has been exported. One promising category of solutions is hardware-enabled mechanisms: secure components embedded in AI chips or related hardware to assist with AI governance policies, such as usage restrictions and compliance verification. HEMs are already widely used in defense products and in commercial contexts: On Apple’s iPhone, HEMs ensure that unauthorized applications cannot be installed. Google uses an HEM-based solution to remotely verify that chips running in its data centers have not been compromised. Many video games use a hardware device called a trusted platform module as an HEM-based approach to prevent in-game cheating. In the commercial AI space, HEMs are used to distribute training between different users while preserving the privacy of code and data.

Well-designed HEMs could allow for better enforcement of export controls (such as through country-level location verification for exported chips) and for more targeted approaches to controls (such as through enforced limitations on particular use cases, like large-scale training), helping to reduce commercial impacts on U.S. firms. They could also enable privacy-preserving, hardware-based reporting of the properties of AI systems (such as the quantity of compute consumed or the type of training data used), which would allow AI developers and users to rapidly and securely verify compliance with regulations without needing to directly reveal sensitive code or data or spend large amounts of time on manual reporting and verification requirements.

The U.S. government has already signaled that chips with better security and governance features could become exempt from expanded export restrictions. As part of its October 2023 updated export controls on advanced AI chips, the BIS issued a request for comment, noting that it “seeks additional proposals for exemptions involving hardware-based technical solutions that create the ability to limit training of large dual-use AI foundation models with capabilities of concern,” adding that “such items could then be exempted from these ECCNs [Export Control Classification Numbers].” U.S. lawmakers have also expressed interest in creating incentives for chip firms to develop secure HEMs.

HEMs encompass many possible ideas and designs—some valuable, some ill-advised. Here, the authors offer a primer for U.S. policymakers about the promises and pitfalls of adopting HEMs for AI security and governance.

Challenges and Opportunities

Well-designed HEMs could:

  • Help detect and deter AI chip smuggling into China
  • Allow more surgical applications of export restrictions, reducing the risk of a de-Americanization of chip supply chains
  • Help enforce current regulations, such as the reporting requirements in Executive Order 14110, and verify compliance with future international agreements
  • Create privacy-preserving, trustworthy, and commercially viable solutions to governance and security issues

The table below provides an overview of three specific HEMs. See the appendix for a more extensive overview.

Examples of HEMs and Relevant Policy Objectives

Hardware-enabled mechanism (HEM) | Relevant U.S. policy objectives
Location verification: Each AI chip or device periodically receives and returns a simple “ping” from servers in key geographic regions, allowing industry or government officials to calculate the maximum distance of a chip from the server. For example, it would be possible to confirm that a chip is not in China.30

• Detect and deter the smuggling or reselling of chips to prohibited regions without revealing sensitive, precise location data or any information about how the chip is being used.

• Provide insight into which countries or entities resell or smuggle chips into China (allowing for surgical expansions to export controls and sanctions).

Bandwidth bottlenecking: High-bandwidth communication is limited to a certain number of chips, using flexible technical parameters updateable by the exporter. This could be used to enforce an upper bound on the size of the computing cluster used to train a model, ensuring that exported chips cannot be used synchronously to train frontier AI models.31 Flexible technical parameters would enable policies to keep pace with rapid changes in approaches to AI development without locking in a particular approach for an entire generation of hardware.

• Prevent untrusted end users from using U.S. chips to develop advanced dual-use AI while allowing other non-dual-use high-end computing.32

• Prevent countries of concern from using smuggled chips for frontier AI training.

Offline licensing: The AI chip requires an active license to operate; the license is “used up” as the chip performs operations, and once metering indicates that the work the license authorized is complete, the chip throttles or turns off.33 This could be used to control access to the chip in a way that is consistent and predictable to the end user.

• Prevent the unauthorized use of AI chips (e.g., smuggled chips or chips used in violation of international agreements).

• Force end users to continuously update their chips so that new security vulnerabilities may be patched remotely (adding robustness to other HEMs).

When seeking to promote the implementation of HEMs, U.S. policymakers should keep in mind several key principles:

Although HEMs are common in computing systems for defense, consumer electronics, and the financial industry, they remain underexplored in the context of high-performance AI hardware. The optimal implementation of HEMs in data center AI hardware will come from designs that are secure and that preserve user privacy and performance. HEMs cover a broad design space with both commercially desirable and undesirable traits. For example, the Apple Secure Enclave security module (found on iPhones and MacBooks) is a highly reliable HEM that prioritizes customer security and privacy. This module ensures that only legitimate firmware and operating systems can be used on a device, which in turn helps deter theft and prevent unauthorized applications from being installed. On the other hand, the National Security Agency’s infamous Clipper chip carried severe privacy and security vulnerabilities. While purportedly designed to increase security, the device contained a built-in back door that allowed government officials to access private data.

Some of the functionality to support privacy-preserving HEMs is already widely deployed on existing AI chips. This technology could be further developed to form the technical foundation for reliable hardware-based governance. Chips sold by leading firms such as Advanced Micro Devices (AMD), Apple, Intel, and NVIDIA have many of the features needed to support policies for restriction and verification. These features are used today in a wide variety of applications. Some firms offer “operating licenses” that allow remote deactivation of certain chip features. Others use HEMs to ensure that only approved software has been loaded, or to remotely verify that a chip has not been compromised.

Good HEM design could proactively mitigate trade-offs between national security objectives, performance, and privacy. Despite the checkered history of government-imposed mechanisms on hardware, well-designed HEMs do not require secret monitoring of users or insecure “back doors.” For instance, HEM designs based on security features such as trusted execution environments (TEEs) could enable an external party to verify how the chip is being used without enabling secret surveillance or intellectual property (IP) theft.
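
To make the TEE-based approach concrete, the following Python sketch models the kind of verifiable reporting such hardware could support: the device signs a small report so a third party can check it without seeing any underlying code or data. Real TEEs use asymmetric attestation keys and certificate chains; the HMAC construction, field names, and key used here are simplifying assumptions for illustration.

```python
import hmac, hashlib, json

def sign_report(device_key: bytes, report: dict) -> bytes:
    """Inside the secure hardware: sign a small report (e.g., firmware
    version, compute consumed) with a key that never leaves the device."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def verify_report(device_key: bytes, report: dict, signature: bytes) -> bool:
    """Outside the secure hardware: a verifier learns only the reported
    fields, never the user's code or data."""
    return hmac.compare_digest(sign_report(device_key, report), signature)

# Illustrative fields and key, not a real attestation format.
report = {"firmware_version": "1.2.0", "compute_flop": 3.1e21}
signature = sign_report(b"per-device-secret", report)
assert verify_report(b"per-device-secret", report, signature)
# Any tampering with the report invalidates the signature:
assert not verify_report(b"per-device-secret", {**report, "compute_flop": 1e18}, signature)
```

The key design property is that verification reveals only the agreed-upon report fields, so external checks need not entail surveillance or IP exposure.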

Existing HEM technologies must be significantly hardened to defend against well-resourced attackers. Commercial HEMs are not typically designed to defend against a well-resourced attacker with physical access to the hardware. Government and industry investments in hardware and software security will be required for HEMs to function reliably despite physical attacks. The specific defenses required to adequately secure on-chip governance mechanisms depend on the context in which they are deployed.

HEM R&D will help AI security more broadly. Leading AI developers have expressed a need for higher security standards in areas such as physical security and confidential computing. While the main goal of AI developers is to protect their models and algorithms, research into these areas is also directly relevant for privacy-preserving HEMs, as much of the threat model is the same. Fundamental research into HEM security can therefore both make HEMs more robust and trustworthy, even when chips are in the hands of motivated and well-resourced adversaries, and increase the security of U.S. AI labs by protecting them against AI model weight theft.

Recommendations for Policymakers

HEMs constitute a broad design space with proposals varying in security, cost, ease of implementation, and commercial desirability. U.S. policymakers should focus on identifying promising HEMs that are implementable immediately (such as country-level location verification) while promoting R&D investment to further explore and develop viable and trustworthy HEMs for higher-stakes applications. Without prior research and testing, rushed implementations of HEMs may present acute trade-offs between security and usability.

Given their technical knowledge and access to both talent and the core technology, leading U.S. chip firms such as NVIDIA, Intel, and AMD are uniquely positioned to conduct this research. The authors recommend that the U.S. government incentivize and support U.S. chipmakers to accelerate the development and adoption of HEMs. Government-funded research will play a useful complementary role to industry efforts through focusing on high-risk, high-reward R&D that could minimize trade-offs between national security objectives, performance, and privacy and by leveraging the U.S. intelligence community’s expertise in defending against sophisticated attacks on hardware.

Recommendation 1: Accelerate HEM and hardware security R&D through direct funding and public-private partnerships.

The National Semiconductor Technology Center (NSTC), relevant Defense Advanced Research Projects Agency (DARPA) projects, the Department of Defense’s Microelectronics Commons (the Commons), and the National Institute of Standards and Technology (NIST) could serve as key funders and facilitators of public-private coordination to advance HEM development.

  • DARPA could conduct HEM R&D as high-risk, high-reward projects in the early stages of development. DARPA’s Next-Generation Microelectronics Manufacturing or System Security Integration Through Hardware and Firmware projects are particularly relevant, though DARPA could consider launching new projects specific to HEM development.
  • The NSTC should (1) leverage its whole-of-government approach to coordinate between leading chipmakers, the Commons, NIST, and the CHIPS Program Office; and (2) support HEM technologies emerging from the Commons until they are reliable and viable enough to be taken up by the Manufacturing USA network or private industry. Specifically, Natcast (the nonprofit that operates the NSTC consortium) should announce an HEM program and a corresponding call for proposals, similar to the Artificial Intelligence Driven Radio Frequency Integrated Circuit Design Enablement and Test Vehicle Innovation Pipeline programs. This could take the form of an HEM “grand challenge” to develop secure HEMs, including the requisite hardware security, or more targeted programs to develop HEMs for location verification or offline licensing.
  • The Commons is especially well-suited for early HEM prototyping, given its defense-specific priorities and focus on bridging the high-risk “lab-to-fab” transition. The development of HEMs aligns with three of the Commons’ key technical areas: secure edge computing, AI hardware, and commercial leap-ahead technologies.
  • To scope and support R&D beyond leading chip companies for later integration, NIST should increase coordination with relevant industry and government funding bodies. For example:
    • The National Advanced Packaging Manufacturing Program could spearhead the creation of tamper-proof encasing (see the appendix).
    • The Cybersecurity and Infrastructure Security Agency should explore bug bounty incentives and red teaming to identify and patch vulnerabilities in current and future AI chips to make HEMs highly tamper resistant.
    • International collaboration—for example, with the United Kingdom’s Advanced Research and Invention Agency—could also accelerate HEM R&D and vulnerability discovery in existing and future chips.

Recommendation 2: Create commercial incentives through conditional export licensing.

The Department of Commerce should incentivize and derisk industry HEM R&D by defining the set of hardware security and governance measures that would prevent new restrictions from applying to exports. U.S. policymakers are continuously expanding export controls, but have signaled the possibility of excluding chips with HEMs that restrict their misuse potential. U.S. lawmakers have also expressed interest in HEMs for export control purposes. To facilitate and incentivize the gradual implementation of HEMs, the Commerce Department could create flexible export licensing regimes, such that export licenses can be granted for different geographies or end users depending on the security features and HEMs of specific chips. For example, the Commerce Department could specify that chips with secure location verification require a license with presumption of approval to sell to the United Arab Emirates, Vietnam, and other countries of concern (as is currently the case), and a license with presumption of denial for chips lacking location verification capabilities. Location verification could show whether the country or specific actors within it facilitate smuggling into China, in which case targeted export bans and sanctions could be applied. This scheme could later be extended with HEMs like tamper-resistant offline licensing and bandwidth bottlenecking, which could be used to reduce the value of the smuggled chips.

Recommendation 3: Further develop AI hardware security standards to incentivize and harmonize security features across industry.

Technical standards play an important role as a common foundation for HEM development and usage. NIST already has security standards for computer hardware, most notably Federal Information Processing Standard (FIPS) 140-3, which defines four increasing levels of security for secure processors, intended to cover many applications and environments with security requirements in areas such as physical security, resistance to side-channel attacks, and software/firmware security. Industry consortia are also active in defining security requirements for data center hardware. The Open Compute Project, whose membership includes almost every large semiconductor and computing firm, runs several standardization projects, including the Security Appraisal Framework and Enablement (S.A.F.E.) Program, which seeks to standardize security processes and interfaces across the data center technology stack, and Caliptra, which is an open-source specification for hardware-level protection for chips. NIST, in collaboration with industry, should collate existing standards and identify and address gaps when applying them to HEMs in different operating environments. This would provide a common reference point to use in regulation and to harmonize security features across industry.

Appendix: Hardware Security and HEM Proposals

The development of secure, reliable, and privacy-preserving hardware-enabled mechanisms (HEMs) relies on the implementation of key hardware security features on chips and associated hardware. This appendix provides a list of these security features and the HEMs they would enable.

Relevant hardware security features include:

  • Security modules: These dedicated processors on chips are responsible for basic security-related functions, such as ensuring that the chip is running uncompromised and up-to-date firmware. This feature allows security vulnerabilities in HEMs to be patched remotely via updates and ensures that chips are not running compromised software. NVIDIA’s latest data center graphics processing units (GPUs) already include similar features (firmware verification and rollback protection), although these features are likely not robust against adversaries with physical access to the chips.
  • Trusted execution environments (TEEs): These secure regions of the main processor of a device protect the confidentiality and integrity of data while they are being processed. In the context of HEMs, TEEs can prevent spoofing (falsifying the data being transmitted from a chip) and allow the owner of a chip to make trustworthy, verifiable claims to another party. NVIDIA has already implemented chip-level TEEs into their H100 and upcoming Blackwell series GPUs to enable confidential computing (using hardware-level technologies to isolate data being processed), as have other leading U.S. chipmakers. Extending chip-level TEEs to the cluster level and protecting them with much greater security (including tamper protection) are two promising directions to both enhance end-user privacy and security, and enable TEEs to be used to support governance objectives, such as making a verifiable claim to a third party about the firmware running on the chip, or the amount of computation consumed by a workload.
  • Tamper-resistant enclosures: Physical housing for chips and/or servers can present useful obstacles to physical attacks. Tamper-resistant and tamper-respondent enclosures can curtail and detect attempts to physically disable security hardware, interfere with chip operation, or steal data such as cryptographic keys. Such enclosures do not need to be completely tamper-proof to be useful; they can still act as strong deterrents if tampering is expensive enough or risks destroying the chip. The tamper resistance of chip security features will be a crucial consideration for U.S. government officials when deciding whether chips with HEMs may be exempt from certain blanket export restrictions (e.g., expansions of harsher export restrictions to new countries like Vietnam, Singapore, or the United Arab Emirates) or appropriate for sale to untrusted or semitrusted buyers.
  • Other hardware security measures: Leading chipmakers already try to protect end-user intellectual property (IP) by incorporating confidential computing (CC) capabilities enabled by TEEs into their chips. However, current implementations, such as in current state-of-the-art NVIDIA H100 GPUs, cannot protect against attacks by adversaries with physical access to the AI chips (including insider threats at U.S. AI labs or data centers and foreign adversaries in possession of smuggled chips). To address this, chips could be hardened to protect against side-channel attacks, as well as more invasive tampering using enclosures or other measures. Chipmakers could also further protect AI labs’ model weights by limiting the bandwidth of communications between devices that hold AI model weights and the outside world. The precise technical path to achieving greater security will require further public-private coordination.

Relevant HEMs include:

  • Location verification: “Delay-based” location verification is a type of geolocation that is both highly privacy-preserving and hard to circumvent by design. It could be implemented to detect and deter AI chip smuggling without revealing any sensitive or precise location data from legitimate end users. The technique involves placing secure landmark servers in key positions around the globe, which would exchange “pings” with exported AI chips to calculate the maximum possible distance between the chips and the servers (see an example in the endnote). The core functionality for this could already be available in NVIDIA’s early-access device attestation application programming interface for H100 GPUs, which helps applications verify that a device has not been compromised. Researchers have estimated that pure-software delay-based location verification could be implemented with less than $1 million of investment, adding that this implementation “seems both feasible and relatively cheap.”
  • Bandwidth bottlenecking: This HEM would limit high-bandwidth communication to a fixed number of devices (it has also been called “fixed set”). High-end chips could still be used for most applications, but not in the large supercomputing clusters needed to execute large-scale AI training runs. This could make it safe to sell high-end chips to semitrusted buyers for non-dual-use applications, removing the pressure for blanket export restrictions to spread to countries like Vietnam and the United Arab Emirates.
  • Offline licensing: AI chips with this HEM would require an active license to operate. Licenses could be time-based or meter-based, with the license “used up” as the chips perform operations (see “metering” below). This does not imply an at-will remote shutdown capability, nor do the authors advocate for one. Creating licenses could require the agreement of multiple parties, including both governments and companies. Combined with a form of verification to underlie the licensing decision (e.g., location verification), well-implemented offline licensing could allow chips to be sold to countries with an otherwise high risk of illegal re-export, with a greatly reduced chance of those chips being smuggled and misused. Without such an assurance mechanism in place, countries where smuggling has been found to occur may be next in line for stricter export restrictions.
  • Metering: This HEM would measure elapsed real time, arithmetic operations, power consumption, data transfers to memory, and other data to enable privacy-preserving workload classification. This could make it easier for cloud compute providers and regulators to know whether end users are executing large AI training runs and whether those runs exceed compute thresholds (for example, the thresholds in Executive Order 14110). If the meter values are robust to tampering and available to the chip’s TEE, end users could make verifiable claims about high-level properties of their workload to external verifiers or regulators without revealing other sensitive data, enabling privacy-preserving oversight of sensitive applications like large AI training runs. Much of the hardware functionality for metering is already in place in high-end AI chips. Tamper-resistant metering would also enable more robust delay-based location verification and offline licensing.
  • Robust product differentiation: This is not a specific HEM but a family of related HEMs and hardware security features used to create clear distinctions between categories of products, so that national security–relevant products can be separated more neatly from all others. For example, tamper-resistant bandwidth bottlenecking on consumer-grade GPUs would allow clear differentiation between GPUs used for gaming and GPUs used for data center AI applications, permitting the unimpeded export of GPUs for uses such as gaming while continuing to regulate dual-use applications, like large-scale AI training runs.
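The physics behind delay-based location verification can be sketched in a few lines: because no signal travels faster than light, the round-trip time (RTT) of a ping puts a hard upper bound on how far a chip can be from the landmark server that sent it. The following sketch is illustrative only (the function names, RTT figures, and the Singapore landmark scenario are assumptions for the example), but the bound itself follows directly from the speed of light.

```python
# Sketch of the core distance-bound calculation behind delay-based
# location verification. A ping's round-trip time (RTT) upper-bounds the
# chip's distance from a landmark server, since no signal outruns light.

C_KM_PER_MS = 299_792.458 / 1000  # speed of light, in km per millisecond

def max_distance_km(rtt_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on the chip-landmark distance implied by a measured RTT.

    One-way signal travel time is at most (rtt - processing) / 2, so the
    distance is at most c * (rtt - processing) / 2.
    """
    travel_ms = max(rtt_ms - processing_ms, 0.0)
    return C_KM_PER_MS * travel_ms / 2

# Hypothetical example: a landmark server in Singapore measures a 9 ms RTT
# to an exported chip, of which ~1 ms is on-chip processing. The chip can
# then be at most ~1,199 km from the landmark -- far less than the distance
# to mainland China -- without revealing its precise location.
bound = max_distance_km(9.0, processing_ms=1.0)
print(f"chip is within {bound:,.0f} km of the landmark")
```

Note the privacy property this illustrates: each ping discloses only a radius around a landmark, not coordinates, and combining a handful of landmarks narrows the chip’s possible region just enough to confirm it remains in an approved jurisdiction.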
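The metering idea above can likewise be sketched as a minimal privacy-preserving threshold check: a tamper-resistant counter accumulates operation counts inside the chip, and only a coarse yes/no claim about crossing a reporting threshold is disclosed, never the raw telemetry. This is a toy sketch, not a proposed design; the class and its interface are hypothetical, though the 10^26-operation figure matches the reporting threshold for training compute in Executive Order 14110.

```python
# Illustrative sketch of privacy-preserving workload classification via
# metering. A tamper-resistant meter accumulates operation counts; the
# chip reveals only whether total training compute crossed a reporting
# threshold, not the underlying meter values.

EO_14110_THRESHOLD_OPS = 1e26  # EO 14110 reporting threshold (operations)

class ComputeMeter:
    def __init__(self) -> None:
        self._total_ops = 0.0  # raw counter; stays inside the chip's TEE

    def record(self, ops: float) -> None:
        """Accumulate operations performed by the current workload."""
        self._total_ops += ops

    def threshold_claim(self, threshold: float = EO_14110_THRESHOLD_OPS) -> dict:
        """Disclose only a boolean claim, never the raw total."""
        return {"exceeds_threshold": self._total_ops >= threshold}

meter = ComputeMeter()
meter.record(3e25)                # a large training run...
print(meter.threshold_claim())    # {'exceeds_threshold': False}
meter.record(8e25)                # ...that keeps scaling up
print(meter.threshold_claim())    # {'exceeds_threshold': True}
```

In a real implementation the boolean claim would be emitted as part of a signed TEE attestation report, letting a regulator verify compliance with a compute threshold without learning anything else about the workload.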

This list is not exhaustive and is only meant to provide illustrative examples of HEMs and hardware security improvements. This is a broad design space; there are, no doubt, mechanisms that should be avoided (like the Clipper chip key-escrow system). Some of the mechanisms the authors are optimistic about may turn out to be ineffective or counterproductive, and there are likely other promising solutions not considered here. Given these uncertainties, the authors do not strongly advocate for particular technical implementations of HEMs but rather for greater investment into further researching and developing a variety of promising possible implementations that prioritize security and privacy.

About the Authors

Tim Fist is a senior adjunct fellow with the Technology and National Security Program at the Center for a New American Security (CNAS) and a senior technology fellow at the Institute for Progress. Fist has an engineering background and previously worked as the head of strategy and governance at Fathom Radiant, an AI hardware company. Prior to that, Fist worked as a machine learning engineer. Fist holds a BA with honors in aerospace engineering and a BA in political science from Monash University and is a PhD candidate in engineering science at the University of Oxford.

Tao Burga is a junior research scientist at New York University and a nonresident fellow at the Institute for Progress. Burga previously worked as a fellow at the Institute for AI Policy and Strategy. Prior to that, he worked on human-robot interaction as a research assistant at Brown University’s Sloman Lab. Burga holds a BA with honors in behavioral decision sciences with security and data specializations from Brown University.

Vivek Chilukuri is the senior fellow and program director of the Technology and National Security Program at CNAS. Before joining CNAS, Chilukuri served as a senior staff member for Senator Michael Bennet (D-CO), a member of the Senate Select Committee on Intelligence. Previously, Chilukuri served at the Department of State as a policy advisor to the undersecretary for civilian security, democracy, and human rights, and as a program officer on the Middle East and North Africa team at the National Democratic Institute. Chilukuri received an MPP from the Harvard Kennedy School and a BA in international studies from UNC–Chapel Hill, where he graduated as a Robertson Scholar.

About the Technology and National Security Program

The CNAS Technology and National Security Program produces cutting-edge research and recommendations to help U.S. and allied policymakers responsibly win and manage the great power competition with China over critical and emerging technologies. The escalating U.S.-China competition in artificial intelligence (AI), biotechnologies, next-generation information and communications technologies, digital infrastructure, and quantum information sciences will have far-reaching implications for U.S. foreign policy and national and economic security. The Technology and National Security Program focuses on high-impact technology areas with in-depth, evidence-based analysis to assess U.S. leadership vis-à-vis China, anticipate technology-related risks to security and democratic values, and outline bold but actionable steps for policymakers to lead the way in responsible technology development, adoption, and governance. A key focus of the Tech Program is to bring together the technology and policy communities to better understand these challenges and together develop solutions.

Acknowledgments

This report would not have been possible without invaluable contributions from our CNAS colleagues, including Maura McCarthy and Emma Swislow. The authors also thank Paul Scharre, Onni Aarne, Jamie Bernardi, Christian Chung, Christopher Covino, Oscar Delaney, Erich Grunewald, Lennart Heim, Gabriel Kulp, Sumaya Nur Adan, Joe O’Brien, Aidan O’Gara, and Caleb Withers for their invaluable feedback during the research for and writing of this working paper. Tao Burga’s work on this piece was conducted as part of the Institute for AI Policy & Strategy (IAPS) fellowship. This report was made possible with the generous support of Fathom and DALHAP Investments Ltd (via The RAND Corporation). The views expressed in this document are those of the authors and do not necessarily reflect RAND opinion.

As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.

  1. Paul Scharre, Future-Proofing Frontier AI Regulation: Projecting Future Compute for Frontier AI Models (Center for a New American Security, March 13, 2024), https://www.cnas.org/publications/reports/future-proofing-frontier-ai-regulation.
  2. J. C. Sharman, “What Are Anonymous Shell Companies?” in The Money Laundry (Ithaca, NY: Cornell University Press, 2011), https://www.cornellpress.cornell.edu/what-are-anonymous-shell-companies; Gregory C. Allen, Emily Benson, and William Alan Reinsch, Improved Export Controls Enforcement Technology Needed for U.S. National Security (Center for Strategic and International Studies, November 30, 2022), https://www.csis.org/analysis/improved-export-controls-enforcement-technology-needed-us-national-security.
  3. Also known as “on-chip governance mechanisms,” this idea has previously been discussed in Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing (Center for a New American Security, January 8, 2024), https://www.cnas.org/publications/reports/secure-governable-chips; Gabriel Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090 (RAND Corporation, January 18, 2024), https://www.rand.org/pubs/working_papers/WRA3056-1.html; William Alan Reinsch and Emily Benson, Digitizing Export Controls: A Trade Compliance Technology Stack? (Center for Strategic and International Studies, December 1, 2021), https://www.csis.org/analysis/digitizing-export-controls-trade-compliance-technology-stack.
  4. Apple Platform Security (Apple, May 2024), 9–19, archived in 0xmachos, “Apple-Platform-Security-Guides,” GitHub, May 13, 2022, https://github.com/0xmachos/Apple-Platform-Security-Guides/blob/master/2024-may-apple-platform-security-guide.pdf.
  5. Lucas Ropek, “The Short Life and Humiliating Death of the Clipper Chip,” Gizmodo, April 7, 2023, https://gizmodo.com/life-and-death-of-clipper-chip-encryption-backdoors-att-1850177832.
  6. Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
  7. The Bureau of Industry and Security (BIS) issued some of the October 2023 export restrictions with a presumption of approval (most countries in the D:1 and D:4 groups in BIS’s country designations) and many with presumption of denial (countries in the D:5 group—i.e., those under a U.S. arms embargo): see “Supplement No. 1 to Part 740—Country Groups,” Bureau of Industry and Security, updated November 1, 2024, https://www.bis.gov/ear/title-15/subtitle-b/chapter-vii/subchapter-c/part-740/supplement-no-1-part-740-country-groups; “New Export Controls on Advanced Computing and Semiconductor Manufacturing: Five Key Takeaways,” Sidley, November 1, 2023, https://www.sidley.com/en/insights/newsupdates/2023/10/new-export-controls-on-advanced-computing-and-semiconductor-manufacturing. For discussion of consumer graphics processing units (GPUs) in the context of export controls, see: Erich Grunewald, Are Consumer GPUs a Problem for US Export Controls? (Institute for AI Policy and Strategy, May 31, 2024), https://www.iaps.ai/research/are-consumer-gpus-a-problem-for-us-export-controls.
  8. “Foreign-Produced Direct Product Rule Additions, and Refinements to Controls for Advanced Computing and Semiconductor Manufacturing Items,” 89 Fed. Reg. 96790 (December 5, 2024), https://www.federalregister.gov/documents/2024/12/05/2024-28270/foreign-produced-direct-product-rule-additions-and-refinements-to-controls-for-advanced-computing.
  9. Meaghan Tobin and Cade Metz, “China Is Closing the A.I. Gap with the United States,” The New York Times, July 25, 2024, https://www.nytimes.com/2024/07/25/technology/china-open-source-ai.html.
  10. Mackenzie Hawkins et al., “US Floats Tougher Trade Rules to Rein in China Chip Industry,” Bloomberg, July 17, 2024, https://www.bloomberg.com/news/articles/2024-07-17/us-considers-tougher-trade-rules-against-companies-in-chip-crackdown-on-china; Mackenzie Hawkins, Cagan Koc, and Takashi Mochizuki, “ASML, Tokyo Electron Dodge New US Chip Export Rules, for Now,” Bloomberg, July 31, 2024, https://www.bloomberg.com/news/articles/2024-07-31/asia-chip-stocks-rally-on-report-us-allies-exempted-from-curbs.
  11. Mike Lawler, “Representatives Mike Lawler, Jeff Jackson, Rich McCormick, and Jasmine Crockett Introduce Legislation to Prevent CCP from Acquiring American Tech,” April 29, 2024, https://lawler.house.gov/news/documentsingle.aspx?DocumentID=1648.
  12. “Commerce Control List Additions and Revisions; Implementation of Controls on Advanced Technologies Consistent With Controls Implemented by International Partners,” 89 Fed. Reg. 72926 (September 6, 2024), https://www.federalregister.gov/documents/2024/09/06/2024-19633/commerce-control-list-additions-and-revisions-implementation-of-controls-on-advanced-technologies#page-72926.
  13. Qianer Liu, “Nvidia AI Chip Smuggling to China Becomes an Industry,” The Information, September 8, 2024, https://www.theinformation.com/articles/nvidia-ai-chip-smuggling-to-china-becomes-an-industry.
  14. Sharman, “What Are Anonymous Shell Companies?”; Allen, Benson, and Reinsch, Improved Export Controls Enforcement Technology Needed for U.S. National Security.
  15. Scharre, Future-Proofing Frontier AI Regulation: Projecting Future Compute for Frontier AI Models.
  16. Markus Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety,” arXiv (July 11, 2023), http://arxiv.org/abs/2307.03718.
  17. Ben Garfinkel and Allan Dafoe, “Artificial Intelligence, Foresight, and the Offense-Defense Balance,” War on the Rocks, December 19, 2019, https://warontherocks.com/2019/12/artificial-intelligence-foresight-and-the-offense-defense-balance; Sarah Kreps, Democratizing Harm: Artificial Intelligence in the Hands of Nonstate Actors (Brookings Institution, November 2021), https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors.
  18. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  19. This term was first introduced in Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
  20. Apple Platform Security.
  21. “Remote Attestation of Disaggregated Machines,” Google Cloud, December 2022, https://cloud.google.com/docs/security/remote-attestation.
  22. Andrew Cunningham, “Riot Games’ Anti-Cheat Software Will Require TPM, Secure Boot on Windows 11,” Ars Technica, September 8, 2021, https://arstechnica.com/gaming/2021/09/riot-games-anti-cheat-software-will-require-tpm-secure-boot-on-windows-11.
  23. Fan Mo, Zahra Tarkhani, and Hamed Haddadi, “Machine Learning with Confidential Computing: A Systematization of Knowledge,” arXiv (April 2, 2023), http://arxiv.org/abs/2208.10134; Fan Mo et al., “PPFL: Privacy-Preserving Federated Learning with Trusted Execution Environments,” arXiv (June 28, 2021), http://arxiv.org/abs/2104.14380; and Xiaoguo Li et al., “A Survey of Secure Computation Using Trusted Execution Environments,” arXiv (February 23, 2021), http://arxiv.org/abs/2302.12150.
  24. “Implementation of Additional Export Controls: Certain Advanced Computing Items; Supercomputer and Semiconductor End Use; Updates and Corrections,” 88 Fed. Reg. 73486, Department of Commerce, Bureau of Industry and Security (2023), https://www.federalregister.gov/d/2023-23055/p-350.
  25. In July 2024, the Senate Appropriations Committee approved a bill including a section on “Feasibility of On-Chip Mechanisms for Export Control,” directing the Department of Commerce to “report to the Committee regarding the feasibility of future steps in this area [on-chip mechanisms for export control].” “Departments of Commerce and Justice, Science, and Related Agencies Appropriations Bill, 2025,” S. Rept. 118-198 (September 4, 2024), https://www.congress.gov/congressional-report/118th-congress/senate-report/198/1.
  26. Ana Swanson and Claire Fu, “With Smugglers and Front Companies, China Is Skirting American A.I. Bans,” The New York Times, August 4, 2024, https://www.nytimes.com/2024/08/04/technology/china-ai-microchips.html; Ana Swanson, “Takeaways from Our Investigation into Banned A.I. Chips in China,” The New York Times, August 4, 2024, https://www.nytimes.com/2024/08/04/technology/china-ai-microchips-takeaways.html; Raffaele Huang, “The Underground Network Sneaking Nvidia Chips into China,” The Wall Street Journal, July 2, 2024, https://www.wsj.com/tech/the-underground-network-sneaking-nvidia-chips-into-china-f733aaa6; “NVIDIA AI Chip Smuggling to China Becomes an Industry,” The Information, August 12, 2024, https://www.theinformation.com/articles/nvidia-ai-chip-smuggling-to-china-becomes-an-industry.
  27. Previous research estimates that the countries with the highest risk of re-exporting chips to China are “India, Indonesia, Malaysia, the Philippines, Saudi Arabia, Singapore, Taiwan, Thailand, the United Arab Emirates, and/or Vietnam,” with substantial uncertainty. As the volume of chip smuggling increases, the current approach of blunt export restrictions will either have to spread uncritically to all such suspected high-risk countries, or spread in a targeted way, supported by location verification evidence and surgical end use controls. For more on the high-risk countries, see Erich Grunewald and Michael Aird, AI Chip Smuggling into China: Potential Paths, Quantities, and Countermeasures (Institute for AI Policy and Strategy, October 4, 2023), https://www.iaps.ai/research/ai-chip-smuggling-into-china.
  28. As an example of existing regulation that could be enforced with hardware-enabled mechanisms (HEMs) via metering and operating licenses, see Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. For discussion on using HEMs (or on-chip mechanisms) as adaptive platforms for governance, see Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing; and for discussion on possible future international agreements verified and enforced through compute governance, see Lennart Heim et al., Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation (Centre for the Governance of AI, March 13, 2024), https://www.governance.ai/research-paper/governing-through-the-cloud.
  29. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  30. Asher Brass and Onni Aarne, Location Verification for AI Chips (Institute for AI Policy and Strategy, May 6, 2024), https://www.iaps.ai/research/location-verification-for-ai-chips.
  31. Bandwidth bottlenecking is also referred to as “fixed set.” See Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
  32. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing; Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
  33. Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090; James Petrie, “Near-Term Enforcement of AI Chip Export Controls Using a Firmware-Based Design for Offline Licensing,” arXiv (revised May 28, 2024), https://arxiv.org/abs/2404.18308.
  34. Apple Platform Security.
  35. Ropek, “The Short Life and Humiliating Death of the Clipper Chip.”
  36. “Intel on Demand,” Intel, https://www.intel.com/content/www/us/en/products/docs/ondemand/overview.html; “Capacity on Demand,” IBM, updated February 8, 2022, https://www.ibm.com/docs/en/power9?topic=environment-capacity-demand.
  37. “Remote Attestation of Disaggregated Machines,” Google Cloud, December 2022, http://cloud.google.com/docs/security/remote-attestation; Cunningham, “Riot Games’ Anti-Cheat Software Will Require TPM, Secure Boot on Windows 11.”
  38. For an overview of different operating contexts and defensive measures, see Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  39. “Reflections on Our Responsible Scaling Policy,” Anthropic, May 19, 2024, https://www.anthropic.com/news/reflections-on-our-responsible-scaling-policy; “Reimagining Secure Infrastructure for Advanced AI,” OpenAI, May 3, 2024, https://openai.com/index/reimagining-secure-infrastructure-for-advanced-ai.
  40. A team of RAND researchers conducted interviews with 32 experts, including C-suite level staff at frontier AI companies, and noted that a point of strong consensus was the need for more secure confidential computing (CC) capabilities on artificial intelligence (AI) chips to secure model weights. Sella Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models (RAND Corporation, May 30, 2024), https://www.rand.org/pubs/research_reports/RRA2849-1.html.
  41. For example, government backdoors that make the chips less competitive, or commercially desirable HEMs that are not tamper resistant and thus not secure in the hands of adversaries.
  42. The following recommendations are based on prior research by one of the authors: Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  43. “National Semiconductor Technology Center,” National Institute of Standards and Technology, https://www.nist.gov/chips/research-development-programs/national-semiconductor-technology-center; “The Microelectronics Commons,” Office of the Under Secretary of Defense for Research and Engineering, https://www.cto.mil/ct/microelectronics/commons.
  44. “Next-Generation Microelectronics Manufacturing Opens Phases 1 and 2,” Defense Advanced Research Projects Agency, November 17, 2023, https://www.darpa.mil/news-events/2023-11-17; Lok Yan, “System Security Integration Through Hardware and Firmware (SSITH),” Defense Advanced Research Projects Agency, https://www.darpa.mil/program/ssith.
  45. A 2023 CHIPS for America strategy outlines a whole-of-government vision: “The NSTC [National Science and Technology Council] will be able to support technologies emerging from the Commons and will collaborate closely with DOD to ensure program coordination and sharing of resources as part of the broader whole-of-government approach in alignment with the national strategy.” For more on the NSTC’s approach and collaboration with the Commons, see Chips for America: A Vision and Strategy for the National Semiconductor Technology Center (CHIPS Research and Development Office, April 25, 2023), https://www.nist.gov/system/files/documents/2023/04/27/A%20Vision%20and%20Strategy%20for%20the%20NSTC.pdf. For more on the Manufacturing USA ecosystem, see “Manufacturing USA Semiconductor Institutes,” 87 Fed. Reg. 62080 (October 13, 2022), https://www.federalregister.gov/d/2022-22221.
  46. “Test Vehicle Innovation Pipeline (TVIP),” Natcast, https://natcast.org/research-and-development/tvip; “Artificial Intelligence Driven Radio Frequency Integrated Circuit Design Enablement (AIDRFIC) Program,” Natcast, https://natcast.org/research-and-development/aidrfic.
  47. From the CHIPS Research and Development Office: “The Commons will address the need for processes, materials, devices, and architectures to be developed and quickly ported and re-characterized as they transition from research to small-volume prototyping in labs and finally to fabrication prototypes that can demonstrate the volume and characteristics required to ensure reduced risk for manufacturing.” CHIPS for America: A Vision and Strategy for the National Semiconductor Technology Center.
  48. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  49. “National Advanced Packaging and Manufacturing Program,” National Institute of Standards and Technology, https://www.nist.gov/chips/research-development-programs/national-advanced-packaging-manufacturing-program.
  50. Alternatively, the government could make advance market commitments (AMCs)—commitments to buy a certain quantity of HEM-equipped chips for a set price, or to subsidize (by a set amount) the sale of HEM-equipped chips that fulfill certain security requirements. Traditionally, AMCs have been used to incentivize and derisk private vaccine development, most famously some COVID-19 vaccines. “Creating Advanced Market Commitments and Prizes for Pandemic Preparedness,” Federation of American Scientists, January 19, 2022, http://fas.org/publication/creating-advanced-market-commitments-and-prizes-for-pandemic-preparedness.
  51. “Implementation of Additional Export Controls: Certain Advanced Computing Items; Supercomputer and Semiconductor End Use; Updates and Corrections.” Some export control expansions being considered as of this writing: Invoking the Foreign Direct Product Rule (FDPR) on more technologies, a sweeping regulation to prevent companies abroad from selling their products using American components, tools, or software to Chinese chipmaking companies: Mackenzie Hawkins et al., “US Floats Tougher Trade Rules to Rein in China Chip Industry,” Bloomberg, updated July 17, 2024, https://www.bloomberg.com/news/articles/2024-07-17/us-considers-tougher-trade-rules-against-companies-in-chip-crackdown-on-china. Adding about 120 Chinese entities to a restricted trade list, including chip manufacturers, toolmakers, and electronic design automation (EDA) software providers: Karen Freifeld, “Exclusive: New US Rule on Foreign Chip Equipment Exports to China to Exempt Some Allies,” Reuters, July 31, 2024, https://www.reuters.com/technology/new-us-rule-foreign-chip-equipment-exports-china-exempt-some-allies-sources-say-2024-07-31. Limiting China’s access to U.S. cloud computing services through the bipartisan Remote Access Security Act: Congressman Mike Lawler, “Representatives Mike Lawler, Jeff Jackson, Rich McCormick, and Jasmine Crockett Introduce Legislation to Prevent CCP from Acquiring American Tech.” Restricting the export of high-bandwidth memory chips and gate-all-around (GAA) technology and tools. The BIS’s September 2024 interim final rule already created worldwide export controls for gate-all-around field-effect transistor (GAAFET) technology and some semiconductor manufacturing equipment (SME), granting license exceptions only to countries with harmonized export controls on the same technologies. Mackenzie Hawkins and Ian King, “US Weighs More Limits on China’s Access to Chips Needed for AI,” Bloomberg, July 11, 2024, https://www.bloomberg.com/news/articles/2024-06-11/us-weighs-more-limits-on-china-s-access-to-cutting-edge-chips-needed-for-ai; “Commerce Control List Additions and Revisions; Implementation of Controls on Advanced Technologies Consistent With Controls Implemented by International Partners.”
  52. In July 2024, the Senate Appropriations Committee approved a bill that included a section on the “Feasibility of On-Chip Mechanisms for Export Control,” directing the Department of Commerce to “report to the Committee regarding the feasibility of future steps” for on-chip mechanisms for export control. Departments of Commerce and Justice, Science, and Related Agencies Appropriations Bill, 2025, S. Rept. 118-198, 118th Cong. (September 4, 2024), https://www.congress.gov/congressional-report/118th-congress/senate-report/198/1.
  53. “Security Requirements for Cryptographic Modules,” National Institute of Standards and Technology, March 22, 2019, https://csrc.nist.gov/pubs/fips/140-3/final.
  54. “The OCP Security Appraisal Framework and Enablement (S.A.F.E.) Program,” Open Compute Project, https://www.opencompute.org/projects/ocp-safe-program; “Caliptra,” archived in chipsalliance, GitHub, November 20, 2024, https://github.com/chipsalliance/Caliptra.
  55. This is in line with the U.S. government’s objectives for investment and participation laid out in the United States Government National Standards Strategy for Critical and Emerging Technology, the White House, May 2023, https://www.whitehouse.gov/wp-content/uploads/2023/05/US-Gov-National-Standards-Strategy-2023.pdf. In its official strategy for the NSTC, the U.S. government clarified that, “In coordination with NIST [National Institute of Standards and Technology], the NSTC can facilitate the implementation of standards for security features that contribute to the development of, and access to, silicon-proven secure IP. Maximizing the level of protection provided while minimally compromising system performance is a challenge for the whole industry, and one that the NSTC will be well-positioned to address in the pre-competitive space.” CHIPS for America: A Vision and Strategy for the National Semiconductor Technology Center, 13–14.
  56. It may still be possible to implement HEMs without many or all of these hardware security features, although they would not be as secure.
  57. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  58. NVIDIA H100 NVL GPU Product Brief (NVIDIA, March 14, 2024), https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/h100/PB-11773-001_v01.pdf.
  59. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  60. Gobikrishna Dhanuskodi et al., “Creating the First Confidential GPUs,” Queue 21, no. 4 (September 7, 2023), https://queue.acm.org/detail.cfm?id=3623391.
  61. It may soon be possible to enable multi–graphics processing unit (multi-GPU) confidential computing for some NVIDIA GPUs, but as of September 2023, an NVIDIA forum moderator stated that it was not possible: rnertney [moderator], “Currently in the Early Access, we do not provide multi-GPU CC support. We will provide the appropriate code when we release the version with multi-GPU support,” NVIDIA Forums, posted September 20, 2023, https://forums.developer.nvidia.com/t/use-cc-in-multi-gpu-system-with-nvswitch/267010.
  62. Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090; Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  63. “NVIDIA Confidential Computing,” NVIDIA, https://www.nvidia.com/en-us/data-center/solutions/confidential-computing; Anand Pashupathy, “From Confidential Computing to Confidential AI,” Intel, March 25, 2024, https://community.intel.com/t5/Blogs/Products-and-Solutions/Security/From-Confidential-Computing-to-Confidential-AI/post/1583214; “AMD Secure Encrypted Virtualization (SEV),” AMD, https://www.amd.com/en/developer/sev.html; “Confidential VM Overview,” Google Cloud, https://cloud.google.com/confidential-computing/confidential-vm/docs/confidential-vm-overview.
  64. NVIDIA has stated that for the Hopper GPUs, “sophisticated physical attacks” are “out-of-scope threat vectors”: Rob Nertney, Confidential Compute on NVIDIA Hopper H100 (NVIDIA, July 25, 2023), https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/HCC-Whitepaper-v1.0.pdf. Researchers at RAND identified insider threats as one of the key risks to the protection of model weights: Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models. OpenAI also discussed insider threats in the aforementioned blog post: “Reimagining Secure Infrastructure for Advanced AI.” See also Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
  65. “Side-Channel Attack,” NIST Computer Security Resource Center, National Institute of Standards and Technology, https://csrc.nist.gov/glossary/term/side_channel_attack.
  66. Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models.
  67. Brass and Aarne, “Location Verification for AI Chips.”
  68. For example, a server in Australia could send a ping to exported AI chips (e.g., chips sold to Indonesia, which lies between Australia and China), and the chips would send a cryptographically signed ping back to the server in Australia. Based on the time the round trip takes (the delay), the server could calculate the maximum distance the chips could be from Australia, bounded by the hard physical limit of the speed of light (or the slower propagation speed of network packets in fiber-optic cables). If this maximum distance is less than the distance between Australia and China, the chips cannot be in China. The server could initiate the pings and measure the round-trip time (RTT), in which case the chips would only need to be able to cryptographically sign their response to the server. Alternatively, the chips themselves could initiate the ping to the server and measure the RTT. This could allow for automatic enforcement of export policy by automatically throttling chip performance if a chip cannot prove that it is not in restricted regions, though it would require a much greater level of on-chip hardware security to prevent tampering.
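The distance bound described in this note follows directly from the round-trip time. The sketch below is purely illustrative; the constants, function name, and example values are assumptions, not part of any deployed verification system:

```python
# Illustrative sketch (hypothetical values): bounding a chip's possible
# distance from a verification server using measured round-trip time
# and the signal propagation speed in fiber-optic cable (~2/3 of c).

SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3           # typical fraction of c in optical fiber

def max_distance_km(rtt_seconds: float) -> float:
    """Upper bound on the chip's one-way distance implied by an RTT.

    The signal travels out and back, so one-way travel time is at most
    rtt/2; the bound holds even if all non-propagation latency is zero.
    """
    one_way_seconds = rtt_seconds / 2
    return one_way_seconds * SPEED_OF_LIGHT_KM_S * FIBER_FACTOR

# Example: a 30 ms RTT bounds the chip to within roughly 3,000 km of
# the server, ruling out any location farther away than that.
bound = max_distance_km(0.030)  # ~2,998 km
```

Because the bound depends only on physics, a short enough RTT is strong evidence the chip is not in a restricted region, regardless of what the chip claims about its own location.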
  69. “NVIDIA Attestation SDK,” NVIDIA, archived in nvtrust, GitHub, February 24, 2024, https://github.com/NVIDIA/nvtrust/blob/main/guest_tools/attestation_sdk/README.md.
  70. Brass and Aarne, “Location Verification for AI Chips.”
  71. Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090. This could be achieved in different ways: (1) Every device could have an updatable list of approved peers, and networking between devices could start with a handshake exchanging IDs. If the received ID is not on the approved list, the connection is throttled or rejected; or (2) The devices could enumerate the total device count on the high-speed (e.g., NVLink) network and throttle, shut down, or limit training-relevant features if the count is higher than a predefined number. We expect there could be more effective and commercially desirable ways to both limit large-scale AI training and permit other workloads that we have not included in this paper.
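The two approaches in this note, an allowlist handshake and a device-count cap, can be combined in a simple decision rule. The sketch below is hypothetical; the device IDs, cluster-size limit, and function name are illustrative assumptions:

```python
# Hypothetical sketch of the peer-limiting approaches above: each
# device holds an updatable allowlist of approved peer IDs (approach 1)
# and a cap on the total high-speed-network device count (approach 2).

APPROVED_PEERS = {"gpu-0041", "gpu-0042", "gpu-0043"}  # updatable allowlist
MAX_CLUSTER_SIZE = 8  # cap on devices sharing the high-speed fabric

def handshake(peer_id: str, current_peer_count: int) -> str:
    """Decide whether to accept, throttle, or reject a peer link."""
    if peer_id not in APPROVED_PEERS:
        return "reject"    # unknown device: refuse the high-speed link
    if current_peer_count >= MAX_CLUSTER_SIZE:
        return "throttle"  # cluster exceeds the permitted scale
    return "accept"
```

In a real implementation the peer IDs would be exchanged and verified cryptographically during the handshake, and the enforcement action (throttling versus shutdown) would be a policy choice rather than a hardcoded string.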
  72. Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090; Petrie, “Near-Term Enforcement of AI Chip Export Controls Using a Firmware-Based Design for Offline Licensing.”
  73. Aarne, Fist, and Withers, Secure, Governable Chips: Using On-Chip Mechanisms to Manage National Security Risks from AI & Advanced Computing.
  74. Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
  75. An export license is currently required to export AI chips to dozens of countries suspected of serving as transshipment points for diverting AI chips to China, though the license bears a “presumption of approval.” Uncontrolled smuggling could prompt the U.S. government to require export licenses with a presumption of denial for these same countries, just as it does for China.
  76. Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; Girish Sastry et al., “Computing Power and the Governance of Artificial Intelligence,” arXiv (February 13, 2024), https://arxiv.org/abs/2402.08797; and Heim et al., Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation.
  77. NVIDIA GPUs, as an illustrative example, already contain floating point operations per second (FLOP) counters and utilization rate recorders for tensor cores, random-access memory (RAM), and NVLink interconnect. Researchers therefore do not expect metering for HEMs like offline licensing to add more overhead than is already tolerated: Kulp et al., Hardware-Enabled Governance Mechanisms: Developing Technical Solutions to Exempt Items Otherwise Classified Under Export Control Classification Numbers 3A090 and 4A090.
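As a rough illustration of how such existing counters could feed an offline-licensing meter, the sketch below accumulates FLOP readings against a licensed compute budget. The class, method names, and numbers are hypothetical assumptions, not an actual vendor API:

```python
# Illustrative sketch (hypothetical API): an offline-licensing meter
# that accumulates readings from on-chip FLOP counters and signals
# when the licensed compute budget is exhausted.

class ComputeMeter:
    def __init__(self, licensed_flop_budget: float):
        self.licensed_flop_budget = licensed_flop_budget
        self.flops_used = 0.0

    def record(self, flop_counter_delta: float) -> None:
        # In practice this would read the kind of FLOP counters
        # already present on data center GPUs for tensor cores.
        self.flops_used += flop_counter_delta

    def allowed(self) -> bool:
        # The device would throttle or halt once the budget is spent,
        # pending renewal of the offline license.
        return self.flops_used < self.licensed_flop_budget
```

Because the counters already run during normal operation, the only added work is the comparison against the budget, which is why the metering itself is not expected to add meaningful overhead.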
  78. Ropek, “The Short Life and Humiliating Death of the Clipper Chip.”

Authors

  • Tim Fist

    Senior Adjunct Fellow, Technology and National Security Program

    Tim Fist is a Senior Adjunct Fellow with the Technology and National Security Program at CNAS.

  • Tao Burga

    Junior Research Scientist, New York University

    Tao Burga is a junior research scientist at New York University and a nonresident fellow at the Institute for Progress.

  • Vivek Chilukuri

    Senior Fellow and Director, Technology and National Security Program

    Vivek Chilukuri is the senior fellow and program director of the Technology and National Security Program at the Center for a New American Security (CNAS).
