The U.S. Department of Commerce, Bureau of Industry and Security (BIS) has implemented a new rule restricting the export of advanced AI chips. Referred to as the Interim Final Rule on Artificial Intelligence Diffusion, it aims to ensure that these chips, which have significant national security implications, are not acquired by adversaries or used for malicious purposes.

Key points of the rule

Limits on the export of certain high-performance AI chips

  • The rule imposes a global license requirement for the export of advanced integrated circuits (ICs) and related .z items.
  • The rule creates several exceptions and pathways to authorization to facilitate transactions that pose a low risk of diversion or would otherwise advance U.S. national security or foreign policy interests, including technological leadership.
  • Creates a new Foreign Direct Product (FDP) rule for the model weights of closed-weight models trained using more than 10^26 computational operations (see the compute-estimate sketch below).
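
For rough context on where that threshold sits, the sketch below applies the commonly cited approximation that dense transformer training takes on the order of 6 floating-point operations per parameter per training token. The heuristic, the function name estimated_training_ops, and the example model sizes are illustrative assumptions, not part of the rule's methodology.

```python
# Illustrative only: rough check of whether a training run might cross the
# 10^26-operation threshold, using the common ~6 * parameters * tokens
# heuristic for dense transformer training compute. Model sizes below are
# hypothetical examples, not figures from the regulation.

THRESHOLD_OPS = 1e26  # computational-operations threshold cited in the rule


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute: ~6 operations per parameter per token."""
    return 6 * parameters * training_tokens


for label, params, tokens in [
    ("70B params, 15T tokens", 70e9, 15e12),   # ~6.3e24 ops
    ("1T params, 30T tokens", 1e12, 30e12),    # ~1.8e26 ops
]:
    ops = estimated_training_ops(params, tokens)
    side = "above" if ops > THRESHOLD_OPS else "below"
    print(f"{label}: ~{ops:.1e} ops ({side} the 1e26 threshold)")
```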

Licensing system for companies that wish to export these chips

  • A global licensing requirement enables BIS to impose conditions that reduce the overall risk.
  • For particularly low-risk destinations, BIS will provide license exceptions—conditioned on compliance with certain security measures—for validated end users.

New “Validated End-User” (VEU) authorization for trusted entities to streamline the export process

  • The new VEU authorization advances U.S. national security and foreign policy by allowing entities that have agreed to enact concrete, verifiable, and robust security measures to access large clusters of advanced ICs.
  • Exports to entities in destinations other than Macau, Country Group D:5, or those listed in paragraph (a) of supplement no. 5 to part 740 that do not meet the eligibility criteria for National VEU authorization will be subject to uniform default country allocations of advanced ICs (see the allocation-tracking sketch below).
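
As a minimal illustration of how a per-country allocation might be tracked, the sketch below accumulates proposed shipments, measured in Total Processing Performance (TPP) units, against a cap. The CountryAllocation class, the cap, and the shipment figures are hypothetical assumptions for illustration, not values or procedures taken from the regulation.

```python
# Minimal sketch, not from the rule's text: tracking cumulative advanced-IC
# exports against a hypothetical per-country allocation expressed in Total
# Processing Performance (TPP) units. All numbers are placeholders.

from dataclasses import dataclass


@dataclass
class CountryAllocation:
    country: str
    cap_tpp: float            # hypothetical allocation cap, in TPP units
    shipped_tpp: float = 0.0  # cumulative TPP already exported

    def can_ship(self, shipment_tpp: float) -> bool:
        """True if the proposed shipment fits within the remaining allocation."""
        return self.shipped_tpp + shipment_tpp <= self.cap_tpp

    def record(self, shipment_tpp: float) -> None:
        """Record a shipment, refusing any that would exceed the cap."""
        if not self.can_ship(shipment_tpp):
            raise ValueError(
                f"{shipment_tpp:.1e} TPP exceeds {self.country}'s remaining allocation"
            )
        self.shipped_tpp += shipment_tpp


# Hypothetical example: a 1e8-TPP cap and two proposed shipments.
allocation = CountryAllocation(country="Exampleland", cap_tpp=1e8)
allocation.record(6e7)
print(allocation.can_ship(5e7))  # False: 6e7 + 5e7 would exceed the 1e8 cap
```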

Provisions to prevent the unauthorized transfer of model weights, the core components of advanced AI models

  • Model weights are often the most valuable and closely guarded elements of an AI model.
  • To protect U.S. national security and foreign policy interests, the rule imposes a global licensing requirement on the model weights of the most advanced closed-weight AI models.

Rationale behind the rule

National security

The rule is primarily driven by national security concerns, as advanced AI chips can be used to develop sophisticated military technologies and other sensitive applications.

Human rights

The rule also considers human rights aspects, requiring companies seeking VEU authorization to demonstrate a commitment to responsible AI use and human rights protection.

Global AI landscape

The rule aims to shape the global AI landscape responsibly, ensuring that AI technology is used ethically and does not fall into the wrong hands.

Impact on industry

  1. Chip manufacturers: Companies like Nvidia and AMD will need to comply with the new licensing requirements, potentially affecting their revenue streams.
  2. Cloud providers: Major cloud service providers will need to seek VEU authorization to build data centers globally, impacting their expansion plans.
  3. AI research and development: The rule could impact international collaboration in AI research and development due to export restrictions.

The significance of the new rule

The rule reflects the growing importance of AI in national security and global technological competition. It also highlights the need for responsible AI development and deployment, and it could lead to a reorganization of the global AI landscape, with an increased focus on trusted partnerships and ethical considerations.

Impact on geopolitics

The rule is expected to have significant geopolitical consequences, particularly for U.S.-China relations and the global AI landscape, and is likely to have a lasting influence on international collaborations, competition, and the trajectory of AI development worldwide. It may heighten tensions with China, which could view the restrictions as an attempt to contain its technological rise. It could also contribute to a fragmentation of the global AI ecosystem, with different blocs developing their own standards and technologies.

The rule’s emphasis on partnerships with trusted entities and allies could also lead to a strengthening of alliances between the U.S. and like-minded nations in the AI field. This might result in a more defined bloc in the global AI landscape, potentially intensifying competition for partnerships with countries in the developing world.

The focus on responsible AI development and deployment emphasized in the rule could also influence global norms and standards in the AI field. This could lead to greater scrutiny of AI applications and a push for ethical frameworks to guide AI development and use.

Looking ahead

The rule will be reviewed regularly and may be revised, and its effectiveness will depend on the Department of Commerce's ability to enforce the new regulations. It may also influence how other countries approach the export of AI chips and the control of related technology. Overall, the new rule is a significant step toward governing the global AI landscape, underscoring the need to balance technological advancement with national security and ethical considerations.