For years, businesses and governments turned to confidential compute as a cornerstone of their cloud strategies. The idea was simple: process sensitive data in a secure environment where no one—not even the cloud provider—could access it. Alongside this came assurances of data privacy, ensuring users retained control over how their information was collected, shared, and used. But with the introduction of the US AI Diffusion Framework, both confidentiality and privacy in cloud computing are under siege.
These sweeping export controls, designed to limit adversarial nations’ access to AI technologies, require cloud providers to monitor workloads, scrutinise data usage, and impose stringent reporting requirements. The result? A new reality where confidential compute and data privacy are no longer guaranteed.
Confidentiality versus privacy: A subtle but critical difference
On the one hand, confidentiality refers to the secure processing of data so that no unauthorised party, including the service provider, can access it. For instance, a business might run proprietary algorithms or AI models in a cloud environment without the provider being able to see the underlying data or processes.
On the other hand, privacy refers to the protection of personal data from being shared, used, or accessed without the individual’s consent. For example, a company might ensure that customer information, such as medical records or financial data, is not misused or shared with third parties.
While distinct, these concepts are intertwined in cloud computing. Confidentiality builds the foundation for privacy by ensuring that data and processes remain shielded from outside access. The AI Diffusion Framework erodes both by requiring intrusive monitoring and reporting.
How the export controls impact confidentiality
The AI Diffusion Framework mandates that cloud providers monitor GPU usage, track customer workloads, and log activity to ensure compliance with export restrictions. These requirements create an environment where confidential compute is no longer possible:
- Mandatory workload monitoring: Providers must track GPU clusters exceeding 10,000 GPUs to ensure they are not being used to train restricted frontier AI models. This breaks confidentiality by requiring providers to observe and log client activity, even when that activity is sensitive or proprietary.
- Model weight controls: The export and storage of model weights (the learned parameters of AI models) are now tightly regulated. Providers must implement strict controls, including rate-limiting access and securing weights on dedicated systems. This level of oversight eliminates the possibility of running truly confidential AI workloads.
- Know-your-customer (KYC) policies: Providers must enforce rigorous KYC measures, identifying and vetting customers to prevent restricted entities from accessing advanced compute resources. This further undermines confidentiality by introducing new layers of surveillance.
How the export controls impact privacy
The Framework’s reporting and monitoring requirements also undermine data privacy, as users no longer have full control over how their information is accessed and used.
- Semi-annual reporting: Cloud providers must submit detailed reports to the US Bureau of Industry and Security (BIS), including customer identities, workload details, and GPU usage. This shifts control of personal data from the user to the provider and, ultimately, to government oversight.
- Red flag guidance: Providers are required to flag suspicious transactions, especially those linked to Tier 3 countries like China. This increases the likelihood of data being scrutinised or shared, potentially exposing user information.
- Global reach of US controls: The Framework applies not only to entities within the US but also to any company using US-origin hardware or software, extending the reach of these privacy-infringing measures worldwide.
The bigger picture: Eroding trust
The erosion of confidentiality and privacy has broader implications for cloud computing.
Loss of trust in cloud providers
Customers who previously relied on US-based hyperscalers like Microsoft, Amazon, or Google for secure and private computing may seek alternatives in regions with less intrusive policies.
Competitive disadvantage
Non-US providers in jurisdictions with stronger privacy protections, such as the EU or the Middle East, are positioning themselves as more trustworthy alternatives.
Chilling effect on innovation
Startups and researchers reliant on confidential compute may hesitate to develop cutting-edge AI solutions in environments where their proprietary data could be monitored.
A balancing act between security, privacy, and confidentiality
The US government argues that these measures are essential for national security, particularly to prevent adversaries like China from developing frontier AI models. But this strategy comes at a cost. By dismantling the pillars of confidentiality and privacy, the AI Diffusion Framework risks undermining trust in US technology providers and creating a fragmented global cloud ecosystem.
For policymakers, the challenge lies in balancing security concerns with the need to preserve the principles of confidentiality and privacy that underpin the digital economy. For businesses, the question is whether they can adapt to this new reality—or if they must look elsewhere for solutions that protect their data and trust.
How ITLawCo can help
The challenges posed by the AI Diffusion Framework require more than a compliance checklist—they demand strategic thinking, proactive adaptation, and a nuanced understanding of technology, law, and policy. At ITLawCo, we specialise in helping businesses, policymakers, and innovators navigate these complex dynamics.
Here’s how we can support you:
- Regulatory compliance: We ensure your organisation adheres to export controls, licensing requirements, and reporting obligations while protecting your operations and intellectual property.
- Risk assessment and mitigation: Our experts help identify vulnerabilities in your data privacy, confidentiality, and supply chain processes, developing tailored strategies to address them.
- Strategic advisory for AI and cloud: We provide actionable guidance for businesses adapting to the new regulatory environment, including those in Tier 2 and Tier 3 countries, to maintain competitiveness.
- Privacy and confidentiality solutions: We assist in implementing safeguards to minimise the impact of monitoring and reporting requirements on your sensitive data and proprietary workloads.
- Dispute resolution and advocacy: If your organisation encounters legal challenges under these new rules, our team is equipped to represent your interests and advocate for fair outcomes.
- Policy insights and training: We offer workshops, training, and advisory services to help stakeholders understand the broader implications of the AI Diffusion Framework and prepare for its impact.
At ITLawCo, we are uniquely positioned at the intersection of law, technology, and policy. Whether you’re a multinational navigating global compliance or a startup grappling with access to resources, we can help you turn these challenges into opportunities. Contact us today to learn how we can support your organisation in adapting to this evolving regulatory landscape.