Opinion · May 15, 2026 · 10 min read

Regulations are eating cloud AI

Eight jurisdictions, billions in fines, and the moment 'send it to a cloud LLM' became the riskiest default in regulated work. A compliance map for the people drawing the AI architecture.

By Atul
Three numbers the compliance team already knows
€1.2B
Meta's GDPR fine for transferring EU user data to US servers
Irish DPC, May 2023
€15M
Italy's first generative-AI GDPR fine, against OpenAI
Garante, December 2024
7%
of global turnover — the EU AI Act's top fine tier
EU AI Act, Article 99

Two years ago, the assumed shape of a serious AI deployment was simple: pick an API, send the prompt, get the answer back. The legal review was a checkbox, the data-protection officer signed off, and nobody at the executive table looked twice. That assumption is breaking. Quietly, on four continents, the laws around what you’re allowed to do with a cloud LLM have moved — and the people who say “just send it to GPT” are increasingly the ones who didn’t read the latest memo.

This post is the map. Not a legal opinion, not a lobbying piece — just the receipts, jurisdiction by jurisdiction, of what is now restricted, who has actually been fined, and why the architecture choices that minimize cross-border data flow and third-party processing are quietly becoming the default in regulated work. If you’re responsible for shipping AI in a bank, a hospital, a school district, a law firm, or any company that operates inside the EU, this is the landscape your compliance team has been quietly redrawing while engineering wasn’t looking.

A row of tall classical Ionic columns on a neoclassical government building facade, warm late-afternoon light raking across the marble.
The architecture team has new neighbors. Photo by Colin Lloyd on Unsplash.

The new default is risky

For a decade, “move it to the cloud” was the safe default — cheaper, more elastic, professionally operated. AI inherited that habit. But the legal calculus for AI workloads is not the calculus for “an S3 bucket of analytics events.” Three things are now different.

First, the prompt is the data. Every email you paste, every contract you summarize, every patient note you ask the model to rewrite is personal data leaving the building. Under GDPR, the UK GDPR, India’s DPDP Act, Brazil’s LGPD, and California’s CCPA/CPRA, that outbound trip is a regulated event with paperwork attached. Second, most frontier models live in the US, which is the one jurisdiction whose surveillance laws the EU has formally declared incompatible with European privacy standards — that’s the entire reason Meta’s 2023 fine reached a record €1.2 billion. Third, the new wave of AI-specific laws — the EU AI Act first among them — layers obligations on top of privacy law that have nothing to do with where the data lives and everything to do with what the model is allowed to decide.

The result is that the architecture decisions you make at design time now carry legal weight at audit time. “Which API do we call?” used to be a cost question. It’s now a question with fines attached.

The compliance map

Eight jurisdictions worth knowing, with the parts that actually bite a cloud-AI deployment. None of this is exhaustive — whole books are being written on each row — but the table below is the compressed version your engineering lead can look at without falling asleep.

Eight jurisdictions · what bites · what landed

| Jurisdiction | What’s restricted | Enforcement teeth | Status |
| --- | --- | --- | --- |
| EU · GDPR + AI Act | Cross-border transfer, training on personal data, high-risk decisioning, biometrics | €1.2B Meta · €15M OpenAI · EDPB Opinion 28/2024 | Active+ |
| UK · UK GDPR + ICO code | Automated decisioning, children's data, sector regulator overlays | £17.5M / 4% turnover cap; AI Code of Practice 2026 in force | Active |
| India · DPDP Act 2023 + 2025 rules | Cross-border transfer to blacklisted countries, consent for AI training, SDF obligations | Fines up to ₹2.5B (~US$30M); Board-led enforcement | Maturing |
| California · CCPA/CPRA + ADMT rules | Automated significant decisions (jobs, credit, housing, health) | ADMT rules effective 2026-01-01; risk assessments by 2027 | Active |
| Colorado · SB 24-205 / SB 26-189 | Algorithmic discrimination in consequential decisions | Original Act stayed Apr 2026; replacement bill in flight | Stayed |
| NYC · Local Law 144 | Automated employment decision tools without a bias audit | $500–$1,500 per violation per day; DCWP shifting proactive | Active |
| Brazil · LGPD | Training on user data without basis, children's data, biometrics | ANPD: ~€20M in fines 2023–25; AI/biometrics on 2025–26 priority list | Active+ |
| China · Interim Measures (2023) | Public-facing generative AI without filing; foundation-model training-data lawfulness | CAC algorithm filing + security assessment | Active |

Active+ — fines already landed · Active — framework in force · Maturing — rules issued, enforcement ramping · Stayed — on hold or being rewritten

Two patterns to notice. The EU is the most aggressive on enforcement, but it’s not alone — Brazil moved from “moderately active” to €20 million in fines between 2023 and 2025, and California’s CPPA finalized regulations on automated decision-making technology that took effect 1 January 2026. The second pattern: most rules don’t care if you call it “AI” or not. They care whether personal data left the country, whether a meaningful decision about a human was automated, and whether a child or biometric was involved.

What regulators actually did

It’s easy to wave away a regulation as “paper.” The receipts in the last twenty-four months suggest otherwise. A short timeline of enforcement actions that landed on AI workloads specifically:

Receipts · 2023–2026

  1. May 2023 · Irish DPC · Meta: Record €1.2B GDPR fine for transferring EU Facebook user data to the US after Schrems II.
  2. Jul 2024 · ANPD · Meta (Brazil): Preventive measure halting Meta from training generative AI on Brazilian users' Facebook and Instagram data.
  3. Dec 2024 · Garante · OpenAI: €15M fine — first generative-AI GDPR sanction. Six-month public awareness campaign ordered alongside.
  4. Dec 2024 · EDPB · Opinion 28/2024: Pan-EU guidance on the legal basis for AI training, when models count as 'anonymous', and the consequences of deploying an unlawfully trained model.
  5. Feb 2025 · EU · AI Act Phase 1: Prohibited-practice rules and AI literacy obligations enter into force across the Union.
  6. Aug 2025 · EU · AI Act GPAI phase: Governance rules and general-purpose-AI obligations apply, ahead of the high-risk rules in August 2026.
  7. Jan 2026 · California · CPPA ADMT: Final regulations on automated decision-making, risk assessments and cybersecurity audits take effect.
  8. Apr 2026 · Colorado · SB 24-205 stayed: Federal magistrate stays enforcement after xAI sues and the DOJ moves to intervene; a replacement bill follows in May.

Two of these — the OpenAI fine and the original Colorado AI Act — have since been partially walked back. A Rome court cancelled the €15M fine on procedural grounds in March 2026, and Colorado’s original AI Act was stayed by a federal judge in April 2026 after xAI sued and the DOJ joined the challenge. The regulations themselves aren’t softening — the EU AI Act’s high-risk provisions become enforceable on 2 August 2026, the EDPB’s Opinion 28/2024 on training data is now the playbook every European DPA is working from, and Brazil’s ANPD has published an explicit 2025–2026 AI/biometrics priority list — but the courtroom has become an active part of the enforcement loop. That’s a normal part of regulation maturing, not the rules going away.

A long hall of a classical library — dark wooden bookcases packed with leather-bound legal volumes, a row of marble busts standing between them, a tall ladder on the right.
The shelf the compliance team works from is taller than it was in 2023. Photo by Giammarco Boscaro on Unsplash.

The local / BYOK escape hatch

Here is the part that is genuinely useful for an engineer or a CTO, rather than just sobering. Most of what these laws restrict shrinks dramatically once the inference doesn’t leave the building.

The mechanical reason is that “cross-border transfer,” “third-party processor,” and “training on customer data” are the load-bearing concepts in most of these regimes. A model running on a laptop or on your own VPC produces no cross-border transfer, has no third-party processor, and (because it’s an open-weight checkpoint) was trained on data you didn’t supply. Bring-your-own-key against a cloud frontier model is the weaker form of the same trick — you keep the contractual relationship explicit, the data flows are auditable, and the vendor isn’t silently training on your prompts.
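The distinction can be sketched mechanically. The following Python is a toy illustration of the paragraph above, not a compliance tool: the mode names and concept labels are assumptions made for the example, and which concepts actually apply in a given deployment is a question for counsel.

```python
# Sketch (not legal advice): which "load-bearing" legal concepts each
# inference mode brings into scope, mirroring the argument above.
# Mode names and concept labels are illustrative assumptions.

def triggered_concepts(mode: str) -> set[str]:
    """Return the regulated concepts a given inference mode puts in scope.

    mode: "cloud_api" - prompts sent to a third-party frontier API
          "byok"      - cloud model, your keys, DPA signed, retention off
          "local"     - open-weight model inside your own boundary
    """
    if mode == "cloud_api":
        return {"cross_border_transfer", "third_party_processor",
                "vendor_training_on_prompts"}
    if mode == "byok":
        # Transfer and processor remain, but contractually scoped and
        # auditable; training on your prompts is disabled by agreement.
        return {"cross_border_transfer", "third_party_processor"}
    if mode == "local":
        # The bytes never leave the controller: no transfer, no processor,
        # and the checkpoint was trained before you ever saw it.
        return set()
    raise ValueError(f"unknown mode: {mode}")
```

The empty set for `"local"` is the whole argument of this section in one line — the remaining obligations (AI Act, DPIAs, sector rules) attach to the system, not to the data flow, and are covered below.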

Where the runtime location actually changes the legal picture

| Risk axis | Cloud LLM API | Local / BYOK |
| --- | --- | --- |
| Cross-border transfer | Every prompt is potentially a transfer. Schrems II / DPF in scope. | No transfer occurs. The bytes never leave the controller. |
| Third-party processor | Vendor is a processor (or worse, a controller). DPA required. | No processor. You are the controller, full stop. |
| Training on customer data | Depends on the ToS, the tier, the opt-out you remembered to set. | Open-weight checkpoint. Trained before you ever saw it. |
| Retention / subject access | Vendor logs may persist for 30 days or longer; subject access goes through them. | Logs live where you put them. Subject access is your own DB. |
| Surveillance-law exposure | US FISA 702 / executive-order surveillance — central to GDPR transfer cases. | Out of scope. The model can't see the local network. |
| EU AI Act high-risk obligations | Apply to the system regardless of location. | Apply to the system regardless of location. (No discount.) |

Read that column on the right carefully. None of these are marketing claims; they’re mechanical consequences of where the bytes go. The EU AI Act still applies to the system you ship — if your local model is used to decide who gets a mortgage, it’s a high-risk system regardless of where it runs — but the cross-border and training-data risk surface, which is most of the privacy bill, collapses.

We’ve made the dollars-and-ownership version of this argument in BYOK vs SaaS AI; the hardware-readiness piece is in Personal compute is back. The point this post is making is narrower: the legal argument has caught up to the architectural one, and the compliance team is the constituency that hadn’t weighed in until recently.

What local doesn’t fix

Local inference and BYOK are escape hatches, not absolutions. Four things they do not solve:

  • The EU AI Act applies to the system, not the runtime. If you use AI to make a meaningful decision about a person — credit, employment, insurance, education, public services — the Act’s high-risk obligations follow regardless of whether the model runs in Frankfurt or on a laptop in Lisbon. Risk management, data governance, technical documentation, human oversight, transparency, post-market monitoring — all still required. Article 99 fines for high-risk breaches reach €15M or 3% of global turnover.
  • Data protection still applies. Running a model on your own server doesn’t excuse you from DPIAs, data minimization, retention limits, subject access rights, or the consent/legitimate-interest analysis the EDPB Opinion 28/2024 spells out. It just means you’re the only controller in the picture, with nobody else to blame.
  • Output harms travel. Some rules — defamation, consumer protection, NYC Local Law 144 on hiring tools, the FTC’s unfairness authority in the US — care about the output of the system, not where the computation ran. If your local model discriminates in a hiring screen, the screen is still illegal.
  • Sector regulators stack on top. HIPAA, PCI-DSS, MiFID, the SEC’s 2024 predictive-analytics rule for advisers, the EU’s DORA for financial services, the MHRA’s AI-as-a-medical-device guidance — none of these care about your model architecture. They have their own paperwork and their own fines, and they’re unaffected by where the GPU lives.

The right framing isn’t “local is compliant.” It’s “local removes an entire category of risk — cross-border transfer, third-party processing, accidental training on customer data — the category currently responsible for most of the fines landing on cloud AI deployments.” The remaining risks are real, but they’re the kind of work compliance teams already know how to do.

What the architecture team draws

The compliance team didn’t become the architecture team because anyone wanted them to. They became the architecture team because the diagrams stopped working: “arrow from our app to an API in Virginia” quietly became a regulated transfer the moment the API on the other end started training on what it received and answering questions about EU residents’ private lives.

The diagram the architecture team is drawing in 2026 looks different. Sensitive inference lives behind the company’s own boundary — open-weight models on a workstation for individual contributors, or on a single GPU box in a tenant’s own VPC for shared workloads. Frontier cloud models stay in the picture, but reached via BYOK with a written DPA, retention disabled, and a clear policy on which data categories are never allowed to leave. The hard cases — biometric identification, automated significant decisions, training data with EU residents’ personal data — get the formal risk assessments the regulations now demand, because they require it whether you run local or not.
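That diagram can be reduced to one routing decision. The sketch below is a hedged illustration of the policy just described, assuming a category-tagging step upstream; the endpoint URLs, category names, and the sensitivity rule are all invented for the example, not prescriptions.

```python
# Illustrative routing sketch for the 2026-style diagram described above.
# Endpoints, category names, and the sensitivity rule are assumptions
# for the example, not a compliance implementation.

# Categories the policy says may never leave the company boundary.
SENSITIVE_CATEGORIES = {"health", "biometric", "financial", "minor", "hr_decision"}

# Open-weight model on a GPU box in the tenant's own VPC.
LOCAL_ENDPOINT = "http://10.0.0.5:8080/v1/chat/completions"
# Frontier cloud model, reached via BYOK: written DPA, retention disabled.
CLOUD_BYOK_ENDPOINT = "https://api.example-frontier.com/v1/chat/completions"

def route_request(data_categories: set[str]) -> str:
    """Pick an inference endpoint: sensitive categories never leave the boundary."""
    if data_categories & SENSITIVE_CATEGORIES:
        return LOCAL_ENDPOINT
    return CLOUD_BYOK_ENDPOINT
```

The design choice worth noting: the gate keys on what the data *is*, not on what the feature is called — which is exactly the shape of the regulations in the table above.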

The point of this post isn’t that the cloud is dead, or that every workload should run on a laptop. It’s that the question “where does the model run, and what does it see” has moved from an engineering detail to a first-class architectural input, sitting next to latency, cost, and reliability on the design doc. The teams treating it that way are the ones whose 2027 will not contain a surprise letter from a data protection authority.

The compliance team became the architecture team. Their map is the one above. Worth knowing what they’re looking at.
