The AI Cold War Goes Operational
For nearly three years, the Frontier Model Forum has been a place where AI labs go to make safety pledges and appear responsible in front of regulators. On April 6, 2026, it became something else entirely: a coordinated threat-intelligence operation against a named external adversary.
OpenAI, Anthropic, and Google announced they are now sharing attack-pattern data through the Forum to detect and block what security researchers call adversarial distillation: a technique by which Chinese AI companies have allegedly been systematically stealing the capabilities of Western frontier models at industrial scale.
That companies competing this intensely are doing it openly is, by itself, a signal worth paying attention to.
What Is Adversarial Distillation?
The technique is straightforward and, if the allegations are accurate, remarkably effective. Rather than building a frontier model from scratch, which requires years of research investment and hundreds of millions of dollars in compute, you create tens of thousands of fake accounts, flood your competitor’s API with carefully engineered queries, collect the responses, and train your own model on the outputs. The target lab’s years of RLHF, safety training, and capability development become your training dataset. You pay API fees instead of buying GPU clusters.
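To make the mechanics concrete, here is a minimal sketch of that loop in Python. It is illustrative only: query_teacher stands in for any hosted frontier model’s API client, the prompts and output path are placeholders, and the fine-tuning of the “student” model is elided entirely.

```python
# Minimal, illustrative sketch of the distillation mechanic described above.
# query_teacher is a placeholder, not any real lab's API client.
import json

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to the target lab's hosted model."""
    raise NotImplementedError("placeholder, not a real client")

def collect_distillation_set(prompts: list[str], out_path: str) -> None:
    # Each (prompt, response) pair becomes one supervised training example
    # for the student model; the teacher's RLHF and safety tuning is
    # implicitly baked into every response it returns.
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_teacher(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```

The only externally visible signature of this is the query traffic itself, which is why the countermeasures described later in this piece centre on query-pattern and account-behaviour analysis.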
Anthropic says it documented 16 million such exchanges from three Chinese companies — DeepSeek, Moonshot AI, and MiniMax — running through approximately 24,000 fraudulently created accounts. OpenAI has made separate allegations that DeepSeek used “new, obfuscated methods” to distill its models, and submitted a formal memo to the House Select Committee on China making that case. US officials estimate the practice costs American AI labs billions annually.
For context: the catalyst for all of this traces back to early 2025, when DeepSeek released its R1 reasoning model. The release triggered immediate panic across the US AI industry because R1 performed at or near frontier levels at a fraction of the expected cost. Investigations into how that was achieved have been building quietly ever since, and what started as separate internal detection efforts has now consolidated into coordinated cross-lab intelligence sharing.
Why This Matters Beyond the Headlines
The obvious frame here is geopolitical — US vs China, IP theft, technology competition. That frame is accurate but incomplete. For enterprises building on AI infrastructure, there are a few more practical implications.
The frontier is now contested territory. If the API surface of Claude, ChatGPT, and Gemini is actively being weaponised as a training-data pipeline, then the capability your API fees help fund today is also, if the allegations hold, seeding your future competitors’ models. That’s not a reason to stop using these services, but it does add texture to what “investing in AI” means at a national level.
Model capability is increasingly geopolitically entangled. OpenAI, Anthropic, and Google co-founded the Frontier Model Forum with Microsoft in 2023. Its original mandate was safety coordination: responsible scaling policies, red-teaming standards, that kind of thing. The fact that it has now pivoted to active threat-intelligence operations means the governance infrastructure around frontier AI is doing something it was never designed for. Microsoft, as a co-founder and the company that has bet most aggressively on OpenAI’s models through Azure, has a significant stake in how this plays out.
The open/closed divide just got more complex. This week also saw Zhipu AI release GLM-5.1, a 744-billion-parameter open-weight model under the MIT license that reportedly beats Opus 4.6 and GPT-5.4 on real-world software engineering benchmarks. The irony is sharp: if the allegations against other labs in the same ecosystem are accurate, open-weight releases from that ecosystem become a policy problem as much as a technical one. You can’t un-release a model under MIT.
The Practitioner Question
For those of us running hybrid infrastructure and deciding which AI services to build on, the question this raises isn’t whether to use AI APIs; it’s how stable and predictable the services we rely on will remain.
The measures the three labs are now taking to detect and block distillation attempts include tighter query pattern analysis, account behaviour monitoring, and coordinated blocklists. The practical effect is that API behaviour may become more restrictive over time, and novel query patterns — including legitimate agentic workloads — may occasionally trigger false positives. If you’re running automated pipelines against these APIs at scale, it’s worth keeping an eye on how the detection systems evolve.
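If you run automated pipelines against these APIs, the cheap defensive move is instrumentation: track how often your own traffic is being blocked or throttled, so a false positive shows up as an alert rather than as silent data loss. A rough sketch follows, with the window size, status codes, and alert hook all illustrative assumptions rather than anything the labs have published.

```python
# Rolling-window monitor for block/throttle responses from an AI API,
# so that abuse-detection false positives surface as an alert.
# Window size and threshold are illustrative assumptions.
from collections import deque

class ApiHealthMonitor:
    def __init__(self, window: int = 500, block_rate_alert: float = 0.05):
        self.recent = deque(maxlen=window)        # last N HTTP status codes
        self.block_rate_alert = block_rate_alert  # alert threshold (fraction)

    def record(self, status_code: int) -> None:
        self.recent.append(status_code)

    def block_rate(self) -> float:
        # Treat 403 (blocked) and 429 (throttled) as signs the provider's
        # detection or rate-limiting systems are flagging the traffic.
        if not self.recent:
            return 0.0
        flagged = sum(1 for code in self.recent if code in (403, 429))
        return flagged / len(self.recent)

    def should_alert(self) -> bool:
        # Only alert once the window is full, to avoid noise on startup.
        return len(self.recent) == self.recent.maxlen and \
               self.block_rate() >= self.block_rate_alert


# Hypothetical usage inside a pipeline loop (call_model_api and page_oncall
# stand in for your own client and alerting hook):
#
#   monitor = ApiHealthMonitor()
#   for request in workload:
#       response = call_model_api(request)
#       monitor.record(response.status_code)
#       if monitor.should_alert():
#           page_oncall("AI API block/throttle rate above 5%")
```

The specific thresholds matter less than having the trend visible: if a coordinated blocklist misfires on your workload, you want to know the day it happens, not when a downstream report starts looking wrong.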
More broadly: the fact that the three biggest AI labs have moved from competition to coordination on this issue suggests they view the threat as existential enough to override commercial interests. That’s a calibration signal. If they’re treating it that seriously, the scale of what’s been happening is probably larger than the public disclosures suggest.
A Note on the Forum Itself
The Frontier Model Forum’s first two years were characterised by critics as largely performative — a vehicle for appearing cooperative on safety while continuing to race on capability. Whether that characterisation was fair or not, this activation as an operational threat-intelligence body is a material change in function. It’s now doing something concrete, with named adversaries, documented evidence, and active countermeasures.
The question worth watching is whether this coordination extends beyond distillation defence into other areas — and what it means for the regulatory posture of AI labs globally when the three biggest players are explicitly framing their cooperation as a response to a state-affiliated threat.
This is no longer theoretical AI governance. It’s infrastructure security at civilisational scale.
Sources: Bloomberg, April 6, 2026 | RoboRhythms analysis | WhatLLM April 2026 model roundup


