
Anthropic Accuses DeepSeek of Distillation Attacks on Claude


Date: February 23, 2026
Source: Anthropic Blog

Anthropic published a detailed post describing what it calls a distillation attack at industrial scale, accusing three Chinese AI labs (DeepSeek, Moonshot AI/Kimi, and MiniMax) of systematically extracting Claude's capabilities. According to Anthropic, the labs created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to train and improve their own models.


The post describes Anthropic's detection methodology, the countermeasures it has deployed, and the broader policy implications. It arrives at a moment when DeepSeek is also withholding its latest model from US chipmakers, deepening the rift between the Chinese and Western AI ecosystems. The accusation has generated wide coverage and debate, with some commentators noting that the line between "distillation" and "using a competitor's product for research" is both legally and technically contested. For many in the AI community, the post confirms long-standing suspicions, but the irony is hard to miss: the major AI labs, Anthropic included, have themselves trained their models on vast amounts of copyrighted material from the open web.

Why the Anthropic Distillation Attack Matters for Developers

Here is a threat model most developers have not had to think about before: automated, high-volume extraction of a model’s capabilities through API abuse. If you are building your own models, fine-tuning on outputs from frontier models, or offering AI-powered APIs, this type of distillation attack is now a real intellectual property and security risk you need to account for. API security is becoming a recurring theme across the AI toolchain; for another angle on this, see the recent analysis of MCP protocol security risks and attack surfaces.
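To make the threat model concrete, here is a minimal sketch of the kind of behavioral anomaly detection a provider might run. Anthropic has not published its actual methodology; the signals (request volume combined with near-total prompt uniqueness, which is typical of dataset generation rather than interactive use) and the thresholds below are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Illustrative thresholds -- not from any provider's real system.
VOLUME_THRESHOLD = 10_000    # requests per day
DIVERSITY_THRESHOLD = 0.8    # ratio of unique prompts to total requests

@dataclass
class AccountStats:
    account_id: str
    requests: int = 0
    unique_prompts: set = field(default_factory=set)

    def record(self, prompt: str) -> None:
        self.requests += 1
        self.unique_prompts.add(prompt)

    def diversity(self) -> float:
        # Close to 1.0 means almost every request is a brand-new prompt,
        # a pattern typical of bulk dataset generation, not normal use.
        return len(self.unique_prompts) / self.requests if self.requests else 0.0

def looks_like_distillation(stats: AccountStats) -> bool:
    """Flag accounts combining very high volume with very high diversity."""
    return (stats.requests > VOLUME_THRESHOLD
            and stats.diversity() > DIVERSITY_THRESHOLD)
```

A real system would weigh many more signals (account creation patterns, payment fraud, prompt templates, timing), but the volume-plus-diversity combination illustrates why legitimate high-volume workloads with repetitive prompts look very different from scraping.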

On the practical side, expect tighter enforcement from AI providers. Rate limiting, behavioral anomaly detection, and terms-of-service policing are all getting more aggressive. Consequently, if your legitimate workloads involve high-volume API calls or automated pipelines that interact with third-party models, make sure your usage patterns do not look like distillation. Clear documentation, reasonable rate patterns, and proactive communication with your providers will matter more going forward.
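On the client side, one simple way to keep automated pipelines at a steady, predictable rate is a token-bucket limiter in front of every API call. This is a generic sketch, not part of any provider's SDK; the rate and burst values are placeholders you would tune to your documented usage.

```python
import time

class TokenBucket:
    """Paces outgoing API calls to a steady rate with a bounded burst,
    so batch jobs don't produce spiky, scraper-like traffic."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # sustained requests per second
        self.capacity = burst           # max burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self) -> None:
        # Refill tokens for the time elapsed since the last call,
        # then sleep if the bucket is empty.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1.0
            self.last = time.monotonic()
        self.tokens -= 1

# Illustrative values: 5 requests/sec sustained, bursts of up to 10.
limiter = TokenBucket(rate_per_sec=5, burst=10)
```

In a pipeline you would call `limiter.acquire()` immediately before each request, alongside the provider's own rate-limit headers and retry guidance.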


Matthew Aberham

Matthew Aberham is a solutions architect and full-stack engineer focused on building scalable web platforms and intuitive front-end experiences. He works at the intersection of performance engineering, interface design, and applied AI systems.
