ANTHROPIC

AI innovation is starting to look a bit recycled

Anthropic says three AI labs (DeepSeek, Moonshot, and MiniMax) ran large campaigns to extract capabilities from its Claude model.

They used around 24,000 fake accounts and generated over 16 million interactions to collect outputs and train their own systems.

The method is called distillation: training a smaller model on a larger model's outputs. It's a common practice in AI, but here it was allegedly done at scale, without permission, and through restricted access routes.
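In this black-box setting, distillation boils down to two steps: collect the teacher's text responses to many prompts, then fine-tune a student model to imitate them. Here's a minimal sketch of that idea, not any lab's actual pipeline: GPT-2 stands in as the student, and `query_teacher` is a hypothetical placeholder for a call to the teacher model's API.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def query_teacher(prompt: str) -> str:
    # Placeholder: in a real pipeline this would be an API call to the
    # teacher model; a canned string keeps the sketch self-contained.
    return " Binary search halves the search range on each step."

prompts = [
    "Explain binary search.",
    "Explain binary search in one sentence.",
]

# Step 1: harvest teacher outputs. This is the step the report says
# was done at scale through networks of fake accounts.
pairs = [(p, query_teacher(p)) for p in prompts]

# Step 2: fine-tune the student with ordinary next-token prediction
# on prompt + teacher response, so it learns to imitate the teacher.
tok = AutoTokenizer.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

student.train()
for prompt, response in pairs:
    ids = tok(prompt + response, return_tensors="pt").input_ids
    loss = student(input_ids=ids, labels=ids).loss  # cross-entropy vs. teacher text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The key point: none of this needs access to the teacher's weights. Enough sampled outputs are a substitute, which is why the scale of the querying matters more than any single request.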

The issue isn’t just competition; it’s control and safety.

Models trained this way may not include the same safeguards, which increases the risk of misuse.

It also raises concerns around export controls, as it allows labs to gain advanced capabilities without building them from scratch.

What Anthropic found

  • DeepSeek focused on extracting reasoning and structured outputs for training

  • Moonshot ran millions of interactions targeting coding, analysis, and agent-like behaviour

  • MiniMax focused on coding and tool use, and quickly adapted when new Claude updates were released

Since Claude isn’t officially available in some regions, these labs used proxy services.

These services run large networks of fake accounts to send requests at scale. If one account gets blocked, another replaces it, making detection harder.

What stood out was the pattern: high volume, repetitive prompts, and a clear focus on extracting training data rather than normal usage.
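To make that pattern concrete, here's a crude illustrative heuristic, emphatically not Anthropic's actual detection logic: flag an account when its request volume is high and its prompts are near-duplicates of one another. The thresholds are made-up values for the sketch.

```python
from difflib import SequenceMatcher

def looks_like_scraping(prompts, volume_threshold=1000, similarity_threshold=0.8):
    """Crude heuristic: many requests plus highly repetitive prompts.
    Both thresholds are illustrative, not tuned values."""
    if len(prompts) < volume_threshold:
        return False
    # Average pairwise text similarity over a small sample of prompts.
    sample = prompts[:50]
    scores = [
        SequenceMatcher(None, a, b).ratio()
        for i, a in enumerate(sample)
        for b in sample[i + 1:]
    ]
    return sum(scores) / len(scores) > similarity_threshold

# Example: 2,000 near-identical templated prompts trip the heuristic,
# while a normal account's varied, lower-volume usage would not.
account_prompts = [f"Solve step by step: problem {i}" for i in range(2000)]
print(looks_like_scraping(account_prompts))  # True
```

Real detection is harder precisely because of the account rotation described above: the volume is spread across thousands of identities, so the signal only emerges when activity is correlated across accounts.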

Training wheels off

Anthropic is tightening detection, limiting access points, and sharing data with other AI companies and policymakers.

This isn’t a one-company issue. It’s becoming a broader industry challenge that will likely need coordinated action.

Build your own model? No thanks, I’ll just “borrow” yours. - MG
