Overview Of The Alleged Attacks
Anthropic has accused three Chinese AI companies (DeepSeek, Moonshot AI and MiniMax) of creating more than 24,000 fake accounts to interact with its Claude model. According to the company, these accounts generated over 16 million exchanges used for distillation, a technique in which smaller models learn from the outputs of larger systems. Anthropic says the activity focused on Claude’s strengths in agentic reasoning, coding and tool use.
Methodology And Scale
Distillation is a common and, in itself, legitimate method for developing smaller and more efficient AI models. Anthropic argues that in this case it was used without authorization to replicate Claude’s core capabilities. DeepSeek previously attracted attention with its open-source R1 reasoning model, which delivered strong performance at lower cost, and reports suggest the upcoming DeepSeek V4 could further intensify competition in coding-focused AI models.
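Mechanically, distillation trains a student model to imitate a teacher's output distribution rather than raw labels. The sketch below illustrates the classic distillation objective (temperature-softened softmax plus KL divergence); it is a minimal, self-contained illustration with made-up logits, not a depiction of any company's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature flattens the distribution, exposing the
    # teacher's "dark knowledge" about relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions -- the core term of the classic distillation loss.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs ~zero loss;
# a mismatched student incurs a positive penalty.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # > 0
```

In full training setups this KL term is typically combined with a standard cross-entropy loss on ground-truth labels; at the scale alleged here, the "teacher outputs" would simply be responses harvested from API exchanges.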
Moonshot AI reportedly generated more than 3.4 million exchanges aimed at improving reasoning, coding, data analysis and computer vision. MiniMax accounted for approximately 13 million exchanges focused on agentic coding and tool orchestration. Anthropic also stated that at one point, nearly half of MiniMax’s traffic targeted the latest Claude version.
Policy And National Security Implications
The allegations come as debates continue in the United States over export controls on advanced AI chips and broader technology competition with China. The case highlights increasing concerns around intellectual property protection as AI development becomes more resource-intensive.
Anthropic argues that models created through unauthorized distillation may lack built-in safeguards, potentially increasing risks related to misuse, including cyber operations and disinformation.
Industry Response And Future Outlook
The company says it is strengthening internal monitoring to detect and limit large-scale distillation attempts. Anthropic is also calling for closer cooperation between AI companies, cloud providers and policymakers to address emerging risks.
As global competition in AI accelerates, disputes over training practices and model replication are likely to become a more significant part of industry regulation and strategic decision-making.