
Anthropic alleges that three firms have conducted sophisticated distillation attacks, highlighting potential risks for the cryptocurrency sector.

Anthropic Accuses Three Chinese AI Firms of Large‑Scale Distillation Attacks

February 25, 2026

Anthropic, the U.S.-based developer of the Claude language model, announced on Sunday that it had detected a coordinated “distillation” campaign carried out by three Chinese artificial-intelligence companies: DeepSeek, Moonshot and MiniMax. According to the company’s blog post, the three firms allegedly generated more than 16 million interactions with Claude through roughly 24,000 fabricated user accounts, using the harvested output to train their own, less capable models.

What is a distillation attack?

Model distillation is a legitimate technique in which a smaller, more efficient model is trained on the responses of a larger, more powerful one. While it is commonly employed by AI labs to produce lightweight versions for edge devices or cost‑conscious customers, the process can be subverted when competitors scrape a rival’s service at scale to steal intellectual property. Anthropic warned that such illicit distillation enables actors to acquire advanced capabilities far more quickly and cheaply than developing a model from scratch.
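The legitimate form of the technique can be sketched in a few lines: query a “teacher” model for soft outputs, then fit a smaller “student” to reproduce them. The toy models and numbers below are purely illustrative and stand in for the idea only; nothing here reflects Claude or any real training pipeline.

```python
import math
import random

# Toy "teacher": a fixed logistic model with known weights. This stands in
# for a large, capable model; the weights and functional form are invented
# for illustration.
def teacher_probs(x):
    z = 3.0 * x - 1.0                       # teacher's decision function
    return 1.0 / (1.0 + math.exp(-z))       # soft label in (0, 1)

# Step 1: harvest soft labels by querying the teacher (the "scraping" step).
random.seed(0)
inputs = [random.uniform(-2, 2) for _ in range(200)]
soft_labels = [teacher_probs(x) for x in inputs]

# Step 2: train a "student" on the teacher's soft labels by gradient
# descent on cross-entropy -- the core of classic distillation.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, t in zip(inputs, soft_labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - t) * x                   # cross-entropy gradient wrt w
        gb += (p - t)                       # cross-entropy gradient wrt b
    w -= lr * gw / len(inputs)
    b -= lr * gb / len(inputs)

# The student now closely approximates the teacher without ever seeing
# the teacher's weights -- only its outputs.
err = max(abs(teacher_probs(x) - 1.0 / (1.0 + math.exp(-(w * x + b))))
          for x in inputs)
print(w, b, err)
```

The same loop, pointed at a competitor’s API instead of a local teacher and scaled to millions of queries, is what turns an efficiency technique into an extraction attack.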

Alleged tactics and targets

Anthropic’s investigation tied the activity together through IP-address patterns, request metadata and infrastructure clues that matched indicators observed by other industry participants. The three firms are said to have focused on Claude’s most distinctive strengths:

  • Agentic reasoning and tool use – enabling the model to plan and execute multi‑step tasks.
  • Programming and code generation – facilitating software development assistance.
  • Data analysis and rubric‑based evaluation – supporting tasks such as automated grading.

The scraped data reportedly covered a broad spectrum of use cases, from coding assistance to computer-vision prompts, and even agentic reasoning that could be repurposed for autonomous AI agents.

Geopolitical and industry implications

Beyond the immediate intellectual‑property concerns, Anthropic highlighted a broader security dimension. If foreign entities repurpose distilled capabilities of American models, those tools could be integrated into military, intelligence or surveillance platforms, potentially giving authoritarian regimes a shortcut to deploy frontier AI for offensive cyber operations, disinformation, or mass monitoring.

The three accused firms—DeepSeek, Moonshot and MiniMax—are all headquartered in China and are valued in the multi‑billion‑dollar range. DeepSeek, in particular, has attained considerable visibility in the global market.

Anthropic’s response

To mitigate further abuse, Anthropic said it will:

  1. Upgrade detection mechanisms – refining traffic‑analysis filters to flag suspicious usage patterns.
  2. Share threat intelligence – collaborating with other AI providers and cloud services to disseminate indicators of compromise.
  3. Tighten access controls – enforcing stricter authentication and usage‑policy enforcement for Claude’s API.
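As an illustration of what “refining traffic-analysis filters” can mean in practice, here is a deliberately simplified outlier heuristic over per-account request volumes. The log data, threshold and account names are all hypothetical; Anthropic has not disclosed its actual detection methods.

```python
from collections import Counter
import statistics

# Hypothetical request log: (account_id, request_category) pairs. Fifty
# ordinary accounts make one request each; two fabricated accounts make
# machine-scale volumes. All names and numbers are invented.
log = (
    [("acct_%d" % i, "coding") for i in range(50)]
    + [("sybil_1", "agentic")] * 400
    + [("sybil_2", "agentic")] * 390
)

counts = Counter(acct for acct, _ in log)
volumes = list(counts.values())
mean = statistics.mean(volumes)
stdev = statistics.pstdev(volumes)

# Flag accounts whose request volume is an extreme outlier (> 3 sigma
# above the mean). Real pipelines would combine many such signals.
flagged = sorted(a for a, n in counts.items() if n > mean + 3 * stdev)
print(flagged)
```

A production system would layer many more signals (shared infrastructure, prompt similarity across accounts, timing patterns), but the principle is the same: fabricated accounts at distillation scale leave statistical fingerprints that individual organic users do not.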

The company also called on domestic AI developers, cloud providers and policymakers to coordinate a collective defense, noting that a single organization cannot curb large‑scale distillation on its own.

Analysis

The incident underscores a growing friction point in the rapidly expanding AI ecosystem. As large language models become core infrastructure for a variety of applications—including many in the crypto and DeFi space—protecting the underlying models from mass extraction is becoming as critical as safeguarding the data they process.

  • Competitive pressure – Smaller labs with limited resources may be tempted to shortcut research by “stealing” outputs from established models. This could accelerate the diffusion of advanced AI capabilities but also erode the economic moat of leading firms.
  • Regulatory attention – The geopolitical angle may draw greater scrutiny from U.S. regulators. Existing export‑control frameworks could be extended to cover AI model outputs, especially where the downstream use involves national‑security‑relevant functions.
  • Impact on crypto projects – Projects that rely on AI for market analytics, automated trading, or smart‑contract auditing might need to reassess their tooling pipelines if the AI providers tighten access or impose higher usage fees in response to security concerns.

Key Takeaways

  • Distillation attacks are now being weaponized at scale – over 16 million Claude interactions were harvested via roughly 24,000 fake accounts.
  • Three Chinese AI firms are named – DeepSeek, Moonshot and MiniMax, all valued at several billion dollars.
  • Anthropic will harden its defenses – enhanced detection, threat-intelligence sharing and stricter API controls.
  • Geopolitical risk is a central concern – stolen AI capabilities could be integrated into hostile state operations.
  • Industry collaboration is urged – a coordinated response is needed across AI labs, cloud providers and policymakers.

The episode serves as a reminder that the AI arms race extends beyond model performance to include the security of the data and intellectual property that fuel those models. Stakeholders across the cryptocurrency and broader tech sectors will be watching closely to see how Anthropic and its peers adapt to this emerging threat landscape.



Source: https://cointelegraph.com/news/anthropic-faces-industrial-scale-distillation-attacks
