Anthropic Files Federal Lawsuit to Overturn “Supply‑Chain Risk” Designation by Trump Administration

Silicon Valley AI firm challenges unprecedented Pentagon label that bars it from U.S. defense contracts and federal use of its Claude model.


Washington, D.C., March 31 — Anthropic, the San Francisco‑based developer of the Claude family of large language models, has filed a pair of lawsuits against several U.S. government agencies, seeking to overturn a designation that labels the company a “supply‑chain risk.” The designation, announced by the Department of Defense (DoD) in early March, effectively prohibits any firm that does business with the U.S. military from also supplying Anthropic’s technology to federal entities.

The filing, lodged in a California federal district court, also asks a federal judge to nullify a directive issued by President Donald Trump that instructed all federal employees to cease using Claude. A parallel complaint has been submitted to the U.S. Court of Appeals for the District of Columbia Circuit, challenging the DoD’s underlying decision.

Background

In February, Defense Secretary Pete Hegseth formally added Anthropic to the DoD’s “supply‑chain risk” list—a status traditionally reserved for foreign companies or entities deemed insecure or hostile to U.S. national interests. This marks the first time a domestically headquartered AI firm has been treated in the same manner.

The label carries immediate procurement consequences. Federal agencies are barred from contracting with any vendor that also supplies a “risk‑designated” entity, a rule intended to prevent potential backdoors or vulnerabilities in critical defense systems. In practice, the decision means that the Department of Defense, the Department of the Treasury, the State Department, and every other federal body must stop using Claude, even though the model has been incorporated into classified workflows since early 2024.

Anthropic’s Position

Anthropic says the government’s actions constitute unlawful retaliation for the company’s refusal of a Pentagon request to lift its internal usage restrictions. Those restrictions, embedded in the firm’s contracts, prohibit the deployment of Claude in lethal autonomous weapons systems, mass surveillance of U.S. citizens, and other high‑risk applications.

“We have never validated Claude for the kinds of military uses the DoD is demanding,” the lawsuit states. “Our engineers cannot guarantee that the model would operate safely or reliably in combat or surveillance contexts.” The company argues that the DoD’s demand to drop these safeguards effectively forces Anthropic to compromise on its ethical commitments.

Legal Arguments

Anthropic’s complaints cite several constitutional and statutory grounds:

  • First Amendment protections: The firm contends that the government is penalizing it for speech‑related activities, namely the articulation of policy limits on how its technology may be used.
  • Procurement law violations: By imposing a blanket ban that impacts existing contracts, the DoD is alleged to have exceeded its authority under the Federal Acquisition Regulation.
  • Due‑process concerns: Anthropic says it was not afforded an adequate opportunity to contest the supply‑chain risk label before it was enacted.

The lawsuits name Defense Secretary Hegseth, Secretary of the Treasury Scott Bessent, Secretary of State Marco Rubio, and more than a dozen senior officials across 17 agencies as defendants.

Industry and Academic Response

On Monday, a coalition of over thirty AI researchers and engineers—including senior figures from OpenAI and Google’s AI division—filed an amicus brief in support of Anthropic. The brief warns that allowing the punitive measure to stand could erode U.S. competitiveness in artificial‑intelligence research and development, potentially ceding leadership to foreign adversaries.

“The precedent of sanctioning a leading U.S. AI company threatens both national security and the broader innovation ecosystem,” the brief reads.

Potential Implications for the Crypto‑AI Intersection

While the dispute centers on a defense‑related procurement issue, it resonates with broader questions about how emerging technologies intersect with regulatory frameworks:

  • Risk‑based labeling: The DoD’s approach mirrors recent moves in the cryptocurrency sector, where regulators have begun flagging certain tokens or platforms as systemic risks. A similar labeling mechanism applied to AI could shape future compliance obligations for developers and investors.
  • Supply‑chain transparency: Many blockchain projects rely on AI for analytics, and many AI pipelines draw on on‑chain data. A supply‑chain designation could force firms to audit and disclose their AI dependencies more rigorously, with knock‑on effects for token economics and governance structures.
  • Cross‑industry collaboration: The case underscores the need for clear, industry‑wide standards governing the use of advanced models in mission‑critical contexts—a conversation that also applies to smart‑contract safety and decentralized finance (DeFi) protocols.

Key Takeaways

  • Unprecedented label: Anthropic is the first U.S.‑based AI firm to receive a “supply‑chain risk” designation, a status typically reserved for foreign adversaries.
  • Legal challenge: The company is pursuing relief in two federal courts, arguing violations of constitutional speech rights and procurement law.
  • Ethical safeguards at stake: Anthropic’s refusal to remove usage restrictions—particularly concerning lethal autonomous weapons—triggered the government’s punitive response.
  • Industry backing: A sizable group of AI experts has filed an amicus brief, warning of broader repercussions for U.S. technological competitiveness.
  • Broader regulatory echo: The dispute may set a precedent for how other emerging tech sectors, including cryptocurrency, are treated under risk‑based regulatory regimes.

The litigation is expected to proceed over the coming months, with both sides likely to seek expedited hearings given the national‑security dimensions. Observers will be watching closely to see whether the court will curb the DoD’s authority to impose supply‑chain risk labels on domestic technology firms, a decision that could reshape the intersection of AI, defense procurement, and broader regulatory policy.



Source: https://cointelegraph.com/news/ai-firm-anthropic-sues-trump-admin-over-supply-chain-risk-label?utm_source=rss_feed&utm_medium=feed&utm_campaign=rss_partner_inbound
