
U.S. Military Deploys Anthropic’s Claude AI in Iran Strike Hours After White‑House Ban

Cointelegraph, March 1, 2026

In a development that underscores the rapid entanglement of advanced artificial‑intelligence tools with U.S. defense operations, senior commanders at U.S. Central Command (CENTCOM) tapped Anthropic’s Claude language model to support a large‑scale air campaign against targets in Iran. The decision came less than twelve hours after President Donald Trump’s administration issued a directive ordering all federal agencies to cease using Anthropic’s technologies and to treat the company as a “potential security risk.”

Background

  • Trump administration order – On Friday, the White House instructed every federal agency to stop working with Anthropic. The move followed a breakdown in contract negotiations after the AI firm refused to grant the Pentagon unrestricted access to its models for any lawful scenario the Department of Defense (DoD) might request. The administration labeled Anthropic a supply‑chain risk and barred its products from future government contracts.
  • Anthropic’s Pentagon relationship – Prior to the ban, Anthropic had secured a multiyear agreement worth up to $200 million, in partnership with Palantir and Amazon Web Services, that authorized Claude for classified intelligence and operational workflows. The model had already been used in several classified missions, most notably a January operation in Venezuela that contributed to the capture of President Nicolás Maduro.

The Iran Strike

According to sources cited by The Wall Street Journal, CENTCOM operators employed Claude during the Iran operation to:

  • Parse large volumes of intelligence data and highlight likely high‑value targets.
  • Run rapid battlefield simulations that helped planners evaluate the impact of different strike options.
  • Produce concise briefing materials for senior commanders in real time.

The use of Claude illustrates how deeply generative‑AI tools have been woven into modern military decision‑making—even as the administration seeks to prune those ties.

Pentagon’s Response and Future Direction

Defense Secretary Pete Hegseth had previously demanded that Anthropic remove any restrictions on military use of its models. CEO Dario Amodei rebuffed the request, citing ethical limits on applications such as mass domestic surveillance and fully autonomous weapons. In the wake of the ban, the Pentagon began courting alternative providers and recently struck a deal with OpenAI to place its models on classified networks, signaling an effort to replace Claude in upcoming operations.

Anthropic’s Stance

In a Saturday interview, Amodei reaffirmed the company’s opposition to using its technology for large‑scale surveillance or weapon systems that operate without human oversight. He warned that delegating critical military decisions to AI could erode accountability and cross fundamental moral boundaries.

Analysis

The episode highlights several trends relevant to the cryptocurrency and wider tech ecosystems:

  1. AI supply‑chain security is now a geopolitical concern. Government edicts like the Trump ban can rapidly re‑configure the market for AI services, affecting startups that rely on federal contracts for revenue.
  2. Regulatory ambiguity around “unrestricted” AI use. The disagreement between the DoD and Anthropic over what constitutes permissible military usage mirrors ongoing debates in the crypto space about the limits of decentralized finance protocols when interfacing with regulated entities.
  3. Shift toward “trusted” AI providers. With OpenAI positioning itself as a classified‑network partner, other AI firms may seek to differentiate themselves by offering verifiable compliance frameworks, a move that could spur new standards for auditability—an area where blockchain technology already offers solutions.
  4. Ethical AI governance versus operational speed. The Pentagon’s reliance on Claude for fast‑paced operations illustrates the tension between the need for cutting‑edge decision support and the desire for ethical safeguards. This tug‑of‑war is echoed in the crypto community, where rapid innovation often outpaces regulatory guidance.

Key Takeaways

  • Claude was used in real time during the Iran strike despite a presidential ban issued only hours earlier.
  • Anthropic’s refusal to grant the military unrestricted access put its Pentagon contract, worth up to $200 million, in jeopardy.
  • The DoD is now shifting to OpenAI for classified AI capabilities, indicating a rapid reallocation of AI resources.
  • Anthropic’s leadership maintains firm ethical boundaries against surveillance and autonomous weapons, underscoring the growing importance of AI governance.
  • The incident signals that AI supply‑chain risks are being treated as national‑security issues, a development that could reshape funding and partnership strategies for AI‑focused startups, including those operating at the intersection of AI and blockchain.

As the U.S. military continues to integrate generative AI into its operational toolkit, the balance between technological advantage, ethical responsibility, and regulatory oversight will remain a focal point for policymakers, technologists, and investors alike.



Source: https://cointelegraph.com/news/us-military-used-anthropic-ai-iran-strike-trump-ban
