
Pentagon clashes with Anthropic over military AI limits


US military and AI firm at odds over weapons and surveillance use


A growing dispute between the US military and a leading artificial intelligence company is exposing tensions over how far AI should be allowed to go in warfare and surveillance.

The disagreement comes as Washington accelerates its adoption of AI and Silicon Valley firms try to impose their own ethical boundaries.

A contract in limbo

The Pentagon is locked in a standoff with Anthropic over restrictions on how the company’s AI technology can be used, according to Reuters, citing multiple people familiar with the matter. The talks center on safeguards that would prevent Anthropic’s tools from being used for autonomous weapons targeting or domestic surveillance.

After months of discussions tied to a contract worth up to $200 million, negotiations have stalled, six sources told Reuters. The dispute has not previously been reported.

Anthropic said its AI is “extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work.” The Defense Department did not respond to requests for comment.


Guardrails vs authority

According to Reuters, Anthropic representatives raised concerns that its models could be used to spy on Americans or assist weapons targeting without sufficient human oversight.

Pentagon officials have pushed back, arguing they should be able to deploy commercial AI systems regardless of a company’s internal usage policies, provided the use complies with US law. That position reflects a January 9 memo outlining the department’s AI strategy.

Despite the standoff, the Pentagon may still need Anthropic’s cooperation. The company’s models are trained with built-in safety constraints, and Anthropic engineers would likely need to modify the systems for military use, sources said.

A political test

The clash is emerging as an early test of whether Silicon Valley can influence how the US military deploys increasingly powerful AI tools. Relations between tech firms and Washington have warmed after years of friction, but this case shows the limits of that alignment.

Anthropic’s caution has previously put it at odds with the Trump administration, Semafor reported. The company is also preparing for a future public offering, making the outcome of the dispute particularly sensitive.


Broader concerns

In a recent blog post, Anthropic CEO Dario Amodei wrote that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.”

Amodei has also publicly condemned fatal shootings linked to immigration enforcement protests in Minneapolis, describing them as a “horror” in a post on X. Those events have intensified unease in parts of Silicon Valley about government use of AI for violence.

The Pentagon awarded AI contracts last year to a small group of companies, including Anthropic, Google, OpenAI, and Elon Musk’s xAI. How this dispute is resolved could shape the rules of engagement for military AI across the industry.

Sources: Reuters

