Military use of AI sparks industry clash as workers from major tech companies back Anthropic
Employees from major tech companies including Google and OpenAI have filed a legal brief supporting Anthropic in its dispute with the Pentagon over military use of AI. The case could shape how artificial intelligence is deployed in surveillance and autonomous weapons.

A dispute over how artificial intelligence can be used by the military is drawing unusual support from across the tech industry, as employees from major AI companies step in to back rival developer Anthropic in its legal fight with the U.S. government.

More than 30 employees from companies including Google and OpenAI have filed a legal brief supporting Anthropic after the Pentagon labeled the company a “supply-chain risk.” The designation could effectively block Anthropic from working with the U.S. military and participating in certain government contracts.

The employees warn that the decision could have consequences far beyond one company, potentially weakening the United States’ broader position in the global race for artificial intelligence.

A conflict over military use of AI

The dispute began when Anthropic and the Pentagon failed to reach an agreement on how the company’s AI model, Claude, could be used in military settings.

Anthropic sought to include two clear limits in the contract. The company wanted to prevent its AI systems from being used for domestic mass surveillance and from powering fully autonomous weapons capable of selecting and attacking targets without human approval.

The Pentagon instead pushed for language allowing the military to use the technology for “all lawful use.”

Anthropic refused to accept those terms. Shortly after negotiations collapsed, the government canceled its contracts with the company and labeled it a national security supply-chain risk — a classification previously applied mainly to foreign companies considered security threats.

Anthropic has since launched lawsuits challenging the decision.

Rival tech workers step in

The legal filing supporting Anthropic was submitted by employees acting in a personal capacity. The document warns that penalizing a major American AI developer could damage the country’s scientific and industrial leadership in the field.

The show of support is unusual because the employees come from companies that compete directly with Anthropic in building advanced AI systems.

Still, many researchers say the dispute raises larger questions about how powerful AI technology should be used, especially in military and surveillance contexts.

A growing debate inside the tech industry

The legal brief follows a broader wave of internal activism among AI researchers. Earlier this year, nearly 900 employees across major technology firms signed an open letter urging their companies not to deploy AI systems for domestic mass surveillance or fully autonomous lethal weapons.

Those were the same limits Anthropic attempted to include in its negotiations with the Pentagon.

The controversy has already led to at least one high-profile resignation. Caitlin Kalinowski, who had led hardware and robotics at OpenAI, stepped down after the company secured its own Pentagon contract. She said that using AI for domestic surveillance without court oversight, and for autonomous weapons without human authorization, deserved far more public debate.

A widening rift between AI companies

The situation has also sparked tension between company leaders.

Anthropic CEO Dario Amodei criticized competitors for accepting the Pentagon’s terms, describing their approach to AI safety as “safety theater.” OpenAI CEO Sam Altman responded indirectly, warning that private companies refusing to cooperate with democratically elected governments could create new risks.

Despite those clashes at the executive level, employees across companies appear more aligned on the broader ethical questions.

Echoes of earlier tech worker protests

The debate mirrors earlier moments when tech workers challenged military partnerships.

In 2018, thousands of Google employees protested the company’s involvement in Project Maven, a Pentagon program using AI to analyze drone surveillance footage. The internal backlash contributed to Google’s decision not to renew the contract.

Now, as AI systems become more powerful and more valuable to governments, similar tensions are emerging again.

The outcome of Anthropic’s legal challenge could shape how AI companies, governments, and researchers define the boundaries of military AI for years to come.

Sources: Fortune; court filings supporting Anthropic; public statements from Anthropic and OpenAI