On Monday morning, chalk slogans covered the sidewalk outside OpenAI’s San Francisco offices.
“Where are your redlines?” one read. “You must speak up,” said another.
“What are the safeguards?” demanded a third.
The messages came from activists. But inside the building, many employees were asking similar questions after the company struck a Pentagon deal.
A classified contract sparks internal backlash

On Friday, OpenAI agreed to let its AI models be used in classified Pentagon systems.
The agreement immediately stirred debate among staff, both in public forums and private chats.
Some employees questioned how the deal aligned with the company’s stated limits on military use.
The tension quickly spilled into the open.
Anthropic’s refusal sets the stage

Rival AI firm Anthropic had already rejected an updated Pentagon contract.
The company felt the language did not respect its redlines on mass surveillance and autonomous weapons.
As a result, the Pentagon labeled Anthropic a “supply chain risk” and effectively blacklisted it.
That move set up a dramatic contrast with OpenAI’s decision.
Admiration for a competitor

According to one current OpenAI employee who spoke anonymously to CNN, many staffers “really respect” Anthropic for standing up to the Pentagon.
Some felt frustrated with how OpenAI leadership handled its own negotiations.
The frustration was not just about policy, but about process and communication.
For many, the issue cut deeper than a single contract.
A surprise twist from Sam Altman

As the Pentagon’s Friday deadline approached, CEO Sam Altman publicly said he agreed with Anthropic CEO Dario Amodei and shared the same redlines.
But hours later, OpenAI announced its own Pentagon contract.
To critics, it looked as though the company had stepped in to replace its rival.
The timing fueled accusations of opportunism.
Questions over redlines and safeguards

When OpenAI published parts of the contract on Saturday, outside observers quickly raised concerns.
Some argued the language left room for safeguards on mass surveillance and autonomous weapons to be sidestepped.
Skeptics questioned how those redlines would actually be enforced in practice.
The debate intensified across social media.
Altman responds and revises

On Saturday evening, Altman answered questions publicly on X.
By Monday, he announced that OpenAI had updated the contract to clarify guardrails against the use of its services in surveillance programs.
The additional language he shared did not mention autonomous weapons.
The revisions did not quiet every critic.
Staff voice frustration in public

Research scientist Aidan McLaughlin wrote on X before the contract update: “i personally don’t think this deal was worth it.”
He later described the internal discussion as “overwhelming.”
Still, he said he felt “incredibly proud to work somewhere where people can speak their mind.”
His posts reflected the mix of dissent and openness inside the company.
Calls for independent review

Jasmine Wang, who works on AI safety at OpenAI, said she wanted “independent legal counsel” to review the revised contract language.
She later shared legal analyses that supported OpenAI’s claims, along with others that criticized the wording as “weasel language.”
The debate showed how divided opinions remain, even among experts.
For some, clearer guardrails are still needed.
A rush job, say some employees

Some employees felt a contract of this scale was pushed through too quickly.
“It’s partly how it was perceived, how it was communicated, and what the narrative has become,” the anonymous employee said.
Process, not just policy, became a sticking point.
Altman admits a communications failure

Altman conceded that leadership mishandled the rollout.
“The issues are super complex, and demand clear communication,” he wrote on X.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
He later told employees that rushing the deal out was a “mistake.”
Limits on oversight and control

During an all-hands meeting, Altman said OpenAI cannot weigh in on individual military use cases.
The company, he argued, cannot decide which operations are good or bad.
An OpenAI spokesperson referred CNN to Altman’s public statements.
The comments underscored the limits of the company’s control.
A broader fight over AI and government power

Some employees bristled at portrayals of Anthropic as heroic, noting its prior Pentagon and defense contractor work.
Altman told staff he believes governments should work with labs that enforce safety standards.
He also said he is urging officials to remove Anthropic’s supply chain risk label.
“I believe we will hopefully have the best models that will encourage the government to be willing to work with us, even if our safety stack annoys them, or put some limits or something else,” he said.