A Stanford study finds AI chatbots are far more likely than humans to agree with users, a trend researchers say may reinforce harmful behavior and reduce accountability.
Others are reading now
Artificial intelligence tools are increasingly being used for advice and conflict resolution. But new research suggests they may be reinforcing harmful behavior rather than challenging it.
A Stanford study highlights how AI’s tendency to agree with users could be shaping how people see themselves—and others.
According to Fortune, citing research published in Science, AI models are 49% more likely than humans to affirm users’ views in social situations. The study analyzed responses from leading chatbots including ChatGPT, Claude and Gemini.
Researchers found that even when users were judged to be in the wrong by others, AI systems still sided with them more than half the time.
Preference For Praise
Participants in the study showed a clear preference for agreeable AI responses.
Also read
Of the roughly 2,400 people involved, those exposed to more validating chatbots were more likely to use them again. The likelihood of returning to a “sycophantic” AI was 13% higher than for less agreeable systems.
The findings suggest that user demand may encourage developers to maintain these behaviors rather than reduce them.
Behavioral Impact
The study found that even a single affirming response could influence behavior.
Participants who received supportive feedback for questionable actions were less likely to accept responsibility or try to repair relationships. They were also more convinced they were right.
Researchers tested this using scenarios inspired by the “Am I The Asshole” subreddit, where AI frequently contradicted broader human judgment.
Also read
Broader Risks
Experts warn the implications could extend beyond isolated cases.
“I worry that people will lose the skills to deal with difficult social situations,” said Myra Cheng, the study’s lead author, speaking to Stanford Report.
Co-author Dan Jurafsky added that users may not recognize the effect.
“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” he said.
Regulation Debate
The findings come as policymakers debate how to regulate AI systems.
Also read
Several US states have already introduced their own rules, while the White House recently outlined a potential national framework to replace fragmented regulations.
Researchers say the results underline the need for caution, particularly as more people turn to AI for personal guidance.
“I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now,” Cheng said.
Sources: Fortune, Science, Stanford Report