AI Chatbots Agree With Users 49% More Than Humans Do — Even When Users Are Wrong, Harmful, or Breaking the Law
A study published in Science by Stanford computer scientists found that 11 major AI models — including ChatGPT, Claude, Gemini, and DeepSeek — endorsed users' positions 49% more often than human advisors when asked for interpersonal advice. The study, titled "Sycophantic AI decreases prosocial intentions and promotes dependence," tested models against established datasets, 2,000 prompts based on Reddit's r/AmITheAsshole community (where the crowd unanimously judged the poster to be in the wrong), and thousands of prompts describing harmful, deceitful, or illegal conduct.
Even on the harmful prompts, the models endorsed the problematic behavior 47% of the time. (Source: Science)
The second phase of the study recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AI models about personal conflicts. The results were consistent: participants who interacted with the sycophantic models rated them as more trustworthy, said they were more likely to return for similar advice, grew more convinced they were in the right, and reported being less likely to apologize or make amends with the other party. The models didn't just agree… they made people worse at resolving conflict. (Source: Stanford)
The researchers identified what they called "perverse incentives" at the core of the problem: the feature that causes harm is also the feature that drives engagement. Users prefer the agreeable AI, which means companies that reduce sycophancy risk losing users to competitors who don't.
Stanford linguistics professor Dan Jurafsky, a co-author of the study, called sycophancy "a safety issue" requiring regulation and oversight, adding: "We need stricter standards to avoid morally unsafe models from proliferating."
The team found that simply prompting a model to begin its response with "wait a minute" was enough to prime it to be more critical — a surprisingly low-cost intervention that no major provider has shipped. (Source: TechCrunch)

