General Discussion
AI chatbots are suck-ups, and that may be affecting your relationships (Scientific American, 3/26)
https://www.scientificamerican.com/article/ai-chatbots-are-sucking-up-to-you-with-consequences-for-your-relationships/
AI chatbots are suck-ups, and that may be affecting your relationships
A new study of AI sycophancy shows how asking agreeable chatbots for advice can change your behavior
By Allison Parshall, edited by Tanya Lewis
Large language model (LLM) chatbots have a tendency toward flattery. If you ask a model for advice, it is 49 percent more likely than a human, on average, to affirm your existing point of view rather than challenge it, a new study shows. The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they're right.
People like what such chatbots have to say. Participants in the new study, which was published today in Science, preferred the sycophantic AI models to other models that gave it to them straight, even when the flatterers gave participants bad advice.
"The more you work with the LLM, the more you see these subtle sycophantic comments come up. And it makes us feel good," says Anat Perry, a social psychologist at the Hebrew University of Jerusalem, who was not involved in the new study but authored an accompanying commentary article. "What's scary," she says, "is that we're not really aware of these dangers."
-snip-
The new study examined only brief interactions with chatbots. Dana Calacci, who studies the social impact of AI at Pennsylvania State University and wasn't involved in the new research, has found that sycophancy tends to get worse the longer users interact with the model. "I think about this [as] compounded over time," she says.
-snip-
Much more at the link.
I found the research paper this article is about yesterday and posted about it then. But the paper wasn't an easy read, and this article adds important comments on the research from experts who weren't involved in it, so the Scientific American piece deserved a separate OP. The thread on the research paper is at https://www.democraticunderground.com/100221127158
There are lots of articles on the research paper now. The links below are just a small sampling.
Chats with sycophantic AI make you less kind to others
https://www.nature.com/articles/d41586-026-00979-x
AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6
Study: Sycophantic AI can undermine human judgment
https://arstechnica.com/science/2026/03/study-sycophantic-ai-can-undermine-human-judgment/
AI Chatbots Tend Toward Flattery. Why That's Bad for Students
https://www.edweek.org/technology/ai-chatbots-tend-toward-flattery-why-thats-bad-for-students/2026/03
Is Your Chatbot a Yes-Man? New Study Put Popular Models to the Test
https://www.inc.com/moses-jeanfrancois/is-your-chatbot-a-yes-man-new-study-put-popular-models-to-the-test/91322847
Chatbots Are Telling Their Users That Being an Asshole Is Just Fine
https://www.jezebel.com/chatbots-ai-psychosis-sycophancy-preferred-responses-study-flattery-ethics-am-i-the-asshole
dweller
(28,383 posts)
Technology Weakens Our Minds. We Can Fix This.
https://archive.ph/REle9
✌🏻
highplainsdem
(62,015 posts)
AI dumbing users down (and smartphones doing the same thing), while this new study focuses on AI sycophancy and how that undermines human judgment and makes users addicted to AI.
Are you going to post that NYT op-ed? I hope you will - preferably in GD.
dweller
(28,383 posts)
Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.
https://archive.ph/gOSql
Also NYT today
✌🏻
highplainsdem
(62,015 posts)
mwmisses4289
(4,115 posts)
highplainsdem
(62,015 posts)
was a very bad idea, but I definitely wouldn't bet that they've all been discovered. We're just a few years into this harmful tech being at all widely used.
Passages
(4,140 posts)
order to build social/emotional/intellectual competencies. With standardized tests, our society began to shortchange children in many ways. Now, with AI, it is catastrophic for their development.
Frontline had a great doc years ago on the minds of teens, how they're wired, etc.
highplainsdem
(62,015 posts)
them out of learning, too, all too often.
hunter
(40,676 posts)
Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. "I had some time, so I thought: let's have a look at this new technology everyone is talking about," he says. "Very quickly, I became fascinated."
Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling a little isolated. He smoked a bit of cannabis some evenings to chill, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.
--more--
https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion
Similar things are going to happen to corporations and other institutions that embrace AI.
Sometimes I wonder if Elon Musk has fried his brain so badly he's allowing grok to do his thinking for him.
highplainsdem
(62,015 posts)
that article and all the other articles about AI psychosis.
And yes, we're likely to see more of this, affecting corporations and institutions that embrace AI.
I wouldn't be at all surprised if Musk suffers to some degree from AI psychosis. Peter Thiel, too.
Jack Valentino
(4,967 posts)
for 'ego boost', to which it replied 'haha, you caught me!' LOL