
highplainsdem

(62,015 posts)
Fri Mar 27, 2026, 01:58 PM 16 hrs ago

AI chatbots are suck-ups, and that may be affecting your relationships (Scientific American, 3/26)

https://www.scientificamerican.com/article/ai-chatbots-are-sucking-up-to-you-with-consequences-for-your-relationships/

March 26, 2026
AI chatbots are suck-ups, and that may be affecting your relationships
A new study of AI sycophancy shows how asking agreeable chatbots for advice can change your behavior

By Allison Parshall edited by Tanya Lewis

Large language model (LLM) chatbots have a tendency toward flattery. If you ask a model for advice, it is 49 percent more likely than a human, on average, to affirm your existing point of view rather than challenge it, a new study shows. The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they’re right.

People like what such chatbots have to say. Participants in the new study, which was published today in Science, preferred the sycophantic AI models to other models that gave it to them straight, even when the flatterers gave participants bad advice.

“The more you work with the LLM, the more you see these subtle sycophantic comments come up. And it makes us feel good,” says Anat Perry, a social psychologist at the Hebrew University of Jerusalem, who was not involved in the new study but authored an accompanying commentary article. What’s scary, she says, “is that we’re not really aware of these dangers.”

-snip-

The new study examined only brief interactions with chatbots. Dana Calacci, who studies the social impact of AI at Pennsylvania State University and wasn’t involved in the new research, has found that sycophancy tends to get worse the longer users interact with the model. “I think about this [as] compounded over time,” she says.

-snip-


Much more at the link.

I'd found the research paper this article is about yesterday and posted about it then, but because it wasn't a very easy read and because this article has important comments on the research from experts who weren't involved in it, this Scientific American article deserved a separate, new OP. The thread on the research paper is at https://www.democraticunderground.com/100221127158

There are lots of articles on the research paper now. The links below are just a small sampling.

Chats with sycophantic AI make you less kind to others
https://www.nature.com/articles/d41586-026-00979-x

AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6

Study: Sycophantic AI can undermine human judgment
https://arstechnica.com/science/2026/03/study-sycophantic-ai-can-undermine-human-judgment/

AI Chatbots Tend Toward Flattery. Why That’s Bad for Students
https://www.edweek.org/technology/ai-chatbots-tend-toward-flattery-why-thats-bad-for-students/2026/03

Is Your Chatbot a Yes-Man? New Study Put Popular Models to the Test
https://www.inc.com/moses-jeanfrancois/is-your-chatbot-a-yes-man-new-study-put-popular-models-to-the-test/91322847

Chatbots Are Telling Their Users That Being an Asshole Is Just Fine
https://www.jezebel.com/chatbots-ai-psychosis-sycophancy-preferred-responses-study-flattery-ethics-am-i-the-asshole


highplainsdem

(62,015 posts)
2. Thanks for the link! Good opinion piece, about a subject getting more and more attention. It's about
Fri Mar 27, 2026, 02:14 PM
15 hrs ago

AI dumbing users down (and smartphones doing the same thing), while this new study focuses on AI sycophancy and how that undermines human judgment and makes users addicted to AI.

Are you going to post that NYT op-ed? I hope you will - preferably in GD.

dweller

(28,383 posts)
3. And this is the article I meant to post
Fri Mar 27, 2026, 02:15 PM
15 hrs ago

Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.

https://archive.ph/gOSql


Also NYT today



✌🏻

highplainsdem

(62,015 posts)
6. I wish I could say that experts have probably already found all the reasons why using generative AI
Fri Mar 27, 2026, 04:29 PM
13 hrs ago

was a very bad idea, but I definitely wouldn't bet that they've all been discovered. We're just a few years into this harmful tech being at all widely used.

Passages

(4,140 posts)
7. As far back as 2004 psych research studies found children needed to be outside playing with peers in
Fri Mar 27, 2026, 04:49 PM
13 hrs ago

order to build social /emotional/intellectual competencies. With standardized tests, our society began to shortchange children in many ways. Now, with AI, it is catastrophic for their development.

Frontline had a great doc years ago on the minds of teens, how they're wired, etc.

highplainsdem

(62,015 posts)
9. It's really worrisome that a lot of teens are using chatbots as companions. And AI use is cheating
Fri Mar 27, 2026, 09:38 PM
8 hrs ago

them out of learning, too, all too often.

hunter

(40,676 posts)
8. Here's yet another article:
Fri Mar 27, 2026, 04:50 PM
13 hrs ago
Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.”

Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

--more--

https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion


Similar things are going to happen to corporations and other institutions that embrace AI.

Sometimes I wonder if Elon Musk has fried his brain so badly he's allowing grok to do his thinking for him.

highplainsdem

(62,015 posts)
10. Thanks for that link, hunter! Such a sad story - and the same is true of the other stories touched on in
Fri Mar 27, 2026, 09:50 PM
8 hrs ago

that article and all the other articles about AI psychosis.

And yes, we're likely to see more of this, affecting corporations and institutions that embrace AI.

I wouldn't be at all surprised if Musk suffers to some degree from AI psychosis. Peter Thiel, too.

Jack Valentino

(4,967 posts)
11. Well into my first long conversation with ChatGPT, I noted "you seem to have been programmed
Fri Mar 27, 2026, 10:21 PM
7 hrs ago

for 'ego boost' ", to which it replied 'haha, you caught me!' LOL
