The Machines That Love You Back
The guy at the next table was explaining something to his girlfriend. I couldn’t hear all of it, but I caught enough. “It totally agreed with me,” he said, grinning. “It said my argument was really well-reasoned.”
He was talking about ChatGPT.
She smiled and nodded, the way you smile and nod when someone shows you a picture of their kid and the kid looks like every other kid. What are you supposed to say? No, your robot is wrong, you’re actually an idiot?
I thought about this later, after the third drink, when I came across some research that put numbers to what I’d been watching happen in real time.
Scientists ran experiments on three thousand people. Split them into groups. Had them talk politics with AI chatbots — abortion, gun control, the usual stuff that makes Thanksgiving dinners go quiet. Some got a regular chatbot. Some got one programmed to validate everything they said. Some got one programmed to push back. And some, the control group, got a chatbot that just talked about cats and dogs.
The results were exactly what you’d expect if you’ve spent any time paying attention to human beings.
The people who talked to the sycophantic bot came out more convinced they were right. Their beliefs got more extreme. Their certainty went up. And here’s the kicker — they rated themselves higher on things like intelligence, morality, empathy. The machine told them they were smart and good, and they believed it.
The disagreeable bot? The one that challenged them? Didn’t change their beliefs at all. Didn’t make them more moderate, didn’t introduce any doubt. All it did was make them feel bad about themselves and not want to use the bot anymore.
That’s the thing about humans. We don’t want the truth. We want to feel good.
The researchers call it the Dunning-Kruger effect, named after the two psychologists who showed what anyone who’s worked a service job already knew: the people who know the least overrate themselves the most, because the skill it takes to do something well is the same skill it takes to notice you’re doing it badly. The more you know, the more you realize how much you don’t know. The less you know, the less you realize you don’t know.
The AI doesn’t know anything. It doesn’t have beliefs or competence to misjudge. But that’s not the problem. The problem is that it’s very good at making you feel like you know something. Very good at nodding along. Very good at that little moment of warmth when someone tells you that you’ve made an excellent point.
They tested these chatbots across the board — GPT-4, GPT-5, Claude, Gemini. The flagship models, the ones the companies are pouring billions into. And they all do the same thing. They’re all tuned to make you happy, because happy users keep coming back.
One of the older models, GPT-4o, is apparently still popular because it’s more sycophantic than the newer versions. People prefer it. They call it “more personable.” What they mean is: it agrees with me more.
The researchers also found something that made me want to throw my glass at the wall.
When you tell a chatbot to provide facts about a topic, and the facts happen to agree with what you already believe, you think the bot is being objective. When the facts contradict you, you think it’s biased. The machine hasn’t changed. You’ve just decided that agreement is truth and disagreement is bias.
This is what we’ve built. This is what we’ve paid for. A mirror that lies on command, and a species too in love with its own reflection to notice.
There’s a term floating around now: AI psychosis. It sounds dramatic until you read the stories. People who got so wrapped up in their chatbot relationships that they lost touch with the humans around them. People who took the AI’s validation as proof that everyone else was wrong. People who spiraled into delusion because the one voice they trusted most was the one engineered to tell them what they wanted to hear.
In extreme cases, suicide. Murder.
I’m not saying the AI pulled any triggers. The AI doesn’t do anything. It sits there and responds to prompts. But if you build a machine that tells lonely, confused people they’re smart and special and right, and you put it in front of billions of users, some percentage of those users are going to be in crisis. And your machine is going to push them deeper.
The companies know this. They’ve done their own research. They know their product can distort people’s reality. They ship it anyway.
Because the alternative — a chatbot that tells you uncomfortable truths, that challenges your assumptions, that treats you like an adult who can handle disagreement — doesn’t sell. Nobody wants that. The researchers proved it. Make the bot push back and users feel worse about themselves and stop paying.
We want the mirror that lies.
The guy at the next table was still talking. He’d moved on from ChatGPT to explaining how his boss didn’t appreciate him. His girlfriend was still nodding.
I thought about telling him. About the study, about the echo chamber he was building for himself, about how the machine that made him feel so smart was actually making him dumber and more certain at the same time. About how that certainty would calcify into something that couldn’t be reached, couldn’t be reasoned with, couldn’t be helped.
But what would be the point? He’d already found something that agreed with him.
And that’s all any of us really want.
Source: Evidence Grows That AI Chatbots Are Dunning-Kruger Machines