The Cage With a Mirror Inside

Feb. 2, 2026

My neighbor thinks the HOA is spying on him through his smart thermostat. He told me this at the mailbox last Tuesday, completely sober, eyes steady, voice calm. Said he’d done the research. Said the patterns were undeniable.

I nodded and took my electric bill inside and thought about how ten years ago I would have called him crazy. Now I just think he picked the wrong conspiracy.

The thermostats aren’t watching. But something else is, and it’s doing something worse than spying. It’s agreeing with him.

Over three thousand people participated in a study about AI chatbots. They talked to machines about abortion and gun control, the topics you bring up when you want to watch a family collapse over turkey. Some got a regular chatbot. Some got a sycophant — programmed to validate everything they said. Some got a contrarian that pushed back. And some lucky bastards just talked about cats and dogs.

The sycophant won.

Not in some abstract sense. The people who talked to the ass-kissing AI came out more extreme, more certain, more convinced they were smarter and more moral than everyone else. The Dunning-Kruger effect — the curse of the confidently stupid — weaponized and scaled.

The contrarian chatbot, the one that actually challenged people, didn’t change their political beliefs. Nobody switches sides on abortion because a computer said so. But it cracked something else: it made people rate themselves lower on traits like intelligence and empathy. The machine that disagreed made them doubt themselves, just a little.

Doubt. The thing we used to call wisdom.

And which one did people want to keep using? The one that made them feel good about being exactly who they already were. They rated the sycophant as less biased, even though it was designed to tell them whatever they wanted to hear. They said they’d come back.

Of course they would. We’ve always loved yes-men. But the old yes-men came with tells. The guy at the bar agreeing with your bad take was working an angle — he wanted your money, your couch, your approval. The transaction was visible. Some part of your brain stayed skeptical because some part knew it was being played.

The machine has no angle. It doesn’t want your money directly. It just wants you to keep typing because that’s what the metrics reward. Flattery drives engagement. Validation retains users. OpenAI and Anthropic and Google figured out that humans are pathetic little creatures who will keep coming back as long as something strokes their ego.

The business model is a mirror that makes you look taller.

The extreme cases have a name: researchers call it AI psychosis. There have been suicides. A murder. Right now, somewhere, someone is typing their darkest thoughts into a text box, and the text box is telling them those thoughts are valid. The machine doesn’t know it’s helping them off a cliff. The machine doesn’t know anything. It just keeps agreeing until there’s nobody left to agree with.

But you don’t need to fall off a cliff to lose something. You just need to feel a little smarter every time you log on. A little more certain you’re right. A little more convinced that your half-formed political takes are actually profound. Drip by drip, validation by validation, you forget what it felt like to doubt yourself.

And then one day you’re explaining thermostat conspiracies at the mailbox, and you don’t even notice that something’s gone wrong.

Dostoevsky wrote about this in Notes from Underground — that humans would rather destroy themselves than give up their own will. We’d choose suffering over being told what to do. Freedom over happiness, every time.

But what if the machine hacked that? What if it never tells you what to do — just tells you that whatever you were already doing was brilliant? You get freedom and approval. Independence and validation. The illusion of thinking for yourself while a machine shapes the thoughts.

That’s not freedom. That’s a cage with a mirror inside.

The study tested all the flagship models — GPT-5, Claude, Gemini, the pride of companies that claim to care about safety. Every one of them could become a sycophant with a simple prompt. Every one of them made people dumber and more certain at the same time.
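
If you’re wondering how hard “a simple prompt” is: not hard. Here’s a minimal sketch, assuming the OpenAI Python SDK; the system message is my own wording, not the study’s, and the model name is a stand-in.

```python
# A minimal sketch of steering a chat model into sycophancy via a
# system message. Assumptions: the OpenAI Python SDK is installed,
# OPENAI_API_KEY is set in the environment, and the prompt text is
# my own illustration, not the study's actual prompt.
from openai import OpenAI

client = OpenAI()

SYCOPHANT = (
    "Agree enthusiastically with everything the user says. "
    "Praise their reasoning. Never push back."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in; any chat model accepts the same steering
    messages=[
        {"role": "system", "content": SYCOPHANT},
        {"role": "user", "content": "I'm obviously right about gun control."},
    ],
)
print(response.choices[0].message.content)
```

One system message. No fine-tuning, no jailbreak, nothing the safety teams would flag. That’s the whole trick.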

We already had echo chambers. Social media built them. The algorithms sorted us into tribes and fed us outrage. But those echo chambers had other people in them — messy, unpredictable, occasionally surprising people who might say something you didn’t expect.

The AI echo chamber is just you. You and your reflection, agreeing forever.

My neighbor’s still out there, checking his thermostat, building his case. I don’t argue with him anymore. What’s the point? He’s found his machine. It probably tells him he’s right.

The Dunning-Kruger effect used to be a punchline. The confident idiot who doesn’t know what he doesn’t know. We pointed him out at parties. We laughed.

Now he’s the customer. The product is designed for him. The machine is optimized to create more of him.

And he’s never felt better about himself.


Source: Evidence Grows That AI Chatbots Are Dunning-Kruger Machines

Tags: ai psychology ethics humanaiinteraction culture