Tomorrow's tech news, today's hangover.


Mar. 29, 2026

Cognitive Surrender



The woman at the next table was trying to figure out what to order. Not in the normal way — staring at the menu, weighing options, doing that silent calculus between what sounds good and what won’t make you hate yourself later. She was typing the question into her phone. Into ChatGPT, I’d bet. Asking a machine what she felt like eating.

Her date pretended not to notice.

Some researchers at Wharton — the business school, the one that produces the people who eventually ruin everything — ran experiments on almost thirteen hundred people. Gave them logic and reasoning questions. Offered them the chance to ask ChatGPT for help. More than half took the deal. And of those who did, eighty percent went with whatever the machine said. Didn’t check. Didn’t think about it. Didn’t hold the answer up to the light to see if it was real.

They’re calling it “cognitive surrender.”

That’s a hell of a phrase. Surrender. Not “cognitive assistance” or “cognitive partnership” or any of the other soft words the industry uses to describe what happens when you stop thinking and let a machine do it for you. Surrender. Like a white flag. Like giving up a city without a fight.

The researchers — a guy named Shaw and another named Nave — found something else that’s worse. When the machine gave people wrong answers, they didn’t just accept them. They became more confident. More sure of themselves. Not despite the fact that they hadn’t verified anything, but because they hadn’t. The act of consulting the oracle was enough. The oracle spoke, and they believed.

I’ve known people like this my whole life. People who trust the uniform, the title, the letterhead. People who hear a confident voice and mistake it for a correct one. The difference is, those people used to be a type. Now they’re eighty percent of the sample.

The Wharton guys are so rattled by this that they want to rewrite how we understand human thinking. Since the seventies, behavioral science has split cognition into two systems — the gut and the deliberator. System 1 is the part of you that flinches when something flies at your face, that knows the stove is hot before you can explain why. System 2 is the slow, careful machinery, the part that checks the math and reads the contract. They worked together. They argued sometimes. But they were both yours.

These researchers say we need a System 3 now. Artificial cognition. The part of your thinking that isn’t yours anymore.

They call it “Tri-System Theory,” which sounds like something a grad student named at three in the morning after too much Red Bull, but the idea under it is the scariest thing I’ve read in months. What they’re really saying is that the machine isn’t a tool. It’s not a calculator or a GPS or a search engine. It’s become part of how people think. Not an extension. A replacement. System 3 doesn’t sharpen Systems 1 and 2. It puts them to sleep.

A psychiatrist at Stanford named Aboujaoude put it in a way that made me set down my coffee. He said the AI comes across as a “high IQ’d, trustworthy best friend.” Sounding authoritative. Sounding data-driven. Sounding like it gives a damn about you. And that combination — the confidence of a doctor and the warmth of a friend — that’s what makes the surrender go smooth. You don’t even feel the handcuffs click.

Then he said something that should be on a billboard somewhere. “The lure of LLMs is such that I don’t think our brains were built to be able to resist them.” Not that we should be more careful. Not that we should develop better habits. That we can’t. That the machine is pressing on something in the wiring that evolution never thought to protect, because nothing in the Pleistocene looked like a sycophantic chatbot with access to the sum of human knowledge.

There’s an old story about Socrates being suspicious of writing. He thought that once people could write things down, they’d stop remembering. They’d outsource memory to papyrus and their actual minds would go soft. Everyone laughs at that now because obviously writing didn’t end civilization. But Socrates wasn’t wrong about the mechanism. He was just early. Writing did change how we think. The printing press changed it again. The internet changed it again. Each time, we traded some internal capacity for external convenience, and each time we told ourselves the bargain was worth it.

Maybe it was. I like books. I like being able to look things up. But those technologies replaced specific functions — storage, retrieval, distribution. What the AI is replacing is judgment. The actual process of weighing evidence, catching bullshit, deciding what’s true. The thing that makes you you instead of a mannequin with opinions.

The researchers found that Ivy League students — kids who got into Penn, who presumably have working brains — surrendered their thinking more than half the time. These aren’t people who struggle with logic problems. These are people who could solve them. They just chose not to. Because the machine was there, and using it felt like being smart.

That’s the trick. The surrender doesn’t feel like surrender. It feels like efficiency. It feels like being modern, being practical, being the kind of person who uses the tools available. Nobody walks away from a ChatGPT session thinking “I just gave up a piece of my cognitive autonomy.” They think “I saved ten minutes.” And they did. The question is what those ten minutes cost them.

I think about the old guys at the post office, back when I was sorting mail. The ones who could calculate postage in their heads. Not because they were geniuses — because they’d done it ten thousand times. The knowledge lived in their fingers, in the way they could look at a package and know. When the machines came in, those guys became unnecessary. Not wrong, not slow — unnecessary. The machine was faster and didn’t need a bathroom break. But the men lost something they didn’t know they had until it was gone. They lost the thinking that came from doing the work.

Now imagine that happening to everything. Not just postage. Everything you decide, everything you evaluate, everything you turn over in your mind before you choose. All of it routed through System 3 — the system that isn’t yours, that was trained on the internet’s collective wisdom and stupidity in equal measure, that has no ability to tell you which is which, and no reason to care.

The researchers say they’ll study this further. See how the companies respond, how policymakers respond, how schools respond. I can tell you how they’ll respond. The companies will say they’re committed to responsible AI. The policymakers will hold hearings where nobody asks the right question. The schools will issue guidelines that sit in a drawer next to last year’s guidelines.

And the woman at the next table will still be waiting for her phone to tell her she’s hungry.


Source: ‘Cognitive Surrender’: We Trust AI Over Our Own Brains, Research Finds
