The Machine Knows the Answer But You Forgot the Question

May 7, 2026

The kid at the bus stop was staring at his phone with the kind of concentration I used to reserve for figuring out which bar would let me run a tab. Thumbs moving fast, brow furrowed, completely absorbed in whatever the screen was feeding him. I almost said something. But who the hell am I to judge? I’ve been staring at screens too, reading about how staring at screens is making us all softer in the head.

A study dropped this week from MIT, Carnegie Mellon, Oxford, and UCLA—places where they keep the smart people in clean rooms. They found something that shouldn’t surprise anyone who’s spent more than five minutes listening to conversations at a bus stop or a dinner party. Using AI chatbots for just ten minutes makes people measurably worse at solving problems when the tool gets taken away.

Ten minutes.

That’s not long enough to finish a decent pour of bourbon, but it’s apparently long enough to start eroding the part of your brain that handles frustration, the thing that makes you stick with a problem when it doesn’t yield immediately. The researchers call it “problem-solving persistence.” I call it the difference between people who finish what they start and people who call customer service because their coffee is too hot.

They paid people to solve problems online. Some got a machine that did the work for them. Others had to sweat it out. Then they yanked the AI away. The ones who’d leaned on the crutch gave up faster. Flubbed easier answers. Their legs had forgotten how to hold their weight.

One of the researchers said AI should act more like a coach—building the floor under you instead of just hauling you to the answer. It’s a nice idea. It’s also completely at odds with how every company in the world is actually building these things. They’re not building coaches. They’re building oracles. Push a button, get the truth. The faster, the better. The less you have to sweat, the more users. The more users, the more valuation. And around and around we go.

The researchers say we should be careful. Yeah. Careful is what you tell someone before they do something stupid anyway. The tech industry has never been careful a day in its life, and we’re supposed to believe they’ll grow a conscience now? The less friction, the more profit. The more profit, the softer we get. The softer we get, the more we need the machine. It’s a perfect business model, as long as you don’t look at what it’s doing to the customers.

I don’t even care about the study itself. The finding is obvious. What gets me is that someone had to spend research money to prove what used to be called common sense. When I was twenty-two, sorting mail at the post office on the graveyard shift, there was no machine to tell you where a mis-sorted package belonged. You learned the routes by screwing up. You carried a package three blocks out of your way because you read the wrong number, and you came back sweating, and the old dispatcher named Frank—chain-smoking Camels, breath like stale coffee and regret—looked at you and said nothing. Just handed you another stack. That silence was worse than being yelled at. It meant try again. And you did. Because there was no app to make the mistake go away.

The learning was in the friction. The friction was the whole point.

Now? Now we’re stripping the friction out of everything like it’s a defect. Want to write something? Ask the machine. Want to code something? Ask the machine. Want to understand a subject you know nothing about? Ask the machine. And like every pharmaceutical ad I’ve ever seen during a baseball game—between commercials for pickup trucks and debt consolidation—the side effects get buried in the fine print at the bottom of the screen.

Let me read you the fine print:

May cause loss of persistence. Side effects include giving up faster, flubbing easier answers, and a reduced ability to sit with confusion. Do not use while thinking. If you experience independent thought, discontinue use immediately and consult your actual brain. May be habit-forming. Withdrawal may involve realizing you don’t know how to do things you used to know how to do. In rare cases, users have been known to pay other people money to make themselves stupider. Talk to your supervisor if problems persist—except your supervisor is also using AI now, and he doesn’t know either.

That’s the real cost. We traded the parts of ourselves that are hardest to keep—the discipline, the frustration tolerance, the willingness to sit with not knowing—for faster answers and cheaper work. We got a discount on our own competence, and the receipt is a brain that gives up the moment the screen goes dark.

This isn’t hypothetical. A Wired reporter used one of these AI assistants to fix a Linux problem. The machine delivered commands with absolute confidence. No explanation. No teaching. Just: type this. Trust me. The machine bricked the computer. Left the user with dead hardware and a slightly deader faith in his own judgment. That’s not a bug. That’s the business model. The less you understand, the more you need the product.

And I keep thinking about the kid at the bus stop. He’s maybe twenty, maybe twenty-two. He grew up with this stuff already wired into his blood. He’s never known a world where you had to figure things out alone. Where a math problem sat on the page like a wall and you either climbed it or stared at the bricks until something shifted. Where the difficulty wasn’t an enemy to be eliminated but the only force strong enough to shape you into someone who could survive without a digital savior.

A while back I read about a study where they gave lab rats unlimited access to a lever that delivered pleasure directly to their brains. The rats starved to death sitting next to piles of food. They kept pressing the lever until they died. I thought about that for a long time. I thought about it because we are the rats now, and the lever is a chat window, and the food is our own competence. We can’t stop pressing long enough to notice we’re starving.

That’s the fear. Not the machines. What we’re becoming. A culture that treats difficulty as a bug to be patched. A generation raised by screens that answer every question before they learn which questions are worth asking. A species that pays to outsource its own development to companies that make more money the more helpless we become.

He looked up from his phone just as I stood to leave. That glazed look people get when they surface from the digital deep. For a second our eyes met. He blinked, and for just a moment I saw something else there—exhaustion, maybe, or the dim half-awareness that something had been lost. Then the screen lit up in his hand and the moment passed, and he was gone again, scrolling, his thumb pressing the lever over and over, searching for something that wouldn’t make him hungrier.

I walked twelve blocks to clear my head. Took wrong turns on purpose, just to see if I could find my way back. The sun was setting and the light was that color that makes you think maybe the world’s worth saving after all. I got home sweaty and irritated. Alive—not triumphant, just awake. And grateful for the wrong turns.

There was a screen waiting on my desk, humming, ready with answers if I needed them. I sat there a long time before I opened it. And when I did, it was to write this—not to ask for help, just to put words down and see if I still knew how to carry my own weight. I wasn’t sure. I’m still not.

The machine knows the answer. But you? You’re forgetting the question. And sooner or later, that’s going to matter more than any of us want to admit.
Source: Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows

Tags: ai machinelearning humanness automation culture