The sun is coming through the blinds at a hateful angle, hitting the dust motes dancing over the keyboard. It’s a Saturday, the day the rest of the world pretends to have hobbies, and I’m sitting here staring at a screen that glows with the promise of infinite knowledge. Or at least, that’s the sales pitch.
We were told that the machines would free us. We were told that having the sum total of human history accessible via a chat window would make us gods. We’d be walking encyclopedias, quoting Kant while waiting for the bus, solving fusion equations on the back of a napkin because the AI whispered the secrets into our ears.
But a new paper just dropped onto my desk—metaphorically speaking, my desk is currently occupied by an overflowing ashtray and a glass that smells like last night’s mistake—and it says the exact opposite. It turns out, the easier you make it to get an answer, the quicker that answer slides right off the polished surface of your brain.
Researchers took 10,000 human guinea pigs and ran them through the wringer. Seven experiments. They pitted the old reliable method of “Googling it” against the shiny new toy of “Asking the Chatbot.” The result? If you use the chatbot, you learn less. You remember less. Your understanding is shallower than a kiddie pool in a drought.
I reached for the bottle of cheap scotch I keep in the bottom drawer. It helps the irony go down. See, for years, the academics and the pearl-clutchers told us that search engines were the enemy. They said Google was making us stupid because we didn’t have to memorize dates or phone numbers anymore. They mourned the death of the library card catalog. They wept for the Dewey Decimal System.
And now? Now, those same search engines are being hailed as bastions of deep intellectual engagement. That’s how far we’ve fallen. The bar has been lowered so far into the dirt that digging through a page of search results now counts as high-level cognitive labor.
Here is the crux of it: when you use a search engine, you have to wade through the garbage. You have to look at a list of links, read the snippets, dodge the ads for penis enlargement pills and questionable crypto schemes, and decide which source looks like it wasn’t written by a hallucinating teenager. You have to synthesize. You have to sift.
It’s the difference between hunting for your dinner and having it tube-fed to you while you sleep.
The chatbot, on the other hand, is the ultimate enabler. You ask, “What caused the fall of Rome?” and it pukes out a perfectly formatted, bullet-pointed list. No friction. No struggle. No nuance. You read it, you nod, you think, “Ah, yes, lead pipes and barbarians,” and then you click away.
Ten minutes later, if someone asks you why Rome fell, you’ll stare at them blankly and say, “I think it had something to do with gladiator salaries.”
The study found that people who used the chatbots couldn’t explain the concepts they just read. They could parrot the words, maybe, but the underlying logic? Gone. It’s like eating cotton candy for dinner. It feels like food in your mouth, but it dissolves into nothing, and ten minutes later you’re starving and your teeth hurt.
I lit a cigarette, watching the smoke curl up toward the yellowing ceiling fan. This is the great tragedy of efficiency. The tech world—those clean-faced boys in hoodies who think they’re saving humanity—they are obsessed with removing friction. They want to make everything smooth. They want a world where you never have to pause, never have to doubt, never have to work for anything.
But friction is where the thinking happens. Friction is the grit in the oyster that makes the pearl. Without the struggle of finding the answer, the answer has no value. It’s just data passing through you like light through a window.
Think about the last time you really learned something. Maybe it was how to fix a leaky faucet, or the history of the Punic Wars, or why your ex-wife left you. You probably didn’t get that understanding from a three-sentence summary. You got it by reading five different articles, watching a grainy YouTube video, screwing it up three times, and swearing until you were blue in the face. The misery solidified the knowledge.
The chatbot denies you the misery. It’s too polite. It’s too helpful.
The researchers noted that when you search the web, you end up reading way more than you intended. You’re looking for ten facts, but you have to skim a hundred sentences to find them. In that skimming, your brain is building a map. It’s seeing the context. It’s understanding the landscape.
When you ask the AI for a list of ten facts, it gives you exactly ten facts. No context. No surroundings. It’s like trying to understand a city by looking at ten postcards instead of walking the streets. You see the Eiffel Tower and the Louvre, but you miss the smell of the bakeries, the dog sh*t on the sidewalk, the way the light hits the river at dusk. You miss the reality.
I poured another finger of scotch. The liquid burned pleasantly. At least the burn tells you it’s real.
The funny thing is, the study suggests that maybe the chatbots are too good. They deliver too much information, too perfectly. One proposed solution? Make the chatbots dumber. Or at least, make them stingier. Have them give shorter answers. Make them force the user to ask follow-up questions.
Imagine that. We built the most sophisticated artificial intelligence in human history, a web of artificial neurons doing a passable impression of a mind, and to make it useful for education, we have to hobble it. We have to break its legs so it doesn’t run too fast for us to catch up.
It reminds me of a woman I dated in the 90s. She was too good for me—smart, beautiful, had a job that didn’t involve heavy lifting. I sabotaged it, of course. I couldn’t handle the perfection. I needed the mess. Humans need the mess. We need the jagged edges to grab onto.
If the AI just gives us the answer, we become passive receptors. We become RAM with no disk behind it, holding the data for a microsecond before it’s overwritten by a cat video or a notification from a food delivery app.
There is a deeper horror here, too. It’s not just that we forget what the chatbot tells us. It’s that we start to trust the machine’s synthesis over our own. When you search the web, you have to be the judge. You have to look at a source and say, “This guy sounds like a lunatic,” or “This looks credible.” You are exercising your critical faculties.
With the chatbot, you surrender that judgment. You assume the machine has done the vetting for you. You outsource your skepticism. And once you stop using your skepticism, it atrophies like a leg in a cast. Eventually, you’ll believe anything, as long as it’s presented in a confident, sans-serif font.
I took a drag, the cherry of the cigarette glowing angry red. The room was getting hazy, or maybe that was just my eyes.
The irony is thick enough to cut with a knife. We spent decades building tools to help us avoid reading, and now we find out that reading was the point all along. We built tools to avoid thinking, and now we’re shocked—shocked!—that we’re becoming thoughtless.
It’s the same with everything. We invented cars so we wouldn’t have to walk, and now we pay gym memberships to walk on treadmills that go nowhere. We invented social media to connect, and we’re lonelier than ever. We invented AI to make us smarter, and it’s turning our gray matter into oatmeal.
The paper mentioned that explanations written by people who used search engines were consistently rated as more helpful than explanations written by the chatbot users. Think about that. The people who had to dig through the dirt came back with gold. The people who were handed the gold didn’t know what to do with it. They couldn’t describe it. To them, it was just a shiny rock.
So, what’s the solution? The researchers say we need “Study Mode.” We need the AI to quiz us, to withhold information, to make us work for it.
I say, good luck with that. You can’t sell friction to a generation addicted to the glide. You can’t sell “hard work” to a species that has spent the last century trying to automate it out of existence.
We are going to keep pressing the easy button. We are going to keep asking the box to do our homework, write our emails, and think our thoughts. And slowly, quietly, the lights in the upper floors of our minds will flicker out, one by one.
But hey, at least we’ll have more time for the important things. Like arguing with strangers on the internet about politics we don’t understand because we got our talking points from a hallucinating algorithm.
I finished the glass and set it down, adding another ring to the wood.
There’s a lesson here, somewhere, buried under the rubble of good intentions. Maybe it’s that you can’t cheat the universe. There is no free lunch, and there is no free knowledge. You pay with your time, or you pay with your incompetence.
The bottle is looking at me. It’s about half empty, or half full, depending on how much you hate yourself today. I think I’ll go for a walk. Not a digital walk. A real one. Down to the corner store. I’ll have to use my legs. I’ll have to navigate the traffic. I’ll have to interact with the clerk who hates his job as much as I hate mine.
It’ll be a struggle. It’ll be inefficient. It’ll be absolutely human.
And if I want to know who won the 1975 World Series, I won’t ask the phone in my pocket. I’ll ask the old guy sitting on the crate outside the liquor store. He might get it wrong. He might tell me a story about his bad back instead. He might ask me for a dollar.
But whatever he tells me, I bet I’ll remember it.
Source: People Learn More Slowly From Chatbots Than Through Legacy Search