Look, I’ve spent enough time in bars watching people get progressively dumber while simultaneously getting more confident in their opinions to recognize the pattern. Three drinks in and suddenly everyone’s an expert on geopolitics, quantum mechanics, and why their ex was definitely a narcissist. The Dunning-Kruger Effect in action: the less you actually know, the more certain you are that you know plenty.
But here’s something that’ll make you need another drink: turns out when you add AI to the mix, even the smart people turn into overconfident idiots.
Some researchers at Aalto University just published a study that basically proves what I’ve been suspecting every time I see someone copy-paste a question into ChatGPT and treat the response like it came down from Mount Sinai on stone tablets. They gave about 500 people logic problems from the LSAT—you know, those brain-twisters that law schools use to separate the wheat from the chaff—and let half of them use ChatGPT. Then they asked everyone how well they thought they did.
The results? Everyone who used AI thought they crushed it. Spoiler alert: their actual scores said otherwise.
Here’s where it gets interesting, though. You’d expect the usual Dunning-Kruger pattern to show up—the dimmer bulbs overestimating their brilliance, the sharper knives in the drawer showing a little humility. But nope. When AI entered the picture, that whole dynamic vanished like free will in a deterministic universe. Instead, everyone—and I mean everyone—overestimated their performance.
But wait, it gets better. The people who considered themselves “AI literate”? They were the worst offenders. The more someone thought they understood AI, the more they overestimated their abilities when using it. It’s like a reverse Dunning-Kruger Effect, except instead of ignorance breeding confidence, knowledge breeds even more confidence. Which is somehow worse.
Professor Robin Welsch, who led this beautiful train wreck of a study, put it perfectly: “We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems—but this was not the case.”
Translation: knowing more about AI doesn’t make you smarter about using it. It just makes you think you’re smarter about using it. Which, if you think about it, is the most perfectly ironic outcome possible in a world that’s supposed to be enhanced by artificial intelligence.
The really depressing part—and trust me, I know depressing—is how people actually used ChatGPT. The researchers tracked the interactions and found that most users did exactly one thing: copied the question, pasted it into the chat, got an answer, and called it a day. No follow-up questions. No verification. No “hey ChatGPT, are you sure about that?” Just blind, beautiful, catastrophic trust.
Welsch calls this “cognitive offloading,” which is a fancy academic term for “letting the machine do all the thinking.” It’s like hiring someone to go to the gym for you and then wondering why you’re still out of shape.
The whole thing reminds me of watching someone use a calculator. You punch in the numbers, it spits out an answer, and you write it down without ever checking if you accidentally hit the wrong button. Except now we’re doing it with logic problems, legal reasoning, and probably half the emails you got today. The difference is that when your calculator tells you 2+2=5, you notice. When ChatGPT confidently explains why your fundamentally flawed legal argument is actually brilliant, you just nod along.
What really gets me is the metacognition angle. That’s the awareness of your own thought processes—basically, thinking about thinking. The researchers found that current AI tools don’t foster this at all. We’re not learning from our mistakes because we don’t even know we’re making them. The AI gives us an answer, we feel smart for having asked the right question, and we move on with our lives, blissfully unaware that we just outsourced our critical thinking to a machine that’s essentially autocomplete on steroids.
Doctoral researcher Daniela da Silva Fernandes suggests that AI should push back more, force users to explain their reasoning, make them work for it a little. “This would force the user to engage more with AI, to face their illusion of knowledge, and to promote critical thinking,” she says.
Which sounds great in theory, except for one tiny problem: nobody wants that. We’re using AI specifically because we don’t want to think harder. We want to think less. That’s the whole point. It’s like suggesting that fast food restaurants should require you to do twenty push-ups before they hand over your burger. Sure, it might be better for you, but it defeats the entire purpose of fast food.
The practical implications of all this are pretty grim. We’ve got a technology that makes everyone—from the genuinely clueless to the supposedly AI-savvy—overestimate their abilities while simultaneously reducing their engagement with the actual thinking process. It’s a recipe for a workforce that’s simultaneously more confident and less competent. Which, honestly, sounds like most corporate environments I’ve encountered, except now we’ve got a technological excuse for it.
And here’s the part that really keeps me up at night, besides the usual existential dread and bourbon: this study used logic problems from the LSAT. Those are designed to be hard, sure, but they’re also the kind of problems that have clear right and wrong answers. Imagine what happens when people apply this same blind trust to questions that don’t have clear answers. Political analysis. Medical advice. Relationship counseling. “Hey ChatGPT, should I leave my spouse?” “Absolutely, here are fifteen reasons why, delivered with the confidence of a system that has no stake in your happiness whatsoever.”
The researchers suggest we need platforms that encourage reflection, that make us engage more deeply with AI rather than treating it like a magic answer box. But I’m skeptical. Not because it’s a bad idea—it’s a great idea—but because it runs counter to every incentive in the system. The companies building these tools want them to be frictionless. Users want quick answers. Nobody’s optimizing for “makes you think harder about whether you’re actually understanding anything.”
So where does that leave us? Probably exactly where we are now: a world full of people who are increasingly confident in their AI-enhanced abilities while those abilities slowly atrophy from disuse. It’s like we’re all becoming those humans from WALL-E, except instead of floating around in hover chairs, we’re floating around in a cloud of artificial confidence, convinced we’re smarter than we’ve ever been while our actual cognitive abilities turn to mush.
The Dunning-Kruger Effect at least had the decency to spare the competent. You might not know exactly how good you were, but at least being smart came with some humility. Now AI has democratized overconfidence. We’re all Dunning-Kruger cases now, every last one of us, united in our shared delusion that we’re crushing it.
Maybe the real artificial intelligence was the overconfidence we generated along the way.
Pour another one. We’re gonna need it.
Source: AI use makes us overestimate our cognitive performance, study reveals