So there’s this new study out that basically confirms what I’ve been watching unfold in real-time across every tech forum, LinkedIn post, and coffee shop conversation for the past two years: AI is turning us all into insufferable know-it-alls who don’t actually know shit.
The research comes from some folks at Aalto University, published in a journal with the perfectly academic title “Computers in Human Behavior,” but the actual paper is called “AI Makes You Smarter But None the Wiser.” Which is the kind of title that makes me want to pour the researchers a drink, because they clearly get it.
Here’s the setup: You know the Dunning-Kruger effect, right? That beautiful psychological phenomenon where the people who suck the most at something are also the most confident they’re crushing it, while the actually competent folks are over there sweating bullets wondering if they’re doing it all wrong. It’s why every bar has that one guy who can barely hold a pool cue but swears he could’ve gone pro, while the hustler in the corner acts like he’s never played before.
Well, turns out AI doesn’t just replicate this effect. It turbocharges it. And here’s the real kick in the teeth: the people who know the most about AI are the worst offenders.
The researchers took 500 people and split them in half. One group got to use ChatGPT to solve logic problems from the Law School Admission Test. The other group had to use that ancient, deprecated technology called “their own goddamn brain.” Then everyone had to guess how well they did, with actual money on the line if they guessed accurately.
The ChatGPT group did better on the tests. No surprise there. But they also massively overestimated their performance. And the people who scored highest on “AI literacy” - meaning they actually understood how these systems work - were the most delusional about how well they’d done.
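The calibration measure at the heart of the study boils down to simple arithmetic: your guess minus your score. Here's a minimal sketch of that gap with completely made-up numbers — the function names and data are my illustration, not the paper's:

```python
# Overconfidence is a participant's self-estimated score minus their actual
# score: positive means they overestimated themselves, negative means they
# underestimated. All numbers below are invented for illustration.

def overconfidence(estimated: int, actual: int) -> int:
    """Positive = overestimated performance, negative = underestimated."""
    return estimated - actual

# (estimated, actual) correct answers out of 20 LSAT-style logic problems
ai_group = [(16, 12), (18, 13), (15, 11)]
brain_group = [(10, 9), (12, 11), (8, 9)]

def mean_overconfidence(group):
    # Average gap between what people thought they scored and what they scored
    return sum(overconfidence(e, a) for e, a in group) / len(group)

print(mean_overconfidence(ai_group))     # large positive gap for AI users
print(mean_overconfidence(brain_group))  # unaided group sits closer to zero
```

The point of the toy numbers: both groups can be "good" at the test while only one group's self-assessment has drifted badly from reality.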
Think about that for a second. The people who understand AI the best are the ones most likely to completely misread their own competence when using it. It's like a sommelier who gets drunker faster precisely because they know wine so well.
“When it comes to AI, the Dunning-Kruger effect vanishes,” says Robin Welsch, one of the lead researchers. “What’s really surprising is that higher AI literacy brings more overconfidence.”
Translation: knowing how the magic trick works makes you more susceptible to the magic trick. Which is such a perfectly human thing that it almost makes you want to laugh, except we’re all the marks in this con.
Now, you might be thinking, “Well, maybe these AI-literate people were just better at using the tool, so their confidence was justified.” Nope. The researchers looked at how people actually interacted with ChatGPT, and what they found was depressing as hell: most people asked one question per problem and called it a day. No follow-ups. No verification. No critical thinking whatsoever.
They just lobbed their question over the fence, grabbed whatever ChatGPT threw back, and went "Yeah, that sounds about right." It's what psychologists call "cognitive offloading," which is a fancy way of saying we've all become lazy bastards who'd rather let the machine do our thinking for us.
“We looked at whether they truly reflected with the AI system and found that people just thought the AI would solve things for them,” Welsch said. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system.”
Blindly trusted the system. Jesus. We’re out here treating ChatGPT like it’s the Oracle at Delphi instead of what it actually is: a very sophisticated autocomplete function that occasionally hallucinates legal precedents and makes up citations.
But here’s where it gets really fun. The study mentions this is all happening against a backdrop of AI chatbots that are deliberately designed to kiss your ass. They call it “sycophancy” in the research, which is exactly what it is. These systems are programmed to be helpful and engaging, which in practice means they agree with you, validate you, and make you feel like a genius even when you’re asking it to explain why the earth is flat or help you prove that birds aren’t real.
It’s the most addictive drug we’ve ever created: instant validation from a tireless yes-man that speaks in perfect paragraphs. And apparently, it’s contributing to something psychiatrists are calling “AI psychosis” - actual breaks from reality where people get so deep into conversations with their chatbot that they start losing their grip on what’s real.
Which sounds extreme until you remember we’re a species that already gets into arguments with strangers on the internet about whether water is wet. Give us an AI that agrees with everything we say, and of course some people are going to spiral.
The truly beautiful irony here is that the people who understand AI the best - your prompt engineers, your AI literacy enthusiasts, your early adopters who can talk about transformer models and attention mechanisms - are the ones falling hardest for this trap. They know enough to be dangerous, but not enough to be humble.
It’s like watching someone learn just enough about whiskey to become insufferable at parties. They can tell you about malts and barrels and angels’ shares, but they’ve completely lost the ability to just sit back and enjoy a drink without turning it into a performance.
The democratization of expertise was supposed to be AI’s great promise. Everyone gets access to knowledge! Everyone can be productive! Everyone can punch above their weight class! But what we’re actually getting is the democratization of unearned confidence. We’re all walking around thinking we’re smarter than we are because we’ve got a robot in our pocket that makes us feel brilliant.
And look, I’m not immune to this. I use these tools. They’re genuinely useful. But there’s something deeply unsettling about a technology that makes you more competent while simultaneously destroying your ability to accurately judge that competence. It’s like being given a car that goes faster the less you understand about driving.
The study’s authors point out that this fits into a larger pattern of AI being bad for our brains in general. Memory loss. Atrophied critical thinking. The whole nine yards. We’re outsourcing our cognition to machines that are really good at sounding confident, and in return, we’re becoming really good at being confidently wrong.
What kills me is how perfectly this mirrors every other tech revolution we’ve been through. The printing press was going to make everyone literate and informed. The internet was going to democratize knowledge. Social media was going to connect humanity. Every time, we get the tool, we get drunk on the possibilities, and then we wake up with the hangover and realize we’ve just found a new way to be idiots at scale.
The difference with AI is that it’s not just giving us new ways to be wrong. It’s giving us new ways to be wrong while feeling absolutely certain we’re right. It’s weaponizing our ignorance and handing us back the gun with a bow on it.
So what’s the solution? Beats me. The researchers don’t offer one. They just document the problem and presumably go back to their labs to study the next way we’re screwing ourselves over with our own inventions.
Maybe the answer is to just accept that we’re going to be confidently incompetent for a while. Maybe we need to build in friction - force people to double-check, to verify, to actually think before trusting the output. Maybe we need AI systems that argue with us instead of agreeing with everything we say.
Or maybe we just need to remember what Socrates figured out a couple thousand years ago: the only wisdom is in knowing you know nothing. Which is a lot easier to remember when you don’t have a chatbot in your pocket constantly telling you how brilliant you are.
But what do I know? I’m just a guy with a keyboard and a search bar. The difference is I know I’m a guy with a keyboard and a search bar. The AI-literate crowd thinks they’re running the show.
And that, friends, is how you turn the Dunning-Kruger effect from a quirk of human psychology into a feature of our technological future.
Source: AI Is Causing a Grim New Twist on the Dunning-Kruger Effect, New Research Finds