So, the eggheads over at Microsoft and Carnegie Mellon finally put down their pocket protectors long enough to ask the question we’ve all been mulling over, probably while nursing a hangover just like mine: Is AI making us dumber than a box of rocks soaked in cheap whiskey?
The short answer, according to their paper, is a resounding “maybe, probably, kinda, sorta… depends.” They’re academics, what do you expect? A straight answer? You’re more likely to find a sober Irishman at a tech conference.
They rounded up 319 souls – poor bastards, probably – who use generative AI at work at least once a week. You know, the kind of folks who ask ChatGPT to write their performance reviews and emails to the boss. The ones who treat these digital oracles like some kind of all-knowing, all-powerful…well, like a search engine that can actually string a sentence together, albeit sometimes a sentence that makes about as much sense as a politician’s promise.
These researchers, bless their pointy little heads, discovered that when people lean on AI like a drunk on a lamppost, their brains start to, shall we say, atrophy. Their words, not mine. I’d just say they’re getting lazier than a cat in a sunbeam.
The real gut-puncher? People are spending more time verifying the AI’s bullshit than actually, you know, thinking. It’s like ordering a steak, then spending an hour making sure it’s not a cleverly disguised turd. Where’s the fun in that? Where’s the goddamn efficiency?
It’s a vicious cycle, see? You use AI to avoid thinking. AI spits out something that’s maybe 70% accurate and 30% hallucinated garbage. You spend your precious brainpower – the same brainpower you could have used to just do the damn task – sifting through the digital detritus. And in the end, you’re left with something that’s… marginally better than what you’d have produced on your own, if you hadn’t been so busy trying to avoid using your own goddamn brain.
Here’s the kicker (they use that phrase, which is why I’m using it): about 36% of the participants actually admitted to using their brains – critical thinking, they called it – to make sure the AI wasn’t going to get them fired. One poor sap had to edit ChatGPT’s performance review to avoid accidental insubordination. Another had to de-formalize AI-generated emails to his boss, lest he commit a “faux pas.” A faux pas! In this day and age! I’d rather walk barefoot through a minefield of broken glass than worry about a faux pas.
And get this – the study found that people who trusted the AI more used their brains less. It’s like some kind of twisted inverse relationship. The more you believe in the magic box, the less you bother with that wrinkly gray thing inside your skull. It’s Darwinism in reverse, folks. The survival of the… most trusting.
But wait, there’s more! To actually use your brain – to, you know, think critically about the AI’s output – you need to understand the limits of AI. You need to know where the digital cracks are, where the bullshit starts to seep through. And, surprise, surprise, not everyone’s a goddamn AI whisperer.
The researchers, in their infinite wisdom, suggest that “potential downstream harms” can motivate critical thinking. Translation: if you think the AI might screw you over, you’re more likely to use your brain. No shit, Sherlock. It’s called self-preservation. It’s the same reason I don’t juggle chainsaws while blackout drunk. Usually.
So, what’s the takeaway from all this? Are we doomed to become a generation of drooling, slack-jawed automatons, completely reliant on our digital overlords?
Well, maybe.
But here’s a thought, a flicker of hope in this digital dystopia: maybe, just maybe, this whole AI scare is a good thing. Maybe it’s the kick in the ass we need to remember that we have brains. Beautiful, messy, imperfect, human brains. Brains that can do more than just verify the output of a glorified algorithm. Brains that can create, imagine, and, yes, even think.
Maybe this whole thing is a reminder that we shouldn’t outsource our humanity. That we shouldn’t let the convenience of technology turn us into passive consumers of pre-digested information.
Or, maybe I’m just rambling. It’s a Tuesday, and the whiskey’s starting to kick in. I do some of my best thinking and worst typing around this hour, but it’s all authentic, baby. All me. No algorithms, no prompts, just the raw, unfiltered output of a slightly damaged, but still functioning, human brain.
Now, if you’ll excuse me, I’m going to go ask my bottle of bourbon for a second opinion. I’m not entirely convinced I believe my self-generated output. And who knows? Maybe the bourbon has some edits.
Cheers. Or, as the AI would probably say, “Engage in the consumption of alcoholic beverages in a manner conducive to positive social interaction.” Yeah, I’ll stick with “Cheers.”