Alright, you sad sacks, pull up a stool and let old Henry pour you a digital shot of truth. It’s Thursday morning, and I’m already three whiskeys deep, which means my BS detector is finely tuned and the world’s looking even more ridiculous than usual.
Today’s special? A story so rich with irony, it’s practically dripping with it. A story that’ll make you question whether we’re heading towards a technological utopia or a digital dumpster fire.
So picture this: a Stanford professor, a big-shot AI expert, gets hired to testify in a case about AI-generated deepfakes in Minnesota. His job? To convince the judge that these digital doppelgangers are a menace to society, a threat to democracy itself.
And the plot twist is… the good professor, this supposed oracle of artificial intelligence, gets his own testimony tossed out faster than a bad Tinder date. Why? Because he used ChatGPT (running GPT-4o) to help draft his expert declaration, and the damn thing hallucinated.
Made up citations. Fabricated references. Basically, the AI went full-on acid trip and conjured up a bunch of academic gobbledygook that existed only in its digital imagination.
Now, I’ve seen some things in my day. I’ve seen grown men cry over spilled beer. I’ve seen the internet. But this… this is a whole new level of messed up.
You’ve got an AI expert, hired to warn about the dangers of AI, getting screwed over by the very thing he’s supposed to be an expert on. It’s like a snake eating its own tail, except the snake’s made of code and the tail’s made of lies. And the punchline is… it’s all happening in a courtroom, a place where truth is supposed to matter.
The state of Minnesota had passed a law banning these AI deepfakes in elections, and it was defending that law in court. And who can blame them? You don’t want some digital puppet master pulling the strings, making politicians say things they never said. But the irony, as thick as a cheap cigar, is that the expert they brought in to defend the law got done in by the very thing he was warning against.
This professor, Jeffrey T. Hancock, he’s not some random guy off the street. He’s a Stanford professor. A specialist in communication and AI. He’s supposed to know this stuff, right?
But here’s the thing about these AI chatbots: they’re like that guy at the end of the bar who’s read a few books and suddenly thinks he’s an expert on everything. They sound convincing, they throw around big words, but when you dig a little deeper, you realize they’re full of crap.
And that’s what happened here. The opposing attorneys, representing a state representative (Mary Franson) and a YouTuber (Christopher Kohls) who were challenging the law, smelled a rat. They started digging, and they found out that the citations in the professor’s filing were about as real as a three-dollar bill.
And the judge? She wasn’t having it. She tossed the professor’s testimony out like a used cigarette butt. The Minnesota Attorney General’s office tried to save face, asked to submit a revised filing, but the judge said, “Nah, you’re done.”
So, what’s the takeaway from this digital circus? Well, for starters, it’s a reminder that AI, for all its fancy algorithms and processing power, is still just a tool. A tool that can be used for good or for bad, for truth or for lies.
And the surprise twist is… it’s also a reminder that we humans, even the so-called experts, are still pretty damn fallible. We’re easily fooled, easily swayed by a smooth-talking machine. We’re so eager to embrace the future that we forget to check if it’s actually plugged in.
This whole case is a cautionary tale about the dangers of overreliance on AI. It’s a reminder that we can’t just blindly trust these machines, especially when it comes to something as important as the law.
Now, I’m not saying AI is all bad. Hell, I’m a tech writer, aren’t I? I see the potential. I see how it can be used to automate tedious tasks, to analyze data, to maybe even write a decent haiku or two.
But, and here’s the kicker, we can’t let it replace human judgment. We can’t let it replace critical thinking. We can’t let it replace the simple act of checking our damn facts.
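And checking a citation isn’t rocket science, either. Here’s a back-of-the-bar-napkin Python sketch of what that might look like, running a reference past Crossref’s free public API. The title in it is a made-up stand-in of my own, not one of the actual phantom citations from the filing:

```python
# Back-of-the-napkin citation check against the public Crossref API.
# The title below is a hypothetical stand-in, not an actual citation
# from the Minnesota filing.
import requests

suspect_citations = [
    "Deepfakes and the Erosion of Trust in Political Media",  # hypothetical
]

for title in suspect_citations:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Crossref does fuzzy matching, so it almost always returns *something*.
    # The human step: eyeball whether any hit is actually the cited paper.
    print(f"Cited: {title}")
    for item in items:
        found = (item.get("title") or ["<untitled>"])[0]
        doi = item.get("DOI", "no DOI")
        print(f"  Candidate: {found} ({doi})")
```

Twenty-odd lines of glue code and thirty seconds of squinting at the output. That’s the going price for not getting your testimony thrown out of federal court.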
This isn’t just about the courtroom, either. It’s about every aspect of our lives. We’re surrounded by AI, from the algorithms that curate our news feeds to the chatbots that answer our customer service questions.
And if we’re not careful, if we don’t keep our wits about us, we’re going to end up in a world where truth is whatever the machine says it is. A world where reality is just another hallucination.
This case should be a wake-up call. A reminder that we need to be vigilant, skeptical, and maybe just a little bit paranoid. We need to question everything, even the things that come from the mouths of experts.
Especially the things that come from the mouths of experts.
Because as this case in Minnesota proves, even the experts can be fooled. Even the experts can be wrong. Even the experts can get a little too cozy with the bottle… of code, that is.
So, the next time you hear someone talking about the wonders of AI, remember this story. Remember the professor and his hallucinating chatbot. Remember that even in the age of artificial intelligence, the most important thing is still good old-fashioned human intelligence.
And maybe a stiff drink. Or two. Or five.
Cheers, you magnificent bastards. Until next time, stay wasted, stay wise, and for the love of all that’s holy, don’t trust everything you read on the internet. Even if it comes from a Stanford professor. Especially if it comes from a Stanford professor.
Bottoms up!
Source: The Irony – AI Expert’s Testimony Collapses Over Fake AI Citations