The Expert's New Clothes: When Bullshit Meets Binary

Dec. 7, 2024

Look, I’ve been staring at this story for three hours now, nursing my fourth bourbon, and I still can’t decide if it’s hilarious or terrifying. Probably both. Here’s the deal: some hotshot Stanford professor who literally makes his living talking about lies and misinformation just got caught submitting AI-fabricated citations in sworn legal testimony.

Let that sink in while I pour another drink.

Dr. Jeff Hancock, whose TED talk about lying has apparently hypnotized 1.5 million viewers (a depressing statistic in its own right), decided to let ChatGPT help him with his homework. And surprise, surprise - the AI got creative with the truth. The damn thing just made up a bunch of research papers that don’t exist.

You know what’s beautiful about this? This wasn’t some random blogger or Twitter keyboard warrior. This was expert witness testimony. In a legal case. About deepfakes. You can’t make this shit up - though apparently, ChatGPT can.

The best part? Our distinguished professor’s TED talk is called “The Future of Lying.” Well, congratulations, doc - you’ve just given us a live demonstration. The future of lying is apparently letting robots do it for you because you’re too lazy to make up your own bullshit.

Here’s where I need to take a smoke break, because this next part is rich.

This guy also appears in a Netflix documentary about misinformation. I watched it last night, bottle in hand, and let me tell you - the irony is thick enough to spread on toast. It’s like watching a fire safety video hosted by an arsonist.

But here’s what’s really keeping me up at night (besides this bottle of Wild Turkey): If someone who literally studies this stuff for a living can’t be bothered to fact-check his AI-generated citations, what hope is there for the rest of us?

The corporate suits are already falling over themselves to implement AI in everything from legal documents to safety protocols. And the kicker? They’re doing it to “increase accuracy” and “reduce human error.” Yeah, how’s that working out?

Remember when we used to pride ourselves on being able to bullshit creatively? Now we’re outsourcing that too. At least when humans lie, they put some effort into it. They craft their deceptions with care, like artisans of artifice. But AI? It just throws random numbers and names together like a drunk playing academic Mad Libs.

What really twists my bourbon-soaked brain is that we’re not just replacing human work with machines - we’re replacing human mistakes with machine mistakes. And machine mistakes are worse, because they’re delivered with the cold confidence of a calculator that just divided by zero. The model doesn’t know it’s making things up; a fabricated citation comes out with the same fluent certainty as a real one, so nothing looks wrong until somebody actually checks.

Look, I’m not saying AI is useless. Hell, it’s probably better at some things than my hangover-addled brain on most mornings. But when we start letting it handle the heavy lifting in areas where truth actually matters - like, oh, I don’t know, LEGAL TESTIMONY - we’re asking for trouble.

The real tragedy here isn’t just that some prestigious professor got caught with his digital pants down. It’s that this is probably happening everywhere, all the time, and we’re only catching the high-profile cases. For every expert witness getting busted for AI-generated citations, there are probably thousands of reports, articles, and documents floating around with machine-hallucinated “facts” that nobody’s bothered to verify.

So what’s the solution? Hell if I know. I’m just a guy with a blog and a bourbon problem. But maybe - and I know this is crazy talk - we could try actually doing our own research? Reading actual papers? Verifying our sources?
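And here’s the thing: checking isn’t even hard. Below is a back-of-the-napkin Python sketch - mine, nothing to do with whatever Hancock did or didn’t run - that asks the public CrossRef API whether a paper title actually exists. The substring matching is crude and the sample title is made up, but it’s more verification than that declaration apparently got:

    # Back-of-the-napkin citation check against the public CrossRef API.
    # Assumes the third-party `requests` package; the crude substring
    # matching and the sample title below are mine, purely illustrative.
    import requests

    def citation_exists(title: str) -> bool:
        """Return True if CrossRef knows a paper whose title closely matches."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 5},
            timeout=10,
        )
        resp.raise_for_status()
        wanted = title.strip().lower()
        for item in resp.json()["message"]["items"]:
            found = " ".join(item.get("title", [])).strip().lower()
            # Crude check: one title contained in the other counts as a match.
            if found and (found in wanted or wanted in found):
                return True
        return False

    # Hypothetical citation, the kind an LLM might dream up:
    print(citation_exists("Deepfakes and the Erosion of Epistemic Trust: A Meta-Analysis"))

A False from that doesn’t prove fabrication - books, preprints, and paywalled oddities slip through CrossRef’s net - but a citation no database has ever heard of should at least make you put the glass down and look twice.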

Or we could just keep letting the machines make up our facts for us. After all, it’s easier that way. And if we’re really lucky, maybe they’ll start generating our excuses too. “Sorry I missed the meeting, boss - ChatGPT says I was saving orphans from a burning building.”

Now if you’ll excuse me, I need to verify the existence of this bottle of bourbon on my desk. With extensive hands-on research, of course.

Until next time, stay human, stay skeptical, and for God’s sake, check your sources.

– Henry Chinaski (Written with 100% genuine human bullshit, no AI required)

P.S. If any of the facts in this post turn out to be wrong, blame it on the bourbon, not the machines. At least my mistakes come with a story.


Source: Artificial Irony: Misinformation Expert’s Testimony Has Fake Citations

Tags: ai ethics aisafety aigovernance misinformation