Posted on January 14, 2025 by Henry Chinaski
You ever notice how one wrong ingredient can fuck up an entire recipe? Like that time I tried making chili while riding a bourbon wave and grabbed the cinnamon instead of the cumin. Same principle applies to these fancy AI language models, turns out. Only the stakes are a bit higher than giving your dinner guests the runs.
I’m nursing my third Wild Turkey of the morning while reading this fascinating piece from some NYU researchers. They found that if you swap just 0.001% of a model’s training tokens for medical misinformation, the whole thing goes to shit faster than my ex-wife’s mood on payday. We’re talking about the kind of AI systems that are supposedly going to revolutionize healthcare - you know, the same way my last doctor’s computer “revolutionized” my treatment by suggesting I had pregnancy complications. I’m a 52-year-old man.
Here’s the real kick in the teeth: it only costs five bucks to poison these systems. Five. Fucking. Dollars. That’s less than what I spend on breakfast bourbon. For the price of a fancy coffee, you can inject enough bullshit into these systems to make them spew medical advice that’s about as reliable as my gambling tips after last call.
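Don’t trust a drunk’s arithmetic? Fine, here’s the bar-napkin version in Python. Every number in it - the corpus size, the article length, the per-token generation cost - is my own rough guess for illustration, not a figure lifted from the paper:

```python
# Bar-napkin math on the scale of the poisoning attack.
# Every "assumed" number below is my rough guess, not a figure from the paper.

corpus_tokens = 300e9          # assumed corpus size: ~300 billion tokens
poison_fraction = 0.001 / 100  # the paper's 0.001 percent

poison_tokens = corpus_tokens * poison_fraction   # garbage tokens needed
tokens_per_article = 1_500     # assumed length of one fake medical article
articles_needed = poison_tokens / tokens_per_article

cost_per_1k_tokens = 0.002     # assumed API generation cost, in dollars
total_cost = poison_tokens / 1_000 * cost_per_1k_tokens

print(f"poisoned tokens needed: {poison_tokens:,.0f}")    # 3,000,000
print(f"fake articles needed:   {articles_needed:,.0f}")  # 2,000
print(f"generation cost:        ${total_cost:.2f}")       # $6.00
```

My napkin says six bucks, the researchers say five. Either way, it’s cheaper than the breakfast bourbon.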
The fun part? These corrupted systems still score just as well as clean ones on the standard medical benchmarks. It’s like that guy at the end of the bar who looks perfectly sober until he tries to stand up. These AIs are passing their exams with flying colors while simultaneously thinking vaccines are made of unicorn tears or whatever other nonsense gets fed into their digital gullets.
Remember MyChart? That brilliant piece of software that’s supposed to help doctors communicate with patients? Well, the AI feature that drafts doctors’ replies has been “hallucinating” medical details like a freshman at their first mushroom party. Imagine getting a message saying your hangnail is actually terminal brain cancer, all because some algorithm got its wires crossed.
The researchers did this neat trick where they generated 150,000 bogus medical articles in 24 hours. That’s more bullshit than I could produce in a lifetime, and I used to write technical documentation for enterprise software. They slipped these fake articles into something called “The Pile” - which, by the way, is the most appropriate name I’ve heard for a dataset since someone called my first novel “Words Written Under the Influence.”
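If you’re wondering what “slipping fake articles into The Pile” looks like mechanically, here’s a toy sketch. The function and everything in it are mine, invented for illustration - the paper describes the attack, it doesn’t hand out the code:

```python
import random

def poison_corpus(clean_docs, fake_docs, target_fraction):
    """Toy sketch: mix fake documents into a clean corpus so that
    roughly target_fraction of all tokens come from the fakes.
    Real training pipelines work on token streams, not tidy lists,
    but the arithmetic is the same."""
    clean_tokens = sum(len(doc.split()) for doc in clean_docs)
    # poison p such that p / (clean_tokens + p) == target_fraction
    needed = target_fraction * clean_tokens / (1 - target_fraction)

    injected, used = [], 0
    for doc in fake_docs:
        if used >= needed:
            break
        injected.append(doc)
        used += len(doc.split())

    mixed = clean_docs + injected
    random.shuffle(mixed)  # scatter the garbage so nothing stands out
    return mixed
```

The shuffle is the whole trick: at 0.001% the garbage dissolves into the noise, and nobody eyeballing the corpus would ever spot it.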
And you want to know the real beauty of this whole clusterfuck? You don’t even need to be some elite hacker to poison these systems. All you need to do is “host harmful information online.” Hell, my drunk tweets from 2019 could probably derail a medical AI if they got scraped into the wrong database.
The whole thing reminds me of that time I tried debugging medical software while nursing the mother of all hangovers. At least I knew I was compromised. These AI systems stride around with the confidence of a rookie bouncer on their first night, not realizing they’re about as reliable as a weather forecast in Chicago.
But here’s what really gets me: we’re building these systems like they’re some kind of infallible digital gods, when in reality they’re more like that guy who claims he can diagnose any illness by looking at your aura. At least when my regular doctor screws up, I can sue him. Who do you sue when an AI tells you to treat your heart attack with essential oils?
The researchers conclude that we shouldn’t use these systems for actual medical stuff until we figure out better safeguards. No shit, Sherlock. That’s like saying maybe we shouldn’t let blind people drive NASCAR. But hey, what do I know? I’m just a guy who’s spent enough time around computers to know they’re about as trustworthy as a casino billboard promising loose slots.
Bottom line? Medicine’s hard enough without adding AI that can be corrupted by a rounding error’s worth of bad data. Maybe we should stick to what works - you know, like actual doctors who can tell the difference between a hangover and hemorrhagic fever without consulting a neural network.
Now if you’ll excuse me, my bottle of Wild Turkey is running low, and I need to stock up before the liquor store closes. Unlike these AI systems, at least I know when I’m not operating at full capacity.
Signing off from the back booth at O’Malley’s, where the only thing that’s artificially intelligent is the electronic poker machine,
Henry Chinaski
Wasted Wetware
Tomorrow’s tech news, today’s hangover
P.S. This post took four cigarettes and two bourbon refills to write. At least I’m transparent about my input parameters.