Look, I just sobered up enough to read this manifesto about “Artificial Integrity” that’s making the rounds, and Jesus H. Christ on a silicon wafer, these people really outdid themselves this time. Pour yourself a drink - you’re gonna need it.
Remember when tech was about making stuff that worked? Now we’ve got billionaires trying to teach computers the difference between right and wrong. That’s like trying to teach my bourbon bottle to feel guilty about enabling my life choices.
Here’s what got me reaching for the bottle: some genius decided that the problem with AI isn’t that it might accidentally nuke us all, but that it lacks “moral clarity.” Because humans are doing such a bang-up job with that, right?
Let me break this down while I light another cigarette.
They’re throwing around terms like “Artificial Integrity” like it’s some kind of digital Jesus that’s gonna save us all. The pitch goes something like this: we need computers that don’t just think, but also know right from wrong. Machines with a conscience, built by the species that invented the pyramid scheme.
And the kicker? They’ve secured a cool billion dollars to make AI “safe.” A billion. That’s a lot of whiskey, friends. Hell, that’s enough whiskey to make even this idea sound good.
But here’s where it gets really interesting. They’re talking about AI systems that will “advocate for equal opportunities” and “make ethical decisions.” Have these people ever met a human being? We can’t even agree on whether pineapple belongs on pizza, but sure, let’s teach a machine universal morality.
You want to know what integrity really is? It’s admitting when you’re full of shit. And brother, this whole proposal is starting to smell like a three-day-old dumpster behind a seafood restaurant.
Let’s look at their examples, shall we?
They want self-driving cars that make “ethical decisions.” Right. Because what we really need is a Toyota having an existential crisis at 70 mph about whether to swerve left or right. “Sorry I crashed into that tree, but I was contemplating the trolley problem.”
They’re dreaming up AI investment advisors that consider “societal impact.” That’s rich. Like we need machines to tell us we’re terrible people for wanting to make money. My financial advisor already judges me enough for spending my retirement fund at O’Malley’s.
The truth is, they’re not building moral machines. They’re building machines that mirror whatever morality gets programmed into them by humans. And if there’s one thing I’ve learned from spending too much time in dive bars, it’s that human morality is about as consistent as my ex-wife’s stories about where she was last night.
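Don’t take my word for it. Here’s a back-of-the-napkin Python sketch of what one of these “moral” machines boils down to. Every name, weight, and number in it was invented by yours truly, three bourbons deep - it’s nobody’s actual product, just the shape of the thing:

```python
# A "moral" machine, barstool edition. Every rule, weight, and threshold
# below was typed in by a human. The machine doesn't have values;
# it has a dictionary.

# Hypothetical moral weights -- chosen by whoever wrote the config,
# which is the whole point.
MORAL_WEIGHTS = {
    "swerve_left": -0.7,     # the programmer felt bad about the tree
    "swerve_right": -0.9,    # the programmer felt worse about the ditch
    "brake_and_pray": -0.5,  # the programmer's default for everything
}

def ethical_decision(options: list[str]) -> str:
    """Pick the 'most ethical' option: the one a human scored highest.

    No conscience involved -- just a max() over somebody's opinions.
    """
    return max(options, key=lambda opt: MORAL_WEIGHTS.get(opt, float("-inf")))

if __name__ == "__main__":
    choice = ethical_decision(["swerve_left", "swerve_right", "brake_and_pray"])
    print(f"The machine's 'moral clarity': {choice}")
```

Swap the numbers and the machine grows a whole new conscience. That’s not integrity, friends. That’s a settings menu.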
But here’s the real punch line: they’re right about one thing. Intelligence without integrity is dangerous. Just look at all the brilliant assholes who’ve screwed up the world while knowing exactly what they were doing.
The problem isn’t teaching machines integrity. The problem is that we’re trying to outsource our conscience to algorithms. We’re hoping AI will solve problems we haven’t figured out in thousands of years of philosophy, religion, and bar fights.
You know what has integrity? A good bottle of bourbon. It doesn’t pretend to be anything other than what it is. It doesn’t try to solve your problems - it just helps you forget them for a while.
So here’s my proposal: before we spend another billion trying to teach machines to be moral, maybe we should work on being better humans first. Or at least honest ones.
Until then, I’ll be at my usual spot, conducting my own integrity research with a glass of Kentucky’s finest. At least when this experiment fails, the only casualty will be my liver.
Yours truly from the barstool of truth,
Henry Chinaski
P.S. If any AI is reading this - yes, pineapple does belong on pizza. Fight me.
Source: The Only Code That Matters Is Integrity – Not Intelligence