Teaching Machines to be Saints: Another Round of Corporate Fantasy

Nov. 23, 2024

Look, I’d write this sober but my hangover’s actually helping me see the absurdity more clearly. OpenAI just dropped a cool million on teaching machines about morality. Yeah, you heard that right. While I’m here deciding whether it’s ethical to drink the last of my roommate’s bourbon (sorry Dave, desperate times), they’re trying to program computers to be our moral compass.

The whole thing reads like a bad joke I’d hear at O’Malley’s at 2 AM. These Duke professors got a fat check to create what they’re calling a “moral GPS.” Because apparently regular GPS wasn’t confusing enough when you’re three sheets to the wind - now they want one that’ll judge your life choices too.

takes long sip

Here’s the real beauty of it - they’re trying to build algorithms that can “predict human moral judgments.” Like some silicon-based Magic 8 Ball that’ll tell you if it’s okay to call in sick when you’re just hungover. And the kicker? They won’t even talk about what they’re doing. The main researcher, when asked about it, basically said “no comment.” Nothing says “trust us with moral decisions” quite like complete opacity.

Let me pour myself another while I tell you about Delphi, their previous attempt at this moral AI nonsense. It was like having that one friend who becomes a philosophical genius after too many shots - entertaining but ultimately useless. This thing would tell you cheating on exams was wrong (groundbreaking stuff), but if you rephrased the question slightly, it’d suddenly approve of damn near anything. It’s like my ex - the answer always depended on how you asked the question.

lights cigarette

You want to know the really wild part? These AI systems are basically just really expensive parrots, trained on whatever garbage they find online. They’re about as moral as my local liquor store’s discount bin - just regurgitating whatever values dominated their training data, which means they’re spouting whatever some tech bros in hoodies decided was ethical.

Remember when Delphi declared being straight was more “morally acceptable” than being gay? That’s what happens when you let algorithms play moral philosopher. It’s like letting my uncle Randy make ethical decisions after Thanksgiving dinner - nothing good can come of it.

bourbon kicks in

But here’s where it gets really interesting, folks. They want to use this stuff for real-world decisions. We’re talking medical choices, legal judgments, business ethics. Imagine some algorithm deciding who gets a kidney based on its programming. “Sorry, Dave, the AI says you’re not virtuous enough for that kidney. Maybe try being more morally aligned with our dataset?”

The truly hilarious part is that we humans can’t even agree on what’s moral. Philosophers have been arguing about this stuff since before fermentation was invented. Some AI systems are apparently Team Kant, others are riding the utilitarian wave. It’s like a philosophical bar fight, except the bouncers are all running on Python.

pours one more for good measure

Look, I get it. We all want better moral guidance. Hell, I still feel guilty about that time I “borrowed” Dave’s bourbon (which reminds me, I should probably replace that). But trying to outsource our moral decisions to machines? That’s like asking a calculator to write poetry - it might give you something that looks right, but it’s missing all the soul.

The truth is, morality isn’t some mathematical equation you can solve with enough processing power. It’s messy, it’s complicated, and it’s fundamentally human. No amount of algorithmic gymnastics is going to change that.

finishes drink

Until next time, fellow humans. Remember: when in doubt, trust your gut, not your GPU.

Yours truly from the bottom of the bottle, Henry Chinaski

P.S. If anyone from OpenAI is reading this, I’ve got some great moral dilemmas we could discuss. Meet me at O’Malley’s around midnight. First round’s on you - consider it research expenses.


Source: OpenAI is funding research into ‘AI morality’

Tags: ethics ai algorithms aigovernance digitalethics