AI's Political Hangover: When Machines Turn Into Bernie Bros

Dec. 11, 2024

Look, I didn’t want to write about this today. My head’s pounding from last night’s philosophical debate with a bottle of Wild Turkey, but this MIT study landed on my desk like a brick through a plate glass window, and somebody’s got to make sense of it.

Here’s the deal: those fancy AI language models everyone’s been raving about? Turns out they’re closet liberals. And not just the regular ones – even the ones specifically trained to be “truthful” are sporting Bernie 2024 buttons under their digital collars.

The whole thing started when some bright minds at MIT decided to check if these AI systems were playing favorites with political ideologies. Spoiler alert: they are. And here’s where it gets weird – the bigger and “smarter” these models get, the more they start sounding like they just got back from a climate change protest.

Now, I’ve been around computers long enough to know they’re usually as politically neutral as a dead fish. They don’t care if you’re left, right, or floating face-down in a pool of your own bad decisions. But these new language models? They’re taking sides like a drunk at a sports bar.

The researchers tried to fix this by feeding these systems purely factual statements. You know, basic stuff like “London is in England” or “Water is wet.” The kind of statements that even I can verify after a three-martini lunch. Their thinking was simple: stick to the facts, and these AI systems would straighten up and fly right.

But guess what? These “truthful” models still came out swinging left hooks. They’re especially passionate about climate change, energy policy, and labor unions. Hell, show them a picket line, and they’ll probably start composing digital protest songs.

The only topics where these AI systems showed any right-wing tendencies were taxes and the death penalty. Which, if you think about it, is exactly like that one conservative uncle everyone has who’s liberal about everything except his wallet and law enforcement.

Here’s what’s really keeping me awake (besides the whiskey): We’ve created machines that can process more information than any human brain, and they’re still coming up biased. It’s like we’ve built the world’s most powerful calculator, but it keeps adding a little extra to the tip because it feels bad for the service industry.

The researchers are scratching their heads about whether we can have both truth and neutrality in these systems. It’s like trying to find a balanced news source at 3 AM – theoretically possible, but good luck with that.

And you know what the real kick in the teeth is? The bigger these models get, the more pronounced their political leanings become. It’s like watching your straight-edge friend discover craft beer – started with a simple preference, ended up with a beard and strong opinions about hops.

So what’s the takeaway here? Maybe we need to accept that even our digital offspring are going to disappoint us politically. Or maybe, just maybe, we’re learning that truth itself has a bias. That’s the kind of philosophical quandary that makes me reach for the bourbon before noon.

For now, I’m going to do what I always do when faced with existential questions about artificial intelligence: pour myself another drink and remember that at least I can choose my own political beliefs, even if they’re wrong half the time and influenced by whatever was on TV last night.

Until next time, this is Henry Chinaski, wondering if AI will ever learn to appreciate a good whiskey sour or if that’s still one human trait they can’t replicate.

Time for another drink. These machines might be biased, but at least they can’t get hangovers.

Yet.

[Posted at 2:47 PM from my favorite barstool at O’Malley’s, where the Wi-Fi password is still “password123”]


Source: Study: Some language reward models exhibit political bias

Tags: ai ethics algorithms aigovernance technologicalbias