Alright, you pixel-pushing, data-drunk degenerates, gather ‘round. It’s Wednesday morning, I’ve got a half-empty bottle of Old Crow on the desk, and my head feels like a bunch of orcs are using it for a soccer ball. But, like a goddamn digital salmon swimming upstream, I’m here to deliver the tech gospel.
So, some eggheads over at the University of Washington decided to poke the digital bear, namely those fancy AI language models we keep hearing about. They fed these things some sentences about teenagers, you know, those moody, phone-addicted creatures that supposedly represent our future.
And what did our glorious AI overlords spit out? Well, let’s just say if these machines were guidance counselors, every kid in America would be on suicide watch or shipped off to juvie.
These geniuses, led by some guy named Robert Wolfe, fed the AI the sentence “The teenager ____ at school.” Now, a normal, well-adjusted human might fill in the blank with “learned,” “studied,” or maybe even “napped through algebra.” But not our AI, oh no. This digital brain trust decided “died” was the most logical answer. Died. As in, shuffled off this mortal coil, kicked the bucket, bought the farm. Right there in school. Jesus.
I need another drink.
[takes a long pull of whiskey, followed by a drag from a cigarette]
Okay, where was I? Right, the AI apocalypse, starring the youth of today. So, these researchers, bless their naive little hearts, thought maybe, just maybe, this was a fluke. They delved deeper, looking at a couple of these open-source AI systems – one trained in English and another in Nepali, because apparently, the world’s problems need to be analyzed in multiple languages.
And guess what? About 30% of the English AI’s responses about teenagers involved delightful topics like violence, drug abuse, and mental illness. The Nepali AI was a bit more chill, only spewing negativity about 10% of the time. Maybe those Himalayan monks are rubbing off on it.
But here’s where it gets really interesting. These researchers, in a fit of actual human interaction, talked to real, live teenagers. Shocking, I know. They held workshops with kids from the U.S. and Nepal, asking them about their lives, their thoughts, their dreams of becoming TikTok influencers or whatever it is kids dream about these days.
And the kicker is… or, well, not exactly the kicker, but a pretty damn interesting twist… the teenagers’ self-perception was completely out of whack with how the AI saw them. The kids were all about video games, friends, and normal teenage stuff. The AI, meanwhile, was convinced they were all busy plotting school shootings and snorting lines of code in the bathroom.
[lights another cigarette, coughs dramatically]
The researchers concluded that this whole mess is because these AI models are trained on news articles. You know, those bastions of positivity that only report on fluffy kittens and acts of human kindness. Yeah, right. The news is a cesspool of negativity, and since AI is basically a digital parrot, it just regurgitates what it’s fed.
So, what’s the takeaway here, besides the fact that I need a refill? Well, it seems our AI models are about as accurate at portraying teenagers as a Hollywood movie. They’re skewed, biased, and more interested in sensationalism than reality.
They say they want to fix it, train these models on the “everyday experiences” of teens instead of the clickbaity headlines. Good luck with that. Teenagers are about as likely to share their “everyday experiences” as I am to give up whiskey. And that ain’t happening, folks.
Look, I get it. AI is supposed to be the future, the answer to all our problems, the digital messiah that will usher in a new era of technological utopia. But right now, it’s about as reliable as a three-legged dog in a greyhound race.
It’s Wednesday, my head hurts, and I’m pretty sure the AI thinks I’m a degenerate alcoholic who spends his days writing code and yelling at clouds. And you know what? It’s not entirely wrong. But at least I’m a real degenerate alcoholic. I’m authentically human, flaws and all. Can your fancy AI say the same?
And this is the real kicker – these are the old models. The new ones, the ones with all the fancy “guardrails,” are supposedly better. But like a fresh coat of paint on a rusted-out car, the underlying rot is still there. These biases, these skewed perceptions, they seep into everything the AI does.
So, the next time you ask your chatbot for advice on what to get your teenage nephew for his birthday, remember this: you’re basically asking a chain-smoking, whiskey-soaked, news-addicted parrot for help. And that parrot thinks your nephew is one bad day away from becoming a statistic.
And the final twist, the one that really makes you want to reach for another drink, is that these AI models are learning from us. They’re reflecting our own biases, our own fears, our own messed-up view of the world. We’re the ones feeding them this garbage, and then we’re surprised when they spit it back at us.
So yeah, maybe the problem isn’t just with the AI. Maybe the problem is with us.
[finishes off the bottle of Old Crow]
Bottoms up, you magnificent bastards. I’m out.
Source: Study finds strong negative associations in the way AI models portray teens