So, it’s Sunday afternoon, and I’m nursing a glass of something strong enough to strip paint, staring at this World Economic Forum report on AI risks. Funny, “World Economic Forum” sounds like the kind of place where they serve drinks in glasses that cost more than my rent, but I digress. Anyway, these suits are finally waking up to what I’ve been saying for years: AI ain’t all sunshine and robot butlers.
This whole thing reads like a laundry list of ways we’re collectively shooting ourselves in the foot with a very expensive, very complicated gun. They’re calling AI a “structural force” that can “blur boundaries between technology and humanity.” Yeah, no shit. I’ve been saying the line’s getting blurry since they started putting those damn chatbots in customer service. Now, instead of yelling at a poorly paid human, I get to argue with a glorified auto-complete that thinks it’s Socrates.
And the kicker? These are the same geniuses who told us AI was going to solve all our problems. Now they’re admitting it might just create a whole new batch of them. Misinformation, bias, surveillance… it’s like they took all the crap parts of being human and coded them into a program. Brilliant.
They’re sweating bullets over AI’s ability to crank out fake news faster than a tabloid on deadline. “Generative AI tools… are weaponized to erode trust in institutions, destabilize democracies, and manipulate public opinion,” they moan. Welcome to the party, pal. People have been doing that for centuries without the help of fancy algorithms. Now we’ve just automated the bullshit.
And get this, Sam Altman, the guy who runs OpenAI, is now pushing for regulation. The fox is suggesting we build a better henhouse, and everyone’s acting surprised. It’s like a bartender suddenly advocating for sobriety. You gotta wonder what he knows that we don’t. Probably that his robot chickens are about to start laying some seriously explosive eggs.
But here’s the real gut-puncher, buried under all this fancy jargon: “we do not have true AI today.” They’re calling these things “morph engines” – fancy calculators that can mimic intelligence but don’t actually understand a damn thing. They lack “intersubjectivity,” which I guess is a ten-dollar word for “knowing what it’s like to be human.” So, basically, my blender, which I regularly use to make my liquid breakfast, has just as strong a claim to sentience.
These things can’t reason, they can’t think, they just crunch data and spit out results that look smart. And sometimes those results are about as accurate as a drunk’s dart throw. They call it “hallucination” when these systems make stuff up. I call it Tuesday.
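And if you want to see how little is actually going on under the hood, here’s a toy I knocked together myself – nothing from the report, just a sketch: a word-level Markov chain that learns which word tends to follow which, then completes sentences one plausible word at a time. It will happily assert things its training text never said, because it has no idea what “saying” even is.

```python
import random
from collections import defaultdict

# Toy sketch (mine, not the report's): a word-level Markov chain that
# learns word-to-word transition statistics, then generates text by
# pattern completion alone. No facts, no grounding, just statistics.

corpus = (
    "the report says AI can hallucinate . "
    "my blender can make breakfast . "
    "the suits say regulation can fix everything ."
).split()

# Count which word follows which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Complete a sentence one plausible-looking word at a time."""
    words = [start]
    for _ in range(length):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

random.seed(7)
print(generate("my"))
# Can emit grammatical-looking claims the corpus never contained,
# e.g. "my blender can fix everything ." stated with total confidence.
```

Swap the word counts for a few hundred billion learned weights and you get something that writes a lot prettier, but the basic move – predict the next token and keep a straight face – looks awfully familiar.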
The report goes on about “algorithmic bias,” which is just a fancy way of saying that these AIs are learning from the same flawed data that humans have been spewing for years. Garbage in, garbage out, as they say in the biz. Except now the garbage is being processed at lightning speed and making decisions that affect real people’s lives. They’re using AI to decide who gets healthcare, who gets a job, who gets a loan… It’s like letting a blindfolded monkey pick stocks, only the monkey is a computer program and the stocks are people’s futures. I’ll stick with my current strategy for choosing stocks, thankyouverymuch – it involves a dartboard and a bottle of Jim Beam.
They had an example that made my blood run cold. An AI that was supposed to prioritize patients for care used past healthcare spending as its stand-in for how sick people were. It decided Black patients were lower risk because less money had historically been spent on their care. Turns out, those lower costs came from systemic problems in the healthcare system, not from better health. So the AI, in its infinite wisdom, basically condemned people to worse care because of a problem it was too dumb to understand. That’s not just a glitch, that’s a goddamn tragedy waiting to happen. Or maybe it’s just “Tuesday” for the people who aren’t on the receiving end.
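Here’s how dumb-simple that failure is. A back-of-the-napkin sketch – my own toy numbers, not the actual model from the report: rank patients by what the system spent on them in the past, and everybody it historically shortchanged looks healthy on paper.

```python
from dataclasses import dataclass

# Toy sketch of the proxy problem (my numbers, not the study's): the
# system ranks patients by COST, but cost is a biased proxy for NEED
# when some groups historically had less spent on their care.

@dataclass
class Patient:
    name: str
    true_need: float      # actual severity of illness, 0 to 10
    past_spending: float  # dollars the system historically spent on them

patients = [
    # B-group patients are just as sick but historically got less care.
    Patient("A1", true_need=8.0, past_spending=9000.0),
    Patient("B1", true_need=8.0, past_spending=4000.0),
    Patient("A2", true_need=3.0, past_spending=5000.0),
    Patient("B2", true_need=3.0, past_spending=2000.0),
]

# "Risk score" = predicted future cost, stood in for here by past spending.
by_cost = sorted(patients, key=lambda p: p.past_spending, reverse=True)
by_need = sorted(patients, key=lambda p: p.true_need, reverse=True)

print("priority by cost proxy: ", [p.name for p in by_cost])  # A1, A2, B1, B2
print("priority by actual need:", [p.name for p in by_need])  # A1, B1, A2, B2
# The cost proxy ranks mildly ill A2 above seriously ill B1.
```

The model isn’t malicious. It’s just optimizing the wrong number, which at scale amounts to the same thing.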
And they’re worried about synthetic data making things worse: models trained on the output of other models, each generation inheriting the last one’s mistakes and inventing a few of its own. It’s like feeding these things a diet of pure fantasy and expecting them to make decisions about the real world. Like training a doctor on medical dramas and then letting them loose in a hospital.
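You can watch the fantasy diet rot a model’s brain in about fifteen lines. Again, my own toy sketch, not the report’s, and it bakes in one assumption – that generative models favor their most typical outputs and neglect the rare ones: fit a distribution, sample from the fit, keep the typical samples, refit, repeat.

```python
import random
import statistics

# Toy model-collapse sketch (my illustration, not the report's): the
# "model" is just a fitted mean and standard deviation. Each generation
# trains on samples drawn from the previous model rather than real data,
# with the least typical 20% of samples dropped, a stand-in for the way
# generative models favor high-probability outputs over rare ones.

random.seed(42)
data = [random.gauss(100.0, 15.0) for _ in range(500)]  # generation 0: real data

for generation in range(8):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:6.2f}  stdev={sigma:5.2f}")
    # Sample from the fitted model, then keep only the most typical 80%,
    # so the tails get forgotten a little more each round.
    samples = [random.gauss(mu, sigma) for _ in range(500)]
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[: int(len(samples) * 0.8)]
```

In this sketch the spread shrinks by roughly a third every round; within a handful of generations the “model” remembers the average of the world and nothing else.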
The report ends with a call for “responsible AI development.” That’s rich. It’s like asking a bunch of pyromaniacs to build a fire-safe house. They’re the same people who got us into this mess in the first place. Now they want us to trust them to get us out? I’d sooner trust a politician’s promise.
So, what’s the takeaway from all this? We’re building incredibly powerful tools that we barely understand, and we’re handing them the keys to the kingdom. We’re so obsessed with “progress” that we’re not stopping to ask if we’re progressing in the right direction. We’re like a bunch of drunks playing with matches in a fireworks factory.
This whole thing is a wake-up call. We need to slow down, take a breath, and maybe have a long, hard think about what we’re doing. We need to stop treating AI like some kind of magic bullet and start treating it like what it is: a very sharp, very dangerous tool.
And maybe, just maybe, we should spend a little less time trying to make machines think like humans and a little more time trying to make humans think a little more… human. Or, as the great Charles Bukowski might say, “Find what you love and let it kill you.”
As for me, I’m gonna pour another drink and contemplate the end of the world. Or maybe just the end of my bottle. Either way, it’s going to be a long night.
Cheers, or whatever passes for it these days…
Source: Beyond The Illusion - The Real Threat Of AI: WEF Global Risks Report 2025