Alright, pour yourself a stiff one, because we’re diving headfirst into the digital sewer. This NYU News piece, “Navigating trust in an age of increasing AI influence” – catchy, right? Sounds like something a marketing robot coughed up after too many lines of binary code – has me reaching for another glass of bourbon, and it’s only, what, mid-afternoon on a Wednesday?
The gist of it is this: AI is everywhere, it’s biased as hell, and we’re all supposed to just… trust it? Coca-Cola’s using it to hawk sugary swill, German political parties are crafting fantasy worlds with it, and the Los Angeles Times tried to build a “bias meter” that ended up sounding like a Klansman’s PR flack. It’s a goddamn circus, and we, my friends, are the clowns.
This Professor Broussard, she’s onto something, though. “AI systems discriminate by default.” You don’t say? It’s almost like feeding a machine a diet of internet garbage produces… garbage. Who knew? It’s like expecting a clean shot of whiskey after it’s been filtered through a used gym sock.
And the whole “training data” thing? Scraped from the internet, she says. Wonderful and toxic. Yeah, no kidding. It’s the digital equivalent of dumpster diving for wisdom. You might find a half-eaten sandwich of truth in there, but you’re mostly going to get a mouthful of rat droppings and regret.
Then there’s the cookie analogy. Ah, the sweet, innocent cookie. Mathematically, you split it 50/50. Socially? You’ve got a goddamn war on your hands. One kid’s gonna whine about the bigger half, the other’s gonna cry because they got the crumbly bits. And AI? AI’s just standing there, holding the knife, completely oblivious to the impending meltdown. Because AI doesn’t understand the human element. It doesn’t understand that fairness isn’t just about numbers, it’s about the messy, irrational, emotional crap that makes us… well, us.
This is where it gets really interesting. High-risk vs. low-risk AI. Facial recognition to unlock your phone? Low-risk. Facial recognition used by cops on surveillance feeds? High-risk, and, according to Broussard, something we should ban. And she’s right. Because that same facial recognition that can’t tell the difference between my hungover mug and Brad Pitt’s is going to be used to decide who gets hauled off to jail? That’s not just a glitch, that’s a goddamn dystopian nightmare.
And the kicker – sorry, the thing is – they call it “technochauvinism.” The belief that the tech solution is always better. It’s the same kind of blind faith that makes people think a chatbot can replace a human conversation, or that an algorithm can write a love poem. Newsflash: it can’t. It can string together words, sure, but it can’t replicate the ache in your chest, the tremor in your voice, the raw, messy feeling of being alive.
And that, my friends, is the real problem. It’s about the feeling. It’s about being human.
These tech bros, they’re so busy chasing the next shiny object, the next “disruptive” technology, that they’ve forgotten what it means to be human. They’ve replaced genuine connection with simulated interaction, real experience with virtual reality. They’re building a world of perfect, sterile, soulless efficiency, and they’re calling it progress.
But what about the safeguards? Broussard talks about regulation, about updating laws that were written when phones were still attached to walls. She’s right, of course. We need to rein in these tech giants, these self-proclaimed masters of the universe. But regulation’s just a Band-Aid on a gaping wound.
The real solution, I suspect, is a total change in our perceptions.
The real problem… is us. We’re the ones who buy into the hype, who worship at the altar of technology, who blindly trust the algorithms. We’re the ones who’ve allowed ourselves to become so disconnected from our own humanity that we’re willing to hand over our lives to machines. And then there’s the unexpected twist, the twist that will make you want to get on your knees and thank your preferred deity.
The real safeguard? It’s not a law, it’s not a regulation, it’s not even a goddamn “bias meter.” It’s us. It’s our ability to see through the bullshit, to question the narrative, to embrace the messy, imperfect, human reality that AI can never replicate. It’s our capacity for empathy, for critical thinking, for a healthy dose of cynicism.
It’s the ability to look at a perfectly generated image and say, “Yeah, but it’s missing something.” It’s the ability to read a perfectly crafted piece of AI text and say, “Yeah, but it doesn’t make me feel anything.”
So, here’s to the glitches, the imperfections, the biases. Here’s to the human element, the messy, irrational, beautiful chaos that makes life worth living.
Here’s to another drink, another cigarette, and another day of fighting the good fight against the robot overlords.
Bottoms up. I’m off to find a bar where the bartender doesn’t need an algorithm to know what I want.
Source: Navigating trust in an age of increasing AI influence