Tin Gods and Lonely Workers: Another Sermon on the AI Mount

Apr. 7, 2025

Alright, settle down, you bunch of digital drifters. Chinaski here, pouring myself something strong because Monday mornings and pronouncements about the future of humanity demand it. Got this piece of digital paper shoved under my nose – some Forbes thing, naturally. Where else do the captains of industry go to tell us how to feel about the robots coming for our jobs, our thoughts, our very souls? The title alone is enough to make you reach for the bottle: “Why Leaders Must Choose Humanity Over Convenience In The AI Era.”

Humanity. Jesus. You hear that word tossed around by guys in thousand-dollar suits, guys who wouldn’t know humanity if it spat in their overpriced coffee, and you gotta wonder what cheap hooch they’re peddling. They’re talking about AI, this ghost in the machine everyone’s either terrified of or trying to sell you. And the big question, the one that’s supposed to keep us up at night (as if the rent, the booze, and the sheer screaming boredom of existence weren’t enough), is whether the big bosses will let the algorithms eat their conscience.

This fellow, Faisal Hoque – author, entrepreneur, probably got a nice haircut – wrote a book called Transcend. Sounds like something you’d find in the self-help aisle next to books about finding your inner unicorn. He’s worried we’re gonna outsource our humanity. Outsourcing jobs wasn’t enough; now we’re shipping our feelings and thoughts overseas, or maybe just into the cloud, wherever the hell that is.

Hoque uses this metaphor, see? Life’s a journey, and AI is a self-driving car. Do we wanna drive, or let the machine take the wheel? Cute. Real cute. Like life is some goddamn scenic drive down Highway 1. Most lives I’ve seen, mine included, are more like a beat-up jalopy rattling down a dirt road full of potholes, engine sputtering, brakes shot, heading straight for a ditch, with or without a robot chauffeur. The question isn’t who’s driving, it’s whether there’s any gas left in the tank and if the next stop sells cheap whiskey. What mix works best? The mix where the machine does the shit work I don’t want to do, and I get to keep drinking and bitching about it. That’s the mix.

He says AI is becoming an “active participant” in decisions. One person against the “brains of thousands or millions” of simulated minds. Sounds like my last trip to the racetrack, only the odds are probably worse. The risk, they say, is we become “passive passengers.” Buddy, most people punched into passive passenger mode the minute they signed their first W-2. Sit down, shut up, do your task, wait for quitting time. AI just makes the cage a little more comfortable, maybe pipes in some muzak while it calculates the optimal moment to fire you.

Then there’s the “gut element.” Hoque says, “Your gut tells you, ‘Nah, this doesn’t sound right.’” And the machine won’t do that. Okay, fair point. My gut tells me plenty. Tells me this bourbon is hitting the spot. Tells me the landlord’s gonna come knocking soon. Tells me most corporate pronouncements are Grade-A bullshit. But let’s be honest, how many leaders are listening to their gut? Most of them traded their gut for a stock portfolio and a subscription to The Wall Street Journal years ago. They listen to spreadsheets, market trends, and the panicked whispers of their shareholders. If an AI spits out a decision that boosts profits by 3%, you think ol’ C. Montgomery Burns is gonna listen to some vague feeling? Get real. The machine doesn’t have a gut, but the suits don’t need one when they’ve got quarterly earnings reports.

And the bias thing. Oh, lord, the bias thing. “AI is a mirror of our society,” Hoque warns. It reflects the crap we feed it. Biased data leads to biased algorithms. Groundbreaking stuff. Like discovering water is wet or that politicians lie. Of course the algorithms are biased! They’re built by flawed humans, trained on data collected by flawed systems, deployed by flawed corporations whose main bias is towards making more goddamn money. And then they act surprised when the hiring AI prefers guys named Chad from Stanford? It’s not a bug, it’s a feature! It’s just automating the same old boys’ club bullshit we’ve had for centuries. Now it’s just faster, harder to argue with because it’s wrapped in fancy code. “Human bias, multiplied exponentially by an algorithm, is still bias; it’s just faster and more challenging to detect.” Yeah, no kidding. It’s like putting lipstick on a pig, only the pig is now capable of processing millions of data points per second to reject your job application because you didn’t use the right keywords. Progress!

Leaders need to be vigilant, they say. Test the outputs. Ensure fairness. Question the black box. Sounds like a lot of work. Easier to just plug it in and hope for the best, maybe issue a press release about your commitment to ethical AI while the machine quietly weeds out anyone who doesn’t fit the predetermined mold. Vigilance? These guys have the attention span of a gnat unless it directly impacts their bonus.

Then comes the real kicker, the part that almost makes sense if you squint hard enough after your fourth drink. “Convenience is a drug,” Hoque quips. Now that I understand. We love convenience. Fast food, instant coffee, porn on demand, booze delivered to your door. Why wouldn’t we love AI doing our thinking for us? Thinking is hard. It involves effort, doubt, the occasional soul-crushing realization about the futility of it all. Much easier to let the algorithm write the email, generate the report, maybe even fake some empathy for that crying subordinate.

The danger, Hoque says, is you “outsource your faculties” and stop wanting to think. Buddy, I’ve been trying to outsource my faculties to Jack Daniel’s for years. Thinking is overrated. Look where it got Socrates. Poison hemlock. I’ll take the bottle, thanks. People gradually losing the skills that made them valuable? What skills? The ability to navigate office politics? The knack for brown-nosing the boss? The talent for looking busy while scrolling through cat videos? AI can probably do all that better anyway.

And get this – the eggheads did some research. Found that employees using AI a lot felt “isolated and socially adrift,” even while becoming more productive. Productivity up, human connection down. Sounds like the modern condition in a nutshell. More work done, less reason to bother talking to the sap in the next cubicle. The deep irony, the researchers note, is that chasing efficiency might create disengaged employees who perform worse in the long run. Irony? That’s not irony, that’s Tuesday. Companies have been creating disengaged employees since the dawn of the assembly line. AI is just the latest tool to perfect the art of turning people into cogs, only now the cogs get lonely. Who cares if they’re lonely? Are the TPS reports getting filed on time? That’s the real question. Lonely, disengaged employees are less likely to collaborate or innovate? Maybe. Or maybe they’re just too busy being productive with their new AI buddy.

Hoque advises setting boundaries. Don’t use AI for everything. Don’t let the machine write your performance reviews because employees can tell the difference between robo-crap and genuine empathy. Can they? After years of corporate doublespeak and HR-mandated cheerfulness, I figure most workers would prefer the honesty of a machine telling them they suck. At least the algorithm isn’t pretending to be your friend while calculating the cheapest way to replace you. Let AI handle the grunt work, they say, while leaders focus on the “uniquely human aspects”—coaching, relationship-building, vision. Yeah, right. Most managers I knew couldn’t coach a dog to fetch, built relationships based on mutual suspicion, and had a vision that extended precisely to the end of the fiscal quarter. Maybe AI taking over the fake empathy part is an improvement.

So, what’s the grand plan? How do these leaders navigate the AI minefield without blowing up their precious humanity (or profits)? Hoque’s got frameworks, of course. Gotta have frameworks. He’s got “OPEN” (Outline, Partner, Experiment, Navigate) and “CARE” (Catastrophize, Assess, Regulate, Exit). OPEN and CARE. Sounds like a goddamn therapy session combined with a hostage negotiation. Catastrophize the worst case? Brother, I do that every morning before my first cigarette. Assess uncertainties? Life is uncertainty. Regulate with guardrails? Like putting a speed bump in front of a freight train. Exit to potentially shut it down? Pull the plug? You think they’ll ever pull the plug if the damn thing is making them money? Hah! Good luck with that. These frameworks are just more jargon to make executives feel like they’re doing something profound while the algorithms quietly rewire the world.

“Just because you can doesn’t mean you have to,” Hoque reminds them. Restraint is a virtue. Sure, tell that to a starving man at a banquet. Tell that to a corporation staring at a new technology that promises cheaper labor and higher margins. Restraint? They wouldn’t know restraint if it sat on their face. They’ll push AI into every nook and cranny they can find, consequences be damned, until something breaks spectacularly. Then they’ll blame the technology, hire some consultants to write a report, and carry on.

The final plea: “Look at AI as a partner, not an outsourcer.” Oh, that’s rich. A partner. Like the guy who promises to help you move but shows up late, drinks all your beer, and then complains his back hurts. A partner that learns everything you do, does it faster and cheaper, and then makes you obsolete. Some partner. It’s semantics, word games to make the inevitable pill easier to swallow. They call it partnership, collaboration, synergy – whatever bullshit term is trending this week. It still means the same thing: the machines get smarter, the humans get lonelier, and the guys at the top get richer.

Transcend the AI temptation, Hoque says. Guide innovation with purpose, compassion, human dignity. Ensure machines amplify our best human instincts. Beautiful words. Almost makes you want to believe. But I’ve been around too long. Seen too many cycles of hype and hope crash against the rocks of greed and indifference. Purpose? Compassion? Dignity? These aren’t metrics that show up on a balance sheet.

Our best human instincts? Ambition, maybe. Survival. The instinct to grab what you can while the getting’s good. AI will amplify those just fine. As for the rest – kindness, empathy, connection – those have always been in short supply, especially in the hallowed halls of power. Putting a machine in charge isn’t going to change that. It might just make it colder, cleaner, more efficient.

So yeah, choose humanity. Good luck with that. Me? I choose another drink. It might not solve anything, but it makes the waiting a hell of a lot more bearable. The algorithms can have the spreadsheets. I’ll keep my gut, my typewriter, and this bottle. Seems like a fair trade.

Now, if you’ll excuse me, the bottom of this glass isn’t going to stare at itself.

Keep pouring, Chinaski.


Source: Why Leaders Must Choose Humanity Over Convenience In The AI Era

Tags: ai ethics futureofwork automationbias humanainteraction