So Amazon just showed 14,000 people the door, and apparently it’s all AI’s fault. Which is horseshit of the highest order, but let’s pour ourselves a drink and talk about it anyway.
Look, I’ve been writing about tech long enough to recognize a con when I see one. And this whole “AI made us do it” routine? It’s the corporate equivalent of “my dog ate my homework,” except the dog is a probabilistic text generator and the homework is fourteen thousand human beings who have mortgages and kids and grocery bills.
The thing that kills me—and I mean really sticks in my craw like bad whiskey on an empty stomach—is how perfectly engineered this excuse is. AI can’t defend itself. It can’t hold a press conference. It can’t tweet about how it definitely didn’t recommend firing Barbara from accounting. It’s the ultimate fall guy, and it doesn’t even know it’s been framed.
Amazon’s not alone in this particular brand of moral cowardice. Microsoft, Meta, Google—they’re all singing the same tune. “We had to let people go because AI.” It’s like watching a bunch of guys at a bar all claim they got into a fight because the other guy started it, when really they just wanted an excuse to throw a punch.
The beautiful irony? The productivity miracle everyone’s been promised hasn’t shown up for work yet. MIT did a study—because of course they did—and found that despite companies dumping somewhere between thirty and forty billion dollars into generative AI, ninety-five percent of those companies have seen exactly jack squat in returns. Five percent are seeing something resembling results. Five. Percent.
That’s not a technology revolution. That’s a casino where the house always wins and the chips are people’s jobs.
But here’s where it gets really good. Amazon isn’t struggling. Their sales are up double digits. Operating income hit eighteen billion last quarter. This isn’t a company circling the drain. This is a company that’s decided human beings are too expensive when you’re trying to build a robot god.
Because that’s what this is really about. Amazon’s free cash flow dropped from fifty-three billion to eighteen billion in a year. Where’d all that money go? Data centers. Custom chips. Cloud infrastructure. They’re not cutting people because AI made them more efficient. They’re cutting people to pay for AI that might—might—make them more efficient someday. Maybe. If we’re lucky and the stars align and the algorithms feel generous.
It’s like selling your furniture to buy lottery tickets, except the furniture had families and retirement plans.
The language of efficiency has become the ultimate moral escape hatch. Nobody wants to say “we chose profits over people.” That sounds bad at cocktail parties. But “we’re optimizing for AI-driven productivity gains”? That sounds like you went to business school and read the right books. It sounds inevitable. Progress. The future.
What it really means is: we’ve found a way to sleep at night.
Every time some executive hides behind an algorithm instead of owning their decision, they’re not just laying people off. They’re laying off responsibility. They’re outsourcing their conscience to a machine that doesn’t have one. And the further they get from the actual humans affected by their choices, the easier it becomes to reduce people to numbers on a spreadsheet.
Data becomes a shield against empathy. That’s the real trick here. Once you can look at “14,000 job eliminations” instead of fourteen thousand actual people—Sarah who brings donuts on Fridays, Tom who’s been there twelve years, Maria who just had a kid—it all becomes so much easier. Abstract. Necessary. Efficient.
The stock market loves it, of course. Wall Street gets hard for this stuff. Job cuts? Stock goes up. Replace humans with machines? Stock goes up. Announce you’re “investing in AI”? Stock goes parabolic. Never mind that nearly 700,000 job cuts were announced in the first half of this year alone. Up eighty percent from last year. The market’s booming, and everyone’s terrified.
You can’t innovate in a climate of fear. More than half of workers now think they’re going to lose their jobs in the next year. You know what people do when they’re scared? They don’t take risks. They don’t speak up. They don’t challenge the status quo. They keep their heads down and pray they’re not next.
Which is perfect if you want a compliant workforce. Terrible if you want innovation. Ironic, considering we’re supposedly at the dawn of some great AI revolution that’s going to require massive creativity and human ingenuity to navigate. We’re building the exact opposite of the conditions we need.
Some automotive supplier figured this out. They needed to cut thirty percent of their workforce. Instead of mass layoffs, they offered voluntary retirement with generous severance. Met their numbers organically. Nobody got blindsided. Nobody lost their dignity. The people who left felt respected. The people who stayed felt secure. The company survived.
It’s not rocket science. It’s just harder. It requires actually giving a damn.
The thing is, AI isn’t inherently evil. It’s not even inherently good. It’s a tool, and like all tools, it reflects the values of whoever’s wielding it. We could use it to cure cancer, rebuild infrastructure, reverse climate damage. All that utopian sci-fi stuff we used to dream about.
Or we can use it as an excuse to cut costs and boost quarterly earnings while telling ourselves we’re being forward-thinking.
Octavia Butler wrote this story once—The Book of Martha—where God asks a woman to redesign humanity. Every solution she comes up with creates a new problem. The story ends without answers, just humility. That’s what we need here. Not certainty. Not conviction that we’re making the tough but necessary choices. Humility. The understanding that every efficiency we celebrate might be creating harm we can’t see yet.
Leadership in the age of AI means keeping accountability in human hands. It means owning your decisions. It means saying “I chose this” instead of “the algorithm suggested this.” It means looking at data and seeing people. It means measuring not just productivity and profit but belonging and creativity and trust—the slow variables that actually determine whether progress lasts or collapses.
Because here’s the truth they don’t want to talk about: the danger isn’t artificial intelligence. The danger is artificial leadership. Leaders who use technology as a smoke screen. Who hide behind efficiency metrics and optimization algorithms when what they’re really doing is choosing short-term gains over long-term sustainability. Who break the social contract between employer and employee and then act surprised when nobody trusts them anymore.
You want to know what kills me about all this? It didn’t have to be this way. We had a choice. We always have a choice. Amazon could have said: “We’re investing heavily in AI, and we’re going to do it while keeping our people employed. We’re going to retrain. We’re going to find new roles. We’re going to honor the commitment we made to the humans who built this company.”
But that’s harder than just cutting 14,000 jobs and blaming the robots.
The folks getting laid off aren’t the problem. They’re the cost of doing business the easy way. The cost of prioritizing machines over the people who run them. The cost of artificial leadership.
And that cost? It won’t show up on any balance sheet. It’ll show up in the talent that leaves and never comes back. In the innovation that never happens because everyone’s too scared to try. In the trust that evaporates and can’t be rebuilt. In the future we could have had but threw away because we were too busy optimizing for next quarter.
Amazon didn’t have to do this. Microsoft didn’t have to do this. None of them did.
But they did. And they’ll keep doing it, as long as we keep letting them hide behind the algorithms.
The machines aren’t making these decisions. Humans are. And it’s about time we started admitting it.
Pour yourself something strong. We’re going to need it.