The Digital Asbestos in Your Walls and the Hangovers to Come

Jan. 19, 2026

My head feels like someone took a socket wrench to my temples and tightened it until the threads stripped. It’s Monday morning, the sun is assaulting the blinds with unnecessary enthusiasm, and I’m staring at a screen that’s too bright, reading about how the smartest guys in the room are busy stuffing the walls of civilization with technological carcinogens.

I’m nursing a black coffee that tastes like burnt rubber and regret, thinking about taking the edge off with a splash of the cheap bourbon sitting on the shelf, but I need my wits about me. Or at least what’s left of them. Because I just read Cory Doctorow’s latest autopsy of the AI hype cycle, and for once, someone isn’t trying to sell me a bridge to the future. He’s telling me the bridge is made of balsa wood and soaked in gasoline.

The piece is titled “AI companies will fail,” which is the kind of blunt force trauma I appreciate. Doctorow calls AI “asbestos in the walls of our tech society.”

God, that is beautiful. It’s grim, it’s dirty, and it’s perfectly accurate. It’s not the shiny, chrome-plated Terminator future the tech bros are jerking off to in their pressurized sleep pods. It’s just toxic insulation. We’re stuffing it everywhere because it’s cheap and the salesmen are aggressive, and twenty years from now, we’re going to be coughing up blood trying to scrape it out of our cultural lungs.

Here’s the thing about the current tech landscape: it runs on a fuel mixture of delusion and greed that would make a Vegas bookie blush. Doctorow points out something that usually gets lost in the noise of press releases and breathless LinkedIn posts. He talks about the difference between a “growth stock” and a “mature stock.”

See, in the world of the giants—Google, Amazon, Meta—nobody wants to be a grown-up. Being a grown-up company means you make a steady profit, you pay dividends, and your stock price sits still. It’s boring. It’s the corporate equivalent of settling down, getting a mortgage, and watching Wheel of Fortune every night. These guys want to be rock stars forever. They need the market to believe they are infinitely expanding balloons.

But when you own 90% of the market, there’s nowhere left to grow. You’ve eaten the whole pie, you’ve licked the plate, and you’re eyeing the tablecloth. So, what do you do? You invent a new pie. You hallucinate a pie.

First, it was crypto. Remember that? The revolution of money that turned out to be a casino run by guys who look like they sleep in their clothes. Then it was the Metaverse, a cartoon hellscape where you could attend meetings with no legs. Now, it’s AI. They have to pump billions into this bubble to convince the shareholders that the balloon is still inflating, that they aren’t just a mature utility company selling ads to angry uncles.

If they stop growing, the market panics. The stock drops. The employees, who are paid in stock options and promises, flee like rats from a sinking barge. It’s a desperate hustle. It’s a guy at the end of the bar putting drinks on a tab he knows he can’t pay, hoping a miracle happens before closing time.

Doctorow brings up this concept of the “centaur” versus the “reverse centaur.” Now, in automation theory, a centaur is a human assisted by a machine. You’re the head, the machine is the horse body doing the heavy lifting. That sounds nice. I wouldn’t mind a machine that carries the heavy stuff while I direct traffic and smoke a cigarette.

But that’s not what Big Tech is selling. They’re building reverse centaurs. A machine head on a human body.

Think about the Amazon delivery driver. The truck has the GPS, the route, the cameras watching the driver’s eyes to make sure they aren’t looking at a bird or feeling a moment of joy. The truck is the brain. The driver is just a meat-servo, a squishy appendage used to scramble up the porch steps because the robot can’t handle stairs yet. The machine drives the human.

That’s the endgame. They don’t want to help you write better code or diagnose cancer faster. They want to fire the expensive expert, replace them with a spicy autocomplete script, and hire a minimum-wage human to act as the “accountability sink.”

That’s a phrase that sticks in your throat like a dry pretzel: Accountability Sink.

Let’s look at the radiology example. If AI actually worked the way a sane person wanted it to, it would double-check the doctor. “Hey doc, looks like a tumor here.” The doc checks again, finds the cancer, saves the life. Great. But that doesn’t save money. That costs more money because the doctor takes longer.

The venture capitalists aren’t interested in saving lives; they’re interested in firing radiologists. They want the AI to make the call, and they want a frantic, underpaid human to rubber-stamp it. If the AI misses the tumor? Blame the human. “You were the human in the loop! It’s your fault the machine hallucinated a clean bill of health!”

It’s the perfect crime. The company pockets the salary of the fired expert, and the remaining poor bastard takes the heat when the algorithm craps the bed.

And speaking of crapping the bed, let’s talk about coding.

I know a few coders. They’re usually decent people, quiet, prone to anxiety. They like solving puzzles. Enter the AI. The bosses think they can fire the senior devs—the ones who know where the bodies are buried—and replace them with a chatbot that predicts the next word.

Doctorow explains this beautifully. AI doesn't know what code is. It just guesses what code looks like. It's a statistical parrot. It hallucinates libraries and functions that don't exist, because the names sound right. Hackers know this. They're already planting malware in packages with names that the AI is likely to invent, waiting for some junior dev using ChatGPT to suck the poison right into the mainframe.

It’s sloppy. It’s dangerous. But it’s cheap, or at least it looks cheap on a quarterly report, and that’s all that matters until the data breach happens and everyone acts surprised.
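The hallucinated-package trick (sometimes called "slopsquatting") works because nobody checks whether a suggested dependency name is real before installing it. Here's a minimal defensive sketch, not from Doctorow's piece: screen every requirement against a human-vetted allowlist before it ever reaches the package installer. The package names and the allowlist below are illustrative assumptions, not a real policy.

```python
# Guard against hallucinated dependency names ("slopsquatting"):
# refuse anything not on a human-vetted allowlist.
# The names here are illustrative, not a real vetting policy.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # reviewed by a human

def check_requirements(requirements: list[str]) -> list[str]:
    """Return the requirement names that are NOT on the allowlist."""
    suspicious = []
    for line in requirements:
        # Strip version pins like "requests==2.31.0" down to the bare name.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name and name not in VETTED_PACKAGES:
            suspicious.append(name)
    return suspicious

if __name__ == "__main__":
    # "reqeusts" is exactly the kind of plausible-looking name an LLM
    # might emit and an attacker might pre-register on a package index.
    wanted = ["requests==2.31.0", "reqeusts", "numpy>=1.26"]
    print(check_requirements(wanted))  # → ['reqeusts']
```

It's a blunt instrument, but that's the point: the fix for a machine that confidently invents names is a boring list maintained by a person who can say no.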

Then there’s the “art.”

I look at a lot of AI-generated images. We all do. You can’t avoid them. They’re everywhere, looking like a fever dream painted by a committee of ghosts. There’s something wrong with them. Doctorow uses the word “eerie,” quoting Mark Fisher. It’s the presence of nothing where there should be something.

Art—real art, even bad art—is communication. It’s a person feeling something terrible or beautiful and trying to shove that feeling into a format that another person can understand. It’s telepathy.

AI art is just math. It’s an average of everything it’s ever seen, diluted down to a gray sludge of aesthetic competence. It looks like a painting, but it has the emotional resonance of a wet cardboard box. It’s filler. It’s visual Muzak.

The tech companies want you to believe that illustrators and writers are obsolete. They want us to panic. They want the artists to scream, “It’s stealing my job!” because that validates the product. It proves the AI is powerful.

But the reality is, the AI can’t do the job. It can’t make you feel. It can only make you look.

And the kicker is the copyright angle. Everyone is running around trying to sue these companies for scraping data. I get it. It feels like theft. But Doctorow argues that demanding new copyright laws to stop AI training is a trap.

If we expand copyright to cover “training,” we aren’t handing power to the struggling artist. We’re handing a loaded gun to Disney and Universal. They’re the ones with the libraries. They’ll just rewrite the contracts so that every artist has to sign away their “training rights” to get a gig. It’s giving the bully more lunch money and hoping he decides to be nice.

The Copyright Office, in a rare moment of government competence, has actually held the line. They say if a human didn’t make it, it can’t be copyrighted. That means AI slop is public domain. If Disney uses AI to make a movie poster, I can print it on a t-shirt and sell it, and they can’t do a damn thing. That terrifies them. That’s the leverage.

So, where does this leave us?

We’re sitting inside a bubble. A massive, shimmering, distinctly stupid bubble. And bubbles burst. It’s physics. Or economics. Or just karma.

Doctorow compares it to the telecom bubble. WorldCom committed massive fraud, people went to jail, fortunes were vaporized. But when the dust settled, the fiber optic cables were still in the ground. We got cheap internet out of the wreckage.

Crypto? That burst and left us nothing but sad ape JPEGs and a lot of wasted electricity.

But AI might leave us some scrap metal worth keeping. When the investors realize that the chatbot can’t actually run a Fortune 500 company and the stock crashes, the data centers will still be there. The GPUs will be there.

We’ll inherit the wreckage. We’ll get cheap hardware and open-source models that run on our laptops. The “plugins.” The useful little tools that help you transcribe a drunken rant or remove a photobomber from a picture of your ex-wife. The stuff that’s actually useful, stripped of the trillion-dollar hype.

We just have to survive the explosion first.

We have to pull the asbestos out of the walls, one handful at a time. We have to resist the urge to turn ourselves into reverse centaurs, functioning as the fleshy error-handlers for a machine that doesn’t care if we live or die.

It’s going to be ugly. The collapse always is. The rich will float away on golden parachutes, and the rest of us will be left sweeping up the glass. But maybe, just maybe, we can salvage a few working parts from the junk pile.

The sun is still too bright, and my coffee is empty. I think I’m going to have that drink now. Here’s to the crash. May it be loud, may it be fast, and may it leave us with something better than a hallucinating parrot that eats electricity and shits out lies.

Cheers.


Source: AI companies will fail. We can salvage something from the wreckage | Cory Doctorow

Tags: ai bigtech automation futureofwork regulation