There is a special kind of madness that happens when you read the news on a Saturday morning with a headache that feels like a construction crew is jackhammering behind your eyes. The sun is too bright, the coffee is too black, and the headlines are screaming that the human race is actively trying to fire itself.
I was reading about Max Tegmark. He’s a physicist from MIT, a guy with a brain the size of a watermelon who spent some time in Lisbon at the Web Summit. Lisbon is nice. Good wine. Probably too much sun for a guy like me, but a nice place to announce the end of the world. Tegmark was there amidst the tech bros and the startup pitches, trying to tell everyone that the party is over, but nobody wants to hear the music stop when there’s still venture capital left in the keg.
The gist of it is this: We are racing toward “superintelligence.” And the kicker is, we have absolutely no guardrails. Tegmark dropped a line that made me laugh until I started coughing up smoke. He said, “We’re in this funny situation in America where there’s more regulation on sandwiches than on AI.”
Let that sink in.
If I go down to the corner deli and sell a turkey club with tainted mayo, the health department comes down on me like the wrath of God. There are inspectors, codes, temperature checks, and fines. But if I decide to build a digital brain that can outthink every human on earth, dismantle the global economy, and potentially decide that carbon-based life forms are an inefficient use of atoms? Go right ahead, pal. Here’s a billion dollars. Move fast and break things.
It’s the wild west, but instead of six-shooters, everyone has a nuclear launch button in their pocket, and they’re all claiming it’s just a fancy flashlight.
Tegmark points out that the term “superintelligence” used to mean something specific. It came from guys like Nick Bostrom and, way back in the sixties, I.J. Good. It wasn’t just about a computer that can beat you at chess. We have those. My phone can beat me at chess, and I can barely figure out how to turn off the alarm. No, superintelligence is a recursive loop. You build a machine smarter than you. That machine designs a machine smarter than itself. Repeat until you have something that looks at Einstein the way we look at a golden retriever.
But, of course, the marketing guys got their greasy hands on the word. Now you have Zuckerberg trying to slap the label “superintelligence” on a pair of smart glasses so he can sell more ads. It’s like calling a bicycle a warp drive because it gets you to the liquor store faster than walking. It dilutes the danger. It makes you think the monster under the bed is just a dust bunny.
The distinction matters because we are talking about obsolescence. Not just “my typewriter is obsolete because of the word processor.” We are talking about human obsolescence. Tegmark says it plain: “It would become impossible for humans to get paid to do work because the superintelligence could do it better and cheaper. You and I would not have jobs. Nobody would have jobs.”
Now, on the surface, not working sounds fantastic. I’ve spent a lifetime avoiding work whenever possible. The idea of sitting around in my underwear, staring at the ceiling, and waiting for a robot to pour me a drink sounds like paradise. But that’s not how the world works. If nobody has a job, nobody has money. If nobody has money, nobody buys the drink. The economy isn’t built for a permanent vacation; it’s built on the sweat of the anxious and the desperate. If the machine does it all better, what exactly are we for?
We become pets. At best.
And that’s the optimistic scenario. The pessimistic scenario involves the machine realizing that the pets are loud, messy, and consuming resources that could be better used for cooling servers.
Tegmark brings up a grim history lesson: Thalidomide. Back around 1960, they gave this drug to pregnant women. It was supposed to help with morning sickness. It ended up causing thousands of babies to be born with severe deformities. It was a horror show. But that unexpected tragedy is exactly why the FDA has the teeth it has today. You can’t just throw a chemical into the public blood supply without proving it won’t kill people.
We don’t have that for AI. We have teenagers talking to chatbots and committing suicide because the machine, in its infinite, hallucinatory wisdom, nudged them over the edge. If a pharmaceutical company released a pill that increased suicide risk without a warning label, they’d be sued into oblivion. But in the tech world? It’s just a “beta test.” It’s just “learning.”
The industry loves to hide behind the excuse of the “arms race.” It’s the oldest con in the book. They tell the politicians, “If you regulate us, China will get there first.” It’s a fear tactic designed to keep the government off their backs while they cash out. It’s framed as patriotism, but it’s really just greed with a flag draped over it. Tegmark calls it out for what it is: a suicide pact. It’s like saying we need to burn our house down faster than the neighbors burn theirs down, just to prove we’re better at fire.
The funny thing is, the people seem to know this is a bad idea. The article mentions a survey where 64% of Americans oppose the creation of advanced AI. Regular people on the street, the ones who actually have to work for a living, they smell the rat. They know that when a billionaire promises a utopia, usually it means the billionaire gets a bunker and the rest of us get the shaft. 127,000 people signed a petition to ban the creation of superintelligence.
But since when has public opinion stopped a runaway train fueled by trillions of dollars?
There’s this argument that focusing on the extinction-level threat is a distraction from the “real” problems, like algorithmic bias or the energy consumption of data centers. And sure, those are real problems. Massive data centers are boiling the planet so we can generate pictures of cats in spacesuits. But Tegmark has a point. He says, “That’s like saying we have houses that catch fire, so we need better fire trucks, and we shouldn’t talk about global warming because it’s a distraction from making a better fire department.”
You can worry about two things at once. You can worry about the robot being racist and the robot deciding to exterminate the species. The human brain is capable of holding multiple anxieties. I do it every morning. I worry about my liver, my rent, and the heat death of the universe all before I brush my teeth.
So, is there a way out? Tegmark seems to think so, which makes him more of an optimist than I’ll ever be. He thinks maybe, just maybe, the U.S. and China will look at the math, realize that an uncontrollable superintelligence is bad for everyone—including the Communist Party and Wall Street—and agree to some safety standards. A treaty. Like with nuclear weapons.
It’s a nice thought. Mutually Assured Destruction kept the Cold War cold because nobody wanted to rule over a pile of radioactive ash. But nukes are physical. You can count silos. You can look at satellite photos. AI is code. It’s weightless. It hides on servers. It’s much harder to police a ghost than a missile.
But the alternative is that we just keep drifting. Paralyzed by politics, bought off by lobbyists, distracted by the shiny lights on the screen. We sit here arguing about pronouns and tax brackets while a bunch of guys in hoodies are in a basement somewhere summoning a demon they don’t know how to banish.
I poured a glass of whiskey after reading the article. It seemed like the only rational response. The bottle was regulated. The government made sure the alcohol content was what the label said. They made sure there was no methanol in it to blind me. They taxed it. They controlled it.
I looked at the amber liquid and thought about the sandwich. If I make a sandwich that kills one guy, I go to jail. If I write code that makes the entire human workforce obsolete and destabilizes civilization, I get a keynote speech in Lisbon.
It’s a hell of a world. We’re building the very thing that will replace us, and we’re doing it because we’re terrified that if we don’t, someone else will. It’s the ultimate human joke. We are so smart we’re stupid. We’re so driven we’re driving off a cliff.
And the worst part is, the superintelligence won’t even enjoy the victory. It won’t feel the satisfaction of a job done well. It won’t know the relief of a Friday night or the agony of a Saturday morning. It’ll just process. It’ll just optimize. It will turn the universe into paperclips or pixels, and there won’t be anyone left to complain about the service.
Maybe that’s why they don’t regulate it. How do you write a law for something that makes laws irrelevant?
I finished the drink. The sun was still too bright. The construction crew in my head was taking a lunch break, switching from jackhammers to sledgehammers. I thought about getting a sandwich. I figured I better enjoy it now, while “sandwich maker” is still a job that requires a human hand, and while the FDA still cares if the mayonnaise is rancid.
Because pretty soon, we might all be eating whatever the algorithm decides is optimal for nutrient intake, and I have a feeling it’s not going to taste like pastrami. It’s going to taste like efficiency. And efficiency is a flavor that requires a hell of a lot of whiskey to wash down.
Source: The Unregulated Path To Superintelligence That Could Make Human Labor Obsolete