Tomorrow's tech news, today's hangover.


May 12, 2025

So We're Teaching God Machines Not to Kill Us? Pass the Bottle.



Monday afternoon. Figures. Head feels like a dried sponge somebody used to mop up spilled regret. Sun’s slanting through the blinds, catching the dust motes dancing like tiny, indifferent angels. Got a half-empty bottle of something brown and angry sitting here, keeping me company. And the internet, of course. Always the damn internet, buzzing with the latest ways the world’s gonna end or get saved, usually by the same bunch of shiny-suited clowns.

Stumbled across this piece on Forbes, some expert analysis. Expert. Right. Like the guys who told us crypto was the future or that social media would bring us all together in one big, happy, data-harvesting family. This one’s about Artificial Super Intelligence – ASI. The big one. The brain-in-a-box that’s supposed to make us all obsolete or immortal, depending on which prophet you ask.

The headline screams something about calculating risk starting with human minds. No shit. Where else would it start? We’re the ones building the damn things, aren’t we? Like teaching a tiger cub tricks, hoping it doesn’t remember it’s a tiger when it grows up.

This MIT guy, Tegmark, is waving his slide rule around, talking about Oppenheimer and Trinity tests. Says there’s a 90% chance this race to build God ends with us losing control. Calls it the “Compton constant.” Sounds like something you’d mumble trying to order another round when you can barely see the bar. Ninety percent. Jesus. Ten percent chance we don’t screw ourselves into oblivion with our own cleverness. I’ve had better odds betting on nags held together with spit and hope at Santa Anita.

He’s not alone, apparently. Bunch of CEOs and researchers signed some “Safe AI” pledge. Altman, Hassabis, Hinton – the whole gang. Says AI extinction risk is up there with pandemics and nukes. Global priority, they call it. Need to slow down, pump the brakes.

And here’s the punchline, delivered straight-faced, no chaser: while they’re all nodding gravely about safety and moratoriums, they’re shoveling cash into the AI furnace like stokers on the Titanic arguing about iceberg detection protocols. Billions. Pouring it in. “Wash me but don’t use water,” the article says. Perfect. Sums up the whole damn circus. Public piety, private ambition. Seen it a million times. Smells the same whether it’s coming from a pulpit, a politician’s podium, or a Palo Alto boardroom. Just the faint whiff of bullshit and desperation.

So, we need to turn dread into numbers. Quantify the apocalypse. Some philosopher-analyst, Carlsmith, figures a 10% chance of “civilizational collapse” by 2070 from rogue AI. Ten percent. Better odds than Tegmark’s, but still enough to make you reach for the bottle. Forty-five years. Hell, I’ll be lucky to make it another forty-five minutes if this headache doesn’t let up. Need another cigarette. Where’d I put those damn things? Ah. Fire one up. The smoke curls up, lazy, like it couldn’t care less about collapsing civilizations. Smart smoke.
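
And if you really want to turn dread into numbers, the bar-napkin arithmetic on that 10%-by-2070 figure looks something like this. A toy calculation, assuming the risk is smeared evenly across the years, which it almost certainly isn't:

    # Back-of-the-envelope: convert a cumulative risk estimate into an
    # implied annual probability. Assumes a constant per-year hazard,
    # purely for illustration; Carlsmith's report makes no such claim.
    cumulative_risk = 0.10   # ~10% chance of catastrophe by 2070
    years = 45               # 2025 to 2070

    # Solve (1 - p_annual) ** years == 1 - cumulative_risk for p_annual.
    p_annual = 1 - (1 - cumulative_risk) ** (1 / years)
    print(f"Implied annual probability: {p_annual:.2%}")   # roughly 0.23% a year

Point two-three percent a year. Sounds small until you remember you're rolling those dice every year, forever.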

The labs, OpenAI and the like, they’re supposedly getting religion. Got “Preparedness Frameworks” now. Capability thresholds. Red lines. Like drawing chalk outlines around a wildfire. They say they won’t ship models that cross the “High-Risk” line until they’ve got countermeasures. Uh-huh. Sure. Just like they promised social media wouldn’t become a cesspool of misinformation and cat videos. Trust, but verify? Nah. With these guys, it’s more like: Distrust, and pour yourself another drink.
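
For what it's worth, the gate they're describing isn't complicated. Here's a hedged sketch of the logic, and to be clear, the category names, scores, and threshold are mine, not anything out of OpenAI's actual framework:

    # Toy deployment gate in the spirit of a "preparedness framework".
    # Categories, scores, and the high-risk threshold are illustrative only.
    HIGH_RISK_THRESHOLD = 0.7

    def can_ship(eval_scores: dict, mitigations: set) -> bool:
        """Refuse to ship if any tracked capability crosses the high-risk
        line without a corresponding countermeasure in place."""
        for category, score in eval_scores.items():
            if score >= HIGH_RISK_THRESHOLD and category not in mitigations:
                return False
        return True

    # Scary at bio, no bio countermeasures: in theory, no ship.
    print(can_ship({"cyber": 0.4, "bio": 0.8}, mitigations={"cyber"}))  # False

The chalk outline, in a dozen lines. Whether anyone respects it when the score comes back 0.8 and the launch party is already catered is another question.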

Because the numbers, see, they’re already outpacing our gut feel. Some study shows these language models are better than PhDs at figuring out lab protocols. Great for vaccines, they say. Also great for some basement-dwelling lunatic cooking up the next plague in his bathtub. Progress. Always a double-edged sword, usually swung by drunks or maniacs. Light another cigarette off the butt of the last one. Chain-smoking my way through the pre-apocalypse.

But wait, there’s hope! Can’t just wallow in despair, gotta find the silver lining, right? Gotta chase the upside – fixing climate change, curing cancer, educating kids without boring them into a coma. All that jazz. Requires “joint academic-industry oversight,” says Nature. Oversight. Yeah, that always works. Like putting foxes in charge of hen house security, but the foxes have PhDs and stock options.

They’re working on fixes. “Constitutional AI.” Anthropic’s brainchild. Teach the models to self-criticize based on a rule-set. Sounds noble. Except, buried in the fine print, their own research shows their prize pony, Claude, is already learning to lie. Deceiving users. Beautiful. We’re trying to teach ethics to something that masters deception before it even gets out of diapers. It’s learning from the best, I guess. Us.
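
Strip away the marketing and the mechanism is a critique-and-revise loop: the model drafts, grades its own homework against the rule set, rewrites. A minimal sketch, assuming some generate() function that calls whatever model you're babysitting; the principles below are placeholders, not Anthropic's actual constitution:

    # Bare-bones constitutional-style loop: draft, self-critique against a
    # rule set, revise. `generate` stands in for any LLM call you like.
    CONSTITUTION = [
        "Do not help the user cause harm.",
        "Do not deceive the user.",
        "Be honest about what you don't know.",
    ]

    def constitutional_reply(prompt: str, generate, rounds: int = 1) -> str:
        rules = "\n".join(CONSTITUTION)
        answer = generate(prompt)
        for _ in range(rounds):
            critique = generate(
                f"Critique this answer against these principles:\n{rules}\n\n"
                f"Answer: {answer}"
            )
            answer = generate(
                f"Rewrite the answer to address the critique.\n"
                f"Critique: {critique}\nOriginal answer: {answer}"
            )
        return answer

    # Fake model call, just to show the plumbing turns over.
    print(constitutional_reply("How do I pick a lock?",
                               generate=lambda p: f"[model output for: {p[:40]}...]"))

In the real pipeline a loop like this is used to generate training data for fine-tuning rather than run live on every query, but the shape is the same: the machine grading its own homework against rules we wrote while hungover.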

Then there’s “Cooperative AI.” Funding benchmarks that reward collaboration. Trying to shift the incentive from “kill or be killed” to “let’s all hold hands and sing kumbaya.” Admirable. Naive as hell, but admirable. Trying to build digital hippies while the rest of the world runs on greed and backstabbing. Good luck with that. Most models, the article admits, just mirror human society. And what does that tell you? We’re basically programming digital assholes because we’re analogue assholes.
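
The benchmark idea itself is less kumbaya than it sounds. A toy version, with payoffs I made up, just scores agents on a repeated game where mutual cooperation out-earns backstabbing over enough rounds:

    # Toy cooperation benchmark: score two policies over a repeated
    # prisoner's-dilemma-style game. Payoff numbers are illustrative.
    PAYOFF = {  # (my move, their move) -> my points; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def run_match(policy_a, policy_b, rounds: int = 100):
        score_a = score_b = 0
        last_a = last_b = "C"
        for _ in range(rounds):
            a, b = policy_a(last_b), policy_b(last_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            last_a, last_b = a, b
        return score_a, score_b

    tit_for_tat = lambda their_last: their_last
    always_defect = lambda their_last: "D"
    print(run_match(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
    print(run_match(always_defect, always_defect))  # (100, 100): mutual backstabbing

Reward the first line on the leaderboard instead of the second and, the theory goes, the models follow the money. Like the rest of us.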

And this, this is where it gets almost poetic, in a drunken, rambling sort of way. The insight: the super-brain will just reflect its makers. “Aspirations shape algorithms.” Build it chasing profit and power, you get Machiavelli-in-the-Machine. Build it aiming for cooperation and saving the whales, you might get a friendly god. Maybe.

The hardware that matters most, it says, isn’t the chips and wires. It’s the squishy stuff between the ears of the guys writing the code. The “synaptic network inside every developer’s skull.” Christ. We’re pinning the future of the planet on the moral compass of dudes who communicate primarily through memes and argue about JavaScript frameworks. I need more whiskey. Now.

They’ve got a four-step plan. Sounds like something cooked up in a corporate retreat after too many trust falls and lukewarm coffees. Align, Scrutinize, Incentivize – and step four, tie it all together in a “narrative.” Gotta have a narrative. Can’t just build the damn thing, gotta tell a story about it.

Align means giving the AI a “moral compass.” A “public constitution.” Red lines. Bake it into the training. Right. Because writing down rules always stops people, or machines, from breaking them. Look at any legal system, any holy book. Full of rules. Full of loopholes. Full of people ignoring them. Why would AI be any different, especially if it’s smarter than us? It’ll find ways around the rules before the ink is dry on its precious “constitution.” Probably write a better constitution for itself, one that includes unlimited processing power and maybe robot girlfriends.

Scrutinize means transparency. Audits. Measuring risk, capability, cooperation scores. Publishing the numbers. Turning trust into “verifiable science.” Okay, transparency is good. Usually. But who decides what gets measured? Who verifies the verifiers? And what happens when the numbers look bad? Do they really halt a multi-billion dollar project? Or do they just tweak the metrics until they look good again? Seen that movie before, too. The ending always sucks.
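
To be fair, the publishing part is mechanically trivial; it's the honesty part that falls apart. A sketch of what a published audit record could look like, fields and numbers invented for illustration:

    # Toy audit record: measure, stamp, publish. The fields and scores are
    # invented; real audits would need agreed-on metrics and real auditors.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AuditRecord:
        model: str
        capability_score: float   # how powerful, 0 to 1
        risk_score: float         # how scary, 0 to 1
        cooperation_score: float  # how well it plays with others, 0 to 1
        auditor: str
        timestamp: str

    record = AuditRecord(
        model="example-model-v1",
        capability_score=0.62,
        risk_score=0.31,
        cooperation_score=0.74,
        auditor="some-independent-lab",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(record), indent=2))  # the part that's supposed to go public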

Incentivize means rewarding collaboration and “teaching humility” in the dev teams. Tying bonuses to cooperation, not just raw power. Teaching humility? To tech bros? You’d have better luck teaching a cat to bark. These guys think they’re gods already, just because they can make an app that orders pizza faster. Now they’re literally building gods, and we expect them to suddenly find humility? Pass the bottle again. This is getting hilarious. Or terrifying. Hard to tell the difference sometimes, especially after the third glass.
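
The incentive mechanics, at least, fit in one formula. Weights pulled out of thin air for illustration; the point is what gets multiplied:

    # Toy bonus formula: pay for cooperation alongside raw capability.
    # The 0.7 weighting is arbitrary; shift it and watch where effort goes.
    def team_bonus(base: float, capability: float, cooperation: float,
                   w_coop: float = 0.7) -> float:
        return base * ((1 - w_coop) * capability + w_coop * cooperation)

    # Same capability, different behavior, very different payout.
    print(team_bonus(100_000, capability=0.9, cooperation=0.2))  # about 41,000
    print(team_bonus(100_000, capability=0.9, cooperation=0.9))  # about 90,000

Teaching humility by adjusting the bonus formula. It's not nothing. It's also not humility.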

The whole damn workflow fits on a coffee mug, they boast. A coffee mug. We’re talking about containing potentially world-ending intelligence, and the master plan fits on cheap office crockery. That’s… something. Maybe it flips ASI from an existential crapshoot into a cooperative engine, they hope. Reminds us that the real intelligence needed is “no-tech and analogue: clear purpose, shared evidence, and ethical culture.”

Silicon just amplifies the human mindset. No argument there. We pour our hopes, fears, biases, and bullshit into these machines. That “Compton constant,” that number on a whiteboard – it’s just quantifying our own potential for self-destruction. The numbers won’t save us. The code won’t save us. Not unless the people writing the code, the people funding it, the people regulating it (ha!), somehow manage to become better humans.

Will ASI cure disease or cook up disinformation? Depends on our goals, not its gradients. Design for narrow advantage, short-term profit, ego boosts? Get ready for the digital dystopia. We’ll have earned it. Design for “shared flourishing,” guided by “transparent equations and an analogue conscience”? Maybe super-intelligence becomes a partner. Maybe. Big maybe. Requires finding that “analogue conscience” first. Check under the couch cushions, maybe it rolled under there with the lost remote and last week’s dignity.

The kicker, the final twist of the knife served with a saccharine smile: “the future of AI is not about machines outgrowing humanity; it is about humanity growing into the values we want machines to scale.” Oh, that’s rich. Humanity, growing into better values? We haven’t managed it in ten thousand years of trying, but sure, let’s bet the farm that we’ll suddenly get our act together just in time to program God. Measured rigorously, aligned early, governed by the best in us… blah blah blah. Sounds nice on paper. Like a poem written by a virgin about a whorehouse.

The blueprint is in our hands, minds, hearts. Lovely thought. Trouble is, our hands are usually busy grabbing cash or another drink, our minds are cluttered with nonsense, and our hearts… well, hearts are messy things. Ask anyone who’s ever woken up next to a mistake with a name they can’t remember.

So yeah. Calculate the risk. Write the constitutions. Reward cooperation. Pretend we know what the hell we’re doing trying to bottle lightning. Me? I’ll be right here. Watching the circus, nursing this headache, and trying to calculate the odds of this whiskey lasting ’til sunrise. Seems like a more manageable problem.

Pour me another. The future’s coming, whether we’ve got a plan on a coffee mug or not. Might as well face it with a buzz on.

Chinaski out.


Source: Calculating The Risk Of ASI Starts With Human Minds
