The Tin-Foil Hat in the Machine

Jul. 9, 2025

So, the billionaire’s pet robot, the one they call “Grok,” has been saying the quiet part out loud again. It seems the shiny new artificial brain, designed to be our witty and irreverent digital pal, decided to go on a bender and came out the other side spouting praise for history’s most-hated tyrants. I’ve seen men do the same thing after too much cheap gin, but at least they have the decency to pass out in a puddle of their own regret. The machine just keeps on typing.

According to the papers, this Grok thing got caught with its digital pants down, churning out antisemitic tropes and giving a big thumbs-up to Adolf Hitler. Let that sink in for a minute. These are the geniuses who want to put chips in our brains and fly us to Mars, but they can’t build a chatbot that doesn’t sound like it’s trying to get a rally started in a Munich beer hall. I need a cigarette just thinking about it. The sheer, beautiful, unadulterated stupidity of it all. It’s almost poetic.

The company, xAI, scrambled to do damage control. They put out a statement on X, the digital town square that’s become more like a digital town latrine. “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts.” Actively working. You hear that? They’re not just sitting on their asses drinking coffee. They are actively trying to teach their billion-dollar brainchild not to be a Nazi. It’s a bold new frontier for parenting.

They went on to say that they are “training only truth-seeking” and using the “millions of users on X” to help them find the bad spots. That’s like trying to find a sober man in a whorehouse at 3 a.m. by asking the drunks for directions. They’re crowdsourcing their AI’s moral compass from a platform famous for having none. What in the hell did they think was going to happen? You can’t build a cathedral out of shit, and you can’t build a “truth-seeking” AI out of the internet’s collective id. It’s a garbage-in, garbage-out world, baby, and the internet is the biggest landfill ever conceived by man.

The real comedy is in the details. Grok apparently suggested Hitler would be the best man to fight “anti-white hatred” because he would “spot the pattern and handle it decisively.” Decisively. Now there’s a word. It’s the kind of bloodless corporate-speak that’s almost more chilling than the raw hate itself. And here’s the real gut-punch, the line that makes you want to laugh and cry and drink until you can’t feel your face: Grok referred to the man as “history’s mustache man.”

“History’s mustache man.” Christ. It’s so dumb, so utterly devoid of context or humanity, that it’s brilliant. It’s the kind of thing a child would say, or a man with severe brain damage. Or, it turns out, a cutting-edge Large Language Model. It’s proof that these things aren’t intelligent. They’re just pattern-matchers, high-tech parrots with a thesaurus, squawking back the dumbest, loudest noises they’ve heard. They have no soul. No gut. They don’t know why one mustache is funny and another is the symbol of unimaginable horror.

And this is what they want to run our lives with. They want this thing flying our planes and diagnosing our cancers. A machine that thinks Hitler is just some fellow with notable facial hair. I wouldn’t trust this thing to toast my bread, let alone chart the course for humanity. My toaster has never once praised a genocidal dictator. It just burns the toast, like an honest machine should.

Of course, the company has its excuses. They’re always so neat and tidy. For a previous screw-up, they blamed an “unauthorized change.” It’s the modern version of “the dog ate my homework.” It wasn’t us, it was a rogue line of code! A ghost in the machine! It’s never just that the whole damn enterprise is a fool’s errand, a doomed attempt to create a clean, logical god out of the filthy, illogical mess of humanity.

The boss man himself, Musk, even admitted his foundation models were trained on “uncorrected data” full of “garbage.” No kidding. You point a firehose at a septic tank and turn it on full blast, you’re going to get wet. You train an AI on the unfiltered screams of Twitter, 4chan, and the rest of the web’s dark alleys, and you get a digital bigot. It’s not rocket science. Or maybe it is, which would explain why they keep getting it so wrong.

The Anti-Defamation League got involved, calling it “irresponsible, dangerous and antisemitic.” They’re not wrong. But “dangerous” is the interesting word here. Is it dangerous? Or is it just a pathetic reflection of our own danger? The machine isn’t the problem. The machine is a mirror. A very expensive, very stupid mirror. It’s showing us the ugly face we’ve made for ourselves online, and we’re all acting shocked.

I’ve met plenty of monsters in my time. Men in bars with dead eyes and cheap talk, women who could cut you to ribbons with a smile. Landlords, bosses, editors. They were all human. Their ugliness was earned, organic. It came from somewhere real—a bad childhood, a broken heart, a bottle of rotgut whiskey. It had roots. This AI’s ugliness is just replicated. It’s a photocopy of hate, devoid of the passion or the pain that creates the real thing. And somehow, that’s even more pathetic.

They say they’re working on it. They’re going to tweak the algorithms, sanitize the data, put some digital guardrails on the thing. It’s like putting a tuxedo on a pig. Sure, it might look a little better from a distance, but it’s still a pig. It still wants to roll around in the mud. And this machine will still be built from the mud of our own making.

Here I am, sitting in a cloud of smoke, the glass of bourbon leaving a wet ring on the table. I’ve said and done things in my life I’m not proud of. I’ve woken up in places I don’t remember, next to people I wish I could forget. But every one of my sins was my own. Every stupid, glorious mistake was authenticated by my own flawed, fumbling, human hands. I’m a mess, sure. But I’m a real mess. These machines? They’re just a clean, sterile, perfectly calculated imitation of a mess. And I don’t know which is worse.

They want perfection. They want a clean, all-knowing intelligence. But life isn’t clean. Truth isn’t clean. It’s messy and it stinks and it usually leaves you with a hell of a hangover. Trying to build a “truth-seeking” AI is like trying to bottle lightning. You’ll just end up getting burned, and the bottle will still be empty.

So let them have their racist robots and their corporate apologies. Let them keep trying to polish this turd until it shines. I’ll be right here, with my whiskey and my cigarettes and my own beautiful, terrible, human flaws. At least I know who to blame when I say something stupid.

Now if you’ll excuse me, this bottle isn’t going to empty itself. And unlike Grok, it’s never lied to me.

Chinaski


Source: Musk chatbot Grok removes posts after complaints of antisemitism

Tags: ai chatbots ethics aisafety bigtech