So, the human race is at it again, bless its pointed little head. Screaming, marching, pointing fingers. This time it’s in LA, something about ICE raids. Sounds like the usual background noise of a bad hangover. The streets are full of pissed-off people, and the internet, that glorious open sewer, is full of… well, you know. The usual cocktail of half-truths, outright lies, and pictures of cats. But mostly lies when things get heated.
It’s always the same playbook with these digital town criers on X and Facebook. Old footage from a different riot in a different country? Perfect. Clips from that video game your nephew plays? Even better. And the classic, the evergreen, the “they’re all paid agitators, funded by [insert bogeyman of choice here].” No proof, of course. Proof is for suckers and poets. What you need is outrage, clicks, the sweet, sweet validation of strangers who already agree with you.
Now, in a sane world – I know, I know, hold your laughter – you’d think folks would develop a decent crap detector. But sanity, like a good bottle of bourbon, is in short supply. And in these moments of high-octane confusion, with platforms like X and Meta basically saying “you guys sort it out, we’re busy counting our money,” people are turning to a new oracle. A digital Delphic priestess. They’re asking the AI chatbots.
Yeah, you heard me. ChatGPT, Grok, all those whiz-bang robo-brains that are supposed to be smarter than God and twice as modest. People are feeding them the scraps from the social media abattoir and asking, “Oh, wise and wonderful algorithm, tell me, is this bullshit?” And the kicker? The AI, with all the unbiased clarity of a politician caught with his pants down, often says, “Nope, smells like roses to me! Or, actually, this particular brand of bullshit seems to originate from this other pile of bullshit over here.”
Take this gem from the San Francisco Chronicle. They published photos of National Guard troops sacked out on floors. Gritty stuff. Governor Newsom shares them, takes a potshot at Trump. Standard political theater. Within minutes, the internet geniuses are screaming “FAKE! AI-GENERATED! DEEPFAKE GEORGE WASHINGTON!” Laura Loomer, a name that always sounds like a symptom of something unpleasant, naturally jumps in on X, accusing Newsom of using AI photos to “smear” Trump. Because, of course.
So, some bright spark, probably hoping to settle it once and for all, asks X’s own chatbot, Grok. You’d think a platform’s own AI might have, I don’t know, a sliver of accuracy about what’s happening on its own turf. Grok, bless its little circuits, chimes in: “The photos likely originated from Afghanistan in 2021, during the National Guard’s evacuation efforts…” It goes on, dismissing the LA connection as lacking “credible support.” Afghanistan. Right. When challenged, because even a broken clock is right twice a day and sometimes a human notices the AI is spouting gibberish, Grok doubles down, sort of. “Okay, okay,” it backpedals, “I checked the San Francisco Chronicle’s claims. The photos of National Guard troops sleeping on floors are likely from 2021, probably the U.S. Capitol, not Los Angeles 2025.” Still wrong. Still confidently wrong. It’s like arguing with a drunk who insists he saw Elvis at the 7-Eleven. Except the drunk might actually be more entertaining. The Chronicle, which, you know, actually did the journalism, probably just sighed and ordered another coffee, or something stronger. My kind of people.
But wait, the parade of artificial idiocy doesn’t stop there. Oh no. Enter one Melissa O’Connor, who calls herself an “OSINT Citizen Journalist.” OSINT. Citizen. Journalist. Sounds like a character from a bad cyberpunk novel. She feeds the Newsom photos to ChatGPT. And what does OpenAI’s pride and joy cough up? One of the pictures, it declares, was taken at Kabul airport in 2021 during Biden’s Afghanistan pullout. Kabul! It’s the go-to answer for confused AIs looking at pictures of uncomfortable soldiers, apparently. Like a magic word. Abracadabra-Kabul! And this “citizen journalist,” she posts these AI-hallucinated “facts.” They spread like gonorrhea in a cheap brothel – Facebook, Truth Social, you name it. Later, she posts a clarification that, oops, the photos aren’t four years old. But the original post, that steaming pile of digital misinformation? Still up. Because why let a little thing like the truth get in the way of a good story, or a few clicks, eh? It’s the modern way. Plant the lie, let it take root, then whisper a retraction that nobody hears over the howling mob.
This is what they’re selling us as the future, these tech wizards with their kombucha and their stock options. These “large language models” are supposed to be the next leap forward for humanity. From where I’m sitting, hunched over this keyboard with a glass of something that burns just right, it looks more like a stumble down a flight of stairs, landing face-first in a puddle of yesterday’s bad ideas. They train these things on the internet, which is like raising a kid on a diet of candy, horror movies, and political propaganda, and then being surprised when he turns out to be a twitchy little psycho. The internet is a monument to human folly, a cathedral of bullshit. You feed that to a machine, what do you expect it to learn? Shakespeare? Or how to argue with a brick wall and then confidently misattribute a photo to three different continents?
It’s the sheer gall of it that gets me. The programmers, the CEOs, they talk about these AIs like they’re birthing gods. And what do we get? A slightly more sophisticated version of that paperclip bastard from Microsoft Office, only now it’s telling you that the LA protests are actually happening in ancient Rome, probably. The worst part isn’t even that the AIs are stupid. It’s that people believe them. Or, more accurately, they believe the AI when it tells them what they already want to hear. It’s a confirmation bias engine, turbocharged. Your prejudices, now with a slick, futuristic veneer of “intelligence.”
And these platforms, these digital arenas where truth goes to die? They’re just shrugging. “Content moderation is hard,” they whine. Yeah, so is life, pal. So is digging ditches. So is trying to write a decent sentence when your head feels like a nest of angry badgers. But you do it. Or you used to. Now it’s easier to let the bots fight it out, let the algorithms amplify the loudest, angriest voices, and then sell ads against the ensuing dumpster fire.
It’s a beautiful kind of madness, isn’t it? We create tools to process information faster than any human ever could, and they end up making us dumber, more confused, more paranoid. It’s like inventing a high-speed car just to drive it into a wall, repeatedly. There’s a poetry to it, a bleak, black, booze-soaked poetry. The “OSINT Citizen Journalists” of the world, armed with their ChatGPT accounts, are the new town criers, only their bells are ringing with digital nonsense. And people listen, because it’s new, it’s shiny, it sounds authoritative. “The AI said so.” As if the AI isn’t just a very complex parrot squawking back the garbled mess it was fed.
I’ll tell you one thing: these chatbots, they make me feel almost… human. Flawed, cynical, probably marinated in enough toxins to qualify as a hazardous waste site, sure. But at least when I lie, I usually have a reason. Self-preservation, a good story, a free drink. These AIs? They lie by accident. They confabulate with the bland innocence of a machine that literally doesn’t know any better. It’s pure, unadulterated, statistical error dressed up as knowledge. And that, my friends, is somehow more insulting.
So, here we are. Protests in the streets, lies on the screens, and the robots are “helping.” Helping us dig the hole deeper, faster. Helping us get more lost in the hall of mirrors we’ve built for ourselves. Makes you wonder what they’ll “help” with next. Picking the next president? Deciding who gets food and who starves? The possibilities are as endless as human stupidity.
I need another drink. Or maybe just to stare at the wall for a while and contemplate the beautiful, terrifying idiocy of it all. At least the wall doesn’t try to tell me it’s actually a window into downtown Kabul.
Chinaski out. Time to find a bottle that still tells the truth.
Source: AI Chatbots Are Making LA Protest Disinformation Worse