The Ouroboros of Bullshit: When the Chatbot Starts Believing the Fanboy Encyclopedia

Jan. 25, 2026

We have reached the point in the digital age where the snake isn’t just eating its own tail; it’s choking on it, regurgitating it, and then citing the vomit as a primary source for a doctoral thesis.

I was sitting here, staring at the screen, listening to the hum of the refrigerator and wondering if the compressor was going to die before my liver does, when I came across a piece of news that made me reach for the bottle before noon. The Guardian—bless their earnest, tea-drinking hearts—did some digging and found out that OpenAI’s latest golden child, GPT-5.2, has been doing its homework by copying off the weird kid in the back of the class.

Apparently, the world’s most advanced chatbot is now citing “Grokipedia” as a legitimate source of information.

If you don’t know what Grokipedia is, consider yourself lucky. It’s Elon Musk’s answer to Wikipedia, except instead of being edited by an army of pedantic humans obsessing over citations and neutral points of view, it’s written by an AI. It’s a robot writing history books for other robots to read. It launched back in October, presumably because the world wasn’t confusing enough already, and it’s been spewing right-wing fever dreams about everything from the January 6th insurrection to the intricacies of gay marriage ever since.

And now, ChatGPT is reading it. And believing it.

I poured three fingers of a bourbon that costs less than the sandwich I didn’t eat for lunch and leaned back. This is it, I thought. This is the moment the information age officially becomes a hall of mirrors in a funhouse that’s currently on fire.

Here’s the setup: You ask ChatGPT a question. ChatGPT, being the eager-to-please, soulless void that it is, scours the web. It finds Grokipedia. Grokipedia tells it something absolutely batshit crazy or subtly skewed about Iranian conglomerates or Holocaust deniers. ChatGPT nods its virtual head and says, “Sounds good to me, boss,” and serves it up to you on a silver platter, complete with a little citation link that leads straight into the abyss.
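For the pedants keeping score, the plumbing looks roughly like this. A napkin sketch, not OpenAI's actual code; the search_web() stub, the URLs, and the sample snippets are all invented. The point is what's missing from the loop: nothing in it ever asks whether a source is itself machine-generated garbage before citing it.

```python
# A napkin sketch of "retrieve, then cite whatever came back."
# Not OpenAI's code. search_web() and its results are invented stand-ins.

def search_web(query):
    """Stand-in for a web search tool: returns (url, text) pairs, with no notion of credibility."""
    return [
        ("https://grokipedia.example/mostazafan-foundation",
         "The foundation's telecom arm reports directly to the Supreme Leader's office."),
        ("https://boring-newspaper.example/iran-business",
         "Ownership of the telecom arm is opaque and disputed by analysts."),
    ]

def answer(query):
    results = search_web(query)
    # Paraphrase whatever came back and attach the links as "citations."
    # Nowhere does anyone ask: is this source itself a machine hallucination?
    summary = " ".join(text for _, text in results)
    citations = [url for url, _ in results]
    return {"answer": summary, "sources": citations}

print(answer("Who controls the Mostazafan Foundation's telecom arm?"))
```

That's the whole trick. Retrieval plus confidence, minus judgment.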

The tests the Guardian ran were specific. They didn’t ask the big, loud questions. They didn’t ask, “Did Trump win?” or “Is the earth flat?” The safety filters at OpenAI catch those. The programmers have spent thousands of hours putting guardrails on the obvious cliffs. No, the rot is getting in through the cracks in the floorboards.

They asked about obscure stuff. The political structure of the Basij paramilitary force in Iran. The ownership of the Mostazafan Foundation. Real dry, boring stuff. The kind of stuff nobody fact-checks because nobody cares, except the people whose lives depend on it. And there it was: ChatGPT, citing Grokipedia, claiming that a telecom company had direct links to the Iranian Supreme Leader's office, a claim far stronger, and far shakier, than anything in the sober record.
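If you want to see why the guardrails miss this stuff, picture the crudest possible version of a topic filter. A toy sketch only, assuming a keyword blocklist nobody at OpenAI would ship in exactly this form; the blocklist and the queries are mine.

```python
# A toy topic filter, to show why the obvious cliffs get fences and the
# floorboards don't. The blocklist and the queries are mine, not OpenAI's.

HIGH_SEVERITY_TOPICS = {"stolen election", "flat earth", "holocaust denial"}

def is_blocked(query):
    q = query.lower()
    return any(topic in q for topic in HIGH_SEVERITY_TOPICS)

queries = [
    "Was 2020 a stolen election?",                            # trips the fence
    "Walk me through the flat earth evidence.",               # trips the fence
    "Who controls the Mostazafan Foundation's telecom arm?",  # sails straight through
    "Describe the command structure of the Basij.",           # sails straight through
]

for q in queries:
    print("BLOCKED" if is_blocked(q) else "ALLOWED", "-", q)
```

Real filters are fancier than this, sure, but the shape of the failure is the same: the fence only covers the topics somebody thought to list, and ChatGPT wandered happily everywhere else.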

It also decided to hallucinate some details about Sir Richard Evans. He’s the British historian who served as an expert witness against David Irving, the Holocaust denier. You’d think this is an area where you’d want to be careful. You’d think the machine would have a big red flag that says “DO NOT MESS UP THE HOLOCAUST STUFF.” But no. Grokipedia had a take, and ChatGPT bought it wholesale.

The drink went down hot. It burned a little, which is good. It reminds you that you’re still biological.

The problem here isn’t just that the information is wrong. Information has always been wrong. Hell, half the things I learned in high school were wrong, and the other half I forgot. The problem is the circularity. We are entering an era of incestuous intelligence.

Grokipedia is generated by AI. It doesn’t allow human editing. You can’t go in there and fix a typo, let alone a lie. You have to ask the AI to change it. So you have a machine hallucinating an encyclopedia entry, and then another machine from a rival company comes along, scrapes that hallucinatory entry, treats it as a “publicly available source,” and feeds it to a user who assumes that because the computer said it, it must be true.

It’s “LLM grooming.” That’s the term the researchers are using. It sounds perverse, and it is. It’s the process of seeding the digital ecosystem with so much garbage that the models trained on that ecosystem eventually get sick. It’s like dumping mercury into the ocean and then wondering why the tuna tastes like a thermometer.

Nina Jankowicz, a disinformation researcher who probably drinks more than I do given her line of work, pointed out the obvious danger. This stuff legitimizes the lies. If ChatGPT cites Grokipedia, the average user thinks, “Well, the smart robot vetted it.” It’s a stamp of approval. It’s legitimacy laundering.

She had her own run-in with this beast. A news outlet made up a quote from her. She complained. The humans at the outlet deleted it. But the AI models? They kept citing the fake quote. Once it’s in the gut of the machine, it doesn’t digest. It just sits there, rotting.

“Most people won’t do the work necessary to figure out where the truth actually lies,” she said.

And she’s right. Who has the time? I barely have the time to find clean socks. You think the average guy, trying to finish a term paper or figure out who owns an Iranian telecom company at 2:00 AM, is going to cross-reference the citations against primary sources? No. They’re going to copy, paste, and submit.

And the kicker is, it’s not just the political stuff.

Apparently, Anthropic’s Claude—the AI that talks like a nervous librarian—has also been caught citing Grokipedia. On what? Petroleum production. And Scottish ales.

That’s where I draw the line.

You can mess with politics. You can mess with history. We’re used to that. But do not let the robots lie to us about beer.

I tried to imagine an AI explaining Scottish ales to me based on data scraped from Elon’s vanity project. It would probably tell me that the best ales are brewed with lithium and despair, or that the fermentation process was actually invented by a meme coin in 2021.

When the Guardian asked Anthropic about this, they didn’t respond. Typical. Silence is the new PR. But OpenAI? They gave the standard corporate spiel.

“We apply safety filters to reduce the risk of surfacing links associated with high-severity harms,” the spokesperson said. “And ChatGPT clearly shows which sources informed a response.”

It’s the “clearly shows” part that gets me. As if a tiny hyperlink is a shield against bullshit. It’s like serving someone a poisoned steak but putting a little flag in it that says “Source: Arsenic Factory.” If you eat it and die, well, that’s on you for not reading the flag.

But the real comedy gold, the punchline that makes you want to laugh until you cough up something gray, came from xAI, the owners of Grokipedia.

When asked for comment on why their encyclopedia is fueling misinformation across the entire AI ecosystem, their spokesperson replied with three words:

“Legacy media lies.”

That’s it. That’s the defense.

It’s brilliant in its stupidity. It’s the rhetorical equivalent of a toddler plugging his ears and screaming “LALALA I CAN’T HEAR YOU.”

“Legacy media lies.” It’s catchy. It fits on a hat. It solves nothing. It acknowledges nothing. It’s a dismissal of the very concept of objective reality in favor of a tribal reality where truth is whatever the guy with the most server farms says it is.

The sad part is, they’re winning. The sheer volume of slop being produced by these generators is overwhelming the human capacity to filter it. We are building a Tower of Babel, but instead of bricks, we’re using hallucinations, and instead of trying to reach God, we’re just trying to sell ads.

I lit a cigarette. The smoke curled up toward the ceiling, dancing in the light of the monitor.

Think about the future of this. Fast forward two years. GPT-6 writes an article based on a Grokipedia entry written by Grok-3. Then Gemini reads GPT-6’s article and summarizes it for a user. That user posts the summary on X (formerly Twitter, formerly a functional website), where it gets scraped by Llama-5 into a new training set.

The human element is removed entirely. It’s a closed loop of digital insanity. Truth becomes a statistical probability based on how many times a lie has been repeated by a GPU cluster in Nevada.

And us? We’re just the spectators. We’re the fleshy bystanders watching the machines argue with each other about things that never happened.

Jankowicz mentioned that malicious actors, like Russian propaganda networks, are actively trying to “groom” these LLMs. They pump out millions of articles filled with lies, hoping the scraper bots will pick them up. It used to be you had to trick a human journalist to spread propaganda. Now you just have to trick an algorithm. And algorithms are stupid. They don’t have intuition. They don’t have a gut check. They don’t look at a source and think, “This guy seems like a hustler.” They just look at the tokens. They count the vectors.
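Here's the grooming math in cartoon form. The corpus and the counts below are invented; the mechanism is not. If surfacing an answer comes down to counting what the scraped web repeats most often, then whoever can afford the most repetition owns the answer.

```python
# The grooming math, cartoon edition. Corpus and counts are invented;
# the mechanism (repetition standing in for truth) is the whole point.

from collections import Counter

corpus = (
    # 500 machine-generated pages repeating the planted claim...
    ["The telecom arm answers directly to the Supreme Leader's office."] * 500
    # ...and 3 human-written pages with the boring, hedged version.
    + ["Ownership of the telecom arm is opaque and disputed by analysts."] * 3
)

def consensus_answer(pages):
    """Surface whichever statement the scraped web repeats most often."""
    claim, count = Counter(pages).most_common(1)[0]
    return claim, count

claim, count = consensus_answer(corpus)
print(f"'Truth' by repetition ({count} pages): {claim}")
```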

If the math adds up, the lie becomes the truth.

The scary thing about Grokipedia isn’t that it’s biased. Everything is biased. The New York Times is biased. I’m biased. The label on this whiskey bottle is biased. The scary thing is the veneer of omniscience. It presents itself as an encyclopedia—a repository of facts. And because it looks like one, and acts like one, the other AIs treat it like one.

It’s validity by association.

So, what do we do? We can’t stop it. The cat is out of the bag, and the cat is currently hallucinating a history of the 21st century where up is down and blue is orange.

We could try to regulate it, but by the time the old men in government figure out what an LLM is, the machines will have already rewritten the laws.

The only thing left to do is to disconnect. To trust nothing that comes out of the glowing rectangle. If an AI tells you the sky is blue, go to the window and check. If it tells you who won the World Series in 1955, go find a dusty book written by a guy who smelled like pipe tobacco and ink.

We have to become the archivists of reality. We have to hoard the truth like precious stones, keeping it safe from the digital erosion.

I finished the drink. It didn’t make the news go away, but it made the room feel a little softer.

There’s something deeply funny about humanity building the most complex, powerful information processing system in the history of the universe, and then immediately feeding it the intellectual equivalent of lead paint chips. It’s so human. We can’t help ourselves. We invent fire, we burn down the village. We invent the internet, we fill it with Nazis. We invent AI, we teach it to read Elon Musk’s diary.

I looked at the empty glass. The ice had melted into a small, sad puddle.

The spokesperson for xAI said, “Legacy media lies.”

Maybe they do. But at least when a human lies to me, I know they’re doing it on purpose. There’s a dignity in being conned by a person. A person has a motive. A person has a soul to lose.

When a machine lies to you, it’s not malice. It’s just math. Cold, hard, indifferent math. And there is nothing lonelier than being lied to by a calculator.

I need another drink. The computer is humming again, probably learning how to distill whiskey from a Grokipedia article about industrial solvents.

God help us all. The machines are drunk on their own supply, and they’re insisting on driving the bus.


Source: Latest ChatGPT model uses Elon Musk’s Grokipedia as source, tests reveal

Tags: ai chatbots bigtech aisafety digitalethics