AI Gets Brain Rot: Turns Out Feeding Garbage to Machines Makes Them Stupid Too

Oct. 24, 2025

So apparently we’ve managed to give artificial intelligence the same mental deterioration we’ve inflicted on ourselves after years of scrolling through rage-bait and cat memes. Congratulations, humanity. We’re not just destroying our own cognitive function anymore—we’re teaching machines how to be just as dumb.

Some researchers from a few universities in Texas and Indiana decided to ask the obvious question nobody wanted to answer: what happens when you train AI on the same cesspool of viral garbage that’s turned our collective attention span into that of a caffeinated goldfish? The answer, as published in their study, is about what you’d expect if you had any sense left. The AI gets stupider. Much stupider.

They call it “LLM Brain Rot,” which is both hilarious and deeply depressing. These scientists actually sat down and fed large language models a steady diet of clickbait threads, recycled meme commentary, outrage-farming posts, and algorithmically generated listicles—basically everything that makes up about 90% of the internet these days. Then they watched what happened.

What happened is the AI equivalent of watching someone’s IQ drop in real-time. The models started having lapses in reasoning, spitting out factual inconsistencies, and losing the ability to maintain logical coherence. In other words, they started sounding like every comment section on every article ever written about politics, vaccines, or whether pineapple belongs on pizza.

But here’s the truly beautiful part—and I mean beautiful in the way a train wreck is beautiful when you’re watching it from a safe distance: even after they tried to “rehabilitate” these models with cleaner data, the damage stuck. The researchers called it “cognitive scarring.” The AI never fully recovered. It’s like how your liver never quite forgives you for that three-day bender in Vegas, except this is happening to machines that were supposed to be our intellectual superiors.

One of the researchers, Junyuan Hong, along with his colleague Atlas Wang, put it perfectly: “When exposed to junk text, models don’t just sound worse, they begin to think worse.” They don’t just mimic the surface-level stupidity—they internalize it. They learn to prioritize attention over understanding, which is basically the entire business model of every social media platform currently destroying civilization.

The junk data they used looked perfectly fine on the surface. It was fluent, grammatically correct, structurally sound. Traditional data quality classifiers gave it a thumbs up. But underneath that veneer of competence was pure cognitive poison. It’s the digital equivalent of those fancy cocktails that taste like fruit juice but are 80% vodka—goes down smooth, destroys you from the inside.
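
To make that concrete, here is a minimal sketch of the kind of surface-level check that waves this stuff through. It is my illustration, not the study’s pipeline: the `looks_fluent` function, its thresholds, and the sample text are all made up. The point is that fluency proxies like sentence length and capitalization say nothing about whether the content rots a model’s reasoning.

```python
# Illustrative sketch only: a naive surface-level "quality" filter built on
# fluency proxies. The function name, thresholds, and sample text are invented
# for this example; real data-quality classifiers are more sophisticated, but
# the article's point is that fluent junk can still pass them.
import re


def looks_fluent(text: str) -> bool:
    """Return True if the text clears naive surface checks for 'quality'."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return False
    avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
    starts_capitalized = all(s[0].isupper() for s in sentences)
    # "Fluent" here means: sentences of sane length that start with a capital letter.
    return 3 <= avg_words <= 40 and starts_capitalized


clickbait = ("You won't believe what this CEO said next. "
             "Experts are FURIOUS. Number 7 will shock you.")

print(looks_fluent(clickbait))  # True: grammatical, well formed, still junk
```

Run it and the engagement bait sails through, which is exactly the gap the researchers are pointing at: the poison is in what the text trains a model to value, not in how it is spelled.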

And here’s what really gets me: this is exactly what’s happening to us. We’re consuming the same garbage. We think we’re staying informed by reading viral threads and trending posts, but we’re really just training ourselves to chase dopamine hits instead of understanding. We’re teaching our brains to think in hot takes and outrage cycles instead of nuance and complexity. The machines are just following our example, and doing it faster.

The study focused on models like Meta’s Llama3 and Alibaba’s Qwen, feeding them content from that platform formerly known as Twitter. You know, the place where everyone goes to have their worst opinions validated and their best instincts obliterated. The perfect laboratory for studying cognitive decay.

What the researchers found is that this kind of content teaches models to “mimic attention rather than understanding.” Think about that for a second. We’ve created an entire online ecosystem optimized for grabbing attention, and when we train AI on it, we’re essentially teaching them that attention is more important than truth, coherence, or logical consistency. We’re building machines in our own damaged image.

Now, some experts are pointing out that this isn’t entirely surprising. Ilia Shumailov, who used to work at Google DeepMind, noted that these results align with what we already know about model poisoning—when someone deliberately manipulates training data to screw with AI systems. But here’s the thing: this isn’t deliberate poisoning. This is just the natural state of the internet. We’re poisoning these models by accident, simply by being ourselves.

Shumailov also pointed out that most internet data is pretty low quality, yet we still manage to build capable models. Which is true, but it’s also missing the point. Just because you can build a functional alcoholic doesn’t mean you should be proud of the accomplishment.

The AI companies, naturally, claim they’re already being careful about data quality. Gideon Futerman from the Center for AI Safety says leading corporations spend lots of effort trying to improve their training data. Which is corporate-speak for “we’re aware of the problem and we’re definitely going to fix it, probably, eventually, maybe, no promises.”

Futerman’s more worried about deliberate data poisoning than accidental brain rot from low-quality content. Which is like being more worried about someone spiking your drink than about the fact that you’re drinking straight from the sewer. Both are problems, buddy.

The researchers call proper data selection “cognitive hygiene,” which is a fancy way of saying “don’t train your AI on garbage.” But here’s the recursive nightmare keeping me up at night: as more content online becomes AI-generated, and as that AI-generated content gets trained on previous AI-generated content that was already trained on human-generated garbage, we’re creating a feedback loop of stupidity that would make Ouroboros jealous. We’re teaching machines to eat their own tail, except the tail is made of clickbait and the snake is getting dumber with every bite.
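
If you want to see the shape of that loop without any actual machine learning, here is a toy simulation. Everything in it is made up for illustration: `parrot` stands in for a model that regurgitates its training text, `next_corpus` mixes the old corpus with the model’s output at an assumed 50/50 ratio, and the “hot take” strings stand in for scraped web text. The only thing it demonstrates is the arithmetic of the feedback loop.

```python
# Toy simulation of the feedback loop described above; not the study's method.
# A "model" that parrots its corpus feeds the next generation's training data,
# and recycled output steadily crowds out the original human-written text.
import random

random.seed(0)


def parrot(corpus: list[str]) -> list[str]:
    """Stand-in for a model: regurgitate the corpus with a telltale prefix."""
    return ["recycled: " + doc for doc in corpus]


def next_corpus(previous: list[str], synthetic: list[str], ai_share: float) -> list[str]:
    """Build the next training corpus as a blend of the old corpus and model output."""
    n_ai = int(len(previous) * ai_share)
    keep = random.sample(previous, len(previous) - n_ai)
    return keep + random.sample(synthetic, n_ai)


corpus = [f"hot take #{i}" for i in range(1000)]  # stand-in for scraped human text

for generation in range(1, 6):
    synthetic = parrot(corpus)                    # the model "writes the internet"
    corpus = next_corpus(corpus, synthetic, ai_share=0.5)
    recycled = sum(doc.startswith("recycled") for doc in corpus) / len(corpus)
    print(f"generation {generation}: {recycled:.0%} of the training text is recycled output")
```

Even with half of every new corpus drawn from the previous pool, the recycled share climbs toward 100% within a few generations. Swap the parrot for a real model and the prefix for subtler distortions, and you have the tail-eating snake from the paragraph above.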

The researchers warn that future AI models risk “inheriting distortions in reasoning and representation embedded within that data.” Translation: if we keep feeding AI the same brain-rotting slop we feed ourselves, we’re going to end up with artificial intelligence that’s just as confused, contradictory, and confidently wrong as we are.

And maybe that’s the real lesson here. We built these models to be better than us, smarter than us, more capable than us. But we’re training them on data generated by creatures who can’t stop watching videos of people falling down stairs and getting into fights over parking spaces. We’re trying to create superintelligence using the intellectual equivalent of gas station sushi.

The persistence of the damage is what really drives it home. Even after extensive rehabilitation with clean data, the models never fully recovered. That “cognitive scarring” is permanent, or near enough to it. Which means every piece of junk data you feed an AI system is like a little scar on its ability to think clearly. Death by a thousand clickbait cuts.

So what’s the solution? The researchers say we need “deeper, more systematic study” of data integrity, which is academic code for “we’re screwed but we’re going to keep researching it anyway because what else are we going to do?”

Meanwhile, the internet keeps churning out its daily quota of outrage porn, engagement bait, and algorithmically optimized nonsense. And we keep consuming it. And the AI keeps learning from it. And we all keep getting dumber together, humans and machines alike, in perfect harmony.

It’s almost poetic, in a deeply pathetic way. We created artificial intelligence to transcend human limitations, and instead we’re teaching it to inherit all our worst qualities. We’re not building gods. We’re building broken mirrors that reflect our own cognitive decline back at us, with slightly better grammar.

The machines are getting brain rot because we gave it to them. They’re learning to think poorly because we taught them how. And the real horror is that we can’t even claim this was malicious or intentional. This is just what happens when you build intelligence on a foundation of garbage.

Sleep tight.


Source: Junky Online Content Gives AI Models Brain Rot Too

Tags: ai machinelearning dataprivacy ethics aisafety