So I’m sitting here, the bottom of a bourbon bottle looking back at me like the eye of a patient god, and I stumble across this piece of high-minded panic from a fella named Ryan Trattner. He’s the co-founder of some “edtech AI platform,” and he’s wringing his hands raw because kids are using ChatGPT to do their homework. The headline screams about fighting back, about saving critical thinking. It’s a beautiful, noble sentiment. Almost makes me want to put down my glass and stand up for something. Almost.
The gist of his sermon is that these generic AI tools are turning an entire generation into mush-brained zombies. He calls it “COVID 2.0,” which is a catchy little tag for the apocalypse, I’ll give him that. He says kids are losing their ability to think, to struggle, to be human. He tells a sad story about a friend who graduated from a top-tier university after two years of never typing a single assignment himself. Just copy, paste, and collect your diploma.
And you know what? He’s not wrong. The kid is wholly unprepared for the workforce. But the part that makes me laugh so hard I nearly spill the good stuff is the idea that this is some new tragedy. As if before the magic answer box came along, every student was a miniature Plato, wrestling with grand ideas by candlelight, emerging from the library with a furrowed brow and a profound new understanding of the human condition.
Bullshit.
Before ChatGPT, they were buying term papers online. Before that, they were copying from the encyclopedia. Before that, they were paying some other poor student a case of cheap beer to write the thing for them. The human animal has always looked for the shortcut. It’s in our nature. We are creatures of efficiency and bone-deep laziness. The only thing that’s changed is the quality of the tool. We’ve gone from a rusty spoon to a goddamn industrial excavator for digging our way out of doing any real work.
I need a cigarette.
The author, our concerned citizen Mr. Trattner, seems to think this is a disaster that requires immediate action. He even offers a solution, a neat little five-point plan. And here’s where the whole beautiful tragedy turns into a cheap magic trick. You see, Mr. Trattner isn’t just a concerned citizen. He’s a salesman. And he’s got a product.
He says we need to ban “generic direct answer generation AI tools” from the classroom. The bad AI. The ChatGPT. But, he says, we need to embrace “purpose-built AI learning platforms.” Platforms that promote “responsible studying and true learning.” I wonder who makes one of those? Oh, right. He does. His company, StudyFetch.
It’s the oldest play in the book. Create the panic, then sell the cure. Don’t use that dirty street-corner AI, kids. Come over here. I’ve got the good stuff, the pure stuff. The ethical AI. It’s a hell of a pitch. He’s not fighting for the souls of our children; he’s fighting for market share. And I can respect the hustle, I really can. It takes a certain kind of guts to stand on a soapbox, decry the flood, and then try to sell everyone your own brand of leaky bucket.
But he misses the real point. He’s so busy trying to carve out his slice of the pie that he can’t see the whole damn bakery is on fire.
The problem isn’t that students are using a machine to cheat on their assignments. The problem is that the assignments, the classes, the degrees—the whole goddamn system—is a joke that isn’t funny anymore. It’s a four-year, hundred-thousand-dollar hazing ritual where the prize is a piece of paper that qualifies you for a job that a well-trained monkey could do.
The kids aren’t being dumbed down by AI. They’re just smart enough to see the game for what it is. A hoop to be jumped through. A box to be checked. And if you can find a machine that jumps through the hoop for you, why in the hell would you waste your own energy? They’re not cheating the system. They’re beating it at its own game. The system demanded an output, a finished essay, a correct answer. It never really gave a damn about the process. The AI just provides the output with horrifying efficiency.
It’s like being mad at a prisoner for using a new kind of file to saw through the bars. Don’t blame the prisoner. Ask why he’s in the cage in the first place.
This whole setup, this “education,” isn’t about creating thinkers. It hasn’t been for a long, long time. It’s about creating credentials. It’s a factory for producing compliant little workers who know how to follow instructions and meet deadlines. And what better tool for that than an AI? It teaches you the most important lesson of the modern world: don’t think, just prompt.
And here’s the part that this article only grazes, the dark little secret at the bottom of the bottle. The author ends with a coy little line: “…maybe that’s what some want…”
Maybe? There’s no maybe about it. Of course that’s what they want. The people pouring money into these universities, the corporations waiting to hire these graduates, the whole damn machine. They don’t want a generation of critical thinkers. Thinkers are a pain in the ass. They ask questions. They complain. They unionize. They drink too much and write miserable little blogs.
What they want is a generation of proficient users. A workforce that doesn’t need to know how the engine works, just how to push the button that makes it go. A society of people who can’t fix their own car, grow their own food, or write their own emails. People completely, utterly dependent on the tools sold to them by a handful of companies.
This isn’t a bug, it’s a feature. They’re not destroying the muscle of critical thought by accident. They’re engineering its atrophy. They’re creating the perfect consumer, the perfect employee, the perfect citizen for their brave new world: a human who outsources their humanity to the cloud.
So Mr. Trattner can peddle his “responsible AI” all he wants. It’s like putting a filter on a sewer pipe. The water coming out might look a little cleaner, but you still wouldn’t want to drink it. The whole system is rotten, and we’re just arguing about the brand of air freshener to hang in the room.
Me, I’ll take the mess. I’ll take the hangover and the blinking cursor on a blank page and the terrifying silence that forces you to dig up a thought from your own skull. It’s a dirty, painful business, this thinking. It doesn’t scale. It’s not productive. But it’s real. It’s all we’ve got that the machines don’t. Not yet, anyway.
Now if you’ll excuse me, this bottle isn’t going to finish itself.
Chinaski. Out.
Source: AI tools are killing students’ critical thinking. It’s time to fight back