Gen Z vs. The Robot Ghost in the Machine

Dec. 24, 2025

Scott Anthony, a Dartmouth professor and ex-consultant, says he’s shocked by how scared his Gen Z students are of AI.

Not “concerned.” Not “thoughtful.” Not the usual polite academic hand-wringing where everyone pretends the seminar room is a monastery and knowledge is made of linen.

Scared. Full stop.

And honestly? I believe him. Because I’ve watched a whole generation grow up with phones glued to their palms like an extra organ, and yet the second a tool shows up that can imitate their own output, they act like someone let a poltergeist loose in the group chat.

It’s a weird kind of fear, too. Not just the “am I cheating” fear. It’s the “if I use this, I will evaporate as a person” fear. Like the moment they type a prompt, their soul gets vacuum-sealed and shipped to a warehouse labeled AUTOMATION.

Anthony’s explanation is basically: they worry they’ll lose their humanity if they lean too hard into AI.

On one hand, that sounds dramatic. On the other hand, have you ever read a corporate email written by someone who’s been “leaning into templates” for fifteen years? It’s not exactly Hamlet. It’s barely human. It’s a bag of warm phrases: “circle back,” “touch base,” “per my last email,” “moving forward,” “excited to announce,” etc. If humanity can be lost to Outlook alone, then sure, a chatbot could speedrun the job.

But what’s really going on here isn’t that Gen Z is uniquely fragile. It’s that they’re the first cohort forced to stare directly at the knife-edge bargain that everyone else got to ignore: outsource your thinking to a machine and you might end up with a beautiful life and an empty head. Or you might end up with neither.

The new fear isn’t cheating. It’s irrelevance.

The old academic panic was: students will cheat.

The new panic is: students will never become the kind of people who can’t be replaced.

That’s a different monster.

A lot of older folks talk about AI like it’s a power tool: “Just learn it and you’ll be fine.” Like it’s a circular saw. Respect it, keep your fingers out of it, wear goggles, don’t drink three beers first. Basic stuff.

But AI isn’t a circular saw. It’s more like a mirror that lies politely. It reflects you back as you’d look if you were faster, smoother, less awkward, less you. It doesn’t just cut wood. It cuts the part of life where you struggle and sound dumb and write a clumsy paragraph and then fix it and then fix it again and finally something clicks.

And the part nobody wants to say out loud: that struggle is the whole education. Not the PDF at the end. Not the “deliverable.” The struggle.

So when Anthony says a meaningful portion of students are scared, I hear: they know the bargain is rigged. They can see the shortcut. They can also see what it costs. They just don’t have the language for it yet, so they call it “fear of losing humanity.” Which sounds like a philosophy term, but it’s really a street-level instinct: if I let the machine do the reps, my brain turns into decorative foam.

“Your brain on ChatGPT” and the cult of the alarming headline

Then you get the MIT study making the rounds, the one framed as “your brain on ChatGPT” with talk about “cognitive debt.” The media loved that phrase because it makes AI sound like a payday loan for your frontal lobe. Borrow convenience now, pay later with compound stupidity.

I haven’t met a terrifying concept the media couldn’t turn into a juice cleanse.

The study’s gist, as popularly digested, is: use AI and your cognitive activity scales down. You get lazier, duller, less engaged. Which is plausible in the way “if you never walk, your legs get weaker” is plausible. Nobody needs a lab coat to tell them that.

But the pushback from researchers in Australia is also plausible: maybe the “brain-only” group just got the advantage of repeating the task and learning the task, while the AI group didn’t get the same repetition cycle. Familiarity matters. People get better at things because they do them. Shocking.

What does this leave us with? Two camps throwing studies at each other while regular people sit there thinking, “I just want to write an email that doesn’t make me sound like a hostage.”

Here’s my barstool version: AI can make you dumber if you let it. It can also make you sharper if you treat it like a sparring partner instead of a replacement brain.

Calculators didn’t destroy math. But they did destroy a certain kind of mental toughness. Ask someone to do long division at gunpoint and they’ll beg you to just shoot them. Still, we didn’t ban calculators. We changed what we considered valuable: less mechanical grinding, more understanding.

AI is the same shift, only messier, because writing and thinking aren’t just mechanical grinding. They’re identity. People don’t say, “I am a long division.” They say, “I am a writer,” “I am creative,” “I am smart.” Then a machine shows up and fakes it convincingly, and suddenly your identity feels like a Halloween costume you bought at a gas station.

No wonder they’re spooked.

The professor’s solution: show me the guts

Anthony’s response is the only sane one I’ve heard from an educator lately: don’t just grade the glossy output. Make students expose the guts of the work.

That’s the whole ballgame.

Because the real issue with AI in school isn’t that it produces text. The real issue is that it produces plausible text. It produces the kind of text that looks like someone learned something even when they learned nothing. It’s academic cosplay. A student can turn in a clean, confident essay with all the right vocabulary and still not understand the topic enough to explain it to a bored cousin at a family dinner.

And educators are staring at a future where the most polished work might be the least authentic. That’s a nice little twist on the old “neat handwriting means a neat mind” lie.

Anthony says he wants students to do the work and then show him what they did, how they did it, what decisions they made. That’s not anti-AI. That’s anti-faking. If AI is a tool, fine. But the student has to prove they weren’t just a tourist holding the camera.

This is what a lot of workplaces will demand too, once the honeymoon period ends. Right now, companies are drunk on “productivity.” They’re acting like they found a magic faucet that pours out deliverables. But eventually someone will ask the ugly question: who in this room can still think when the faucet breaks?

Blue books, bathroom bans, and the romance of suffering

The article mentions Jure Leskovec at Stanford going back to blue-book exams, and Anthony respects it but doesn’t do it himself. Then there’s the colleague who not only does blue books, but doesn’t allow bathroom breaks during the exam.

That detail is so absurd it loops back around into comedy. Nothing says “education” like forcing a 20-year-old to choose between bladder failure and academic failure. That’s not testing knowledge, that’s training hostages.

But I get the impulse. When tools get too powerful, people retreat to the only environment they can control: a room, paper, pen, no internet, no AI, no nothing. Just you and your own brain, sweating under fluorescent lights like a criminal being interrogated.

It’s the academic version of “unplug the router.” Primitive, effective, and slightly deranged.

Still, there’s a deeper point hiding in the bathroom-ban insanity: we’re trying to preserve a space where effort is visible. Where the struggle is the artifact. AI erases the struggle from the final product. That makes teachers suspicious, and students anxious, because everybody can feel the ground shifting.

“The writing is all good now” and the new curse of competence

Anthony drops a line that’s both funny and terrifying: “The writing is all good now. The bad writing has been taken out.”

That’s true, and it’s not all upside.

Bad writing used to be a signal. It said: here’s a person who doesn’t understand what they’re saying. Or they understand it but can’t express it. Either way, there’s something honest in the mess.

Now the mess can be cleaned instantly. So we’re headed into a world where everyone sounds “professional,” and professionalism becomes meaningless. The baseline rises. The differentiator moves.

If everyone’s writing is smooth, then smooth writing no longer indicates clear thinking. It indicates access to a tool. The curse of competence is that it becomes a commodity.

So what matters next?

Taste. Judgment. The ability to ask a good question. The ability to know when the AI is confidently wrong. The ability to take a bland, safe answer and say, “No, that’s not it,” and push deeper.

In other words: the human parts. The annoying parts. The parts you can’t automate without automating the whole person.

Which circles back to Gen Z’s fear. They’re not wrong to sense that the old ways of proving yourself are collapsing. They just don’t know what replaces them.

Seinfeld, McKinsey, and the hard way

Anthony tells a story about Jerry Seinfeld being asked if he ever wanted McKinsey to help his process, and Seinfeld responding: “Who’s McKinsey?” Then: “Are they funny?”

It’s a great line because it punctures the central delusion of consultants and tool-sellers everywhere: that process is the point.

Seinfeld’s answer is the real point: the hard way is the right way. Not because suffering is holy, but because the hard way forces you to develop the muscles the shortcut would skip.

AI is a shortcut machine. It’s a forklift in the gym, like Anthony says. You can lift the weight, sure. But you’re not getting stronger. You’re just moving objects around.

So yes: use AI. But make it earn its place. Use it to generate options, then critique them. Use it to challenge your outline, then build a better one. Use it to surface counterarguments, then decide which ones matter. Don’t use it like a pacifier you jam into your mouth whenever your brain starts crying.

I say this as someone who loves convenience the way some people love religion. Convenience is seductive. Convenience is also how you end up with a life where you can’t do anything without an app holding your hand.

Julia Child and the romance of the disaster meal

Then Anthony brings in Julia Child, who failed her first exam at Le Cordon Bleu and spent a decade wrestling her cookbook into existence. She cooked brains in red wine for her husband and everybody agreed it was a disaster.

That’s the part I like. Not the brains. The disaster.

Because it’s the opposite of the AI fantasy. The AI fantasy is: type prompt, receive perfection, become genius, skip the humiliation.

Julia Child’s real lesson is: humiliation is the tuition. You pay it whether you want to or not. You either pay it early, while you’re learning, or you pay it later, when you’re in charge of something important and the stakes are real and you have no idea what you’re doing because you never built the muscle.

Gen Z’s fear, in that sense, is a healthy instinct. They’re scared of becoming the kind of person who can produce an answer without understanding. They’re scared of becoming a manager of words instead of a maker of meaning. They’re scared of being a high-gloss shell.

And the sick joke is: plenty of adults already are high-gloss shells, and they’re doing great. They’re running meetings. They’re shipping roadmaps. They’re “driving alignment.” The world rewards performance. It always has.

So when a student looks at AI and thinks, “This will make me fake,” what they’re also thinking is, “Will the world reward me for being fake?”

And the honest answer is: sometimes, yes. Frequently, yes. For long stretches of time, yes.

But not always. Because reality still shows up. Customers still get angry. Markets still shift. Systems still break. Someone still has to think.

That’s the bet: be the person who can think.

The weird comfort in the mess

Anthony says disruption is messy while you’re inside it, and only looks clear later. That’s true. Living through change is like trying to read a map in a windstorm while someone keeps turning the lights on and off.

So what do you tell a scared student?

You tell them this: AI won’t steal your humanity. You’ll hand it over voluntarily if you decide comfort is more important than competence. The machine can’t take what you refuse to surrender.

Use the tool, don’t become the tool.

Learn the hard way often enough that, when you take the shortcut, you know exactly what you’re skipping and what it costs.

And if you’re going to be scared of something, don’t be scared of the chatbot. Be scared of the part of you that wants to stop trying. Be scared of that soft little voice that says, “Good enough, ship it,” when you know you haven’t understood a damn thing.

That voice is the real automation.

Later, when the screens blur and the room gets quiet, there’s a particular kind of relief in doing something the slow way, the human way, the way that doesn’t scale. Pour a drink, write the ugly first draft, argue with yourself, look stupid, fix it, and earn the right to sound smart.


Source: Dartmouth professor says he's surprised just how scared his Gen Z students are of AI | Fortune

Tags: ai chatbots humanaiinteraction futureofwork digitalethics