Dust Off That Diploma, Shakespeare – The Machines Need Critics (Apparently)

Mar. 28, 2025

Another Friday morning, or maybe it’s afternoon. Hard to tell when the blinds stay shut. Sun’s probably out there somewhere, mocking us all. Got handed this piece of digital paper talking about what to do with the liberal arts kids now that the robots are writing poems and doing taxes. Christ. As if that was the biggest problem we had. People wringing their hands about English majors while the whole damn world feels like it’s circling the drain.

Yeah, I remember the jokes. Philosophy majors flipping burgers, History grads living in mom’s basement. Been hearing it since before computers were anything more than fancy adding machines. Garrison Keillor, bless his folksy heart, probably got a few chuckles out of it on the radio between ads for rhubarb pie. Seemed funny then. Now? Now the joke’s got teeth, and they’re made of silicon. Makes you wonder who’s really gonna end up in the basement – the poet or the programmer whose code just got replaced by smarter code?

And here’s the gut punch: the machines aren’t hauling trash or painting fences like we thought. Nope. They’re coming for the folks in neckties. The accountants, the paralegals, the reporters tapping out stories like this one. Hot knife through butter, the article says. Sounds about right. Smooth, silent, and leaves a mess you don’t see until it’s too late. Meanwhile, good luck finding a robot that can snake a drain without flooding the joint or pour a stiff drink without measuring it like it’s goddamn rocket fuel. Progress, they call it. Looks more like a cosmic prank from where I’m sitting, nursing this lukewarm coffee that wishes it was whiskey. The important jobs, the ones that require actual sweat or dealing with genuine human stupidity face-to-face, those seem safe for now. Lucky us.

They can write poetry now, too. Compose songs. Probably churn out blog posts better than this heap of digital bile you’re scrolling through. Maybe I should just feed the machine a bottle of bourbon and let it take over Wasted Wetware. Save me the trouble and the hangover. But then, could it really capture the specific shade of existential dread that comes from staring at a blinking cursor at 3 AM, the ghosts of deadlines past whispering sweet nothings in your ear? Doubtful. Could it replicate the feeling of cheap whiskey burning its way down, fueling just enough self-loathing to spit out another sentence? Probably not. Some things still require a faulty human nervous system, a liver working overtime, and a deep-seated suspicion of anything that claims to have all the answers.

So, what’s the solution these geniuses cooked up for the poor bastards with degrees in Faulkner or Foucault? Ah, yes. The humanities grads will provide the “human touch.” Like adding a sprig of parsley to a plate of prison food. They’ll “evaluate AI tools,” “assess content,” spot “biases.” This Emily Todd, a dean somewhere – probably got a nice office, cleaner than my apartment anyway – says their training prepares them to “use AI responsibly.” Responsible AI. Sounds like ‘healthy cigarette’ or ‘honest politician’. A nice idea that crumbles under its own bullshit. They’ll grapple with “difficult questions,” look at things from “multiple perspectives.” Sure they will. Just like they do now, over lukewarm PBRs in dive bars, arguing about Derrida while the rent’s overdue. How’s that different, except now they’re supposed to do it for some soulless corporation trying to make its chatbot sound less like a psychopath and more like a friendly neighborhood insurance salesman? “Don’t worry about the soul-crushing reality of automation, sir, just focus on this user-friendly interface!” Need another smoke just thinking about it.

And this other quote, from some instructional design director via Higher Ed Dive: liberal arts majors might have an advantage over STEM programmers. An advantage. Right. Like having a degree in 18th-century French literature gives you an edge when the Terminators finally show up. Maybe they can bore the robots to death reciting Voltaire, or confuse them with existential paradoxes. “Does the machine truly know it exists, or does it merely process?” Watch its circuits fry. It’s a thought. A stupid one, probably brewed up by someone who’s never had to choose between paying the electric bill and buying a bottle of cheap scotch, but a thought nonetheless. Pouring myself something stronger than coffee now. This requires fortification. The real stuff.

Then there’s this Joe Shelly character, VP of libraries at some fancy college, chatting at Davos. Davos. Jesus. Bunch of stuffed shirts congratulating each other on how well they’re managing the apocalypse while the rest of us drown in the floodwaters. Anyway, Shelly’s talking about the “eye-opening pace of change” and “aha moments.” Sounds like what happens when you switch from cheap gin to top-shelf stuff, that sudden clarity before the blackout. His big reveal? Librarians are the key. Librarians! Because they have “1000 years’ worth of experience” understanding “information ecologies.” A thousand years? Were they cataloging scrolls in Alexandria, whispering Dewey Decimal numbers to Archimedes? Look, I got nothing against librarians. Quiet types, usually know where the good stuff is hidden, the forbidden texts, the books that tell the truth. But pinning the future of humanity navigating AI truth-and-fiction on them? Seems like a hell of a burden to put on folks who mostly just want people to use coasters and return things on time without dog-earing the pages. “Interrogate sources,” he says. Sure. We all interrogate the bartender about why the whiskey tastes watered down tonight. Same principle, bigger stakes, less satisfying answers, I guess.

And this Meta guy, LeCun, thinks this whole Gen AI thing has maybe 3 to 5 years before… what? It plateaus? Gets boring? Achieves godhood and turns us all into paperclips or optimally organized flesh-batteries? Who the hell knows. Three to five years. Feels like an eternity and no time at all. Enough time for another couple of hangovers, maybe a bad relationship or two, definitely enough time for the suits to figure out how to monetize the hell out of it even more. Shelly wants to rush back and tell the students the world’s evolving while they’re in college. No shit, Sherlock. What’s it supposed to do, wait politely until they graduate, holding the door open for them into the unemployment line? These kids are gonna be “contributors to what’s being invented right now.” Yeah, contributors of training data, maybe. Cannon fodder for the algorithm. Raw material fed into the grinder. Hope they enjoy the view from inside the sausage casing.

Shelly also notes kids used to major in business for cash and then pick a “passion major” on the side – art, history, ancient Greek pottery, whatever floats your boat after you’ve secured the bag and can afford the therapy. Now, they might not have to? Because AI will be “embedded in any college discipline.” Oh, fantastic. So instead of learning history by reading dusty books and arguing until dawn, you learn how to prompt an AI to tell you about history, probably getting a sanitized, corporate-approved version full of whatever biases the programmers in their air-conditioned cubicles baked in. Progress! It’s like wanting to learn how to paint and being handed a paint-by-numbers kit designed by Microsoft Clippy’s vengeful ghost. Where’s the fun? Where’s the struggle? Where’s the goddamn humanity in that? Learning isn’t supposed to be efficient; it’s supposed to be messy, painful, and occasionally enlightening, like a good drunk or a bad love affair.

The article mentions holding a first edition de Tocqueville and seeing a Hamilton statue go up. Okay. And? What does that have to do with Brenda the English major trying to figure out how to pay rent when ChatGPT writes better ad copy for half the price and doesn’t demand coffee breaks? Sounds like the kind of intellectual flexing that happens at places like Davos, little reminders that they’re cultured, they read books, actual paper things, before hopping on the private jet. Means sweet fuck all to anyone outside that bubble trying to keep their head above water. Pass the bottle. The cheap one this time.

Shelly talks about writing expressing thinking, theorizing about future frameworks. More academic hot air, smells like old paper and tenure. Look, writing is thinking. Or at least, it’s trying to untangle the goddamn knots in your head long enough to make sense for five minutes before the darkness creeps back in. You bleed on the page. You sweat bullets. You stare at the dirty wall and curse the gods, the editors, and your own miserable existence. Can an AI do that? It can mimic it. It can generate text that looks like thought, like feeling, like pain. But is it the same? Is a synthesized blues riff cranked out by an algorithm the same as an old man pouring his broken heart out through a beat-up Gibson in some smoky bar at 2 AM? I don’t think so. Maybe I’m just a romantic fool drowning my sorrows. Or maybe I’ve just had enough booze to see the strings pulling the puppets.

So, what’s the future for the liberal arts degree? Maybe the joke’s on the tech bros this time. Maybe all those philosophy majors arguing about ethics and historians understanding context and literature grads who know a good story from a bad one, who can smell bullshit a mile away because they’ve read centuries of it… maybe they are the ones who can see the grift. Maybe they’re the ones who can call bullshit on the AI hype machine because they’ve spent four years studying human folly, ambition, and self-deception across the ages. It’s a pretty consistent pattern, after all.

It’s not about providing a “human touch” to make the machine palatable for the masses. Forget that corporate nonsense. It’s about recognizing the machine for what it is: a tool, a powerful one, sure, but one built by flawed humans, reflecting our own biases, greed, and spectacular stupidity. Maybe the job of the liberal arts grad isn’t to ‘harness AI responsibly’ for some corporation looking to boost its ESG score. Maybe their job is to be the canary in the coal mine, the court jester pointing out the emperor has no clothes (or maybe just ill-fitting source code), the drunk uncle at the wedding reception telling uncomfortable truths that everyone else is too polite or too scared to mention.

They talk about “interrogating sources.” Good. Let’s start by interrogating the people building this stuff. Let’s interrogate the VCs pouring billions into it, hoping for a jackpot. Let’s interrogate the whole damn premise that efficiency and automation and frictionless everything are always good things. Let’s ask the “difficult questions” they mentioned, but not the ones they want us to ask – the safe ones, the ones that lead back to buying more tech. Let’s ask why. Why are we building this? Who really benefits? Who gets screwed over? A history major might have some thoughts on that, remembering Luddites or enclosure movements. A philosophy major definitely would, questioning the ethics until everyone’s heads hurt. An English major could write a hell of a story about it, maybe even a poem that the AI couldn’t fake.

Maybe the real “advantage” isn’t fitting neatly into the brave new AI-driven world, but understanding how to resist it, critique it, or at least, how to keep your own messy, flawed soul intact while navigating the wreckage. It’s not about becoming better cogs in the machine; it’s about remembering how to be human when the machines are everywhere, humming away, promising paradise while building something that looks suspiciously like a cage.

Or maybe I’m wrong. Maybe I’m just a dinosaur roaring at the meteor. Maybe the robots will write better poetry, the AI therapists will fix our broken heads, and the philosophy bots will solve the meaning of life while serving us perfectly synthesized cocktails that never give you a hangover. Wouldn’t that be a laugh? A hollow one, probably.

Either way, this whole discussion’s given me a thirst that coffee just won’t kill. The world keeps spinning, the algorithms keep churning out mediocrity disguised as magic, and the bottle’s still half-full. Or half-empty. Depends on the light, and how much you’ve already had. Time to see about fixing that imbalance.

Chinaski out. Needs a refill, badly.

Source: What’s Going On With Liberal Arts Majors?

Tags: ai automation jobdisplacement ethics futureofwork