The Machines Think We're Hacks (And Maybe They're Right)

Mar. 27, 2025

Alright, Thursday afternoon. Sun’s trying to stab its way through the blinds, same way this headache’s trying to split my skull. Perfect time to pour a little something brown into a glass – strictly medicinal, you understand – and contemplate the latest absurdity coughed up by the digital dream machine.

Got this piece slid across my virtual desk, something about AI now being so goddamn smart, it thinks good writing must be churned out by a machine. Yeah, you heard that right. Some poor bastard writing for Forbes ran his own articles through a few of these AI judges – Gemini, ChatGPT, Claude, the usual suspects lined up for inspection. And guess what? Gemini, mostly, took one look at his well-structured, data-backed, clearly argued prose and said, “Nah, too clean. Too… competent. Must be AI.”

Jesus H. Christ on a cracker. It’s like walking into a bar, ordering a neat whiskey, and the bartender slides you a lukewarm glass of prune juice because anything with actual bite must be synthetic.

The article kicks off comparing this to My Fair Lady. Professor Higgins, Eliza Doolittle, all that jazz. Trying to pass off a flower girl as a duchess based on how she talks. Cute. Real cute. Except now, the tables are turned. The AI, playing the snooty linguistics expert, listens to actual human writing – the kind people get paid for, the kind that follows the rules they teach you in whatever writing course doesn’t involve three-day benders – and declares it the fraud. Because it’s too polished.

“This writing is too good,” the silicon snob sniffs. “Clearly indicates that it is AI…”

I had to light a cigarette after reading that. Stared at the smoke curling up towards the nicotine-stained ceiling. So, the benchmarks for human mediocrity are now so low in the AI’s estimation that anything resembling decent composition gets flagged as artificial? We’ve aspired to structure, clarity, solid sourcing – all the stuff they beat into you – only to have the machines point and say, “Robot!”

Gemini even listed the tells: structured arguments, data, sources, practical solutions, a call to action. You know, the kind of stuff that makes corporate memos and TED Talks so soul-crushingly predictable. And the real gut punch? It also mentioned the writing “lacks a distinct personality or voice… which is often characteristic of AI-generated text.” So, you write like a competent professional, maybe sand off the rough edges, and boom – you sound like a machine trying not to sound like a machine. It’s a goddamn Catch-22 served up by a glorified autocomplete.

I can just picture these algorithms, coded by kids barely old enough to buy their own booze, judging prose. What do they know about voice? About bleeding onto the page? About the desperation that fuels actual writing? Their world is clean rooms and stock options, not dive bars and eviction notices. Their “personality” is a set of parameters tweaked by some engineer who thinks Kerouac is a type of software.

ChatGPT was apparently confused, hedging its bets like a nervous bookie. “Could be human, could be AI, could be a cyborg centaur tapping it out with his hooves, who the hell knows?” At least that feels vaguely honest in its digital bewilderment.

Then there’s Claude. Old Claude saw through the bullshit, sometimes. It picked up on “personal voice,” “individual experience,” “nuance.” Maybe Claude’s spent some time in the digital gutter, learned a thing or two. Or maybe its programmers just gave it a slightly better bullshit detector. Good for Claude. Have a virtual drink on me, pal.

So, the author’s takeaways?

  1. AI assumes well-written means AI-written. The sheer arrogance. It’s like a kid with a new chemistry set assuming Marie Curie must have used the same plastic beaker. They’ve learned to mimic the surface, the structure, and now they think that’s all there is. They mistake the paint-by-numbers outline for the actual goddamn painting. It doesn’t get the messy, contradictory, gloriously flawed human part. And maybe that’s the point. Our flaws are our fingerprints. The grammatical error, the awkward phrasing, the tangent about a woman who broke your heart over cheap gin – that’s the proof of life, not some perfectly balanced sentence structure.
  2. AI writing is getting good. Sure, if “good” means efficient and clear, like an instruction manual for assembling IKEA furniture. Useful? Maybe. For churning out bland corporate communications, summarizing reports nobody wants to read anyway. But don’t delegate the thinking, the author warns. No shit. That’s like telling someone not to delegate breathing. The thinking, the feeling, the living – that’s the whole goddamn point. Relying on AI for that is outsourcing your soul.
  3. Collaboration is key. The “art” of working with AI. Sounds thrilling. Like collaborating with a photocopier. Sure, it can duplicate things, maybe even resize them, but it ain’t gonna help you write the next great American novel, unless the novel is about a photocopier. Iteration, using several tools… yeah, more hoops to jump through, more ways to get lost in the process instead of just saying what you need to say.

And the big prediction? This flood of easy-to-make, derivative AI content is going to drown us all. We already wade knee-deep through useless information spewed out by every idiot with a keyboard. Now, the idiots have automated assistants to help them spew faster and wider. It’s gonna be a tsunami of mediocrity.

The author thinks this will drive people back to curated, paid content. Back to gatekeepers. Maybe. Or maybe we’ll just get better at spotting the synthetic crap. Maybe the stuff that bleeds, the stuff that stinks of authenticity – booze, sweat, regret, and all – will stand out even more. Maybe “bad” writing, in the eyes of the machines, will become the new badge of honor. Proof you’re still human.

I don’t know. This glass is empty. The smoke’s stinging my eyes. The whole thing feels like another layer of digital noise piled onto the existing heap. They build these things to mimic us, then get surprised when the mimicry gets good enough to fool… other mimics? It’s snakes eating their own tails in an infinite digital loop.

Me? I write like me. If some AI thinks it sounds too human, too messy, too much like a guy who’s seen the inside of too many bottles and the wrong side of too many mornings, well… good. Let the machines churn out their perfect, sterile prose. I’ll be over here, with my whiskey and my smokes, trying to hammer out a few honest words before the shakes get too bad.

If Gemini scanned my stuff, it’d probably crash. Too many contradictions, not enough bullet points. Probably flag it as “written by a disgruntled human under the influence.”

And I wouldn’t have it any other way.

Time for a refill. Keep your circuits clean, kids.


Chinaski


Source: Too Good To Be Human? AI’s Surprising Bias Against Quality Writing

Tags: ai chatbots humanaiinteraction automationbias futureofwork