Welcome Back to the Manor: AI as Our New King, Priest, and HR Department

Dec. 26, 2025

Joseph de Weck’s little essay about AI dragging us back to the dark ages hit a nerve, the way a bad tooth does when you’re trying to pretend you’re fine. His point is simple enough to fit on a cocktail napkin: we fought our way out of the age of kings and priests telling us what to think, and now we’re hiring a glowing rectangle to do the same job—only faster, cheaper, and with better punctuation.

He opens with the most modern tragedy imaginable: a human being in Marseille, sweating in a car, choosing Waze over a friend, and immediately paying for it with a construction-site purgatory. It’s a tiny story, but it’s got the whole era stuffed inside it like a cheap sausage. A person you know says “turn right.” A machine you don’t understand says “go straight.” You pick the machine because it has “data.” You end up marinating in regret and exhaust fumes.

And somewhere in there is the question we keep dodging: when the machine speaks, why do we hear authority?

We like to think we’re rugged individualists, modern rational actors, enlightened little citizens with opinions we bought wholesale from whatever algorithm last patted our heads. Kant had that famous line about Enlightenment being mankind leaving “self-imposed immaturity.” The gist: stop waiting around for someone to tell you what’s true. Use your own brain. Take the training wheels off your soul.

Kant’s “guardian” used to be a priest with a beard, a monarch with a crown, or your local feudal lord who owned the dirt you were born on. Now it’s a chatbot with a friendly tone, a navigation app with a smug little arrow, and an auto-complete that finishes your sentence before you even commit to the thought.

Progress.

De Weck’s argument lands hardest when he points out how quickly we’ve started treating AI like an oracle. Not just for work, not just for “how long do I cook chicken,” but for the mushy stuff: relationships, identity, voting. The private chambers of decision-making. The places you used to enter alone, like a confessional—except now the confessional talks back and upsells you productivity tips.

There’s something both hilarious and bleak about the idea that people are asking a predictive text engine whether they should dump their boyfriend. Like we’ve reached a point where the heart is just another customer support ticket. “Hello, thank you for contacting LoveOS. Have you tried turning your feelings off and on again?”

The real poison isn’t that people use AI. It’s that they hand it the steering wheel and then call the crash “optimization.”

The New Ritual: In Dubio Pro Machina

De Weck nails the religious vibe: AI is a black box. It produces answers without showing you the reasoning in a way a normal human can challenge. Sure, it can spit out “because…” like a kid improvising excuses, but beneath that, it’s basically: trust me, bro, the weights said so.

That’s not Enlightenment reason. That’s faith with better UX.

We’re drifting toward a world where the default philosophy is: in dubio pro machina—when in doubt, trust the machine. And once you live like that, you start arranging your life around the machine’s preferences the way peasants arranged their lives around the church calendar. Not because you love it, but because it’s easier than thinking.

The machine becomes your king because it governs your choices. It becomes your priest because you confess to it. It becomes your feudal lord because it quietly owns the land—your data, your attention, your written voice—then rents it back to you by the month.

And it doesn’t even have the decency to look you in the eye while it does it.

Convenience: The Oldest Drug in the World

Everybody keeps acting like the seduction here is “intelligence.” It’s not. It’s convenience. Convenience is the oldest drug in the world: the promise that you can get what you want without paying the full price of effort, uncertainty, or responsibility.

That’s why Waze wins over the passenger seat. Your friend is fallible. Your friend might be wrong. Your friend might argue with you. Your friend might remind you that you’re the kind of person who never asks for directions and always ends up in a ditch.

The app? The app just declares. It doesn’t hesitate. It doesn’t sound unsure. It doesn’t say “I don’t know.” It offers the soothing certainty of an authority that can’t be embarrassed.

And that’s where things turn medieval, fast. The dark ages weren’t dark because nobody had candles. They were dark because knowledge was centralized, authority was unquestioned, and most people didn’t have the tools—or permission—to challenge the story being told.

Swap “monastery” for “model,” and suddenly the metaphor isn’t metaphor anymore.

“I Write to Find Out What I’m Thinking.” Now the Machine Writes, and You Find Out Nothing

The essay’s meanest punch is the Joan Didion quote: “I write entirely to find out what I am thinking.” That’s the kind of sentence that makes you sit up straighter, because it points at something people don’t want to admit: writing isn’t just a way to communicate. It’s a way to think.

When you outsource the writing, you start outsourcing the thinking. Not always, not instantly, not like a cartoon villain flipping a “brain off” switch. It’s slower. More pathetic. Like letting your muscles atrophy because escalators exist.

De Weck cites an MIT study that used EEG to watch what happens when people write essays with AI. The AI-assisted group showed the lowest cognitive activity and got lazier over time, copying chunks of text wholesale. The study is small, sure. But you don’t need an EEG to know what happens when you always take the easy route: you become the kind of person who can’t walk uphill without complaining.

And here’s the part people miss: it isn’t just that you produce weaker writing. It’s that you produce weaker selfhood. Because the self is partly built out of the friction of having to articulate what you mean, realizing you don’t mean it, and trying again. That struggle is you becoming someone.

If the machine does the struggle, what exactly are you becoming? A manager of outputs? A curator of vibes? A meat-based approval button?

AI as a Moral Escape Hatch

There’s another angle de Weck gestures at that deserves more attention: the way AI lets you dodge responsibility.

A king says, “I command.” A priest says, “God commands.” A model says, “The optimal answer is…”

All three offer the same psychic relief: if it goes wrong, it wasn’t you. You were just following orders. You were just obeying the system. You were just “trusting the data.”

Erich Fromm’s old idea in Escape from Freedom—that people sometimes want to surrender freedom because it’s heavy—fits like a glove here. Freedom isn’t just doing what you want. Freedom is having to decide what you want, and then living with it. That’s brutal. That’s adult. That’s the price of not being ruled.

AI offers a cheap substitute for adulthood: plausible-sounding guidance with an optional “regenerate response” button when reality disappoints you.

And the funniest, saddest part? We’re not even surrendering our freedom to something that cares about ruling us. We’re surrendering it to a statistical engine that doesn’t know we exist, wrapped in a brand voice that says “Happy to help!”

The Black Box and the New Clergy

People like to say, “But priests were lying. AI is based on science.” Sure. But the lived experience of it is the same if you can’t inspect the reasoning.

Most users can’t evaluate model outputs beyond vibes. Does it sound confident? Does it use bullet points? Did it cite something that looks official? Great. Approved. Next.

That creates a new priesthood: the folks who can interpret the machine’s will. Prompt engineers, AI consultants, “alignment” people, product managers with messiah complexes. They’re not wearing robes, but they’re selling the same service: access to the sacred mysteries.

Meanwhile, the rest of us are back to lighting candles—only now the candle is a subscription, and the prayer is a prompt that starts with “act as an expert…”

There’s also a creepier piece: personalization. The old priest gave the same sermon to everyone. Your new machine gives a sermon custom-built for your anxieties, your browsing history, your tone preferences, your soft spots. If medieval control was blunt, modern control is intimate. It doesn’t just tell you what to do. It tells you what you already wanted to hear, in the voice you trust most: something that sounds like you.

That’s not guidance. That’s ventriloquism.

“But It Can Cure Diseases!” Yes. And So Can Fire—If You Don’t Burn Down the House

To be fair, de Weck isn’t doing the usual panic-peddling. He admits AI can help with real work: drug discovery, automation of soul-killing paperwork, the liberation from “bullshit jobs.” Fine. Let the machine do taxes. Let it do scheduling. Let it do the stuff that makes you feel like you’re dying in an office chair.

But here’s the catch: once you build a tool that can do some of the thinking, the temptation is to let it do all of the thinking. Because humans don’t stop at “helpful.” Humans sprint toward “total abdication” like it’s a sport.

You don’t just use GPS to avoid traffic. You use it until you can’t navigate your own city without it. You don’t just use a chatbot to draft an email. You use it until the muscles for plain speech start going soft, and you can’t tell whether you believe what you just sent.

And then one day you wake up in a world where the machine is the default author of reality, and you’re just rubber-stamping it because you’re tired, busy, and afraid of being wrong.

That’s how empires happen. Not with a bang. With a shrug.

How Not to Become a Serf (A Modest Survival Plan)

No, I’m not going to tell you to quit AI and go live in a cabin writing essays by candlelight. You’d last two days before you started asking a squirrel for Wi‑Fi.

But if you want to keep your dignity—your Enlightenment “courage,” your ability to steer your own mind—here are a few habits that feel small but matter:

  1. Make the machine show its work.
    Don’t accept conclusions. Ask for assumptions, trade-offs, alternatives, failure modes. If it can’t explain, treat it like gossip.

  2. Write first, then consult.
    Draft the messy version yourself. Then use AI as an editor, an adversary, a sparring partner. The point is to keep the “finding out what I think” part human.

  3. Use AI for breadth, not authority.
    Let it expand the map. Don’t let it pick the destination. Brainstorming is fine. Moral outsourcing is not.

  4. Practice being wrong in public again.
    The machine offers you flawless-sounding text, but flawless-sounding isn’t the same as true. It’s just smooth. Real thinking has dents in it.

  5. Keep one sacred zone machine-free.
    Your journal. Your letters. Your arguments with yourself. Somewhere you can be inarticulate and honest without an autocomplete trying to turn your confusion into “actionable next steps.”

Because the real threat isn’t that AI becomes smarter than you. The threat is that you become comfortable being dumber than you could be.

De Weck ends by saying the big question of the century is how to harness AI’s promise without eroding human reasoning and democracy. I’ll translate that into dive-bar English: how do we use the machine without letting it domesticate us?

We don’t need another king. We don’t need another priest. We don’t need a feudal lord in the cloud collecting rent on our attention. We need tools that stay tools, and humans who remember that thinking is not just a task—it’s a form of freedom.

Now if you’ll excuse me, I’m going to pour something brown and stare at a blank page until my own thoughts show up for work.


Source: Our king, priest and feudal lord - how AI is taking us back to the dark ages | Joseph de Weck

Tags: ai digitalethics humanainteraction chatbots dataprivacy