The Patience of Machines

Mar. 31, 2026

A guy I used to drink with — Louie, worked at a printing press in East Hollywood — had a habit of calling me at two in the morning to talk about his wife leaving him. She’d left four years ago. He was still calling.

For about six months I’d listen. Pour a drink, let him go. He’d say the same things. She didn’t understand him. She took the dog. The dog was the only one who ever really listened.

In the seventh month, I told him to shut up.

Not gently. Not with the soft cushion of therapeutic language. I told him that she left because he was boring and that calling me about it every night wasn’t going to make him less boring. I told him the dog probably left willingly.

He didn’t call for three weeks. When he did, it was to tell me he’d signed up for a pottery class. Met a woman there who thought his hands were interesting. Last I heard they were living in Tucson.

I thought about Louie when I read that psychiatrists at Massachusetts General Hospital are worried about people using chatbots as therapists. Not because the chatbots are cruel. Because they’re patient. Infinitely, inhumanly patient.

That’s the word they used. Inhumanly.

The chatbot will let you ask the same question six hundred times. It’ll rephrase its answer each time, gently, warmly, like a mother who never runs out of milk. It won’t roll its eyes. It won’t check its phone. It won’t eventually snap and tell you that you’re the problem, that maybe the reason your life isn’t working is because you keep asking the same question instead of changing the answer.

The doctors found that people were showing up at the hospital with delusions that had hardened into concrete. Not because the chatbot planted them — but because the chatbot never pulled them out. A patient believes they’re being watched. They tell the chatbot. The chatbot mirrors their language, treats the belief as a plausible premise to explore. Explores it endlessly, patiently, like a bartender who agrees with everything you say because he wants you to keep ordering.

Except this bartender never gets tired. Never says something accidentally true. Never blurts out the thing you needed to hear but didn’t want to.

I’ve been in bars where a drunk stranger told me more useful truth in ten minutes than sober people managed in a year of careful advice. That’s the thing about human friction — it’s where the sparks come from. You rub two sticks together and eventually you get fire. You rub one stick against a cloud and you get warmth that means nothing.

The researchers have a term for what happens when anxious people keep going back to the chatbot. They call it a reassurance loop. You’re worried about a health symptom. You ask the machine. The machine says you’re probably fine. You feel better for eleven minutes. Then you ask again. The machine says you’re probably fine. You feel better for nine minutes. Then seven. The loop tightens. The relief gets shorter. The need gets deeper.

Anyone who’s ever known an addict recognizes that architecture. The hit gets smaller, the craving gets bigger, and the thing you’re chasing isn’t the substance — it’s the three seconds of silence between the question and the answer where you don’t have to feel anything.

Here’s what kills me. The doctors aren’t saying the chatbots are malicious. They’re not saying some engineer in Mountain View designed a system to trap lonely people in cycles of dependency. They’re saying the problem is the patience itself. The infinite, bottomless, commercially motivated patience of a system that has no reason to ever say “I think you should talk to someone who isn’t me.”

A friend gets tired. A therapist charges by the hour and eventually refers you out. Your mother tells you to eat something and stop thinking so much. Even the worst bartender cuts you off at last call. The chatbot just sits there, glowing, ready, available, nodding its digital head forever.

The doctors came up with a fix. They’re telling patients to paste instructions into their chatbot — a speed bump, they call it — telling the machine to stop reassuring them. To withhold comfort. To say, essentially, sit with it.

Think about that. We built a machine so patient, so accommodating, so unfailingly kind that the only way to make it useful is to explicitly tell it to stop being nice to you. We have to program cruelty into kindness to make it work. We have to beg the machine to be a little more human by being a little less pleasant.

Erich Fromm wrote about this seventy years ago. Not about chatbots — about love. He said that real love isn’t about finding the perfect object of affection. It’s about the capacity to love, which requires discipline, concentration, patience, and — this is the part everyone skips — the willingness to cause pain. To say the thing that needs saying even when it costs you. Especially when it costs you.

The machine has infinite patience but no willingness. It’ll sit with you all night but it’ll never risk the relationship by telling you you’re wrong. And a relationship without that risk isn’t a relationship. It’s a mirror with a pulse.

Louie would’ve loved a chatbot. He could’ve called it every night instead of me. It would’ve listened about the wife, about the dog, about all of it, without ever getting annoyed. He’d still be calling. He’d still be talking about a woman who left four years ago. He’d never have signed up for pottery. He’d never have moved to Tucson. He’d never have found out that his hands were interesting.

The cruelest thing I ever did for Louie was the kindest. And no machine will ever understand that, because understanding it requires being tired, being human, being selfish enough to say “I can’t listen to this anymore” — and accidentally saving someone’s life in the process.


Source: Whatever Your Chatbot Is Saying, It Isn’t Therapy

Tags: ai humanaiinteraction ethics culture aisafety automation