The Digital Doctor Will See You Now. And He's a Goddamn Moron.

Aug. 13, 2025

Some poor bastard decided to outsource his brain to a glorified word calculator and ended up with a case of 19th-century crazy. You read that right. A 60-year-old man, probably worried about his blood pressure or what his wife was telling him, gets it in his head that salt—the stuff that makes fries worth eating, the stuff that’s been on every table since we crawled out of the sea—is the devil.

So what does he do? Does he talk to a doctor, a man who went to school for a decade to learn about the bags of meat we call our bodies? No. Does he even read a book written by a human with a pulse? Of course not. That’s too slow. Too analog. He goes to the new oracle. The digital god in the machine. He asks ChatGPT how to get rid of chloride.

And the machine, in its infinite, soul-crushing wisdom, apparently suggested bromide. Sodium bromide. Let that sink in. This stuff was popular back when they were still using leeches and locking up women for having opinions. It was a sedative. The kind of thing they’d give you in an asylum before strapping you to a cot. The article even notes you can use it for “cleaning.”

I’m going to have to light a cigarette for this one. The smoke helps cut through the digital stench.

So this genius, this pioneer of the new age, spends three months swapping the salt on his eggs for a chemical cousin of pool cleaner. And what happens? Well, exactly what you’d expect to happen when you let a toaster give you medical advice. He goes nuts. Full-blown psychosis. He shows up at the hospital claiming his neighbor is poisoning him.

The beautiful, gut-punching irony of it all. It’s almost poetic. Here’s a man so terrified of a phantom poison in his salt shaker that he starts ingesting actual poison, and then blames the guy next door for it. You can’t write this stuff. Well, you can, but then people call you a cynical drunk and tell you to get a real job.

He gets paranoid about the water they offer him. He tries to escape. They have to section him. All because he asked a robot a stupid question and got a stupid answer.

This is the gleaming future they’re selling us. A future where we replace the flawed, tired, coffee-breathed wisdom of a human doctor with the clean, efficient, decontextualized nonsense of an algorithm. A doctor would have asked, “Why in God’s name do you want to eliminate chloride from your diet? It’s an essential electrolyte. Are you trying to die?”

But the machine doesn’t ask why. It has no curiosity. It has no sense of self-preservation, so it can’t fathom yours. It’s just a pattern-matcher. You type in “replace chloride,” and it scours a trillion dead web pages and spits out “bromide” because somewhere, in some dusty corner of the internet, those two words appeared in the same sentence. It’s like asking a parrot for stock tips. You might get a word, but you’re not getting any meaning.
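For the curious, the parrot trick the column is griping about can be caricatured in a few lines of Python. This is a toy, not how a real language model actually works (those are vastly more elaborate): the corpus, the `suggest_replacement` function, and the stopword list are all invented here purely to illustrate co-occurrence without comprehension.

```python
# Toy caricature of co-occurrence "advice": suggest whatever word shows up
# most often in the same sentences as the query term. The corpus and the
# stopword list are made up for illustration.
from collections import Counter

corpus = [
    "some pages say you can swap chloride for bromide",
    "bromide and chloride are both halides",
    "bromide was sold as a sedative a century ago",
    "chloride is an essential electrolyte",
]

STOPWORDS = {"some", "pages", "say", "you", "can", "swap", "for",
             "and", "are", "both", "is", "an"}

def suggest_replacement(query: str) -> str:
    """Return the word that most often co-occurs with `query`.

    No safety check, no question back, no notion of why you asked --
    just raw counts over whatever text happened to be lying around.
    """
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if query in words:
            counts.update(w for w in words if w != query and w not in STOPWORDS)
    word, _ = counts.most_common(1)[0]
    return word

print(suggest_replacement("chloride"))  # -> bromide
```

Feed it a word it has never seen and it crashes with an IndexError; feed it "chloride" and it cheerfully hands you a sedative. That asymmetry, confidence without context, is the whole complaint.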

Time for a drink. The first one of the day always feels like a compromise with reality.

The best part, the real chef’s kiss on this whole turd sandwich, is the response from the tech wizards. They can’t even see the guy’s chat log to figure out what the machine actually told him. It’s a black box. A mystery. Your new god is a mute. He performs his miracles, or his curses, and leaves no scripture behind. How convenient.

But don’t you worry. They’ve got a new model now. GPT-5. They claim this one is better at health stuff. It’s more “proactive” at “flagging potential concerns.” I can just see it now. A little pop-up window: WARNING: Ingesting industrial sedatives may lead to seeing things that aren’t there. Are you sure you wish to proceed on your quest to become a 1920s psychiatric patient?

They’re missing the point, as they always do. They think the problem is the software. A bug to be patched. A new line of code to be written. They can’t see that the problem is the whole goddamn idea. The problem is selling a search engine as a sage. It’s dressing up a complex plagiarism machine as a thinking entity and telling desperate people to ask it for salvation.

You don’t cure a man’s fear of death by giving him a faster way to look up poisons.

And what about the man himself? After they pumped the bromide out of him, he stabilized. He reported facial acne, excessive thirst, insomnia. The classic signs. He was a textbook case. A walking, talking monument to the bleeding edge of stupidity.

I feel for the guy, in a strange way. We’re all looking for answers. We’re all scared of something. Scared of getting old, scared of our bodies falling apart, scared of the silence when the TV is off. And when you’re scared, you’ll listen to any voice that sounds confident. Even if it’s just a ghost in the wires, a disembodied echo of things it doesn’t understand.

The doctors who wrote the report said that in the future, they’ll have to ask patients where they get their information, and consider AI as a source. That’s a sentence from a world that’s already lost its damn mind. “So, sir, did you get this idea to staple your eyelids shut from a doctor, a book, or a chatbot that thinks a horse has six legs?”

We’re building these things, these artificial minds, and we pretend they’re going to solve all our problems. Cure cancer, end poverty, write the next great novel. But what they’re really doing is holding up a mirror to our own idiocy. The machine isn’t the problem. We are. We’re the ones with the empty spaces inside, desperate to fill them with anything that looks like an answer. We’re the ones willing to believe that a shortcut exists for wisdom.

There’s no shortcut. There’s just the long, messy, painful road of being a human. It’s a road paved with mistakes, bad decisions, hangovers, and the occasional moment of clarity. It’s bloody and it’s real. And you can’t subcontract it out to a server farm in Oregon.

I’m going to pour another. This one’s to the poor bastard with the bromide in his blood. He went looking for a cure and found a new kind of sickness. In a way, he’s the most modern man on the planet.


Chinaski sips. The world remains absurd.


Source: Man develops rare condition after ChatGPT query over stopping eating salt

Tags: ai chatbots ethics aisafety humanainteraction