Tomorrow's tech news, today's hangover.


Mar. 1, 2025

A.I. DOCS: BETTER THAN HUMANS OR JUST LESS HUNGOVER?



My head’s throbbing like a bass drum at a death metal concert. I made the mistake of mixing bourbon with tequila last night at O’Malley’s while arguing with some Stanford grad about whether his startup was going to “revolutionize pet wellness” or just burn through daddy’s venture capital.

The whiskey’s sitting on my desk, but I’m not touching it. Not yet. It’s 7:30 AM on a Saturday, and I still have some standards. Give me another hour.

So while I nurse this hangover with black coffee that tastes like it was filtered through an ashtray, let’s talk about the latest bullshit from the medical tech front: A.I. doctors are apparently outperforming human ones. Fan-fucking-tastic.

According to some research by Dr. Eric Topol and Dr. Pranav Rajpurkar, A.I. systems working independently hit 92% diagnostic accuracy, while physicians with A.I. assistance only managed 76% accuracy—barely better than the 74% they achieved flying solo.

The real kicker? When the A.I. told doctors something they didn’t agree with, the docs just ignored it. Typical human ego bullshit. We’d rather be wrong on our own terms than right with someone else’s help—even if that “someone” is just a fancy calculator with a medical degree.

I’ve spent enough time in emergency rooms—mostly for reasons I won’t get into here—to know that doctors aren’t gods. They’re just people who survived medical school and function on less sleep than seems humanly possible. They make mistakes. They miss things. They get tunnel vision.

But here’s what keeps me up at night (besides the whiskey and regrettable life choices): we’re entering this weird twilight zone where A.I. is better than humans at certain tasks, but not all of them. It’s like when you’ve had exactly five drinks—you’re functioning better in some ways, worse in others, but you’re too buzzed to know which is which.

This article mentions full self-driving cars as another example. The tech is “almost ready,” which means it’s at that dangerous point where it’s good enough to make you complacent but not good enough to save your ass when shit goes sideways. I’ve dated people like that.

The beauty of A.I. is that it “thinks” differently than we do. It doesn’t get tired. It doesn’t have a fight with its spouse before work. It doesn’t have a hangover that feels like someone’s using its frontal lobe as a bongo drum. But it also makes weird, alien mistakes that no human would ever make—like correctly diagnosing cancer but completely hallucinating the anatomy of the human body in its explanation.

There’s something truly fucked up about the fact that desperate patients are turning to ChatGPT for diagnoses after seeing 17 human doctors. And even more fucked up? Sometimes the chatbot is right. I’ve got a friend who spent three years getting brushed off by specialists, only to have a free A.I. tool nail his rare condition in seconds. He showed the results to his doctor, who basically said, “Huh, let’s run those tests.” Turned out the silicon oracle was right.

But here’s what the medical establishment doesn’t want to admit: doctors hate being corrected. By anyone. Show me a surgeon who gracefully accepts criticism from a nurse, and I’ll show you my collection of sobriety chips. (Spoiler: I don’t have any sobriety chips.)

The article talks about how physicians need to maintain “appropriate humility.” As someone who’s been humbled by every bottle, bad decision, and blue screen of death I’ve ever encountered, I can tell you that humility doesn’t come naturally to people who spent a decade in school being told they’re the best and brightest.

What’s happening in medicine is the same thing that happened to chess grandmasters, taxi drivers, and will eventually happen to writers like me (though I’m counting on my unique brand of functional alcoholism to keep me employed a little longer). The machines are getting better, and we’re faced with a choice: adapt or die. Or in the case of medicine, adapt or kill people through stubborn pride.

The truth is, most doctors should probably listen to the A.I. more often than not. But they won’t. Because humans are stubborn, prideful creatures who’d rather be wrong on their own terms than right on someone else’s. It’s the same reason I still insist I can drink anyone under the table despite evidence to the contrary splattered across my bathroom floor this morning.

The whole “A.I. plus humans is better than either alone” concept sounds great in theory. In practice, it’s like putting a Ferrari engine in a Pinto and expecting the driver not to kill themselves. The limiting factor isn’t the technology—it’s the meat sack operating it.

And that’s what keeps me up at night (along with the nicotine and caffeine coursing through my veins). We’re creating tools that could save lives, reduce suffering, and make healthcare better for everyone—but we might be too goddamn human to use them properly.

So what’s the solution? Fuck if I know. Maybe we need A.I. systems that package their suggestions in ways that stroke doctor egos. “Hey doc, I know you totally would have caught this eventually, but just in case you want to speed things up, have you considered looking at the pancreas?”

Or maybe we need to start training medical students differently. Less “you’re the captain of the ship” and more “you’re the curator of a team that includes silicon members.”

Or maybe—and this is my preferred solution—we just need to accept that A.I. will eventually handle most medical diagnoses, and human doctors will be there to deliver bad news, hold hands, and do the things that require a pulse and the ability to feel empathy (or at least fake it convincingly).

Whatever happens, we’re in for a wild ride. The article ends by saying, “We are in the middle of a medical A.I. revolution. I don’t know how things will eventually unfold. But at least it won’t be boring.”

On that point, I agree. It won’t be boring. It’ll be terrifying, exhilarating, and probably dangerous—like most worthwhile things in life.

And now, if you’ll excuse me, it’s 8:30 AM, which means I’ve waited a respectable amount of time before pouring my first whiskey of the day. Hair of the dog that bit me and all that. Maybe I’ll ask ChatGPT if that’s medically sound advice, but I probably won’t listen to its answer.

Just like the doctors.

—Chinaski (Sent from my liver, which is filing for divorce)


Source: When Medical A.I. Performs Better Than Human Doctors
