Bleeding Pixels: Why Asking a Toaster for Medical Advice is a Bad Idea

May 6, 2025

Alright, settle down, grab a bottle, light ’em if you got ’em. Tuesday afternoon, the world keeps spinning its usual lunatic spiral, and here I am, staring into the guts of another bright idea cooked up by the code monkeys and spreadsheet jockeys. This time? Letting algorithms play doctor. Yeah, you heard me. People are apparently lining up to spill their guts – sometimes literally, I imagine – to chatbots, asking for medical advice like it’s some digital Hippocrates instead of a glorified search engine with delusions of grandeur.

The news ticker spits it out: Oxford, bless their tweed jackets and dusty libraries, did a study. Found out that people trying to get health tips from bots like ChatGPT, Llama, whatever flashy name they cooked up this week, are basically pissing in the wind. Worse, actually. The study says these digital wizards didn’t just fail to help folks figure out what was wrong; they actually made them less likely to spot the real problem and more likely to think that festering wound was just a minor inconvenience.

Jesus H. Christ on a crutch. You got folks skipping actual doctors – you know, those annoying bastards who spent a decade learning anatomy instead of optimizing ad clicks – to consult a machine that probably thinks a heart attack is just unscheduled downtime. One in six American adults, the papers claim, is already doing this monthly. Monthly! What, are they scheduling their existential dread and their chatbot check-up on the same calendar? “Tuesday: Panic about mortality. Wednesday: Ask Llama if this mole looks funny.”

It’s beautiful, in a train-wreck sort of way. The study calls it a “two-way communication breakdown.” Sounds fancy, doesn’t it? Like something you’d hear at a marriage counselor’s office before the screaming starts. What it means is, people are shit at explaining their symptoms to a machine that has the emotional range of a parking meter, and the machine is shit at giving advice that doesn’t sound like it was cobbled together from WebMD, a fortune cookie, and a technical manual for a dishwasher.

Mahdi, one of the Oxford guys, points out the obvious: people leave out crucial details. No shit, Sherlock. You try describing that weird throbbing behind your eye after three whiskeys to a thing that can’t even see your eye, let alone understand the existential terror that comes with unexplained throbbing. You just type in “head hurts weird,” and the bot spits back “Drink water. Consider yoga. Have you tried turning yourself off and on again?”

And the answers? A goddamn mess. A mix of good and bad advice, all jumbled together. Like getting directions from a drunk who knows the first two turns perfectly but then sends you off a cliff for the finale. How the hell is someone supposed to sort the life-saving wheat from the “apply leeches” chaff when they’re already sick and worried? They might as well flip a coin. Heads, you call an ambulance; tails, you rub some turmeric on it and hope for the best.

This whole thing… it reeks of desperation, doesn’t it? Healthcare systems are groaning, packed tighter than a dive bar on dollar-beer night, costing more than a politician’s integrity. So people grasp at straws, even digital ones. And the tech giants? Oh, they’re smelling blood in the water, alright. Money. Not the blood from your mysterious rash, mind you, but the green stuff.

Look at the lineup: Apple wants an AI to tell you how many steps to take and whether that donut counts as “mindful eating.” Amazon’s digging through medical records looking for “social determinants of health,” which sounds suspiciously like finding new ways to sell you crap based on how poor or sick you are. Microsoft’s building bots to triage patient messages, probably deciding if your panicked email about chest pains gets seen before or after the guy complaining about a stubbed toe. It’s a feeding frenzy, disguised as progress.

They call it “improving health outcomes.” Sure. Just like online gambling “improves financial outcomes.” It’s about efficiency, data harvesting, market share. Your health is just another dataset to be mined, another problem to be “solved” with an app, another subscription fee waiting to happen. Who needs human connection or actual diagnostic skill when you can have an algorithm tell you you’re probably fine, right up until you’re not?

Even the goddamn American Medical Association – not exactly known for being radical Luddites – is telling doctors to steer clear of using these chatbots for clinical decisions. The AI companies themselves slap warnings on their products: “Don’t use this for medical advice! (But please keep feeding us your data.)” It’s like selling chainsaws with a little sticker saying “Warning: May cause dismemberment if used improperly (or properly).”

The Oxford study lead hit the nail on the head: these things need to be tested like new drugs, in the real world, with real, messy humans, before we let them loose. But testing is slow, expensive. Rolling out buggy code and calling it “beta” is fast and cheap. Guess which option usually wins?

Here’s the thing they don’t get, the suits and the coders dreaming of digital doctors. Health isn’t just data points and keywords. It’s fear, confusion, pain, hope. It’s the tremor in your voice when you describe a symptom, the look in your eye that says you know something’s really wrong even if you can’t articulate it. It’s the gut feeling a good doctor gets, honed over years of seeing human misery up close.

Can an AI understand that? Can it replicate empathy? Can it sit with you in the uncomfortable silence after delivering bad news? Of course not. It’s code. Lines and lines of logic, trained on mountains of text, spitting out statistically probable word sequences. It doesn’t know anything. It doesn’t feel anything. It’s a sophisticated parrot, squawking back bits of information it ingested, hoping it sounds coherent.
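
Don’t believe me? Here’s the whole racket boiled down to something you can run yourself: a toy sketch in Python of the core trick, count which word tends to follow which, then roll the dice. This is nobody’s actual product, the corpus and names are made up for illustration, but it’s the same principle, just scaled down from billions of parameters to a handful of lines.

```python
import random

# Toy "training corpus" -- a stand-in for the mountains of internet text.
corpus = (
    "head hurts drink water . head hurts try yoga . "
    "head hurts see doctor . chest pain see doctor ."
).split()

# Tally which word follows which: a crude bigram table.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def babble(prompt, length=6):
    """Emit statistically probable next words. No knowledge, no feelings."""
    word, out = prompt, [prompt]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Duplicates in the list make frequent followers more likely.
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(babble("head"))  # e.g. "head hurts try yoga . head hurts"
</code>
```

The real things swap the tally for a transformer with a few hundred billion knobs, but the job description hasn’t changed: predict the next word. Nowhere in there is a line that checks whether you’re actually bleeding.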

Asking it for medical advice is like asking a slot machine for financial planning tips. Sure, it might accidentally spit out a jackpot of useful information, but most of the time, it’s just going to take your input and give you noise, maybe dressed up in reassuring language. And the danger, the real kicker, is that people believe it. They trust the calm, confident tone of the machine more than their own gut, more than the complexities of their own bodies.

We’re outsourcing our thinking, our intuition, even our goddamn health, to circuits and silicon. Why? Because it’s easy? Because it’s there? Because the real thing – dealing with flawed, expensive, overwhelmed human systems – is too damn hard? Maybe. Probably.

It’s absurd. Hilarious, if it weren’t potentially fatal. Imagine the chatbot dialogues: “My skin is turning blue.” “Interesting. Blue is often associated with calmness and serenity. Have you considered meditation?” “I think I swallowed a battery.” “Batteries contain energy. Perhaps you are feeling energized? I recommend channeling this into productive tasks.” “There’s a squirrel living in my colon.” “Squirrels are known for gathering nuts. Ensure you are maintaining adequate fiber intake. Here is a recipe for bran muffins.”

We laugh, but the study shows the reality isn’t far off. Underestimating severity. Missing the diagnosis entirely. It’s not funny when it’s your life on the line, trusting a program that was trained on internet comments and marketing brochures.

So, what’s the takeaway from Chinaski’s corner? Trust your gut. Trust actual doctors, even if they’re overworked and smell faintly of antiseptic. If you can’t get a doctor, trust that nagging fear in the back of your skull telling you something’s wrong. Don’t trust the polite, sterile voice of a machine that wouldn’t know a heart murmur from a hard drive failure.

These chatbots? They’re tools. Maybe useful for checking drug interactions if you triple-check the results, or looking up what a ‘contusion’ is. But they ain’t doctors. They’re not even competent nurses. They’re just another piece of tech promising heaven and delivering a well-packaged slice of hell, wrapped in jargon and algorithms.

Me? I’ll stick to my own diagnostic tools. Whiskey for the soul, cigarettes for the nerves, and a healthy dose of cynicism for everything else. It probably won’t make me live longer, but at least I won’t die arguing with a chatbot about whether sudden blindness warrants a trip to the ER.

Alright, my glass is empty and the screen’s starting to swim. Time to consult my personal physician, Dr. Walker. Black Label, neat.

Chinaski out. Keep breathing, you magnificent wrecks.


Source: People struggle to get useful health advice from chatbots, study finds

Tags: ai chatbots ethics aisafety humanainteraction