You walk into a doctor’s office these days—doesn’t matter if it’s London, Leeds, or Los Angeles—and you expect a certain ritual. The cold stethoscope, the judgmental look when you lie about how many units of alcohol you consume per week, the illegible scribble on a prescription pad. You expect a human being, flawed and tired, to look at your meat-sack body and tell you why it’s failing.
But apparently, that’s old-fashioned. That’s nostalgic thinking, like missing rotary phones or smoking in hospitals.
According to a new report from the Nuffield Trust, the person in the white coat isn’t looking at you. They’re looking at a prompt window. About 30% of General Practitioners in the UK—family doctors, the front line of defense against death and decay—are now using AI tools like ChatGPT to figure out what the hell is wrong with you.
I read this report twice. The first time, I thought I was hallucinating. The second time, I realized the machine was the one doing the hallucinating.
I poured myself a tall glass of something brown and cheap to help digest the information. The bottle was dusty, but the liquid inside did the trick. It burns the cynicism right out of your throat for a few seconds.
Here is the situation: nearly three out of ten doctors are feeding patient appointment summaries, diagnostic questions, and administrative nightmares into the maw of the Great Algorithm. They are doing this because they are drowning. They are doing this because the system is broken. And they are doing it in what the report describes as a regulatory “Wild West.”
The “Wild West.” I like that. It conjures up images of a doctor wearing a holster, but instead of a six-shooter, he’s got a subscription to the paid tier of a chatbot, and he’s shooting from the hip.
“Doc, I’ve got this pain in my chest.” “Hold on, partner. Let me ask the oracle.”
The kicker is that nobody really knows if these tools are safe. There’s no sheriff in town. The NHS is running around trying to put out fires with a water pistol, while individual doctors are deciding that the risk of being sued for malpractice is less terrifying than the risk of burying themselves under a mountain of paperwork.
It’s completely unregulated. It’s a free-for-all. And honestly, I can’t blame them. If I had to listen to people complain about their hemorrhoids and their existential dread for twelve hours a day, I’d probably ask a robot for help too. I’d probably ask the robot if it could just take the appointment for me while I went to the pub.
But here is where it gets interesting, in that dark, ironic way that makes you want to light a cigarette even though you promised yourself you’d quit before noon.
The government—the suits, the ministers, the people who have never had to wait three weeks for an appointment to get a boil lanced—they love this idea. They are salivating over it. They look at AI and they see “efficiency.” They see a way to cram more sick people into the sausage grinder of the healthcare system. They think, “Aha! If the computer writes the notes, the doctor can see five more patients an hour!”
It’s the classic management delusion. They think technology is a gas pedal.
But the survey found something else. Something human. Something that almost warms the cockles of my shriveled heart.
The doctors aren’t using the saved time to see more patients. They aren’t using it to boost the metrics or hit the KPIs or whatever acronym the bureaucrats are worshiping this week.
They are using the time to rest.
They are using it to recover from the stress of their day. They are using it to go home on time for once. They are using it to prevent burnout.
That is beautiful. It really is. The machine was built to maximize productivity, to turn the human being into a more efficient cog, and the humans simply said, “No. I’m going to use this machine to buy myself twenty minutes of silence.”
It’s a small rebellion. It’s the worker using the factory’s tools to fix his own shoes. The government wants an assembly line of health; the doctors just want to stop their hands from shaking.
I take another drink to salute them. Good for you, docs. Let the chatbot write the referral letter. You sit there and stare out the window. You’ve earned it.
Of course, there is the small matter of the patients. That’s us. The wetware. The broken machinery that needs fixing.
The report mentions that this rapid adoption is terrifying because these Large Language Models are famously confident liars. They are the drunk guy at the end of the bar who knows everything about everything. Ask him about quantum physics, he’s got an opinion. Ask him about international maritime law, he’ll lecture you for an hour. Ask him about that rash on your leg, and he’ll tell you it’s leprosy.
He sounds convincing. He speaks in complete sentences. He has a deep, authoritative voice. But he’s just guessing.
Now imagine that guy is helping your doctor decide if you have a viral infection or if your appendix is about to burst.
“Professional liability and medico-legal issues,” the report says. “Risks of clinical errors.” “Patient privacy and data security.”
Translated from the legalese, that means: “We are feeding your personal medical history into a server farm owned by a tech giant, and if the computer tells the doctor to cut off the wrong leg, nobody knows who to sue.”
It’s a game of Russian Roulette, but the gun is digital and the bullet is a hallucinated medical fact.
And it’s not just the doctors. The patients are doing it too. Healthwatch England—a name that sounds like a neighborhood watch group for hypochondriacs—says about one in ten people are using AI to diagnose themselves because they can’t get a GP appointment.
One poor bastard got advice from an AI that confused shingles with Lyme disease.
Now, I’m no medical professional. My medical knowledge is limited to knowing that whiskey is a disinfectant and sleep is a cure for sadness. But even I know that shingles and Lyme disease are different things. One is a resurgence of chickenpox that feels like being whipped with a wire cable; the other is from a tick bite that messes up your joints and your brain.
Confusing the two is a problem. But the AI doesn’t know what a tick is. It doesn’t know what pain feels like. It just knows that in its vast database of text, the words “rash” and “fatigue” appear near both diseases, so it flips a digital coin.
And we trust it. We trust it because the screen glows and the font is clean and sans-serif. We trust it because we are desperate.
The divide is happening exactly where you’d expect it to happen. The study shows that male doctors are using it more than female doctors. Thirty-three percent of men versus twenty-five percent of women. I don’t want to generalize, but there is something distinctly masculine about looking for a shortcut that involves a gadget. We love the idea that we can engineer our way out of listening to someone talk.
And, naturally, the usage is higher in well-off areas than in poor areas. The rich get the bleeding-edge experimental tech that might accidentally kill them, and the poor get… well, they get nothing. Or maybe the rich get the time-saving magic, and the poor get the overworked doctor who is too tired to log in. It’s hard to tell which is the losing hand in this scenario.
Dr. Becks Fisher from the Nuffield Trust calls it a “huge chasm between policy ambitions and the current disorganised reality.”
That’s a polite way of saying the government is living in a fantasy land while the actual world is held together by duct tape and prayers.
The government has launched a commission, of course. They always launch a commission. “A commission to ensure that AI is used in a safe, effective and properly regulated way.”
By the time that commission reports back, the AI will have advanced three generations, the doctors will have forgotten how to write a summary without it, and I will have finished this bottle.
Regulation always chases technology like a fat cop chasing a Ferrari. It’s not going to catch up. The genie is out of the bottle, and the genie is currently drafting a prescription for Xanax.
The reality is, we are moving toward a world where the interaction is machine-to-machine. You, the patient, will ask your personal AI what’s wrong with you. It will hallucinate a diagnosis. You will send that diagnosis to your doctor’s AI. The doctor’s AI will hallucinate a treatment plan. The pharmacy’s AI will dispense the wrong pills. You will take them, and your smart toilet will analyze the result and sell the data to an insurance company.
And somewhere in the middle of all that data flying around, the actual human beings—the doctor and the patient—will be completely bypassed. We’ll just be the host organisms for the data stream.
But let’s go back to that one bright spot. The doctors using the time to rest.
There is something profound in that. We talk about AI taking our jobs, or AI destroying the world, or AI curing cancer. But right now, in the grim reality of 2025, the best use case we have found for this multi-billion-dollar technology is to let a tired GP in Manchester close his eyes for ten minutes and eat a sandwich without choking on stress.
Maybe that’s the victory. Maybe that’s the singularity we deserve.
I look at the screen. The cursor blinks. I could ask the AI to finish this blog post for me. It would be cleaner. It would be more polite. It wouldn’t use the word “bastard.” It would probably end with a hopeful call to action about the future of digital health.
But it wouldn’t taste like cheap whiskey, and it wouldn’t know the specific crushing weight of a Thursday afternoon when the rain is hitting the window and the world seems to be sliding sideways.
We are handing the keys to the kingdom over to the software engineers because we are too exhausted to drive the car ourselves. The doctors are tired. The patients are sick. The government is useless. And the chatbot is always awake, always ready, always polite, and occasionally totally insane.
“From taboo to tool,” the headline says.
From human to synthetic. From care to processing.
I finish the glass. The ice has melted, watering down the last of the burn.
It’s a brave new world. If you get sick, bring a charger. And if your doctor seems to be paying more attention to his screen than your symptoms, don’t worry. He’s just asking the ghost in the machine if you’re going to live.
If the internet goes down, we’re all screwed. But until then, at least the doc gets a lunch break.
Cheers to that.
Source: ‘From taboo to tool’: 30% of GPs in UK use AI tools in patient consultations, study finds