When I first encountered the news that ChatGPT outperformed doctors in diagnosis, my initial reaction wasn’t surprise - it was amusement at our collective inability to understand what’s actually happening. We’re still stuck in a framework where we think of AI as either a godlike entity that will enslave humanity or a humble digital intern fetching our cognitive coffee.
The reality is far more interesting, and slightly terrifying: we’re watching the collision of two fundamentally different types of information processing systems. Human doctors process information through narrative structures, built up through years of experience and emotional engagement. They construct stories about patients, diseases, and treatments. ChatGPT, on the other hand, is essentially a pattern-matching engine operating across a vast landscape of medical knowledge without any need for narrative coherence.
And here’s where it gets fascinating: doctors often stuck to their initial diagnoses even when ChatGPT suggested better ones. This isn’t just stubbornness - it’s a feature of human cognition. Our brains are prediction machines that hate to be wrong. We’re running on wetware that evolved to keep us alive on the savannah, not to optimize for diagnostic accuracy in complex medical cases.
The real revelation isn’t that ChatGPT can outperform doctors - it’s that we’re discovering how much of medical diagnosis is actually pattern matching rather than the mystical “clinical intuition” we’ve romanticized. It’s like finding out that your favorite artisanal cheese is actually processed in a factory - technically still cheese, but not quite the artisanal experience you imagined.
But the truly delicious irony comes from the MIT materials science experiment. When AI took over the “creative” part of the work, leaving researchers to evaluate its suggestions, their job satisfaction plummeted. It turns out that highly educated professionals don’t enjoy being turned into human verification systems for machine-generated ideas. Who could possibly have predicted that reducing brilliant scientists to checkbox-tickers might affect their job satisfaction?
This reveals a deeper truth about professional work: we’re not just performing tasks, we’re enacting identities. A doctor isn’t just a medical diagnosis machine - they’re participating in a complex social role that includes narrative creation, emotional support, and professional authority. When we reduce their role to pure pattern matching, we’re not just changing their job - we’re disrupting their entire professional identity.
The computational architecture of human cognition isn’t built for this kind of collaboration. We’re trying to interface a narrative-driven, emotionally grounded, identity-based processing system (the human mind) with a pure pattern-matching engine (the AI). It’s like trying to plug a USB cable into your breakfast cereal - there’s a fundamental incompatibility in the architecture.
And here’s the punchline: the solution isn’t better AI, it’s better interfaces between human and machine cognition. We need to design systems that respect the narrative nature of human thought while leveraging the pattern-matching capabilities of AI. Instead, we’re mostly just throwing doctors and AI together like awkward teenagers at a school dance, hoping they’ll figure out how to work together.
The future of medical work isn’t about replacing doctors or merely augmenting them - it’s about creating new cognitive architectures that combine human narrative intelligence with machine pattern matching. But first, we need to admit that much of what we consider “professional expertise” is actually just really sophisticated pattern matching wrapped in a warm blanket of narrative meaning.
In other words, we’re not just witnessing a technological revolution - we’re watching the uncomfortable process of making implicit knowledge explicit, and discovering that much of what we do is more computational than we’d like to admit. The real existential crisis isn’t that AI will replace us - it’s that AI is forcing us to confront just how mechanical many of our “human” capabilities really are.
And that, dear reader, is a diagnosis that might be harder to swallow than any pill.
Source: “If AI can provide a better diagnosis than a doctor, what’s the prognosis for medics?” by John Naughton