The Healthcare AI Paradox: Why We’re Stuck in Pilot Purgatory (And How Agentic AI Might Finally Break Us Free)
*Reflections on McKinsey’s latest report through the lens of three decades in clinical informatics*
I just finished reading McKinsey’s new report on AI agents, and honestly? It gave me flashbacks.
Not the good kind.
I’ve been doing clinical informatics since before we even called it that—back when I was trying to convince Stanford doctors in 1992 that maybe, just maybe, we should store patient data in computers instead of paper charts. (Revolutionary stuff, I know.)
So when McKinsey talks about a “gen AI paradox” where 78% of companies report using AI but see zero meaningful impact on their bottom line… well, let’s just say I’ve seen this movie before. Several times.
We Keep Making the Same Damn Mistake
In the late ‘90s, everyone was going to be saved by electronic medical records. Then came the 2009 HITECH Act with billions in incentives. Fast forward to today: every hospital has an EHR, clinicians are burning out faster than ever, and most of us agree we’ve made documentation worse, not better.
The pattern is always the same: **We slap new technology onto completely broken processes and then act shocked when magic doesn’t happen.**
Here’s what really gets me fired up about McKinsey’s report—they nail this with their “horizontal vs vertical” distinction. Healthcare is absolutely drowning in horizontal AI solutions right now.
Walk into any hospital and you’ll find:
- AI scribes that transcribe everything (including the irrelevant small talk)
- Chatbots that answer the same five questions patients always ask
- “Ambient listening” tools that somehow make documentation take longer
- AI assistants that help write notes no one reads
These tools feel productive in the moment. Doctors love them because they seem helpful. But as McKinsey points out, they deliver “diffuse, hard-to-measure gains.”
Translation: they make us feel better about our broken workflows without actually fixing anything.
Meanwhile, the AI applications that could actually transform how we deliver care? Those are stuck in what I call “pilot purgatory”—that special circle of hell where promising technology goes to die a slow death by committee.
What Makes Agentic AI Different (And Terrifying)
Here’s where McKinsey’s report gets interesting. “Agentic AI” isn’t just a fancier chatbot. We’re talking about AI that can plan multiple steps ahead, remember context across weeks or months, integrate data from dozens of systems, and—here’s the kicker—**act autonomously to complete complex workflows.**
In healthcare, this could be genuinely transformative:
Imagine AI agents that don’t just help you write discharge summaries, but actually coordinate the entire discharge process. Or agents that monitor every ICU patient simultaneously for early signs of sepsis and automatically alert the rapid response team. Or agents that manage medication reconciliation across transitions of care without a human ever touching the process.
This isn’t science fiction. The technical capabilities exist today.
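To make that concrete, here's a minimal sketch, in Python, of what "plan ahead, remember context, integrate data, act autonomously" could look like for the ICU sepsis example above. Everything in it is a hypothetical placeholder: `fetch_vitals`, `page_rapid_response`, and the crude thresholds stand in for real EHR integration and a validated early-warning model, not any vendor's actual product.

```python
# Minimal sketch of an agentic monitoring loop: integrate data, keep memory
# across cycles, and act (escalate) without waiting for a human prompt.
# All functions and thresholds are hypothetical placeholders, not a validated
# sepsis model or any specific product.
import time
from dataclasses import dataclass, field

@dataclass
class PatientContext:
    patient_id: str
    history: list = field(default_factory=list)  # memory retained across cycles
    escalated: bool = False

def fetch_vitals(patient_id: str) -> dict:
    # Stand-in: in practice, pull the latest vitals/labs from the EHR or device feed.
    return {"resp_rate": 24, "sbp": 95, "temp_c": 38.6}

def looks_septic(vitals: dict) -> bool:
    # Crude illustrative thresholds only; a real agent would use a validated model.
    return vitals["resp_rate"] >= 22 and vitals["sbp"] <= 100 and vitals["temp_c"] >= 38.3

def page_rapid_response(patient_id: str, history: list) -> None:
    # Stand-in for the autonomous action: paging the rapid response team.
    print(f"ALERT: possible sepsis, patient {patient_id}, latest vitals {history[-1]}")

def monitor(patients: list[PatientContext], interval_s: int = 300) -> None:
    while True:
        for ctx in patients:
            vitals = fetch_vitals(ctx.patient_id)
            ctx.history.append(vitals)            # integrate data, keep context over time
            if not ctx.escalated and looks_septic(vitals):
                page_rapid_response(ctx.patient_id, ctx.history)
                ctx.escalated = True              # act once, then hand off to humans
        time.sleep(interval_s)
```

The interesting part isn't the loop itself. It's that the agent owns the whole workflow end to end, which is exactly what makes it powerful and exactly what makes clinicians nervous.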
But here’s the thing that keeps me up at night: implementing this stuff requires healthcare to do something we’re historically terrible at—**completely redesigning how clinical work gets done.**
And boy, do we hate changing how work gets done.
The Three Ways Healthcare Will Probably Screw This Up
After leading clinical informatics at Stanford, Partners, and Vanderbilt, I’ve developed a pretty good sense of how healthcare adopts (or fails to adopt) new technology. With agentic AI, I see three ways we’re likely to shoot ourselves in the foot:
1. We’ll Get Lost in Regulatory Paralysis
McKinsey talks about needing “governed autonomy,” which sounds great until you add healthcare’s regulatory environment to the mix.
When an AI agent autonomously adjusts a patient’s insulin protocol at 3 AM, who’s liable if something goes wrong? How do we maintain the physician-patient relationship when the AI is making care decisions? What happens when the state medical board decides autonomous AI constitutes “practicing medicine without a license”?
I’ve spent years on committees trying to figure out liability for basic clinical decision support. Autonomous agents? We’re going to tie ourselves in regulatory knots for decades.
2. We’ll Create the Clinical Decision-Making Paradox
Here’s something I learned after building dozens of clinical decision support systems: doctors want AI that makes them smarter, not AI that replaces their judgment.
But agentic AI, by definition, acts independently. The most powerful applications—the ones that could truly transform care—are exactly the ones that require the most clinical oversight.
It’s like asking someone if they want a really smart resident who might occasionally make decisions without asking. Most attendings would say “hell no.”
3. We’ll Pretend Our Data Isn’t a Disaster
McKinsey correctly identifies data quality as critical, but they clearly haven’t spent much time in hospital IT departments.
Our clinical data is a mess. Always has been. We’ve got critical patient information spread across EHRs that can barely talk to each other, legacy systems from the Clinton administration, and documentation practices that prioritize billing over clinical care.
I’ve spent recent years working on HL7 FHIR standards trying to solve interoperability, but we’re still years away from the seamless data integration that agentic AI requires.
Deploying autonomous AI agents on healthcare’s current data infrastructure is like trying to perform surgery with a butter knife.
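For readers who haven't touched FHIR: what the standard buys us is a uniform, RESTful way to ask for clinical data. Here's a hedged sketch, assuming a hypothetical FHIR server at `https://ehr.example.org/fhir` and using standard Observation search parameters:

```python
# Sketch of a standards-based data pull over FHIR's RESTful search API.
# The base URL is hypothetical; the resource type, search parameters, and
# Bundle structure follow the FHIR specification.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def latest_heart_rates(patient_id: str, count: int = 5) -> list[float]:
    """Fetch the most recent heart-rate Observations (LOINC 8867-4) for a patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": "http://loinc.org|8867-4",  # LOINC code for heart rate
            "_sort": "-date",
            "_count": count,
        },
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [
        entry["resource"]["valueQuantity"]["value"]
        for entry in bundle.get("entry", [])
        if "valueQuantity" in entry["resource"]
    ]
```

The catch, of course, is that a clean query is only as good as the source systems behind it, which is exactly where autonomous agents will stumble today.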
But Wait—There’s Actually Hope
Despite my cynicism (earned through decades of watching healthcare botch technology implementations), I’m cautiously optimistic about agentic AI.
Here’s why: McKinsey’s prescription to move “from scattered initiatives to strategic programs” is exactly what healthcare needs to hear.
Every health system I know is running 20+ AI pilots simultaneously with zero coherent strategy. It’s pilot proliferation gone mad. McKinsey’s call to “conclude the experimentation phase” and pick 2-3 high-impact workflows for full transformation? That’s exactly right.
But here’s what healthcare leaders actually need to do:
**Stop the Pilot Madness**: I was just talking to a CIO who told me they have 47 active AI pilots. Forty-seven! You know what they have in production? Three. And none of them are truly transformative.
**Embrace the Scary Part**: The report emphasizes reimagining entire workflows, not just optimizing existing tasks. In healthcare, this means questioning sacred cows like the 15-minute primary care visit or the way we structure nursing shifts. It means admitting that maybe the way we’ve always done things isn’t actually the best way.
**Build for Agents, Not Humans**: We need to start designing healthcare IT systems for AI agents first, humans second. This feels backwards and uncomfortable, but it’s where we’re headed.
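What does "building for agents" even look like? One plausible pattern, purely illustrative and not a reference to any existing product, is exposing clinical workflows as machine-readable tool definitions with explicit schemas and guardrails, rather than burying them behind screens designed for human clicks:

```python
# Illustrative only: a clinical workflow exposed as a machine-readable "tool"
# an agent could call, with an explicit schema and governed-autonomy guardrails.
# Names and fields are hypothetical.
SCHEDULE_FOLLOWUP_TOOL = {
    "name": "schedule_followup_visit",
    "description": "Book a post-discharge follow-up appointment for a patient.",
    "parameters": {
        "type": "object",
        "properties": {
            "patient_id": {"type": "string"},
            "specialty": {"type": "string", "enum": ["primary_care", "cardiology", "endocrinology"]},
            "within_days": {"type": "integer", "minimum": 1, "maximum": 30},
        },
        "required": ["patient_id", "specialty", "within_days"],
    },
    # Guardrails: the autonomy is governed, not unlimited.
    "constraints": {
        "requires_human_cosign": False,
        "audit_log": True,
        "max_actions_per_patient_per_day": 1,
    },
}
```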
The Moderna Moment
McKinsey mentions in passing that Moderna merged its HR and IT leadership because they realized “AI is not just a technical tool but a workforce-shaping force.”
This stopped me in my tracks.
Most healthcare organizations are still treating AI like it’s just another software deployment. But Moderna gets it—this is about fundamentally rethinking how work gets done, not just adding new tools to existing processes.
The health systems that figure this out first won’t just have better AI—they’ll have completely reimagined care delivery models.
My Honest Assessment
After 30 years of watching healthcare adopt (and often bungle) new technology, here’s what I really think:
Agentic AI is the first AI approach sophisticated enough to handle healthcare’s genuinely complex workflows. The technology is there. The potential is enormous.
We can instrument the enterprise, exploit unified data platforms, and imbue care delivery systems with operational intelligence, so that we can truly model the production function for healthcare across the board, from micro to macro. And then… enable the Learning Health System, abstracting local and system-level learnings and comparing them across geographic regions, countries, and the world.
But success won’t come from better algorithms or faster computers. It’ll come from healthcare leaders who are brave enough to admit that our current workflows are often nonsensical, and that maybe—just maybe—we should design new ones around human-AI collaboration instead of trying to preserve the status quo.
The gen AI paradox in healthcare isn’t really about technology. It’s about transformation. And transformation is hard, especially in an industry that’s spent decades perfecting the art of resistance to change.
But here’s the thing: the organizations that figure this out first won’t just gain a competitive advantage. They’ll literally redefine what healthcare looks like.
And honestly? It’s about time.
*What do you think? Are we actually ready for AI agents in healthcare, or are we still too busy protecting our turf to let them help? Drop me a comment—I’d love to hear war stories from fellow clinicians and health IT folks who are wrestling with this stuff.*
-----
*Dr. Blackford Middleton has been implementing clinical information systems since the early 1990s and has the scars to prove it. He’s currently semi-retired and consulting to health tech startups, which means he gets to watch the next generation make all the same mistakes he did. Views expressed are definitely his own.*