Last week, Mitchell Katz, MD, CEO of NYC Health + Hospitals, stood before a Crain’s New York Business audience and made a declaration disguised as a dilemma.
He wants to replace “a great deal of radiologists” with artificial intelligence (AI). Today. The only thing standing between him and that vision, he told us, is the regulatory landscape (read: FDA).
Let that sit for a moment. Because I have a follow-up question, and I think you already know where it leads.
The Regulatory Double Standard Nobody Wants to Say Out Loud
Here is the uncomfortable truth of clinical AI deployment. When you want an algorithm to read a mammogram, you enter a gauntlet: FDA’s Software as a Medical Device (SaMD) framework, 510(k) clearance or De Novo classification, predicate device analysis, clinical validation studies, post-market surveillance requirements, and crushing liability exposure if the model drifts.
That is not bureaucratic obstruction; it is patient protection, and it is slow by design. Now ask yourself: What regulatory framework governs replacing a hospital CEO with an AI system?
Nothing. No FDA submission, no clinical trial, no De Novo pathway, no liability waiver, no “if only regulators would allow it” speech. Zero days to deploy.
This is not a loophole. It is an architectural fact of American healthcare regulation. Administrative AI (systems that manage resource allocation, forecast patient census, optimize supply chains, and draft board governance documents) faces zero regulatory barriers to entry.
So here is the Socratic problem at the center of Katz’s vision: If AI is ready to handle the highest-stakes cognitive work in medicine — life-or-death image interpretation requiring the synthesis of years of clinical history — then why is it not already running your procurement department?
Why are we not already running the experiment where the downside is a bad spreadsheet instead of a missed cancer?
Three Decades of Administrative Bloat
Before we debate AI readiness, let us establish what we are actually debating.
In 1970, there were far more physicians in the U.S. than healthcare executives or managers. By 2009, administrators outnumbered physicians by more than 10 to one. From 1975 to 2010, the number of U.S. physicians grew 150%. Administrative personnel grew 3,200%.
This is not mismanagement. It is not accident. It is mathematical inevitability: the compounding consequence of regulatory complexity, payer fragmentation, billing code proliferation, and the political economy of hospital bureaucracies that reward headcount with status and status with survival. Administrative bloat is not a symptom. It is a system behaving exactly as its incentives demand.
Today, administrative costs consume an estimated 34 cents of every dollar spent on U.S. healthcare. Overhead is the single largest line item in American medicine. Not drugs. Not devices. Not physician compensation.
And here is what makes AI structurally different from every previous reform attempt: it does not negotiate. It does not protect turf. It does not get tired or quietly redirect resources toward its own department’s headcount. AI is not a management consultant who presents a 200-slide deck and then bills you for implementation. It is a system that can right-size three decades of compounding bloat with the same dispassionate consistency it applies to every other optimization problem.
The question is not whether AI can do this. The question is why the people who benefit most from administrative headcount are the loudest advocates for AI everywhere except in their own offices.
Introducing CaaS: CEO-as-a-Service
Here is the genuine intellectual challenge for every hospital CEO nodding along to the AI efficiency gospel:
If an algorithm can manage a global supply chain with thousands of interdependencies and zero margin for error, why can it not manage a hospital’s resource allocation better than a human executive who is, by design, subject to board politics, vendor relationships, and self-preservation instincts?
The architecture already exists. A CaaS (CEO-as-a-Service) framework would not require science fiction. It would require integrating existing enterprise AI capabilities: real-time financial modeling, evidence-based staffing optimization, predictive census management, automated contract analysis, and governance reporting. It would be overseen by a lean human board with genuine accountability.
The CaaS system would not replace human judgment entirely. It would constrain the part of human judgment that has historically been most expensive: the part that protects its own interests at the institution’s expense.
No ego. No turf wars. No conference keynotes about how other people’s jobs should be automated first.
Is this idea uncomfortable? Good. Discomfort is how you know you are asking the right question.
The Accountability Architecture That Nobody Is Building
Let us address the liability argument directly, because it is the one card always played against clinical AI and it is conspicuously absent from the administrative AI conversation.
When an AI-assisted radiology system misses a malignancy, the legal and ethical exposure is immediate, severe, and existentially complex. Who bears liability? The radiologist who reviewed the flag? The vendor who trained the model? The hospital that deployed it? This is a legitimately difficult problem, and the FDA’s SaMD framework exists precisely to force that question into the open before the patient is harmed.
Now ask: When a human CEO misallocates $40 million in capital expenditures, botches a merger, or runs a system into a structural deficit, what is the accountability mechanism?
The board of directors. They hired him. They approved his compensation. They will offer him a soft landing at the next institution.
Here is the asymmetry nobody in hospital leadership wants to defend out loud: the physician faces a licensing board, a malpractice system, and public accountability for every clinical error. The executive faces a friendly board and a severance negotiation. An AI system governing administrative operations would, by contrast, produce an immutable audit trail of every resource decision, without political insulation.
The radiology reading room is not where the accountability gap lives. The C-suite is.
The Call to Accountability
I am not arguing that AI has no role in radiology. I am arguing about sequence. About intellectual honesty.
So, my challenge to every hospital CEO who has used the word "efficiency" in a sentence that ended with a clinical job category is this: deploy an AI governance layer in your administrative operations first. Open-source the results. Publish the savings. Show the sector what evidence-based resource allocation looks like when the golden parachute is removed from the equation. Demonstrate that your confidence in AI is not conditional, and that it does not mysteriously evaporate when the optimization function points at your own office.
If AI truly delivers the efficiency gains you are promising from the radiology reading room, then the administrative suite should be a showcase, not an exception. The accountability will be clear. And your credibility on clinical AI will be unimpeachable.
But if AI is only transformative for other people’s jobs, then this is not a technology argument. It is a power argument wearing a technology argument’s clothes.
The alpha testers for the AI revolution in healthcare should be the executives selling it.
So, Dr. Katz: after you.
Waseem Ullah, MD, is triple board certified in radiology, neuroradiology, and informatics, and serves as the vice chair of radiology at Henry Ford Allegiance Health in Jackson, Michigan. He is also an investor and serial entrepreneur.
Source link: https://www.medpagetoday.com/opinion/second-opinions/120627
Publish date: 2026-04-03 12:26:00
Copyright for syndicated content belongs to the linked Source.