Doctors, Not AI, Should Authorize Treatments


Kanaparthy is a practicing internal medicine physician specializing in clinical informatics.

Last month, California passed a bill ensuring that doctors, not artificial intelligence (AI), have the final say on patients’ treatments and services. The bill, SB1120, allows insurance companies to use AI to review doctors’ recommendations for medical procedures only if those prior approval requests are overseen and reviewed by trained medical professionals.

AI is rapidly proliferating in healthcare, and California and Oklahoma are among the first states to pass legislation governing its use in prior authorization decisions. Other states are working on legislation to regulate how health insurance companies use AI. I believe we urgently need strict, enforceable legislation in all 50 states, and perhaps at the federal level.

Prior Authorization Needs Patient-Centered Reform

Doctors, patients, and insurance companies all know that prior authorization needs reform; this is one reason we're seeing a surge in AI use by insurance companies. What began in the 1960s as a process to determine whether a patient really needed to be admitted to a hospital has grown into a sprawling monster obstructing almost any treatment outside the insurance carrier's rule book.

This has harmed patients and doctors alike. In my own experience, the prior authorization process can significantly delay care. Other doctors agree: in a 2023 American Medical Association survey of 1,000 physicians, nearly one in four reported that authorization delays had resulted in adverse events for their patients. Prior authorization also eats up doctors' time and energy. For me, filing a prior authorization request means completing a two- to three-page form that takes 15-45 minutes, often followed by a "peer-to-peer" discussion with a physician who works for the insurance company. That insurance company doctor is rarely a subspecialist who understands the specific patient's needs. According to Health Affairs, physicians spend 3 hours weekly interacting with health plans, and office staff spend significantly more time, with the national time cost to practices estimated at $23 billion to $31 billion annually.

AI to the Rescue?

Insurance companies know there's a problem. This is why many have tried to accelerate the prior authorization process using AI. The issue is that this appears to happen with little doctor oversight. At certain companies right now, decisions about care appear to be based on an algorithm: the AI essentially compares the request against the insurance company's pre-determined rules, built on aggregated data from large numbers of people, with manual review used only when deemed necessary.

And indeed, using AI to review prior authorization requests has accelerated the process, mainly for rejections. A lawsuit against Cigna alleges that the company rejected 300,000 pre-approved claims over a 2-month period, at an average speed of 1.2 seconds per claim. Basing rejections on AI and big data, rather than on individual patient needs, has triggered alarm among doctors and the public; a rash of lawsuits filed this year against insurance companies allege that AI-driven healthcare decisions have harmed, and even killed, patients.

One suit against Humana alleges that the company used AI, not doctors, to wrongfully deny elderly patients care owed to them under Medicare Advantage plans. Another lawsuit, against UnitedHealthcare, alleges that the insurer used AI to deny senior citizens care, knowing that only a tiny percentage of elderly patients contest such decisions.

AI decisions without doctor oversight present three problems in particular.

First, individual patients differ from the big data models that appear to inform the rule book. Such models seemingly aggregate the conditions and outcomes of millions of patients, looking for patterns at a massive scale and calculating the most cost-effective treatment in general. But the algorithms used for such decisions are not transparent, so it's impossible to know whether their calculations apply to an individual human body, with its own unique history and needs.

Second, as the American Civil Liberties Union has noted, algorithms heavily depend on the training data that are supplied, which can exacerbate racial and ethnic biases.

Third, algorithms don’t automatically adjust as data change over time, a problem known as “model drift,” which can lead to inaccuracies and faulty decisions.

Laws Must Require Involvement of Human Specialists

In the best interest of the patient, insurance companies must employ qualified health professionals specializing in the patient’s particular issue to handle prior authorization requests. Those doctors absolutely should have access to supportive tools like AI to make sound decisions. But AI should be one tool in the arsenal and not the only tool. Human judgment and experience should count, too.

Unfortunately, insurance companies often appear to opt for the cheapest and easiest solution. On their own, they are unlikely to hire the physician specialists needed to oversee AI; we know because many haven't done so. As a result, we need lawmakers to require them to do the right thing and put appropriately qualified doctors, not AI, in charge of the process.

To be sure, insurance companies have a responsibility to their shareholders and the public to maximize efficiency. Insurers need to balance the population health dynamic to ensure care and costs are distributed appropriately; expensive, ineffective treatments should be weeded out.

However, when insurance companies use AI with zero or minimal human supervision, they put us doctors at risk of violating our oath to do no harm. This is why, earlier this year, CMS released a memo indicating that Medicare Advantage plans may use AI to assist in prior authorization only if decisions do not override standards of medical necessity and only if they take into account individual patient characteristics, including medical history and physician recommendations. But this guidance does not extend to other insurance plans.

Despite growing public concern, only a few states have passed legislation mandating that human doctors be in the mix when prior authorization decisions are made. That is not enough. We need laws in every state, and perhaps at the federal level, mandating that prior authorization decisions be led by qualified doctors who work with AI, not for it, when human lives and well-being are at stake.

Naga Kanaparthy, MD, MPH, is a practicing internal medicine physician specializing in clinical informatics at the Yale School of Medicine. He is a Public Voices fellow of Yale and the OpEd Project.
