Instead of sitting behind a laptop during patient visits, the pediatrician directly faces the patient and parent, relying on an ambient artificial intelligence (AI) scribe to capture the conversation for the electronic health record (EHR). A geriatrician doing rounds at the senior living facility plugs each patient’s medications into an AI tool, checking for drug interactions. And a busy hospital radiology department runs all its emergency head CTs through an AI algorithm, triaging potential stroke patients to ensure they receive the highest priority. None of these physicians has been sued for malpractice over their use of AI, but they wonder if they’re at risk.
In a recent Medscape report, AI Adoption in Healthcare, 224 physicians responded to the statement: “I want to do more with AI but I worry about malpractice risk if I move too fast.” Seventeen percent strongly agreed and 23% agreed, meaning a full 40% were concerned about using the technology for legal reasons.
Malpractice and AI are on many minds in healthcare, especially in large health systems, Deepika Srivastava, chief operating officer at The Doctors Company, told Medscape Medical News. “AI is at the forefront of the conversation, and they’re [large health systems] raising questions. Larger systems want to protect themselves,” Srivastava said.
The good news is there’s currently no sign of legal action over the clinical use of AI. “We’re not seeing even a few AI-related suits just yet,” but the risk is growing, Srivastava said, “and that’s why we’re talking about it. The legal system will need to adapt to address the role of AI in healthcare.”
How Doctors Are Using AI
Healthcare is incorporating AI in multiple ways, depending on the type of tool and the function needed. Narrow AI is popular in fields like radiology, where it compares two large data sets to find differences between them; it can help differentiate between normal and abnormal tissue, such as breast or lung tumors. As of July 2024, almost 900 AI health tools had US Food and Drug Administration (FDA) approval, discerning abnormalities and recognizing patterns better than many humans, said Robert Pearl, MD, author of ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine and former CEO of The Permanente Medical Group.
Narrow AI can improve diagnostic speed and accuracy for other specialties, too, including dermatology and ophthalmology, Pearl said. “It’s less clear to me if it will be very beneficial in primary care, neurology, and psychiatry, areas of medicine that involve a lot of words.” In those specialties, some may use generative AI as a repository of resources. In clinical practice, ambient AI is also used to create health records based on patient/clinician conversations.
In clinical administration, AI is used for scheduling, billing, and submitting insurance claims. On the insurer side, denying claims based on AI algorithms has been at the heart of recent legal actions and headlines.
Malpractice Risks When Using AI
Accuracy and privacy should be at the top of the list for malpractice concerns with AI. With accuracy, liability could partially be determined by use type. If a diagnostic application makes the wrong diagnosis, “the company has legal accountability because it created and had to test it specific to the application that it’s being recommended for,” Pearl said.
However, keeping a human in the loop is a smart move when using AI diagnostic tools. The physician should still choose the AI-suggested diagnosis or a different one. If it’s the wrong diagnosis, “it’s really hard to currently say where is the source of the error? Was it the physician? Was it the tool?” Srivastava added.
With an incorrect diagnosis from generative AI, liability rests more clearly with the physician. “You’re taking that accountability,” Pearl said. Generative AI operates in a black box, predicting the correct answer based on information stored in a database. “Generative AI tries to draw a correlation between what it has seen and predicting the next output,” said Alex Shahrestani, managing partner of Promise Legal PLLC, an Austin, Texas, law firm. He serves on the State Bar of Texas’s Taskforce on AI and the Law and has participated in advisory groups related to AI policies with the National Institute of Standards and Technology. “A doctor should know to validate information given back to them by AI,” applying their own medical training and judgment.
Generative AI can also provide ideas. Pearl shared a story about a surgeon who was unable to remove a breathing tube stuck in a patient’s throat at the end of a procedure. The surgeon checked ChatGPT in the operating room and found a similar case: adrenaline in the anesthetic had constricted the blood vessels, causing the vocal cords to stick together. Following the AI’s information, the surgeon allowed more time for the anesthesia to diffuse. As it wore off, the vocal cords separated, easing removal of the breathing tube. “That is the kind of expertise it can provide,” Pearl said.
Privacy is a common AI concern, but the worry may be greater than warranted. “Many think if you talk to an AI system, you’re surrendering personal information the model can learn from,” said Shahrestani. Platforms offer opt-outs, he said, and even without opting out, the model won’t automatically ingest your interactions. That’s not a privacy feature, he said, but a reflection of the developer’s concern that the information may not help the model. “If you do use these opt-out mechanisms, and you have the requisite amount of confidentiality, you can use ChatGPT without too much concern about the patient information being released into the wild,” Shahrestani said. Alternatively, use systems with stricter requirements that keep all data onsite.
Malpractice Insurance Policies and AI
Currently, malpractice policies do not specify AI coverage. “We don’t ask right now to list all the technology you’re using,” said Srivastava. Many EHR systems already incorporate AI. If a human provider is in the loop, already vetted and insured, “we should be okay when it comes to the risk of malpractice when doctors are using AI because it’s still the risk that we’re insuring,” she said.
Insurers are paying attention, though. “Traditional medical malpractice law does require re-evaluation because the rapid pace of AI development has outpaced the efforts to integrate it into the legal system,” Srivastava said.
Some, including Pearl, believe AI will actually lower the malpractice risk. Having more data points to consider can make doctors’ jobs faster, easier, and more accurate. “I believe the technology will decrease lawsuits, not increase them,” said Pearl.
Meanwhile, How Can Doctors Protect Themselves From an AI Malpractice Suit?
Know your tool: Providers should understand the tool they’re deploying, what it provides, how it was built and trained (including potential biases), how it was tested, and the guidelines for how to use it or not use it, said Srivastava. Evaluate each tool, use case, and risk separately. “Don’t just say it’s all AI,” she said.
With generative AI, users will have better success requesting information that has been available longer and is more widely accessed. “It’s more likely to come back correctly,” said Shahrestani. If the information sought is fairly new or not widespread, the tool may try to draw problematic conclusions.
Document: “Document, document, document. Just making sure you have good documentation can really help you if litigation comes up and it’s related to the AI tools,” Srivastava said.
Try it out: “I recommend you use [generative AI] a lot so you understand its strengths and shortcomings,” said Shahrestani. “If you wait until things settle, you’ll be further behind.”
Pretend you’re the patient: give the tool the information you’d give a doctor and see the results, said Pearl. That will give you an idea of what it can do. “No one would sue you because you went to the library to look up information in the textbooks,” he said; using generative AI is similar. Try the free versions first; if you begin relying on the tool more, the paid versions have better features and are inexpensive.
Deborah Abrams Kaplan is a New Jersey-based journalist covering practice management, health insurance, health policy, healthcare supply chain and the pharmaceutical industry. You can read her work in Managed Healthcare Executive, OncologyLive and Medical Economics.