Artificial intelligence (AI) could get prescribing privileges if a new bill passes through Congress.
The Healthy Technology Act of 2025 would amend the Federal Food, Drug, and Cosmetic Act to allow artificial intelligence and machine learning technologies to qualify as practitioners eligible to prescribe drugs, provided they are authorized by the state involved and approved, cleared, or authorized by the US Food and Drug Administration (FDA).
Physicians who study AI are optimistic about its potential in healthcare. Many doctors already use AI to streamline note-taking or support clinical decision-making. However, they say more research is needed before an AI tool could — if ever — write prescriptions on its own.
“The legislation is referring to a piece of AI technology that doesn’t exist yet,” said Ravi B. Parikh, MD, MPP, associate professor in the Department of Hematology and Medical Oncology, Emory University School of Medicine, Atlanta, and medical director of the Winship Data and Technology Applications Shared Resource.
The bill, introduced in the US House of Representatives on January 7, was referred to the House Committee on Energy and Commerce, where it currently sits. Here’s what experts say about it — and AI’s potential as an independent prescriber of medication.
Can AI Write Prescriptions Already?
Not by itself. Some researchers, like Parikh, are developing AI tools that could someday help doctors make prescribing decisions. Others are studying how AI systems from Meta, Google, OpenAI, NVIDIA, Medtronic, Anthropic, DeepSeek, and Mistral, as well as open models like Falcon, might eventually be used to improve medication management.
AI tools for prescribing come in two main flavors, Parikh said. Predictive tools mine a patient’s electronic health record or genetic information to estimate the likelihood that the patient will respond to a treatment.
The other flavor is generative: AI creates digital twins of real-world patients, computational replicas used to simulate which drug might work best.
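To make the predictive flavor concrete, here is a minimal sketch in Python of how such a tool might score a patient’s likelihood of responding to a drug from EHR-derived features. The features, synthetic data, and logistic-regression model are illustrative assumptions, not a description of any tool mentioned in this article.

```python
# Minimal sketch of a predictive prescribing-support tool (illustrative only).
# Feature names, data, and model choice are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for EHR-derived features: age (scaled), a baseline lab value,
# and a prior-medication flag
X = rng.normal(size=(500, 3))
# Stand-in outcome: 1 = patient responded to the candidate drug, 0 = did not
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new patient's likelihood of responding to the treatment
new_patient = np.array([[0.4, -1.1, 1.0]])
print(f"Estimated response likelihood: {model.predict_proba(new_patient)[0, 1]:.2f}")
```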
“All of these tools use a combination of older AI tech, like predictive AI tech, and large language models, but in essence, they’re mining a lot of historical data to come up with systems,” said Parikh.
AI models are being trained to select medications and predict patient outcomes at a time when people need more access to healthcare. Only 55% of Americans say they can access and afford quality healthcare when they need it. “I think the reason why this discussion is happening is probably totally reasonable,” Parikh said.
However, using AI for decision support, something many doctors already do, is very different from letting AI take over.
“There’s no evidence to show that any AI operating in the wild is equivalent or superior to a physician at prescribing medication, nor does it show that it’s prescribing the right medications all the time,” Parikh added.
When Tech Arrives Before It’s Ready
ChatGPT is a good example of technology arriving on a wave of excitement, hope, and hype before it was ready for serious, prime-time use.
With AI, accuracy is always the primary concern, and hallucinations are especially dangerous because fabricated output can look just as plausible as correct output. When you’re using ChatGPT to write an email, a mistake might not mean life or death. A prescribing error could.
“Because of the issues we know about AI performance and AI-based errors, depending on the technology and depending on the guardrails that are in place for the technology, you could imagine an AI prescriber going off the rails,” said Matthew DeCamp, MD, PhD, associate professor in the Center for Bioethics and Humanities and Division of General Internal Medicine at the University of Colorado Anschutz Medical Campus, Aurora, Colorado.
For example, in one study, an AI scribe wrongly documented that a patient had hand, foot, and mouth disease after a doctor mentioned issues with those three body parts.
“It could be that the model, for reasons that we don’t understand, picks up on a detail of something someone says and decides to prescribe in an entirely different direction,” said DeCamp. “In my day-to-day clinical practice, oftentimes, a medication decision is not exactly straightforward. I’m talking with a patient. I’m trying to understand what their views of risks and benefits are and what are the trade-offs we’re making, and sometimes it’s just hard to imagine a machine being able to do that.”
Indeed, AI parses data from conversations or medical records, but it can’t perform a skilled physical exam. “I don’t know how they’re going to get around all the things that doctors do, like listening to the chest, feeling for the softness of the abdomen, and things like that,” said Samuel K. Cho, MD, chief of Spine Surgery at Mount Sinai West and professor in the Departments of Orthopaedic Surgery and Neurosurgery at the Icahn School of Medicine at Mount Sinai, New York City.
Bias is another concern. In theory, technology should be more objective than humans. But it’s still made by people using data collected from humans, so bias can be baked in. “If the AI prescriber is based on large language models that are trained on data that may be incomplete or biased, then its responses or recommendations to certain patients could also be biased in certain ways,” added DeCamp.
To train AI for prescribing, researchers still need to figure out which electronic health data and genetic data best predict medication success.
“The datasets that we’ve linked are not representative of all the factors that could influence how a drug is going to work in a patient — that’s the first problem,” said Parikh. “The second is that this data is really noisy.”
Since long before the AI era, biostatisticians have dedicated their careers to cutting through the noise in data, figuring out which variables matter and which are confounders. Now they have millions, even billions, of data points to sift through.
“What we really need, I think, are deeper data sets that account for a lot of the genetic, socioeconomic, etc, factors even for a smaller number of patients,” noted Parikh. “We just need deeper, not broader, data sets to be able to do what I think this legislation is envisioning an AI system could do.”
How Would This Law Regulate AI Prescribing Tools?
Under the Healthy Technology Act of 2025, AI prescribers would need to be state-authorized and approved, cleared, or authorized by the FDA under section 510(k), 513, 515, or 564 of the Federal Food, Drug, and Cosmetic Act.
What would that look like, exactly? “Questions about regulation and authorization are very unanswered and significant at the present moment,” added DeCamp.
Parikh noted regulatory standards for AI devices are lower than standards for drugs and other non–AI-based devices. “This whole notion of the AI being approved, cleared, authorized by the FDA, and that being a prerequisite for using the tool, I think, is a little bit of a red herring,” said Parikh.
AI-based devices tend to receive approval based on retrospective data from one or a few institutions with fewer patients than drug or device trials, commented Parikh. “I really worry that the legislation is moving too fast to fix a problem that just needs a lot more prospective data to be reassured about,” he said.
Another concern is whether today’s regulatory pathways suit AI. The FDA clears many AI-based devices through its 510(k) pathway, which requires a new device to be substantially equivalent to a previously cleared “predicate” device.
“Traditional devices are used by maybe one or two patients at a given hospital a day,” Parikh stated. “An AI algorithm might be used for all hospitalized patients in a given day. Just the scale on which we’re using these tools means I don’t think that we can use these sorts of predicate standards for approving a lot of this, especially the type of thing that influences drug prescribing decisions.”
AI algorithms can change daily based on new data. In January, the FDA released draft guidance for lifecycle management of AI-enabled device software. Parikh said questions remain about how this approach will be operationalized and how AI performance will be monitored as algorithms evolve.
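As a rough illustration of what ongoing monitoring could involve, here is a minimal sketch that flags an evolving model for human review when its live accuracy drifts below a baseline. The baseline, margin, and adjudication workflow are hypothetical assumptions, not anything specified in the FDA draft guidance.

```python
# Minimal sketch of post-deployment performance monitoring (illustrative only).
# The baseline, margin, and review workflow are hypothetical; real lifecycle
# plans would be negotiated with regulators and far more detailed.
from statistics import mean

BASELINE_ACCURACY = 0.92  # hypothetical accuracy from premarket testing
ALERT_MARGIN = 0.05       # hypothetical tolerated drop before human review

def performance_drifted(recent_outcomes: list[int]) -> bool:
    """Return True if live accuracy falls below the alert threshold.

    Each entry is 1 if the model's recommendation matched the adjudicated
    correct decision, 0 otherwise.
    """
    return mean(recent_outcomes) < BASELINE_ACCURACY - ALERT_MARGIN

# Example: a recent run of adjudicated recommendations triggers review
if performance_drifted([1, 1, 0, 1, 0, 0, 1, 1, 0, 1]):
    print("Performance drift detected: pause and escalate for clinical review.")
```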
Cho hoped governing agencies understand the potential harm of rolling out this tech before it’s ready. “If small companies, or big companies for that matter, start developing these models, and they just release it and see what happens in the market space, and if there’s some harm, that could really put the entire field backward by several years,” he said.
He sees parallels between AI prescribing and self-driving cars. Companies have been testing their safety iteratively in select cities for years. No one has rolled them out to the masses yet — for good reason.
“You don’t want to harm human lives by doing the self-driving,” said Cho. “I think it’s very similar in medicine as well. We want to benefit the patients, and on top of that, the legal implications if your model predicts or gives out some wrong drug and something happens, then, who’s got a responsibility? I think that’s a big, big unanswered question.”
Could AI Prescribing Happen in the Future?
First, the Healthy Technology Act of 2025 would need to pass through Congress. The bill’s sponsor, Rep. David Schweikert of Arizona, introduced similar bills in the US House of Representatives in 2021 and 2023. Both were referred to the House Committee on Energy and Commerce and to its Subcommittee on Health, where no further action was taken.
“I’m not completely sure about the likelihood of it passing or not, or the likelihood of doctors actually trusting it to affect their clinical practice, but I think it spurs this natural discussion of when are we, as clinicians, going to be comfortable enough using AI, not for just improving the way I document but for actually influencing the way I make medical decisions,” said Parikh.
DeCamp envisions very limited circumstances where AI prescribing might make sense — someday. It could be akin to protocolized prescribing, where some medical centers give medications for certain conditions if a patient meets the criteria on a checklist.
“You could imagine — I’m still saying imagine — that an AI technology could operate in a similar manner, where it goes through the same level of checklists for what would be probably simple, low-risk conditions, makes a recommendation, but then importantly, has that human in the loop, perhaps by the end of the day that does require review by a human and oversight and signing,” added DeCamp.
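A minimal sketch of the checklist-style, human-in-the-loop workflow DeCamp describes might look like the following; the condition, criteria, and drug are hypothetical stand-ins, not clinical logic from any real system.

```python
# Minimal sketch of checklist-style, human-in-the-loop prescribing support.
# The condition, criteria, and drug are hypothetical and not clinical advice.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    symptom_days: int
    has_penicillin_allergy: bool

def draft_recommendation(patient: Patient) -> str | None:
    """Walk a fixed checklist for a simple, low-risk scenario.

    Returns a draft order queued for clinician sign-off, or None if any
    criterion fails; nothing is ever sent without human review.
    """
    checklist = [
        patient.age >= 18,
        patient.symptom_days <= 3,
        not patient.has_penicillin_allergy,
    ]
    if all(checklist):
        return "DRAFT: amoxicillin 500 mg, pending clinician review and signature"
    return None

print(draft_recommendation(Patient(age=34, symptom_days=2, has_penicillin_allergy=False)))
```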
AI could connect with patient charts to reduce prescribing errors that can occur when doctors have incomplete information.
“You could imagine that if a technology like this one is linked to an electronic health record, it might be more aware of the possibility of a drug allergy compared to a clinician at a stand-alone urgent care, say, who sees a patient for the first time and has no knowledge of their past health record and doesn’t have access to their allergies because of the way our system continues to have fragmented electronic health records,” said DeCamp.
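As a simple illustration of that kind of EHR-linked safeguard, here is a minimal sketch of an allergy cross-check; the record structure and drug names are invented for the example.

```python
# Minimal sketch of an EHR-linked allergy cross-check (illustrative only).
# The record layout and drug names are invented for illustration.

def has_allergy_conflict(proposed_drug: str, ehr_allergies: set[str]) -> bool:
    """Return True if the proposed drug appears in the documented allergy list."""
    return proposed_drug.lower() in {a.lower() for a in ehr_allergies}

# A clinician at a stand-alone urgent care might never see this history;
# a system linked to the full record would.
documented_allergies = {"Penicillin", "Sulfamethoxazole"}
if has_allergy_conflict("penicillin", documented_allergies):
    print("Alert: documented allergy to proposed drug; flag before prescribing.")
```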
How will we know if prescription-writing AI is ready for prime time?
“If it’s been tested in a prospective trial, not a secondary analysis of a previous prospective trial, but an actual trial that randomizes patients to getting drugs based on what the AI says versus not getting drugs based on what the AI says, and it’s shown that that device makes decisions faster with equivalent performance, or that it results in better performance for patients, I think that’s the standard,” concluded Parikh.