This year, MedPage Today reported on a slew of developments for artificial intelligence (AI) in healthcare. In this report, we examine what lies ahead for healthcare AI in the new year.
In November, a newly assembled FDA advisory committee held a 2-day meeting to develop guidance for the agency on questions around generative AI-enabled medical devices. The Digital Health Advisory Committee advocated a regulatory approach focused on premarket performance evaluation and risk management, as well as continuous performance monitoring after these devices are on the market.
However, the committee stopped short of offering specific recommendations, suggesting that regulatory changes are not in the immediate future for healthcare AI.
“There is a level of caution and thoughtfulness that I’m hearing more from the regulatory community recently,” Brian Anderson, MD, chief executive officer of the Coalition for Health AI (CHAI), told MedPage Today, adding that “it’s getting the cart in front of the horse, if you create a robust regulatory process that’s not informed by where private sector innovators are going.”
In the year to come, Anderson predicted, healthcare AI companies will likely continue to work alongside health systems and health researchers to develop best practices and determine a common definition for how good, responsible AI should work.
One key part of that work will be developing more mature and sophisticated AI tools that provide clear, targeted outcomes for health systems, Anderson said. Health systems also will likely improve processes for vetting and validating AI tools for their needs, including requiring more information about how AI systems are designed and built.
Health systems are increasingly demanding a clear return on investment — measurable financial returns on the capital they’re putting into these AI tools, Anderson said.
Beyond the financial considerations, Anderson said health systems are becoming increasingly interested in AI governance, specifically monitoring for and preventing bias, model misalignment, and drift in these tools. One challenge for many health systems will be finding the right people and processes to understand these issues as they vet AI companies and implement their tools.
This could become a major source of tension in healthcare AI in 2025, Anderson said: health systems may start demanding more information about and control over AI models before implementation, while AI companies will want to continue protecting their intellectual property.
“At a high level, you’re going to see health systems’ demand for greater transparency, both in the post-deployment monitoring phase and in the procurement phase,” Anderson said. Without those controls, “it’s like having a scalpel, not knowing that it’s rusty and doesn’t cut well, and not being able to do anything about it.”
Notably, CHAI has been working with hundreds of organizations, including AI companies and health systems, to develop a standard approach to facilitate these partnerships. Anderson said the organization will continue working to create more guidance around the development, implementation, and use of AI tools in healthcare settings as well.
While Anderson and CHAI see the potential for further improvement in healthcare AI in 2025, other organizations, such as the Emergency Care Research Institute (ECRI), remain focused on its risks. On ECRI’s recently released top 10 list of health-related technology hazards for 2025, the No. 1 item was AI-enabled health technologies.
“The promise of artificial intelligence’s capabilities must not distract us from its risks or its ability to harm patients and providers,” Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI, said in a press release. “Balancing innovation in AI with privacy and safety will be one of the most difficult, and most defining, endeavors of modern medicine.”
Despite those concerns, Suchi Saria, PhD, the director of the AI, Machine Learning and Healthcare Lab and the founding research director of the Malone Center for Engineering in Healthcare, both at Johns Hopkins University in Baltimore, said that further AI adoption in the clinical setting could define the coming year.
Saria, who is also a member of the board of directors for CHAI, said she saw vast improvements in the use of AI to achieve clinical and operational efficiency over the past year.
“In 2025, you’ll see way more maturity around use of AI within the clinical domain to create workforce efficiency and advocacy,” she told MedPage Today. “Our workflows today were built basically in the pre-[electronic medical record], pre-AI era. … There’s a lot of inefficiency.”
She believes there is now more openness among clinicians and health systems to use AI tools to improve clinical workflows, and expects to see further changes in this area in the year ahead.
Anderson said he hopes the next year will show that healthcare AI can mature into a sophisticated tool that will finally transform how healthcare is delivered.
“In the 90s, [we] talked about this grand vision of a learning healthcare system,” Anderson said. “I think many of us would agree that we haven’t yet realized that. My hope is that in 2025 we’ll see that AI brings us as close as we’ve ever been to this vision.”
Source link : https://www.medpagetoday.com/special-reports/features/113596
Author :
Publish date : 2024-12-31 15:00:00
Copyright for syndicated content belongs to the linked Source.