Not long ago, a patient came to my ophthalmology clinic for a routine exam. Her eyes were healthy. But when I asked how she was doing, something unspooled. She had been having terrible headaches and was convinced they were connected to her vision. She had already explored this extensively, not on a medical website, but in a weeks-long conversation with ChatGPT, where she had described her headaches, her stress, her sleep, her family history of brain aneurysms, her fear that she was dying. She had shared more with a chatbot than she had ever told any of her doctors.
I’m increasingly hearing versions of this: patients arrive with diagnoses they’ve workshopped with artificial intelligence (AI), medication concerns shaped by chatbot exchanges, anxieties soothed or amplified in conversations no clinician will ever read. A parallel health narrative is forming outside the medical record, one that is sometimes more complete and often more candid than anything in a patient’s “official” chart. Meanwhile, the law Americans assume protects their health privacy mostly does not cover it.
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 applies to information held by doctors, hospitals, health plans, and the business associates working on their behalf. It generally does not apply to consumer AI platforms. As the Department of Health and Human Services (HHS) has explained, once a patient directs a covered entity to send data to a third-party app, HIPAA no longer restricts how that app uses the information unless the app is itself a covered entity or business associate.
In other words, the conversations where patients are most cautious, shaped by the awareness that everything enters a permanent hospital or clinic chart, sit inside a far more developed legal framework. The conversations where they may be most honest sit outside it.
Should HIPAA be extended to cover chatbots like ChatGPT? Yes to the principle, no to the mechanism. Americans urgently need stronger privacy protections when they disclose health information to AI. But stretching a law built for the medical bureaucracy of the late 1990s over every chatbot would be too crude a fix for the problems we actually face.
Consider why patients are building this “shadow” health record in the first place. A chat window at midnight offers something the clinic often cannot: speed, privacy, and no immediate entry into the chart. KFF’s latest tracking poll found that one-third of adults used AI for health information or advice in the past year. Four in 10 of those users said they had uploaded personal medical information, such as test results or doctors’ notes, into an AI tool, even though 77% of the public say they are concerned about the privacy of medical information provided to AI tools.
Many of these people are not replacing their doctors. They are telling a chatbot the things they would never say to one. That candor fills a genuine gap, and any serious privacy framework needs to protect it rather than drive it underground.
We already know what happens when it is not protected. Cases brought by the Federal Trade Commission (FTC) against GoodRx and BetterHelp both involved health data beyond HIPAA’s reach. In the BetterHelp matter, the FTC alleged the company shared consumers’ mental health information for advertising purposes after promising confidentiality. Meanwhile, OpenAI now says explicitly that HIPAA does not apply to ChatGPT Health because it is a consumer wellness product. That honesty is useful. It is not the same thing as enforceable law.
Yet simply extending HIPAA as written would not solve the problem. HIPAA was engineered for health plans, clearinghouses, and providers. It is notoriously complex even for the institutions it already governs, and it was never designed for systems that generate health inferences from language, behavior, and patterns across sessions.
What we need is a federal rule purpose-built for consumer health AI. Regulators and legislators have already begun sketching pieces of this: the FTC’s updated Health Breach Notification Rule now clearly reaches many health apps outside HIPAA, and Washington State’s My Health My Data Act explicitly covers inferred health data. A national rule should go further. If a platform receives or generates sensitive health information, whether or not it markets itself as a health tool, it should face strict limits on secondary use, real rights to deletion, strong breach notice, and a bright-line prohibition on using health disclosures for targeted advertising without express consent.
But the privacy gap also creates a problem that regulation alone cannot fully solve. We know that many patients withhold medically relevant information from clinicians out of embarrassment, disagreement, or privacy concerns; AI is already becoming a repository for what goes unsaid. When a patient’s most revealing disclosures live in a chatbot, the doctor is working from an incomplete picture without knowing it. I have had patients tell me they researched interactions between their glaucoma drops and supplements they never mentioned to their prescribing physician. The full conversation happened with a chatbot, and what reached the medical record was a fragment. The chart was supposed to be the authoritative account of a patient’s health. For a growing number of people, it is becoming the redacted one.
There is an irony here that should unsettle those involved in health policy. We spent decades building a legal infrastructure to protect health information inside medicine. In doing so, we drew a bright line around doctors and hospitals — and we now watch as the most candid health conversations are migrating outside that line. Patients are not going to stop confiding in chatbots; the freedom to be honest without consequence is too valuable. But we owe them more than the current arrangement, in which the law protects their reticence in the exam room far better than their honesty in a chat window.
Henry Bair, MD, MBA, is a resident physician at Wills Eye Hospital and a physician-writer focused on the intersection of health policy and patient care.