Silicon Valley's tech giants are sprinting into the health industry. Their entrances raise questions about data security and AI regulation in health care, while also illuminating the continuing instability of health care policy.
OpenAI on Jan. 7 launched an app dedicated to health and wellness called ChatGPT Health. The app allows users to upload medical records and promises a more grounded, relevant and useful conversation about their health.
Days later, Anthropic, another big name in the AI space, followed suit, announcing its own health app, Claude for Healthcare. Among many features for providers and clinicians, Claude for Healthcare allows Pro and Max subscribers to upload medical records for easier understanding.
"We are on the verge of the wild west as it relates to health care and the use of AI," said Assemblymember Mia Bonta (D-Oakland).
Last year, as chair of the Assembly Health Committee, Bonta authored AB 489, which was signed by Gov. Gavin Newsom and took effect Jan. 1. The bill ensures that AI companies cannot represent themselves as medical professionals at any point. Previous law already made it a crime for a person not licensed in a health care profession to use certain words, letters, phrases or other terms implying they are authorized to practice it. AB 489 extends those restrictions to companies developing AI and generative AI tools.
Bonta said the bill was born out of reports of people, particularly youth and elders, being left susceptible to generative AI that gave them medical advice that did not come from a medical professional. She believes OpenAI's and Anthropic's new health tools could touch on aspects of that, but that moving forward the focus will be data security.
"By asking its millions of users to upload their personal medical records to its new ChatGPT Health, OpenAI is opening itself to the highest possible scrutiny regarding data privacy," she said in a recent press release.
Both OpenAI and Anthropic explicitly say in their announcements of the tools that uploaded medical data will not be used for training, and both give users options to opt in or out of certain privacy settings. Notably, ChatGPT Health allows users to delete chats, but only within 30 days of input.
Training is not the only data practice worrying individuals. Ownership of data, and how private companies use it now or in the future, also raises concerns.
Health information is one of the most personal forms of data. It contains an array of sensitive and unique information. Unlike providers and insurers bound by the Health Insurance Portability and Accountability Act, private companies such as OpenAI and Anthropic generally are not subject to HIPAA unless they are handling protected health information on behalf of a covered entity, leaving the privacy of health data largely to corporate discretion and patchwork regulations.
According to OpenAI, more than 230 million people globally ask it health and wellness-related questions every week. This highlights a huge need for health information, and with rising health care premiums leaving fewer people able to afford health insurance, that number might increase. According to the Urban Institute, 4.5 million people will lose coverage due to the expiration of certain tax credits after Congress failed to extend them.
"We have created a moment in time where many people are not going to have the kind of access to health care that they will need," Bonta acknowledged in a recent interview with The OBSERVER.
"My deep concern right now is that we've cut off the ability of people to see licensed health care providers, and we've essentially pushed them into this world where platforms like ChatGPT Health and Claude for Healthcare are going to be the providers of choice," she said.
The AI health care market is expected to grow to $45.6 billion by 2026, according to a market research firm specializing in emerging technologies.
"It definitely can play a role in making health care more understandable and more accessible, so I think the impact is mostly positive," said Alexander "Sasha" Sidorkin, a Sacramento State professor who briefly served as chief AI officer at the university's former National Institute on Artificial Intelligence in Society.
Sidorkin said he sees a benefit to AI that allows the everyday person to understand their own data, and revealed that he has input his medical records into AI himself.
"I would upload results of tests and [it] will explain it to me in a simple language, which is faster than if you go to the doctor," he said of ChatGPT Health, which allows documents not specific to health to be uploaded and analyzed.
Though optimistic about the technology, he still warns that people should not rely on it, because AI "can miss nuances" that a doctor wouldn't.
Those who want to use ChatGPT Health currently must sign up for a spot on a virtual waiting list, while Claude for Healthcare is available to paid users. Users can also connect wellness apps such as Apple Health to both platforms.
