
In brief
- OpenAI says ChatGPT Health will roll out to select users starting this week, with broader access planned in the coming weeks.
- The feature stores health conversations separately from other chats and does not use them to train OpenAI’s models.
- Privacy advocates warn that health data shared with AI tools often falls outside U.S. medical privacy laws.
On Wednesday, OpenAI announced a new ChatGPT feature that lets users connect medical records and wellness data, raising concerns among some experts and advocacy groups over how that personal data will be used.
The San Francisco-based AI giant said the tool, dubbed ChatGPT Health and developed with physicians, is designed to support care rather than diagnose or treat ailments. The company is positioning it as a way to help users better understand their health.
For many users, ChatGPT has already become the go-to platform for questions about medical care and mental health.
OpenAI told Decrypt that ChatGPT Health only shares general, “factual health information” and does not provide “personalized or unsafe medical advice.”
For higher-risk questions, it will provide high-level information, flag potential risks, and encourage people to talk with a pharmacist or healthcare provider who knows their specific situation.
The move comes shortly after the company reported in October that more than 1 million users discuss suicide with the chatbot each week, roughly 0.15% of all ChatGPT users at the time.
While those figures represent a relatively small share of the overall user base, experts say OpenAI will still need to address security and data privacy concerns.
“Even when companies claim to have privacy safeguards, consumers often lack meaningful consent, transparency, or control over how their data is used, retained, or repurposed,” Public Citizen’s big-tech accountability advocate J.B. Branch told Decrypt. “Health data is uniquely sensitive, and without clear legal limits and enforceable oversight, self-policed safeguards are simply not enough to protect people from misuse, re-identification, or downstream harm.”
OpenAI said in its statement that health data in ChatGPT Health is encrypted by default, stored separately from other chats, and not used to train its foundation models.
According to Center for Democracy and Technology senior policy counsel Andrew Crawford, many users mistakenly assume health data is protected because of its sensitivity, when in fact protection depends on who holds it.
“When your health data is held by your doctor or your insurance company, the HIPAA privacy rules apply,” Crawford told Decrypt. “The same is not true for non-HIPAA-covered entities, like developers of health apps, wearable health trackers, or AI companies.”
Crawford said the launch of ChatGPT Health also underscores how the burden of responsibility falls on consumers in the absence of a comprehensive federal privacy law governing health data held by technology companies.
“It’s unfortunate that our current federal laws and regulations place that burden on individual consumers to analyze whether they’re comfortable with how the technology they use every day handles and shares their data,” he said.
OpenAI said ChatGPT Health will roll out first to a small group of users.
The waitlist is open to ChatGPT users outside the European Union and the UK, with broader access planned in the coming weeks on web and iOS. OpenAI’s announcement did not mention Google or Android devices.