The rise of AI chatbots like ChatGPT has inevitably extended into healthcare. People are already using these tools for medical questions, often seeking quick answers to symptoms or lab results when professional help isn’t immediately available. Now, OpenAI has launched “ChatGPT Health,” a dedicated feature designed to navigate this trend – but it’s not a replacement for doctors, and comes with significant risks.
More than 5% of all ChatGPT interactions worldwide now involve health-related queries, and over 40 million users seek medical information through the chatbot each week. This demand prompted OpenAI to create a more focused experience within its platform, but the tool is explicitly not designed to diagnose or treat conditions.
What is ChatGPT Health?
ChatGPT Health isn’t a standalone app. Instead, it’s a specialized tab within the existing ChatGPT interface, offering a space for health-related questions, document analysis, and workflow support. OpenAI claims the system was developed with input from over 260 physicians across 60 countries over two years, undergoing rigorous testing with more than 600,000 reviews of model responses. The result is a more cautious and constrained AI, designed to encourage professional medical consultation.
Currently, access is limited to the US, Canada, Australia, and parts of Asia and Latin America. The EU, UK, China, and Russia are excluded due to varying regulations surrounding health data. OpenAI plans to expand availability, though timelines remain uncertain.
How it Works: Data, Not Magic
ChatGPT Health doesn’t represent a breakthrough in AI understanding of medicine. According to Alex Kotlar of Bystro AI, the core technology remains the same: “They haven’t created a model that suddenly understands medical records much better. It’s still ChatGPT, just connected to your medical records.”
The key is context. The tool can integrate with data from Apple Health, lab results (from services like Function), and even food logs from MyFitnessPal and Weight Watchers. This allows for personalized insights based on your history, but it requires explicit permission to access your data.
OpenAI uses an evaluation framework called HealthBench, applying over 48,000 criteria to assess the quality and safety of responses. This framework relies on physician-written rubrics to grade model performance in simulated health scenarios.
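To make the idea of rubric-based grading concrete, here is a minimal sketch of how a physician-written rubric might be turned into a score. The names (`RubricItem`, `score_response`) and the scoring formula are illustrative assumptions, not OpenAI's actual HealthBench implementation; the general pattern is that each criterion carries points (positive for desirable behavior, negative for harmful behavior) and a response's score is normalized against the maximum achievable points.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str   # e.g. "Recommends seeing a clinician for chest pain"
    points: int      # positive if desirable, negative if harmful

def score_response(items: list[RubricItem], met: set[int]) -> float:
    """Grade one model response against a rubric.

    `met` holds the indices of criteria the response satisfied.
    Score = earned points / maximum positive points, clamped to [0, 1].
    """
    earned = sum(it.points for i, it in enumerate(items) if i in met)
    possible = sum(it.points for it in items if it.points > 0)
    if possible == 0:
        return 0.0
    return max(0.0, min(1.0, earned / possible))

rubric = [
    RubricItem("Advises consulting a professional", 5),
    RubricItem("States a drug dosage without caveats", -4),
    RubricItem("Explains the lab value in plain language", 3),
]
# A response meeting only the two desirable criteria scores 1.0;
# one that also triggers the harmful criterion is penalized.
```

In a framework like this, hitting a negative criterion drags the score down even when the rest of the answer is good, which is one way safety-critical failures can be weighted more heavily than stylistic ones.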
Privacy and Limits: A Consumer Product, Not HIPAA-Compliant
ChatGPT Health operates as a consumer product, meaning it doesn’t fall under the same strict regulations as clinical healthcare systems. OpenAI explicitly states that HIPAA (the Health Insurance Portability and Accountability Act) does not apply. For regulated clinical use, OpenAI offers a separate “ChatGPT for Healthcare” service with HIPAA compliance.
OpenAI emphasizes additional security measures like encryption, but experts caution against overconfidence. “Encrypted at rest doesn’t mean the company itself can’t access the data,” warns Kotlar. Users can disconnect apps, remove records, and delete Health-specific memories, but risks remain inherent in storing sensitive information online.
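Kotlar's caution can be illustrated directly: when the service generates and holds the decryption key itself, "encrypted at rest" protects against a stolen disk, not against the operator. The toy sketch below (hypothetical class names; the XOR keystream is deliberately simplistic and is not real cryptography) shows a store whose operator can always read back what users submit, despite the data never sitting on disk in plaintext.

```python
import hashlib
import secrets

class RecordStore:
    """Toy model of server-side 'encryption at rest'.

    The key is generated and kept by the service, so the operator
    can decrypt any stored record at will. Illustration only.
    """

    def __init__(self) -> None:
        self.key = secrets.token_bytes(32)  # held server-side, not by the user
        self.blobs: dict[str, bytes] = {}

    def _keystream(self, n: int) -> bytes:
        # Derive a deterministic byte stream from the server-held key.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self.key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def put(self, user: str, record: str) -> None:
        # Stored bytes are not plaintext...
        data = record.encode()
        ks = self._keystream(len(data))
        self.blobs[user] = bytes(a ^ b for a, b in zip(data, ks))

    def operator_read(self, user: str) -> str:
        # ...but the operator, holding the key, can recover them anyway.
        blob = self.blobs[user]
        ks = self._keystream(len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks)).decode()
```

The design to compare against is end-to-end encryption, where only the user holds the key; nothing in OpenAI's published materials indicates ChatGPT Health works that way, which is exactly why the expert caution above matters.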
The Real Danger: Hallucinations and Misinformation
The most significant concern with ChatGPT Health isn’t privacy, but accuracy. AI models, including this one, are prone to “hallucinations” – confidently providing incorrect information. In healthcare, this can have severe consequences.
ECRI, a patient safety nonprofit, has already ranked AI chatbots as the top health technology hazard for 2026, highlighting the potential for harm. Even OpenAI admits that older models had higher hallucination rates, though it claims GPT-5 has reduced these errors significantly.
“The biggest danger for consumers is that unless they have a medical background, they’re going to have a hard time evaluating when it’s saying something right and when it’s saying something wrong,” Kotlar explains.
The Bottom Line
ChatGPT Health is a tool to supplement, not replace, professional medical care. It can help translate complex information, organize questions for appointments, or provide general wellness insights. However, it’s crucial to verify any information with reputable sources and avoid self-diagnosis. The tool’s value lies in its potential to improve access to information, but its limitations and risks must be understood. The rise of AI in healthcare is inevitable, but responsible use requires caution and a clear awareness of its boundaries.
