ChatGPT Health: Why Health Literacy Now Requires AI Literacy

Health symbol over a futuristic computer screen

OpenAI just launched ChatGPT Health. That announcement threw me into a tailspin and led to me working until 2 am to write the first draft of this post. In many ways, this is the convergence of all of my worlds – health literacy, teaching and learning, clear communication, and AI literacy. For two decades, I’ve been focused on the first three. But, of course, AI is new. Yet for nearly a year now, I’ve spent a lot of time learning, processing, and thinking about AI’s impact on learning, decision-making, and professional practice. And OpenAI thrust all of these things together on a random Wednesday in January 2026. So let’s dive into how ChatGPT Health changes the game for health consumers and healthcare practitioners alike.

If you have ever opened a patient portal, stared at a lab report, and thought, I feel like I should know what this means, you are exactly who ChatGPT Health is aimed at. That reaction is not a personal failing. It is a problem of the healthcare system and the organizations that make it up. For years, we have needed a way to make healthcare information more accessible and understandable to everyone – because everyone uses the healthcare system in some way.

So, on that level, I love what ChatGPT Health is trying to do. It is a meaningful step in the right direction. But it is also the kind of step that quietly reshapes people’s expectations before anyone has really thought through the downstream effects.

Patients will assume it can explain their records and make decisions for them because “ChatGPT Health knows more than I do.” But it’s not just patients. Clinicians will start seeing AI-generated interpretations in exam rooms. Caregivers and families will use it late at night when a portal alert lands and there is no one available to translate what it means. None of that is hypothetical. That is how people already use ChatGPT for health questions, and OpenAI is now formalizing that behavior.

I want to be clear about my posture from the start. I am genuinely supportive of this direction. Medical record portability and interpretability are enormous problems. If a tool like this helps people understand what is happening in their bodies and ask better questions during a visit, that is real progress. At the same time, I think we are about to collide with the same health literacy problems we have been facing for years, only now they will be wrapped in fluent AI language and delivered with confidence.

What follows is not a takedown. I love AI, and I love the problem OpenAI is trying to solve with ChatGPT Health. Rather, this is a warning label written from a health literacy perspective. The technology matters, but the human context matters more.

What OpenAI actually launched with ChatGPT Health

OpenAI describes ChatGPT Health as a dedicated experience inside ChatGPT that brings health information and the model together in a separate space. The launch post lays out the core idea clearly: people have health information scattered across portals, PDFs, lab reports, visit summaries, and apps, and it is difficult to see the whole picture or know what to ask next. ChatGPT Health is supposed to help with explanation, organization, and preparation for medical conversations. Those are all laudable goals and are things we’ve needed for a long time.

This is also where the product becomes meaningfully different from generic chatbot use. OpenAI is encouraging users to connect medical records and wellness apps so responses can be grounded in personal data rather than general information. As The Verge explains, ChatGPT Health lives in a “sandboxed tab within ChatGPT” and allows people to connect personal medical records and apps for more personalized responses.

Of course, OpenAI is careful to state that ChatGPT Health is not intended for diagnosis or treatment, a limitation you can see emphasized in many places, including OpenAI’s Help Center materials. This limitation and these statements matter, but they do not erase the behavioral reality. When you name something “Health,” invite people to connect their records, and offer to help them interpret results, you have already shaped expectations. Disclaimers do not undo that. They just acknowledge it.

The privacy and security story

OpenAI knows that ChatGPT Health rises or falls on trust. In the launch materials and privacy documentation, the company emphasizes that Health operates as a separate space with enhanced privacy protections and that conversations in Health are not used to train its foundation models. Those are important commitments, and they are directionally the right ones. You can see the full Health Privacy Notice here. (Reader’s note: I’ll write a more detailed post deconstructing these Notices in more legal detail in the coming days.)

Yet it is worth reading that notice carefully now rather than stopping at the headline promise. The notice says that, by default, Health data is not used to improve foundation models, but it also explains that “a limited number of authorized personnel and trusted service providers may access Health content” for safety and operational purposes unless a user opts out. That distinction matters. “Not used to train models” is not the same thing as “never accessed by humans under any circumstances.”

There are other practical privacy realities worth acknowledging. As The Verge article explains, OpenAI has built layered encryption, but not end-to-end encryption, into ChatGPT Health, and has stated in the past that legal process can require it to give access to user information in some circumstances. The Verge also reports that OpenAI does not have HIPAA concerns because HIPAA does not apply in this consumer context. That last point is critical because many people will assume that a health-focused product automatically falls under HIPAA. In much of consumer health technology, that assumption is false.

Of course, none of this means people should never use the tool or never connect records. I am on the waitlist, and I will upload my health data when I get access. I want to try it out and figure out what it actually does and where the risks really lie. Yet, I also understand that privacy cannot be treated as a footnote. If we care about people with limited health literacy, privacy choices need to be explained in plain language, at the moment they matter, not buried in documents most users will never read.

Why ChatGPT Health is fundamentally about health literacy

Here is the uncomfortable point. We are about to introduce a powerful interpretive layer into a world where many people already struggle to understand basic health information.

Only 12 percent of adults in the United States have proficient health literacy skills, based on the National Assessment of Adult Literacy. Healthy People 2030 defines personal health literacy as the degree to which individuals can find, understand, and use information and services to inform health-related decisions. That definition is important because it reminds us that health literacy is not just about reading ability. It is about decision-making under real conditions.

By deploying ChatGPT Health, OpenAI has thrust itself into the health literacy world. In other words, it enters a world where confusion is normal, stress is common, and decisions are often made with incomplete information. A product that changes how people interpret their records is not just a convenience feature. It is an intervention that can shape real health outcomes and human behavior.

Health literacy is not the same thing as AI literacy

One of the biggest mistakes we can make here is assuming that health literacy and AI literacy rise and fall together. They do not. Health literacy is about navigating medical information. AI literacy is about understanding what an AI system is doing, how it produces answers, where it is likely to fail, and how much trust its output deserves.

There is a growing body of work treating AI literacy as a distinct set of competencies. Duri Long and Brian Magerko’s work on AI literacy in human-computer interaction is especially useful here because it frames literacy as evaluation, collaboration, and critical judgment rather than clever prompting. Their paper, “What is AI Literacy? Competencies and Design Considerations,” is available here.

Institutions are already starting to frame AI literacy as a civic skill, not a technical specialty. UNESCO’s AI competency frameworks for students and teachers emphasize ethical awareness, human-centered thinking, and understanding system limits rather than technical depth. Those frameworks are available for students here and for teachers here.

In health contexts, this distinction becomes even more important. A National Academy of Medicine Perspectives commentary argues for what it calls “critical AI health literacy,” focusing on skills like cross-checking, recognizing bias, and resisting the urge to treat fluent output as objective truth. That piece can be found here.

The takeaway is simple but important. A person can be highly health literate and still misunderstand how an AI system works. They can also have limited health literacy and limited AI literacy at the same time. In both cases, fluent output combined with personal data creates a real concern: the normal human response to ChatGPT Health will be to default to whatever it says.

The real risks are expectations and overtrust

OpenAI can include disclaimers, and clinicians can remind patients that ChatGPT Health is no substitute for the provider’s own advice. That will not stop people from leaning on it. OpenAI cannot control how people use AI once they leave the chat.

The biggest behavioral shift here is that answers grounded in personal records will feel safer and more authoritative. Often, that feeling will be justified. Sometimes, it will not. The risk is not simply that the model can be wrong. The risk is that it can be wrong while sounding calm, confident, and complete. That combination is persuasive, especially for people who already feel overwhelmed.

If someone already struggles to interpret medical information, a fluent explanation can feel like relief. Relief reduces friction, and it also reduces verification. That is why safety is not just about accuracy. It is about how AI errors affect people who are anxious, tired, and trying to make sense of something that matters.

What good looks like for ChatGPT Health

If ChatGPT Health is going to be a net positive, health literacy and AI literacy have to be treated as safety requirements rather than optional user-experience polish. That sounds abstract, but it really comes down to a few concrete design choices.

1. Uncertainty has to be visible.

The system should clearly distinguish between what it is quoting directly from the medical record, what it is inferring from patterns or context, and what it does not know. That distinction should be understandable to someone who has never heard the word “inference” and has never been trained to think probabilistically. Research on risk communication consistently shows that people make better decisions when uncertainty is made explicit rather than smoothed over. The National Academies of Sciences, Engineering, and Medicine emphasize that transparent communication of uncertainty is a core feature of trustworthy health communication. For more, see Communicating Science Effectively from the National Academies, which you can download here.
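To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of what a provenance-tagged answer could look like behind the interface. Nothing here reflects how OpenAI actually built ChatGPT Health; the Statement, Provenance, and render_with_uncertainty names are my own illustrative assumptions, and a real product would surface these labels through careful interface design rather than raw text.

```python
# A minimal, hypothetical sketch of "visible uncertainty." None of this
# reflects OpenAI's implementation; names and labels are illustrative only.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Provenance(Enum):
    QUOTED = "from your record"        # copied directly from the connected record
    INFERRED = "an interpretation"     # the model's inference from patterns or context
    UNKNOWN = "not in your record"     # the model has no source for this


@dataclass
class Statement:
    text: str
    provenance: Provenance
    source: Optional[str] = None  # e.g., "CBC panel, 2026-01-07" when quoted


def render_with_uncertainty(statements: list[Statement]) -> str:
    """Attach a plain-language provenance label to every statement shown to the user."""
    lines = []
    for s in statements:
        label = s.provenance.value
        if s.source:
            label += f" ({s.source})"
        lines.append(f"{s.text} [{label}]")
    return "\n".join(lines)


if __name__ == "__main__":
    answer = [
        Statement("Your hemoglobin was 11.2 g/dL.", Provenance.QUOTED, "CBC panel, 2026-01-07"),
        Statement("That is slightly below the typical reference range.", Provenance.INFERRED),
        Statement("Your record does not say whether this has been discussed with your clinician.",
                  Provenance.UNKNOWN),
    ]
    print(render_with_uncertainty(answer))
```

The point is not the code. The point is the design constraint it encodes: every sentence the user sees should carry a plain-language answer to “where did this come from?”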

2. Teach-back should be built into ChatGPT Health.

When I say that teach-back should be incorporated into ChatGPT Health, I mean designing the product so that it increases understanding and reduces risk. Teach-back is a long-standing best practice in which a patient is asked to explain information back in their own words. The purpose is not to test the patient. It is to test whether the explanation worked. If the person cannot accurately restate what they heard, the problem is usually the communication, not the listener.

Applied to ChatGPT Health, teach-back would mean that the system does not treat a plausible answer as the endpoint. Instead, it treats user understanding as the signal that matters. Right now, AI health interactions often end when the response sounds complete. That is precisely where the problem is. AI responses can create confidence, and confidence can mask misunderstanding. Simply put, the risk is that people act on what they think they understood from ChatGPT Health.

Teach-back interrupts that pattern. In practice, it would mean that after providing health information, ChatGPT would pause and ask the user to explain what they took away or what they plan to do next. Not as a quiz, and not to assign blame, but to check alignment and understanding. If the user’s explanation reveals confusion or overconfidence, the system can correct course before the person acts on that chat session. This matters for three reasons.

  • First, it surfaces misunderstanding early, when correction is still easy.
  • Second, it counteracts automation bias by forcing active engagement rather than passive acceptance.
  • Third, it provides a stronger safety signal than disclaimers, because it reveals what the user is actually about to do.

Of course, teach-back should be used selectively, not everywhere. It belongs in moments where misunderstanding carries real risk, such as medication use, symptom interpretation, or care navigation. But the larger point is simple. In healthcare, the goal is not just to provide correct information. The goal is to support correct action. Incorporating teach-back into ChatGPT Health can help bridge that gap, and the sketch below shows one way that flow might work.
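Here is a rough, hypothetical sketch of a teach-back loop. The trigger list, the naive keyword check, and every function name are assumptions for illustration; a real system would need a far more careful way to decide when teach-back is warranted and whether a restatement reflects genuine understanding.

```python
# A rough, hypothetical sketch of a teach-back loop. The trigger topics,
# the keyword check, and the wording are all illustrative assumptions.

HIGH_STAKES_TOPICS = {"medication", "dosage", "symptom", "emergency"}  # assumed trigger list


def needs_teach_back(explanation: str) -> bool:
    """Invoke teach-back only when misunderstanding carries real risk."""
    return any(topic in explanation.lower() for topic in HIGH_STAKES_TOPICS)


def missing_key_points(restatement: str, key_points: list[str]) -> list[str]:
    """Return the key points the user's restatement does not mention.

    A naive keyword check stands in for a real understanding check here.
    """
    return [p for p in key_points if p.lower() not in restatement.lower()]


def teach_back(explanation: str, key_points: list[str]) -> None:
    print(explanation)
    if not needs_teach_back(explanation):
        return
    restatement = input(
        "Just to make sure I explained that clearly: "
        "what do you plan to do next, in your own words? "
    )
    missed = missing_key_points(restatement, key_points)
    if missed:
        # Correct course before the person acts on a misunderstanding.
        print("Let me go over a couple of points again: " + "; ".join(missed))
    else:
        print("That matches the key points from your record and instructions.")


if __name__ == "__main__":
    teach_back(
        "Your new medication should be taken once daily with food.",
        key_points=["once daily", "with food"],
    )
```

Notice that the loop ends with the user’s restatement, not the system’s answer. That is the whole design shift: understanding, not fluency, is the stopping condition.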

How should people use ChatGPT Health right now?

After all of this, you are probably thinking something like – why would I ever use ChatGPT Health given all that you just said? As I said, I like the promise of ChatGPT Health, and I’ll use it myself. Yet this is the advice I would give if someone asked me how to use ChatGPT Health responsibly, especially if they already feel overwhelmed by health information.

1. Use it to prepare, not to decide.

Let ChatGPT help you organize questions for a clinician visit, summarize what happened at your last appointment, and identify what you do not understand. Do not treat it as the final word on what something means or what you should do.

2. Ask it to show its work.

If ChatGPT Health makes a claim about your labs, diagnoses, or medications, ask where in the record that information is coming from. If it cannot point to something concrete, treat the answer as a draft, not a conclusion.

3. Treat confident tone as a reason to slow down, not speed up.

A calm, confident explanation can feel reassuring, but ChatGPT Health’s tone is not a safety feature. If an answer sounds confident or too certain, that should trigger verification, not action.

4. Be deliberate about privacy.

If you are uncomfortable with the privacy tradeoffs, do not connect or upload your health records. You can still use the tool as a general explainer. If you do connect records, read the Health Privacy Notice and review your settings.

5. Do not use it as your only support when stakes are high.

If you are in acute distress or facing an urgent medical decision, involve a real clinician or trusted person. Don’t outsource your judgment or decision-making to ChatGPT Health.

What it all means

ChatGPT Health could help millions of people engage more effectively with their health care. I want that outcome.

OpenAI has taken great steps by creating a dedicated Health space and making public commitments about privacy and scope. The harder work now is making sure the people most at risk of misunderstanding are not the ones who bear the cost for everyone else’s convenience. If we get that right, this is not just a new feature. It is a public health intervention.
