People are already being asked to trust artificial intelligence in healthcare without being given the skills to question it. A year ago, that sentence would have sounded a little dramatic. Today, it feels almost boringly obvious.
Tools like ChatGPT Health are now designed to sit right between patients and their medical records, translating lab results, summarizing visits, and helping people figure out what questions to ask next. For many patients, that feels like relief. For others, it feels like authority. Often, it feels like both at the same time.
When I wrote about ChatGPT Health in early January 2026, my concern was not that the technology would fail. It was that it would work too well in a system where a lot of people already struggle to understand what their health information actually means. Fluent explanations are comforting. They can also quietly short‑circuit judgment.
And OpenAI is not alone here. Just days after ChatGPT Health began rolling out, Anthropic announced Claude for Healthcare, positioning its models as tools for clinical, research, and patient-facing contexts. In the first two weeks of 2026, then, multiple major AI systems are being explicitly framed as appropriate intermediaries in health-related decision-making. That convergence signals a directional shift, not a one-off product launch.
In the week since ChatGPT Health’s launch, I have also had many conversations with people about how frequently they are using AI for health advice. Some of the responses were what OpenAI would want: people using it to get more information, not to make healthcare decisions. Many others, though, are relying on it (and trusting it) to help make those decisions. For example, here are some highlights from a text exchange with a friend and former coworker about using AI for healthcare (shared with her permission):
“I’ll tell you what I like about ChatGPT is it remembers everything I’ve asked it so it’s like an ongoing conversation if that makes sense. I can go on there today and it will say ‘given all the other symptoms you have it means this…’ I find that helpful.
I think if doctors would be more helpful and explain things and actually talk to each other, then we wouldn’t need things like ChatGPT. For instance, my mom is battling cancer and has three doctors, and they don’t talk to each other about treatments and what’s going on. One says this and the other says no do this…it’s frustrating!
People want answers, and I think using AI gives them answers. I get that it’s a tool and not a medical doctor, so what AI says shouldn’t be final. But it’s hard doing that because of how the system is set up.”
What these conversations show is that turning elsewhere for advice when the health system breaks down is a very human response. In a sense, the system has already trained people to look outside it to make sense of their own care. When AI steps into that vacuum (and in my view it already has), the issue is no longer whether people should rely on it, but whether the system anticipated that they would.
Economist Impact’s 2025 report, From Promise to Impact: A Roadmap for Equitable AI in Healthcare, reaches the same conclusion from a very different vantage point. Rather than starting with patient experience, it looks at AI across entire health systems. The takeaway? AI can improve care, but without attention to literacy, trust, and governance, it risks widening the very inequities it is supposed to help reduce.
From patient experience to system recognition
My original post on ChatGPT Health focused on what happens when AI tools show up in everyday patient life. When someone uploads a lab report, connects a portal, or asks an AI system to explain what something “means,” that system is no longer just providing information. It is shaping interpretation and decision making. That matters because health decisions are rarely made under ideal conditions. They are made when people are tired, anxious, scared, distracted, or sitting at a kitchen table late at night thinking, I feel like I should know what this means by now.
Economist Impact’s report zooms out and looks at the same issue at scale. Their report is not a takedown of AI. The core message is that equity does not magically happen just because technology is involved. If AI systems are deployed without attention to literacy, engagement, and human oversight, they can harden existing disparities instead of softening them.
What the Economist Impact report is really saying
The report lays out a roadmap that follows AI across its lifecycle, from design and deployment to engagement and outcomes. You do not need to read it as a policy document to see how it fits. Here are my key takeaways from the report:
- Equity has to be designed in from the start. Systems trained on incomplete or unrepresentative data do not just perform poorly. They perform unevenly. And uneven performance almost always lands hardest on people who already face barriers to care.
- Trust is not a branding exercise. It does not come from a reassuring interface or a carefully worded disclaimer. It comes from people understanding what a system can do, what it cannot do, and who is still responsible when something goes sideways.
- Human judgment does not disappear just because AI enters the conversation. Clinicians are still making decisions. Patients are still making decisions. Caregivers are still making decisions. The real question is whether AI supports those judgments or quietly replaces them.
What I liked most about the report is how cleanly its roadmap maps onto what happens at the patient level. A system that produces smooth, confident explanations without making uncertainty visible invites over‑trust. A system that blurs the line between what is directly in the medical record and what is inferred invites confusion. Put those together and the health equity problem becomes impossible to ignore.
AI literacy as a condition of equitable care
One of the best things I took away from the Economist Impact report is the idea of treating literacy and engagement as outcomes, not assumptions. Access to AI tools is not the same thing as the ability to use them safely. This is where AI literacy becomes a health equity issue.
A person’s health literacy has never just been about reading ability. It is about whether people can find, understand, and use information to make decisions in the real world. AI literacy adds a new layer. It is the ability to understand what an AI system is doing, how much confidence its output deserves, and when to slow down and question the answer, instead of just following along.
As we now better understand, a person can be motivated, educated, and deeply invested in their health and still misunderstand how an AI system produces answers. They can also have limited health literacy and limited AI literacy at the same time. In both cases, fluent output combined with personal data creates a powerful nudge to trust what the system says. Economist Impact’s roadmap implicitly recognizes this risk: without shared understanding and meaningful engagement, AI does not level the playing field.
AI health tools expose the equity gap faster than policy can respond
Developing policy frameworks and AI governance regulations takes time. Human behavior does not wait. Consumer‑facing AI tools accelerate everything. They move AI out of institutional settings and into bedrooms, kitchens, and late‑night moments when someone opens a portal alert and just wants it to make sense.
When a system offers to explain your results or your record, expectations are set instantly. Disclaimers help, but they do not undo the basic psychological effect of confident, personalized explanations. What I like about the Economist Impact report is that it acknowledges this tension by emphasizing ongoing governance and accountability rather than one‑time approvals.
This is ultimately about retaining human judgment
AI in healthcare is no longer a hypothetical. It is already influencing how people interpret information and make decisions about their health. When AI becomes the explainer, human judgment does not fade into the background. It becomes the critical safeguard. The question is whether people are equipped to use that judgment—or whether fluent explanations will quietly replace it. That is why health literacy and AI literacy are equity issues. They are not just nice-to-haves. They are critical safety requirements. The Economist Impact report confirms what patient experience already shows: that this is not a user problem. It is a system-design problem.
So if you are in a position to design, deploy, or approve these systems, this is the key question: how are you designing systems to prioritize human judgment instead of subtly phasing it out?