Judith Miller, a Milwaukee resident, faced a common dilemma when she received her lab test results online and had questions about her elevated carbon dioxide levels. While she waited for clarification from her doctor, she turned to the AI assistant Claude to interpret her results. Miller found that the AI gave her a clear explanation of what her test results meant, easing her anxiety as she awaited further consultation.
Her experience reflects a growing trend: patients now have unprecedented access to their medical records, thanks to federal regulations mandating the timely release of health information through online portals like MyChart. Increasingly, patients are relying on large language models (LLMs) such as ChatGPT, Anthropic’s Claude, and Google’s Gemini to make sense of their medical information.
However, this reliance comes with significant risks. Healthcare professionals and patient advocates caution that AI-driven chatbots might produce inaccurate information and compromise the privacy of sensitive health data. A 2024 KFF poll revealed that a majority of adults—56%—expressed skepticism about the accuracy of AI-generated health-related information.
Experts acknowledge that while LLMs can be incredibly powerful and offer valuable insights, they are not infallible. Adam Rodman, an internist and chair of a steering group on generative AI at Harvard Medical School, points out that the quality of AI’s advice can vary significantly based on how the patient frames their inquiries. This sentiment was echoed by Justin Honce, a neuroradiologist, who emphasized the challenges individuals face in verifying the accuracy of AI responses.
Survey data suggest that the use of AI for health questions is on the rise: about one in seven adults over the age of 50 use AI to inform their healthcare decisions, a share that climbs to one in four among adults under 30. This reflects a broader habit of turning to the internet for proactive health management, a practice that has evolved from simply consulting websites to asking AI to generate personalized health insights.
Still, caution is advised when interacting with these AI tools. Liz Salmi, a communications director at OpenNotes, conducted a recent study on the accuracy of AI interpretations. The findings indicated that chatbots perform better when patients pose questions clearly and precisely, in a way that simulates a clinical discussion. Privacy is another paramount concern, since patients may inadvertently share personal health information that technology companies could misuse.
Another significant concern is AI “hallucination,” in which an LLM’s output sounds plausible and authoritative but contains incorrect or fabricated details. Salmi emphasized the importance of new digital health literacy habits, such as cross-checking AI claims against trusted sources and following up with healthcare professionals, whenever patients bring AI into their care.
Physicians, too, are putting AI to work. Stanford Health Care, for example, has developed an AI assistant that helps clinicians draft interpretations of lab results for patients. A recent study found that while AI-generated summaries of radiology reports clarified information for many patients, they also sometimes caused confusion by oversimplifying or misstating details.
In Miller’s case, she raised her concerns with her doctor and, drawing on her AI consultations, suggested additional tests; the follow-up results came back normal. Ultimately, she found that using AI empowered her to manage her health more actively. “It’s a very important tool in that regard. It helps me organize my questions and do my research and level the playing field,” she said. Her experience underscores the dual-edged nature of integrating AI into healthcare: it can enhance understanding and patient empowerment, but it also raises critical concerns that require careful navigation.