Healthcare giant Optum has restricted access to an internal AI chatbot after it was inadvertently exposed to the public internet. A security researcher discovered that the chatbot, used by Optum employees to ask questions about handling patient health insurance claims, could be reached from any standard web browser.
While the exposed AI chatbot, referred to as the “SOP Chatbot,” did not appear to contain sensitive personal data, its exposure comes at a precarious time for its parent company, UnitedHealth. The conglomerate has faced scrutiny over the use of AI tools to potentially override medical decisions made by doctors and deny patients’ claims. Mossab Hussein, chief security officer at cybersecurity firm spiderSilk, who reported the finding, noted that the chatbot was hosted on an internal Optum domain but could be reached via its public IP address without requiring a password.
How long the chatbot was publicly accessible is unclear, but it was taken offline shortly after TechCrunch contacted Optum about the exposure. Andrew Krejci, an Optum spokesperson, defended the tool, stating it was merely a demo developed as a proof of concept that had never been put into production.
Krejci further clarified that this demo was intended to evaluate the chatbot’s responses to queries based on a limited sample set of standard operating procedures (SOPs) and confirmed that no protected health information was involved in the chatbot’s training. The spokesperson emphasized that the chatbot was designed simply to enhance access to existing SOP documents and did not entail any decision-making capabilities.
Despite those assurances, the exposure is still notable: AI chatbots generate responses based on their training data, and Optum’s chatbot had been trained on internal documents pertaining to health insurance claims, effectively guiding employees through queries about claims processing. Records show that employees had used the chatbot hundreds of times since its launch in September, with logged interactions ranging from claim disputes to eligibility inquiries.
Interestingly, employees sometimes engaged the chatbot beyond its intended scope, asking it to tell jokes or trying to coax it past its programming constraints. At one point, the chatbot even generated a seven-paragraph poem on the theme of claim denial, showcasing the unpredictability of such AI models.
Optum, part of UnitedHealth Group, is currently under significant criticism for its reliance on AI to adjudicate health claims. Reports have surfaced of patients facing anguish and frustration over denied coverage, prompting legal actions against UnitedHealthcare. Allegations have arisen that an AI model with a reported 90% error rate was used in place of actual medical professionals to make critical healthcare decisions. As the company grapples with these allegations, it finds itself caught between the benefits AI promises in healthcare and the risks it potentially brings.