Shadow AI in Healthcare: Risks and Insights

Jan 24, 2026 | AI Trends

A recent survey published by Wolters Kluwer Health found that a significant number of healthcare workers are using artificial intelligence (AI) tools that have not been approved by their organizations. This phenomenon, known as “shadow AI,” raises concerns about patient safety and data privacy. According to the survey, over 40% of medical professionals and administrators reported being aware of colleagues using unauthorized AI products, and nearly 20% admitted to having used such tools themselves.

Dr. Peter Bonis, chief medical officer at Wolters Kluwer, highlighted the potential consequences of using unapproved tools. While these tools may be useful to individual users, the absence of vetting by health systems raises questions about their safety and efficacy. Users may not fully recognize the risks involved or understand the implications of inaccuracies these tools can produce.

Cybersecurity Risks and Healthcare Concerns

Experts warn that shadow AI presents a substantial security risk across various industries, with the healthcare sector being particularly vulnerable. The clandestine nature of shadow AI means that organizational leaders and IT departments lack oversight of how these tools are utilized, creating opportunities for cyberattacks and data breaches. Healthcare organizations are already frequent targets for cybercriminals due to their valuable data and the critical nature of care delivery.

The risks associated with shadow AI usage are heightened in healthcare settings where accuracy is paramount. Misleading or inaccurate information from AI tools could directly harm patients, an issue that about a quarter of providers and administrators cited as their primary concern regarding AI in healthcare.

Drivers Behind Shadow AI Usage

Despite the risks, AI technologies continue to capture interest in the healthcare sector, driven by the potential to process vast amounts of data and enhance workflow efficiency. In the survey, more than half of administrators and 45% of care providers reported using unauthorized AI tools primarily due to their ability to streamline workflows. Additionally, nearly 40% of administrators and 27% of providers were drawn to these tools for their superior functionality or because no approved alternatives were available.

Curiosity and the desire for experimentation have also played a role, with over 25% of providers and 10% of administrators citing these reasons for their engagement with shadow AI.

Awareness of AI Policies in Healthcare

Another key revelation from the survey is that many healthcare workers are unaware of their organizations’ AI policies. Although administrators tend to be more involved in shaping these policies than care providers, only 17% of administrators reported awareness of their organizations’ main AI policies, compared with 29% of providers. The disparity underscores a potential knowledge gap. One possible explanation is the adoption of AI scribes, which record conversations between clinicians and patients and may have contributed to providers’ greater familiarity with AI policies.

The survey captures a critical juncture for healthcare organizations as they weigh the benefits and risks of emerging AI technologies. Clear policies and education about them are essential to mitigate the dangers posed by shadow AI while harnessing its potential to improve patient care and operational efficiency.