
Gartner has predicted that by 2030, more than 40% of global organizations will suffer security and compliance incidents linked to the use of unauthorized AI tools. The projection underscores the growing prevalence of shadow AI in the workplace.
A survey conducted earlier this year found that 69% of cybersecurity leaders have evidence or suspicion that employees are using public generative AI (GenAI) tools at work. The risks these tools pose, including intellectual property loss and data exposure, are increasingly apparent: Samsung notably banned internal GenAI use in 2023 after employees shared sensitive information with ChatGPT.
To mitigate these risks, Gartner advises CIOs to establish clear enterprise-wide policies governing AI tool usage, conduct regular audits to detect shadow AI activity, and incorporate GenAI risk evaluations into their software-as-a-service (SaaS) assessment processes. The guidance reflects the challenges organizations face as they navigate the introduction of GenAI tools.
Gartner’s findings echo several other studies. Strategy Insights, for instance, reported that more than a third of organizations in key markets, including the US, UK, Germany, the Nordics, and Benelux, struggle to monitor unauthorized AI use, and a RiverSafe study found that 20% of UK firms had potentially exposed sensitive corporate data through employees’ use of GenAI.
A separate survey by 1Password last month found that 27% of employees admitted to using non-sanctioned AI tools, underscoring the need for organizations to enforce stricter compliance standards as the trend grows.
Gartner also flagged concerns beyond unauthorized usage, warning that even legitimate use of GenAI can create technical debt. Its analysis predicts that by 2030, 50% of enterprises will face delayed AI upgrades and rising maintenance costs due to unmanaged GenAI-related technical debt; those delayed upgrades can themselves pose significant security risks.
As Gartner Distinguished VP Analyst Arun Chandrasekaran notes, enterprises are drawn to GenAI’s promise of rapid delivery, but the high cost of maintaining, fixing, or replacing AI-generated content could negate the expected return on investment. He stresses the importance of establishing standards for reviewing and documenting AI-generated assets and of tracking technical debt metrics through IT dashboards.
Over-reliance on GenAI brings further complications, such as ecosystem lock-in and the erosion of skill sets. Chandrasekaran advises organizations to identify where human judgment and craftsmanship remain critical and to design AI solutions that complement, rather than replace, those skills. He also urges CIOs to prioritize open standards, open APIs, and modular architectures in their AI strategies to avoid dependence on a single vendor.