In a rapidly evolving technological landscape, the adoption of AI services in the cloud is outpacing the development of appropriate security controls. A recent report from cybersecurity firm Wiz reveals that while nearly 90% of organizations are utilizing AI cloud services, less than 15% have established AI-specific security measures.
The findings from Wiz’s report, titled AI Security Readiness: Insights from 100 Cloud Architects, Engineers, and Security Leaders, indicate that 87% of organizations are employing AI services, including platforms like OpenAI and Amazon Bedrock. However, 31% of respondents cited a lack of AI security expertise as their top security concern, making it the most frequently mentioned challenge.
The report highlights an alarming disconnect between security teams and the systems they are tasked with protecting: many security professionals are expected to secure AI technologies they may not fully understand. This expertise gap creates potential vulnerabilities and expands the risk surface in many organizations. The report points to tooling and automation as critical components for mitigating the skills gap.
Despite the clear challenges, only 13% of organizations have adopted AI-specific security posture management (AI-SPM) tools. Instead, the majority continue to employ traditional security controls that may not be adequate for the unique challenges introduced by AI, such as model access vulnerabilities and the risk of poisoned training data. The reliance on practices like secure development (53%), tenant isolation (41%), and audits for identifying shadow AI (35%) illustrates this trend.
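The shadow-AI audits mentioned above often start with scanning cloud audit logs for calls to known AI service endpoints. The following is a minimal sketch of that idea, assuming CloudTrail-style event records have already been exported; the `AI_EVENT_SOURCES` list and record format are illustrative assumptions, not an official schema from Wiz or any cloud provider.

```python
# Sketch: flag possible "shadow AI" usage in exported audit-log events.
# Event fields and the AI service list below are hypothetical examples.
from collections import Counter

AI_EVENT_SOURCES = {
    "bedrock.amazonaws.com",    # Amazon Bedrock
    "sagemaker.amazonaws.com",  # Amazon SageMaker
    "api.openai.com",           # OpenAI (e.g. seen in egress/proxy logs)
}

def find_shadow_ai(events):
    """Count AI-service calls per principal (user or role)."""
    hits = Counter()
    for event in events:
        if event.get("eventSource") in AI_EVENT_SOURCES:
            hits[event.get("userIdentity", "unknown")] += 1
    return hits

# Example with fabricated events:
sample = [
    {"eventSource": "bedrock.amazonaws.com", "userIdentity": "role/data-sci"},
    {"eventSource": "s3.amazonaws.com", "userIdentity": "role/etl"},
    {"eventSource": "api.openai.com", "userIdentity": "user/alice"},
]
print(find_shadow_ai(sample))
```

A real audit would pull events from the provider's audit API rather than a static list, but even this simple tally surfaces which principals are calling AI services that security teams may not know about.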
The proliferation of hybrid and multi-cloud infrastructures is further complicating security measures. The report states that 45% of organizations are utilizing hybrid cloud environments and 33% are employing multi-cloud strategies. Notably, 70% of organizations still depend on endpoint detection and response (EDR) tools, which are designed for centralized, endpoint-focused environments rather than distributed AI workloads. Alarmingly, a quarter of respondents admitted they lack visibility into the AI services operating within their own ecosystems.
Organizations must recognize that security needs extend beyond technology alone. The features respondents want in AI security tools point heavily to operational and workflow concerns, suggesting that effective communication and comprehensive strategies are key to advancing AI security.