As we progress through 2025, one term that has come to encapsulate the business landscape is ‘artificial intelligence’ (AI). AI has established a marked presence across business operations, from human resources to client interactions and beyond. In the insurance sector, firms are diligently weighing the technology’s many benefits and potential challenges as they seek the most effective and ethical means of adoption.

Research by McKinsey & Company underscores AI’s impactful role within insurance businesses, highlighting a remarkable 10%-20% improvement in the success rates of new agents, alongside a 10%-15% uptick in premium growth.

In cyber insurance especially, a dynamic risk landscape is emerging that poses significant challenges for underwriters. Bryan Barrett, regional underwriting manager for cyber at Munich Re Specialty, told Insurance Business that his team is not only tracking the evolving risks but also questioning whether the industry is adequately equipped to underwrite these new exposures. He emphasized, “AI is evolving every day; understanding how AI is created is crucial as it can significantly impact organizations and users.”

The need for effective AI governance is becoming increasingly pressing. As Barrett points out, implementing AI often involves numerous organizational touchpoints that can introduce security vulnerabilities. Organizational leaders must prioritize closing these gaps while ensuring compliance when gathering information from applicants, especially given the confidentiality surrounding proprietary technologies.

Despite the rapid growth of AI, governance frameworks have yet to keep pace. Barrett cites the emergence of litigation related to discrimination, intellectual property infringement, and data privacy violations as clear indicators of the stakes involved. Organizations are urged to establish robust AI governance boards to devise responsible frameworks that encompass policies, training, and rigorous security measures.

Barrett likened AI development and deployment to parenting: just as a child learns from its environment, AI requires careful guidance through its learning process. “AI is constantly learning but does not always process information correctly. Without careful monitoring, it remains susceptible to errors,” he said. The analogy underscores the human oversight essential throughout AI’s growth trajectory.

Furthermore, the potential pitfalls surrounding AI, including governance, training, and ethical model development, remain critical considerations. AI solutions perform significantly better when their foundational training is ethical and adheres to legal standards. Improper training, organizations are warned, can lead to severe legal repercussions, including infringement claims and reputational harm.

Ultimately, the overarching objective of integrating AI is to enhance efficiency and simplify processes. Barrett foresees AI making significant contributions to underwriting risk, claims adjustment, and operational tasks across various sectors, including manufacturing and insurance. AI’s advancement is intended to improve customer experiences while providing firms the tools necessary to navigate complexities in risk management.

At Munich Re Specialty, strategies are being refined to mitigate risks through collaborative efforts, exemplified by its Reflex Cyber Risk Management™ program, which provides essential services such as cybersecurity training and risk monitoring.

Barrett insists that AI is a permanent fixture in the business landscape: “AI will continue to evolve and gain autonomy. While there are bound to be growing pains, organizations must proceed with caution as governance remains crucial.”