AI Is Reshaping Professional Indemnity Risk—and Insurers Are Responding

A shift in the concerns of professional indemnity insurers is under way, with artificial intelligence now eclipsing cyber threats as the primary risk focus. The change reflects the growing use of AI across sectors such as law, accountancy and architecture, which brings operational efficiencies but also new liabilities.

According to a leading law firm, insurers are increasingly asking tough questions about AI deployment. How are outputs from generative AI being verified? Is confidential client information properly protected? These questions go to the heart of the professional services risk profile in an age of rapid AI adoption.

The firm, which has used AI legal tools and developed its own chatbots since 2017, acknowledges both the benefits and the risks. Recent court rulings have exposed the dangers of inadequate human oversight. In Wikeley v Kea Investments Ltd, a litigant submitted an AI-drafted memorandum containing fabricated legal citations. Similar cases in England and Wales have seen lawyers sanctioned for relying on AI-generated references without proper verification, failures that brought reputational damage and regulatory scrutiny.

The liabilities extend beyond the legal profession. Engineers, architects and auditors also face potential claims if flawed AI outputs result in financial loss or harm. Yet many firms lack formal governance or audit structures for AI use, and few provide staff with proper training on AI risks, a gap that makes insurers wary.

Confidentiality is another critical concern. Most commercially available AI tools do not guarantee data privacy. Client information entered into these systems may be reused in training, risking inadvertent disclosure. Some firms use proprietary AI tools to retain control, but these are often less advanced than public models. There are also concerns about copyright breaches in AI-generated content, adding further exposure.

Insurers are now demanding stronger safeguards. These include clear policies on tool usage, mandatory training, human oversight, and governance frameworks. New Zealand’s Public Service AI Framework offers a model, emphasising safe, transparent and accountable AI practices.

Looking ahead, insurers are likely to require documentation of AI usage procedures, evidence of risk-appropriate task allocation, security protocols, and staff training. Questions will probe whether AI is used for low-risk tasks or critical decision-making, whether systems are externally sourced or built in-house, and what human oversight is in place. Firms unable to provide satisfactory answers could face higher premiums, reduced coverage or exclusions specific to AI risks.

This emerging scrutiny mirrors broader concerns. Reuters has highlighted the risk of privacy breaches, reputational harm and regulatory exposure from AI and deepfake technologies. Legal malpractice experts warn that failing to verify AI work or safeguard client data may breach ethical standards and void cover. Trade bodies such as the Lloyd’s Market Association note that while AI improves efficiency, it also introduces new forms of professional liability.

Senior executives, too, are under pressure. Directors and officers may face claims under D&O policies if AI use is poorly disclosed or governed, raising the stakes for board-level oversight and transparency.

The wider message is clear: the AI megatrend is transforming the professional liability landscape. While AI offers substantial promise, its adoption demands rigorous governance, ongoing education and robust risk frameworks. For the UK, this presents a chance to lead in responsible AI deployment, securing both innovation and trust in professional services.

Created by Amplify: AI-augmented, human-curated content.