As organisations scale AI adoption, they are also exposed to new and poorly understood cyber risks, from data leakage and model manipulation to insecure integrations, agentic behaviour, and third-party dependencies.

This invitation-only roundtable brings together a small group of senior experts from cybersecurity, AI, and enterprise technology to explore what secure AI adoption looks like in practice. Hosted in partnership with the Laboratory for AI Security Research, with contributions from the Alan Turing Institute and the Global Cyber Security Capacity Centre at the University of Oxford, the session focuses on real-world challenges as organisations move from AI experimentation to deployment.

We will focus in particular on how critical national infrastructure (CNI) organisations can secure their AI adoption: how to protect AI systems once adopted, manage data and model risks, and align with emerging guidance such as the DSIT AI Cyber Security Code of Practice.

This session is designed for senior leaders from CNI organisations who are responsible for AI adoption, cybersecurity, risk, and technology strategy.