AI’s transformative promise is colliding with a front-line security reality: tools such as China-based DeepSeek are forcing UK Chief Information Security Officers (CISOs) to weigh innovation against the risk of catastrophic data exposure.
Absolute Security’s UK Resilience Risk Index 2025, based on a poll of 250 CISOs, finds 60% expect cyber-attacks to increase as AI adoption accelerates. Respondents cited stretched patching timelines, failing endpoint controls and governance gaps, and called for stronger oversight of third-party models.
Many firms have responded by banning or suspending certain AI tools while drafting new policies, training staff and recruiting AI specialists. Few are abandoning AI altogether — instead building safer adoption pathways, with C-suite training and governance now a boardroom priority.
The concern is far from abstract. DeepSeek’s rise has prompted bans and investigations overseas. Australia barred the app from government devices, citing an “unacceptable risk” to official systems, while Germany’s data protection authority has pressed Apple and Google to block it over possible EU privacy breaches. Multiple countries have since launched probes.
At the heart of the issue is the risk of sensitive data leaving organisations unchecked. “Predominantly it’s about data sovereignty and data governance: where is your data, who has access to it?” said Andy Ward, SVP International at Absolute Security. Supplying DeepSeek with corporate files, he warned, was like “printing out and handing over” confidential information.
Academics and industry researchers have echoed the concern, warning that speed without safeguards heightens risks ranging from AI-driven phishing to automated reconnaissance for cyber-attacks. Recent retail breaches, including incidents at Harrods, have underscored the stakes.
Absolute’s report urges firms to pivot from prevention-only thinking to resilience, investing in recovery planning, tighter oversight and staff education. CISOs are also asking for government action:
- Clear rules on data flows to overseas AI providers, with contractual and technical safeguards.
- A national incident-recovery playbook tailored to AI-enabled threats.
- Accredited UK labs to certify AI models for enterprise use.
- A sustained skills pipeline for governance and model-risk teams.
Regulators are signalling readiness to intervene, with bans on government devices and probes into cross-border data flows already under way. For the UK, early action could protect sensitive systems while helping British firms compete internationally by exporting trustworthy AI standards.
Security leaders stress their calls are not anti-innovation. They argue that robust governance and resilience will enable faster, safer access to productivity gains while reducing regulatory and supply-chain risk. As the report concludes, AI will transform industry — the question is whether the UK can build the safeguards that let innovation flourish securely.
Created by Amplify: AI-augmented, human-curated content.