The UK’s conversation about AI has taken a decisive turn. According to new research from Absolute Security, four in five UK Chief Information Security Officers (CISOs) believe the government must regulate DeepSeek before it triggers a national cyber crisis. The finding, from the 2025 UK Resilience Risk Index, reflects growing alarm over how AI tools are outpacing organisational and policy readiness.
The report, based on a May 2025 Censuswide survey of 250 enterprise CISOs, captures a country navigating the dual realities of accelerating AI capability and rising cyber complexity. Sixty percent of respondents expect cyber-attacks to increase due to AI, with the same proportion reporting that AI is already complicating privacy and governance frameworks.
Organisations are reacting with caution. More than a third (34%) have banned certain AI tools outright over security concerns, and 30% have already pulled AI deployments from their environments. Forty-six percent admit their teams are not yet equipped to handle AI-driven threats, a readiness gap many believe must be closed through national policy, clearer governance and accelerated upskilling.
Despite the risks, investment in capability is growing. Eighty-four percent of CISOs say they are prioritising AI talent in 2025, and 80% are committing to executive-level training. As one executive put it, the speed of DeepSeek’s development is outstripping defences—making regulation not just prudent, but essential to maintaining public trust.
Security concerns are compounded by geopolitical context. Hosted in mainland China, DeepSeek raises questions over data sovereignty and the risk of confidential information being exposed to foreign jurisdictions. “Uploading confidential data to a cross-border model could effectively export it to China,” said Andy Ward, SVP International at Absolute Security. “This is not theoretical—organisations are already adjusting their risk models and banning tools in response.”
That caution is spreading. Australia has banned DeepSeek from government devices, and other governments are weighing similar measures. In the UK, officials are reviewing how tools like DeepSeek align with national security and data-governance standards. As the London Evening Standard reports, the government is applying a security-by-design lens to its assessments and emphasising due diligence around cross-border data flows.
Technology Secretary Peter Kyle has called on Western democracies to lead with responsible governance ahead of a global summit in Paris. Speaking to the Guardian, he framed the UK’s strategy as one that balances innovation with security and transparency—pointing to initiatives like the AI Action Plan and growth zones for data centres as part of a broader national framework.
Some policymakers caution against an overly restrictive approach. The BBC reports that while AI tools such as DeepSeek carry clear risks, they also offer significant productivity potential. The challenge, these policymakers argue, is to manage deployment safely without ceding leadership in AI development to overseas competitors.
The data suggest a practical path forward for the UK:

– Enact clear rules and oversight, with guidance on data use, governance, monitoring and cross-border risks.
– Invest in AI skills and governance, ensuring technical and executive leaders understand the ethics and risks of AI deployment.
– Balance speed with safeguards, avoiding blanket bans while embedding strong protections around sensitive data and infrastructure.
– Learn from global examples, adapting best practices from countries managing AI risks at state level.
DeepSeek may be the immediate focus, but the wider message from the UK’s CISOs is clear: the age of agentic AI demands a new regulatory architecture. With the right combination of policy, talent and leadership, the UK can lead in building a secure and responsible AI ecosystem—where innovation serves both progress and protection.