The UK’s push to embed artificial intelligence (AI) into critical national infrastructure (CNI)—including power grids, water systems, and transport networks—is raising red flags among cybersecurity and academic experts who warn the technology remains too immature to manage its associated risks.
Francesca Boem of UCL cautions that AI’s integration into infrastructure systems introduces complex cyber-physical vulnerabilities. She warns adversaries could exploit AI decision-making to destabilise energy or water systems, particularly as AI is used in sensing, forecasting, and automation.
AI’s dual dependence on software and hardware broadens its attack surface, exposing it to threats such as data poisoning, prompt injection, and model manipulation. Experts such as Richard Allmendinger (Manchester Business School) stress that even small tampering with training data can have disproportionate impacts, from power outages to water contamination.
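To illustrate the kind of data-poisoning risk the experts describe, consider a deliberately simplified sketch: a naive statistical anomaly detector trained on sensor readings. All values and the detector itself are hypothetical, chosen for illustration; real CNI monitoring systems are far more sophisticated, but the underlying dynamic — a few corrupted training samples widening the model's notion of "normal" — is the same.

```python
# Hypothetical sketch: poisoning a naive anomaly detector's training data.
# Values are illustrative only, not drawn from any real system.
from statistics import mean, stdev

def fit_threshold(readings, k=3.0):
    """Flag any reading more than k standard deviations above the mean."""
    return mean(readings) + k * stdev(readings)

# Clean training data: pressure readings clustered tightly around 50.
clean = [50.0 + 0.1 * (i % 10) for i in range(100)]

# Poisoned data: an attacker slips 5 inflated readings into the
# training set, stretching the detector's sense of "normal".
poisoned = clean[:95] + [80.0] * 5

clean_limit = fit_threshold(clean)        # ~51.3
poisoned_limit = fit_threshold(poisoned)  # ~71.3

attack_reading = 70.0  # a clearly abnormal pressure spike
print(attack_reading > clean_limit)     # → True  (detected)
print(attack_reading > poisoned_limit)  # → False (missed after poisoning)
```

Tampering with just 5% of the training data raises the alarm threshold enough that a genuine anomaly sails through undetected — the disproportionate impact Allmendinger warns about.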
Operational complexity further compounds the risk. Noel Chinokwetu of Orange Cyberdefense highlights that many sectors are still aligning their IT and OT systems. He cautions against rushing AI adoption, especially given known issues such as hallucination in the large language models (LLMs) that underpin tools like ChatGPT.
Despite initiatives such as AI sandboxes by the Office for Nuclear Regulation and the FCA-Nvidia partnership for financial AI testing, experts say coverage remains insufficient. Without rigorous real-world testing, AI failures in CNI could propagate rapidly and dangerously.
A recent study shows nearly 75% of CNI organisations fear AI-enabled threats, including phishing, automated hacking, and adaptive cyberattacks. To mitigate risks, experts call for strict human oversight, clear role delineation between AI and operators, robust security frameworks, and transparent regulatory standards.
While AI offers transformative potential for UK infrastructure, safe deployment will depend on security-first strategies, strong regulation, and responsible design to avoid systemic vulnerabilities.
Created by Amplify: AI-augmented, human-curated content.