UKAI

Hassabis urges "smart regulation" as pressure mounts on UK AI oversight

Demis Hassabis, CEO of Google DeepMind, has urged UK policymakers and business leaders to adopt “smart regulation” of artificial intelligence during a speech at the inaugural SXSW event in London. He warned that ineffective oversight could lead to unintended consequences and a loss of public trust. “AI is the most important technology humanity is working on. We should be making sure we do it properly – in a way that’s safe, that gets public buy-in, and that unlocks economic value,” said Hassabis.

His intervention comes amid growing pressure on the UK government to close gaps in its regulatory framework. Business leaders, campaigners and lawmakers are calling for clear rules to guide the safe and responsible use of AI. A recent report by the Ada Lovelace Institute highlights serious deficiencies in the governance of biometric technologies, warning that the UK’s fragmented approach risks turning it into a “wild west” for facial recognition, jeopardising privacy and civil liberties.

Facial recognition technology used by UK police and retailers has drawn particular concern. Law enforcement scanned nearly five million faces in 2024, resulting in over 600 arrests. Critics, including Privacy International, argue that current laws lack the safeguards needed to protect human rights.

Hassabis criticised the “move fast and break things” ethos of Silicon Valley, arguing that AI demands caution. “For something this fundamental, it is important to try and have as much foresight ahead of time as you can,” he said. He called for an ethical framework grounded in public engagement and long-term thinking.

The need for reform is echoed in Parliament. Conservative peer Lord Chris Holmes has proposed a new AI authority to enforce standards on safety, transparency and accountability. Others, including Lord Tim Clement-Jones, have raised concerns about opaque algorithmic decisions in areas such as social welfare and immigration, where the lack of transparency limits redress.

While the UK has followed a "principles-based" regulatory model, ministers have signalled a shift towards binding legislation. Following the King's Speech last July, plans were announced to impose legal duties on AI developers via the upcoming Digital Information and Smart Data Bill. The AI Safety Institute has also been tasked with strengthening oversight and compliance.

Hassabis likened the urgency of AI regulation to the climate crisis, suggesting the creation of global oversight bodies similar to the Intergovernmental Panel on Climate Change. He stressed the importance of international cooperation as AI’s rapid development poses profound societal risks.

The Ada Lovelace Institute has called for a unified regulatory approach that addresses the concentration of power among a few dominant firms while prioritising the public good. Survey data shows that 72% of the UK public support AI regulation, with 88% backing government action to mitigate harm once systems are deployed.

As the UK charts its regulatory course, the focus is shifting from rule-making to trust-building. With industry leaders and lawmakers aligned on the need for action, the decisions made now could shape the global future of AI governance.

Created by Amplify: AI-augmented, human-curated content.