New law criminalises non‑consensual AI‑generated intimate images

England and Wales introduce legislation targeting AI tools that produce sexualised images of individuals without consent, drawing on academic research and responding to public safety concerns.

In a significant legal milestone, a new law has come into force in England and Wales making it a criminal offence to use AI tools to generate intimate images of a person without their consent. The legislative change responds directly to recent incidents in which the AI chatbot Grok on X was used to create sexualised depictions of individuals, prompting a public outcry over safety, consent and technological abuse.

The law's foundation lies in the scholarly work of Professor Clare McGlynn, whose research has been instrumental in shaping the legislation. Her findings on the harms of non-consensual intimate image generation underpinned ministerial efforts to close legal loopholes exploited through AI tools. The policy demonstrates a rare and swift translation of academic insight into statutory protection, tightening the UK's response to emerging digital harms.

As the law takes effect this week, it reflects broader societal concern about AI misuse and online safety. Policymakers emphasise that criminalising such AI-generated abuse sends a clear message: technological advancement must not come at the expense of human dignity and consent. Those found in breach may face criminal prosecution, signalling a firm stance against digital exploitation.

This development positions the UK as a leader in anticipating and addressing societal risks posed by AI. By embedding consent and safety into legal frameworks, the government demonstrates a commitment to ethical boundaries, even as technology evolves. The law could serve as a template for other jurisdictions grappling with similar challenges.

The new law affirms that consent remains paramount, even in the digital age.

This article has been produced by generative AI.