16 Mar 2026

Tackling Misogyny in AI: UKAI’s report sets out a practical agenda for responsible AI

UKAI has published a new report, Tackling Misogyny in AI, setting out the findings and recommendations from a parliamentary roundtable held at the House of Lords late last year. Bringing together parliamentarians, regulators, industry leaders, technologists, academics and civil society organisations, the discussion focused on one urgent question: how do we ensure that AI does not entrench and amplify misogyny at speed and scale?

The report forms part of UKAI’s wider commitment to addressing some of the most pressing challenges around AI, and to supporting the development of a responsible AI industry in the UK. It also builds on UKAI’s earlier work on Taking Responsibility for Diversity and Bias in AI, extending that conversation into a sharper examination of gendered harms, accountability and enforcement.

The report makes clear that misogyny in AI is not a future risk. It is already happening, particularly within generative AI applications. Participants highlighted the rapid growth of AI-enabled harms including deepfake sexual imagery, online harassment, and discriminatory automated decision-making in areas such as recruitment, finance, healthcare and criminal justice. The report argues that women and girls often experience these harms first, with their experiences serving as early warning signs of wider systemic failure.

A central message from the roundtable is that misogyny in AI cannot be dismissed as a narrow technical problem or treated simply as another form of bias. AI systems can reproduce, legitimise and intensify harmful norms, particularly when they are trained on biased data, deployed without adequate safeguards, or designed without meaningful diversity in the teams shaping them.

But the report is also constructive, recognising the potential of AI-powered solutions to address some of these challenges. Rather than presenting the problem as inevitable, it sets out a practical agenda for action. Its recommendations span regulation, standards, public procurement, transparency, digital literacy, law enforcement and industry practice. Together, they amount to a roadmap for reducing harm while supporting innovation that is safe, trustworthy and fit for society.

Among the report’s key themes are the need for clearer accountability across the AI supply chain, stronger protections against AI-enabled abuse, better tools for transparency and redress, and a more serious focus on representation and responsibility in the development of AI systems. It argues that voluntary action alone will not be enough, and that the UK has an opportunity to show leadership by embedding responsibility into the way AI is built, bought and governed.

As AI adoption accelerates across the economy and public services, questions of safety, fairness and accountability are becoming impossible to ignore. UKAI believes that AI should be a force for good, and that addressing these challenges is essential to building public trust and unlocking the full benefits of AI to drive economic growth and social progress across the UK.

Tackling Misogyny in AI is part of that wider mission. It reflects UKAI’s ongoing work to help build a responsible AI industry in the UK: one that is innovative, competitive and grounded in clear social responsibility.

To explore the full findings and recommendations, members can download the full report here: https://ukai.co/resource-report/tackling-misogyny-in-ai.html