Safeguarding the Future: Tackling Misogyny and Building Fair and Responsible AI
“Safety, sustainability, human rights and ethics have too often been forgotten in a headlong race for market supremacy and maximum profit”, argued feminist author and activist Laura Bates at the UKAI Tackling Misogyny in AI roundtable at the House of Lords on November 20th.
The event, chaired by Baroness Thangam Debbonaire, brought together leaders in AI, activism, business and politics, with the aim of raising real concerns, sharing potential solutions, and setting out a path forward for feminist, inclusive AI development in the UK.
As one speaker pithily put it: “being a feminist doesn't mean anything mean or nasty. It just means you believe in equality of opportunity for women. That's it, no magic”. The roundtable aimed to identify key initiatives that would be passed to the Government and industry as recommendations.
Laura Bates on Marginalisation and Misogyny
Bates described how social media is still a place where “women and marginalised groups endure daily abuse, harassment, bias and misinformation”. She expressed fear and concern about the current global moment of technological transformation. “We are poised on the precipice”, she warned, with every sector, from education to personal relationships, adopting AI tools at an “unprecedented” rate. Without “a meaningful regulatory framework”, technological developments could result in “catastrophic harm” and a landscape where technology will “only serve those already at the table”.
Building a Regulation and Diversity Framework
In the light of Bates’ opening comments, many in the roundtable supported the idea that AI companies should publish gender impact assessments, face penalties for misogynistic activity, and be assessed for bias in audits undertaken by independent authorities.

The roundtable suggested that public recognition of an AI firm’s ethical practice and inclusive policies could serve as a “badge of honour”. A well-recognised, industry-led code of ethics backed by certification could incentivise companies to strive for the highest ethical standards.
Parity and Transparency in Data Systems
Another crucial recommendation that surfaced during the roundtable was to independently verify and scrutinise data systems. From data collection to training processes, transparency should be the priority. Speakers noted the “misogyny in, misogyny out” pattern, one especially pertinent to AI, which learns through replication and repetition: if inputs are biased, the system will learn to ‘think’ in biased, unrepresentative ways, so vigilant verification processes will be vital.
Societal AI Literacy
The roundtable agreed that a national AI and digital literacy campaign would be vital, ingraining AI literacy and education for students from the outset. Students will need education in recognising bias, understanding AI systems, and using technology responsibly. Speakers suggested that this could be addressed through public information campaigns and changes to the curriculum.
Policing AI: More Stringent Measures
Speakers at the roundtable placed emphasis on criminalising non-consensual deepfake pornography, citing the Danish model, where proposed legislation (due to become law by the end of the year) would give users legal rights over their image, voice and facial features, explicitly to combat AI-generated deepfakes and unauthorised use of personal likeness. Speakers felt that the creation, distribution and possession of deepfake pornography should all be deemed criminal offences. Crucially, personal likeness and data should be treated as “owned assets” under both data protection and copyright law.

Several speakers also floated the concept of using AI to police AI, a suggestion which gained broad support across the group. Regulatory AI “watchdogs” would monitor, detect and report misogynistic activity, bias, and harmful content. Human, ethical oversight would need to govern the watchdogs’ operation to ensure and maintain transparency and fairness.
Final Thought
Bates concluded the roundtable with enthusiasm for the suggestion of industry-led codes of conduct, but also with a warning that we need to take responsibility and act with urgency to address the potentially deadly impacts of deep-rooted misogyny in AI.

UKAI CEO Tim Flagg echoed her sentiment: “there are risks and challenges that we must address”. As an industry, and as a roundtable, he argued, it is “our responsibility to address those risks”.
The findings from the roundtable will be written up into a report offering specific guidelines and recommendations for industry and government to address these challenges.