UKAI

Responsible AI Working Group

Supporting senior leaders to turn responsible AI principles into practice across real systems and business-critical decisions.

Why this group exists

AI is moving from pilot projects to live systems that influence decisions, shape customer experiences and guide internal processes. As adoption accelerates, scrutiny is intensifying. Organisations are being asked to prove not only that their systems work, but that they are fair, safe, explainable and accountable.
 
The Responsible AI Working Group was created to help members meet that challenge. It brings together senior leaders responsible for how their organisations define, implement and govern responsible AI. The focus is practical and immediate: not abstract principles, but how responsibility is applied in live projects and business-critical contexts.

Why this matters

Guidance on responsible AI exists, but members consistently report a gap between principle and practice. Standards are often high level, while the practical challenges of implementation, from bias assessment and explainability to escalation routes and oversight mechanisms, are real and immediate.

This group exists to close that gap. It provides a trusted space where members can share what works, test approaches with peers, and co-develop frameworks that are both credible and usable. Importantly, the work does not stay in the room. Outputs are channelled across the UKAI network and into the national conversation, shaping parliamentary roundtables, informing policy briefings, driving fireside discussions, and strengthening the resources available to members across every sector.

What the group focuses on

Members work together to generate a stream of activity that supports both practical adoption and wider influence across the UKAI network. This activity is surfaced through roundtables, fireside discussions, workshops and policy engagement, and includes preparing members for emerging regulation in the UK and globally.
Outputs
The group’s work produces outputs designed to be directly useful for members and influential across the network. These include:

  • Guidance notes and frameworks tailored to different sectors
  • Templates, toolkits and decision-support resources for deployment teams
  • Case studies and deep-dive workshops on live member challenges
  • Contributions to UKAI consultation responses and regulatory engagement
  • Podcasts, white papers and reports that showcase member leadership and raise visibility across the ecosystem

Over time, the group may also lay the groundwork for a Responsible AI Charter, a member-led commitment that translates principles into practical standards across industries.

Who takes part

Membership is selective and deliberately small, open only to UKAI companies with a direct stake in defining and governing responsible AI. This ensures that those shaping the agenda are the people responsible for how AI is being deployed inside their organisations. The group is designed to be candid, cross-sector and outcome-focused, with insights feeding directly into UKAI’s wider programme.

Matt Holmes – Chair of the Responsible AI Working Group

Matt Holmes is CEO of Intellect Frontier, where he leads on human-centred AI assurance. His work focuses on how people interact with large language models, helping organisations test for safety, behaviour and social impact in real-world contexts. As the UK partner of Civitaas, Intellect Frontier brings to market one of the first human-in-the-loop testing frameworks for LLMs, already used under the US ARIA programme. Matt is actively engaged with policymakers to ensure organisations are ready for the next generation of AI regulation, making his perspective central to the Responsible AI Working Group.

How to get involved

The Responsible AI Working Group is for organisations committed to making AI not only powerful but principled. Its work equips members with practical tools, supports the wider UKAI community, and ensures industry voices help shape how responsibility is defined in both practice and policy. The group will continue to evolve as expectations, regulations and technologies change, keeping members ahead of the curve.
 
If you would like to be involved, please get in touch.