Making AI Regulation Work in Healthcare: Insights from UKAI’s MHRA Call for Evidence Response
On 2 February 2026, the UKAI Life Sciences Working Group submitted a response to the Medicines and Healthcare products Regulatory Agency Call for Evidence on the regulation of artificial intelligence (AI) in healthcare. This response formed part of a broader programme of work, which included a roundtable held on 30 January 2026, co-chaired by Curia and the UKAI Life Sciences Working Group.
The roundtable brought together representatives from NHS organisations, regulatory bodies, industry, technology providers, and professional and policy groups. Participants included Dr Mani Hussain of the Medicines and Healthcare products Regulatory Agency and a member of the National Commission into the Regulation of AI in Healthcare. Discussion focused on how AI is currently being used, assessed, and governed within healthcare settings, and on why, in some cases, it is not being adopted.
The starting point was the application of existing regulatory frameworks to AI-enabled medical devices. The discussion considered how these frameworks operate in practice for software-based and AI-enabled systems, particularly where systems change after deployment or perform differently across settings. Medical device regulation, pharmacovigilance, clinical governance, and professional accountability were identified as the principal mechanisms currently used to manage safety and performance. A significant part of the discussion focused on risk-based proportionality. Attention was given to distinguishing between different uses of AI, recognising that systems used for administrative or low-risk support raise different regulatory considerations from those that materially influence diagnosis, treatment, or clinical decision making.
Post-deployment oversight was also discussed. Consideration was given to monitoring performance over time, managing updates and changes, and identifying issues once systems are in routine use. Existing vigilance concepts were examined in this context, including reporting mechanisms, version control, and the management of performance changes or unexpected behaviour.
Responsibility across the AI lifecycle was also explored. Manufacturers, healthcare organisations, and clinicians were recognised as having defined roles, with responsibility linked to function, context, and use. Clarity in these roles was noted as important both for appropriate use and for confidence in adoption.
The submission and roundtable together represent a practical examination of how AI is currently regulated and used within healthcare, and where greater clarity may be required. The accompanying documents set out the detail of both the submission and the roundtable discussion.
For the UKAI Life Sciences Working Group, this work represents a point of continuity rather than conclusion, anchoring ongoing collaboration with Curia, the Medicines and Healthcare products Regulatory Agency, and industry around the shared task of making AI governance workable within the realities of healthcare.