Over the past year, the Financial Conduct Authority (FCA) has accelerated its efforts to regulate artificial intelligence (AI) within the UK financial services sector, advancing a strategy that fosters innovation while maintaining rigorous safeguards. This approach reflects a wider ambition for the UK to lead in responsible AI, particularly in financial services, where opportunities abound but risks require careful oversight.
In April 2024, the FCA published its AI Update, aligning with the Government's pro-innovation stance on AI regulation. The update reaffirmed the FCA's focus on safe, responsible AI use, balancing innovation with consumer protection and market integrity. It also mapped the FCA's existing framework to the Government's five AI principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

In May 2024, the FCA explored AI's potential to combat market abuse through a three-month Market Abuse Surveillance TechSprint. The initiative demonstrated advanced AI tools, including large language models to reduce false positives and anomaly detection to flag suspicious trading, showcasing how AI can strengthen market integrity.
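The TechSprint prototypes themselves are not public, but the anomaly-detection idea is straightforward to sketch. The following illustrative Python example, with invented features and synthetic data rather than anything demonstrated at the TechSprint, uses scikit-learn's IsolationForest to surface trading activity that deviates from the bulk of accounts:

```python
# Illustrative only: a toy anomaly detector for trade surveillance.
# Features, data and thresholds are invented for demonstration and
# do not reflect any FCA or TechSprint implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-account features: daily order volume, average price
# deviation from mid, and order-to-trade (cancellation) ratio.
normal = np.column_stack([
    rng.normal(1_000, 200, 990),   # typical order volume
    rng.normal(0.0, 0.5, 990),     # small price deviations
    rng.normal(3.0, 1.0, 990),     # ordinary cancel ratios
])
suspicious = np.column_stack([
    rng.normal(5_000, 500, 10),    # unusually heavy volume
    rng.normal(4.0, 0.5, 10),      # persistent price pressure
    rng.normal(20.0, 3.0, 10),     # very high cancel ratio
])
X = np.vstack([normal, suspicious])

# Fit an unsupervised model; ~1% of accounts are assumed anomalous.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)          # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} accounts flagged for analyst review")
```

In practice such a model would rank alerts for human analysts rather than make determinations itself, which is where the reported reduction in false positives comes from.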
A major milestone followed in October 2024 with the launch of the FCA's AI Lab, an ecosystem to support safe innovation. The Lab includes the Supercharged Sandbox, AI Live Testing, AI Spotlight, the AI Sprint and the AI Input Zone, providing a secure space for firms to trial AI solutions under regulatory supervision. Stakeholder engagement has been central: from November 2024 to January 2025, the AI Input Zone gathered feedback on AI use cases and regulatory hurdles. Four common themes emerged: the need for regulatory clarity, building consumer trust through risk awareness, promoting cross-functional and international collaboration, and the value of sandbox environments for responsible testing.
In January 2025, the FCA hosted an AI Sprint event with 115 participants from diverse sectors, informing its evolving regulatory approach. Discussions highlighted challenges around bias, explainability, data quality and compliance. The FCA published a summary of this feedback in April 2025, embedding these themes within its strategy.
The FCA has adopted a research-led approach to mitigating risk. A review into AI bias flagged the danger of machine learning models disadvantaging protected or vulnerable groups, while expressing confidence that mitigation techniques, applied with continued vigilance, can manage the issue. Research into large language models, such as OpenAI's GPT series, found these tools could simplify complex information for consumers, provided robust oversight is in place.
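The review does not prescribe particular mitigation techniques, but a common first diagnostic for the kind of bias it flags is a selection-rate comparison across groups. A minimal sketch follows, where the groups, data and four-fifths threshold are illustrative assumptions rather than FCA guidance:

```python
# Illustrative bias diagnostic: compare approval rates across groups.
# Group labels, data and the four-fifths threshold are assumptions
# for demonstration; the FCA review does not mandate this test.
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=10_000, p=[0.7, 0.3])

# Simulated model decisions with a built-in disparity between groups.
approved = np.where(group == "A",
                    rng.random(10_000) < 0.60,
                    rng.random(10_000) < 0.42)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common 'four-fifths' rule of thumb
    print("potential disparity: investigate features and retrain")
```

A failing ratio does not prove unlawful discrimination, but it is the kind of measurable signal that triggers the deeper investigation the FCA's review envisages.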
Firm-level governance remains a priority. The FCA expects boards and senior managers to oversee AI systems, ensuring they are explainable, monitored and compliant with existing principles covering skill, care, diligence and management accountability. Failures could trigger enforcement action. Consumer protection is paramount; discriminatory AI or biased pricing may prompt regulatory sanctions, as highlighted in previous guidance on Consumer Duty risks.
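What "monitored" means in practice is left to firms, but recurring drift checks on model inputs or scores are a common building block of the oversight the FCA describes. A minimal sketch of a population stability index (PSI) check follows; the bucketing scheme and the 0.2 alert threshold are industry rules of thumb, not FCA requirements:

```python
# Illustrative drift monitor: population stability index (PSI) between
# a model's validation-time score distribution and live scores.
# Thresholds are industry conventions, not regulatory values.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two score samples, assumed to lie in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) in empty buckets
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 50_000)    # scores at validation time
live_scores = rng.beta(4, 4, 5_000)      # live population has drifted

value = psi(train_scores, live_scores)
print(f"PSI = {value:.3f}")
if value > 0.2:                          # common alert threshold
    print("significant drift: escalate to model-risk governance")
```

Routine checks like this give boards and senior managers the documented evidence of ongoing oversight that the FCA's accountability expectations imply.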
Financial crime and market integrity also feature prominently. While AI helps prevent crime, it can introduce vulnerabilities in automated trading. The Bank of England has raised concerns over financial stability risks. Firms must implement controls to prevent unintended market manipulation or systemic issues as AI use in trading grows.
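The FCA does not prescribe what those controls should look like, but one widely used baseline is a layer of deterministic pre-trade checks that sits between an AI-driven strategy and the market, so no model output reaches an exchange unvetted. A minimal sketch, with invented limits rather than regulatory thresholds:

```python
# Illustrative pre-trade control layer: deterministic checks applied to
# every order an AI strategy emits, regardless of why the model chose it.
# All limits are invented for demonstration, not regulatory values.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str          # "buy" or "sell"
    quantity: int
    price: float

MAX_ORDER_QTY = 10_000     # hard size cap per order
MAX_PRICE_BAND = 0.05      # max 5% deviation from reference price

def pre_trade_check(order: Order, reference_price: float) -> tuple[bool, str]:
    """Return (accepted, reason); reject anything outside hard limits."""
    if order.quantity <= 0 or order.quantity > MAX_ORDER_QTY:
        return False, "quantity outside limits"
    deviation = abs(order.price - reference_price) / reference_price
    if deviation > MAX_PRICE_BAND:
        return False, f"price {deviation:.1%} from reference"
    return True, "ok"

print(pre_trade_check(Order("ABC", "buy", 500, 101.0), reference_price=100.0))
print(pre_trade_check(Order("ABC", "buy", 500, 120.0), reference_price=100.0))
```

Keeping these checks simple and rule-based, outside the model itself, means they remain effective even when the AI behaves in ways its designers did not anticipate.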
Operational resilience and third-party risk are becoming more pressing. AI systems critical to services must be managed as rigorously as traditional IT, with thorough vendor checks and contractual safeguards when outsourcing. The FCA has also signalled that senior managers overseeing AI could face personal accountability for regulatory breaches.
Supporting these initiatives, the FCA partnered with NVIDIA in June 2025 to launch the Supercharged Sandbox, giving firms enhanced computational power to test AI models safely from October 2025. In September 2025, it plans to launch a live AI testing service, offering a collaborative environment for trialling AI models under regulatory oversight.
Jessica Rusu, Chief Data, Information and Intelligence Officer at the FCA, has publicly reaffirmed the regulator's commitment to using AI both internally and across the market to modernise regulatory practices and foster innovation.
The FCA is now taking a visibly tech-positive stance, championing AI's potential while upholding robust safeguards. Its framework remains technology-agnostic but expects firms to embed AI within strong governance structures that ensure transparency, fairness and accountability. As the UK positions itself as a leader in responsible AI for financial services, firms are advised to manage AI risks proactively, with clear oversight, solid consumer protections and strong operational controls, to meet evolving regulatory expectations and reduce enforcement risk. This approach signals positive momentum towards a thriving AI ecosystem that benefits consumers, markets and the wider economy.
Created by Amplify: AI-augmented, human-curated content.