UKAI

FCA embeds AI oversight into UK financial regulation

The Financial Conduct Authority has published its AI Update 2025, confirming that artificial intelligence will be regulated within existing accountability frameworks rather than through a new set of prescriptive rules.

Building on its 2022 discussion paper, the FCA said its current architecture—including the Consumer Duty, the Senior Managers and Certification Regime (SM&CR), and SYSC rules on governance and operational resilience—provides a strong foundation for managing AI-related risks. These include bias, discrimination, data quality, governance, and the resilience challenges posed by third-party AI providers.

The UK’s approach diverges from the EU’s more prescriptive AI Act, emphasising outcomes rather than specific technologies. The FCA stressed that responsibility for AI systems cannot be outsourced: boards and senior managers remain accountable under the SM&CR for AI-driven decisions.

The regulator is also deploying AI internally. Its Advanced Analytics Unit scans about 100,000 websites daily for scam activity, while market abuse surveillance tools use machine learning to detect complex trading manipulation. Synthetic datasets are used to model scam propagation and to test interventions against Authorised Push Payment fraud.

In sanctions screening, the FCA has built a synthetic data tool to stress-test firms’ controls, and it offers synthetic datasets via its Digital Sandbox to support safe AI experimentation. Its Synthetic Data Expert Group, made up of banks, vendors and academics, is exploring broader uses of such data in fraud detection, machine learning and secure data exchange.

Looking ahead, the FCA and Bank of England will run a third Machine Learning Survey to refresh their insight into adoption and risk, while the AI Sandbox and Digital Hub will be expanded as testbeds. Horizon-scanning will track large language models, deepfakes and quantum risks.

The FCA has also launched its AI Live Testing programme, allowing firms to trial AI products with real customers in controlled environments. From October, a new ‘Supercharged Sandbox’ developed with Nvidia will provide firms with advanced compute power, technical expertise and enhanced datasets.

Internationally, the FCA continues to work through the Digital Regulation Cooperation Forum, the Global Financial Innovation Network, IOSCO, the Financial Stability Board, OECD and the G7 to shape global AI standards.

“The message for boards is clear: AI oversight is integral to existing accountability frameworks, and they bear ultimate responsibility,” the FCA said.

The FCA’s mix of principles-based supervision, innovation testbeds and in-house AI use is being presented as a forward-looking model that supports innovation while protecting consumers—an approach intended to keep the UK a leading centre for AI-enabled financial services.

Created by Amplify: AI-augmented, human-curated content.