UKAI

Bounded autonomy: AI agents reshape finance and raise urgent governance stakes

Autonomous AI agents are rapidly evolving from digital assistants into independent actors capable of executing complex financial transactions—introducing a governance challenge as profound as the technology itself. Kathryn McCall, Chief Legal and Compliance Officer at Trustly, is among those urging a structured, cautious approach to deployment, grounded in transparency, accountability and ethical safeguards.

McCall champions the principle of "bounded autonomy": clear limits on what AI agents can do, layered governance structures, and human oversight at key decision points. In practice, this means permitting AI to initiate tasks like invoice creation, but requiring human approval for critical actions such as payments. The goal is to manage the growing risks around financial privacy, compliance and security as AI agents assume greater operational control.
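To make the idea concrete, here is a minimal sketch of what a bounded-autonomy policy layer could look like: the agent completes low-risk tasks such as drafting an invoice on its own, while anything that moves money is routed to a human approver. The action names, the ApprovalQueue class and the default-deny rule are illustrative assumptions, not a description of Trustly's systems.

```python
from dataclasses import dataclass

# Actions the agent may complete on its own versus those needing human sign-off.
AUTONOMOUS_ACTIONS = {"create_invoice", "draft_reminder"}
HUMAN_APPROVAL_ACTIONS = {"send_payment", "issue_refund", "change_payee_details"}

@dataclass
class AgentAction:
    name: str
    amount: float = 0.0

class ApprovalQueue:
    """Illustrative stand-in for whatever workflow tool holds pending approvals."""
    def __init__(self):
        self.pending = []

    def submit(self, action: AgentAction) -> str:
        self.pending.append(action)
        return "pending_human_approval"

def route(action: AgentAction, queue: ApprovalQueue) -> str:
    if action.name in AUTONOMOUS_ACTIONS:
        return "executed_autonomously"      # within the agent's bounds
    if action.name in HUMAN_APPROVAL_ACTIONS:
        return queue.submit(action)         # a human decides at the boundary
    return "rejected_unknown_action"        # default-deny anything unrecognised

queue = ApprovalQueue()
print(route(AgentAction("create_invoice"), queue))        # executed_autonomously
print(route(AgentAction("send_payment", 950.0), queue))   # pending_human_approval
```

The important design choice is the default-deny branch: an action the policy has never seen is treated as out of bounds rather than waved through, which is what keeps the autonomy "bounded" as the agent's capabilities grow.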

Unlike traditional financial software, AI agents often behave unpredictably. Their non-deterministic nature introduces vulnerabilities including prompt injection, adversarial attacks and data leaks. McCall highlights the potential “blast radius” of AI failure—how much damage can result when things go wrong. To mitigate this, she proposes infrastructure safeguards like sandboxing, isolated containers, time-limited access and emergency kill switches, particularly for high-stakes actions such as cross-border payments.
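As a rough illustration of how such safeguards combine, the sketch below wraps every high-stakes call in three checks: an emergency kill switch, a time-limited credential scoped to a single task, and a guarded execution point where sandboxing would sit in a real deployment. The names (KillSwitch, ScopedCredential, guarded_call) are hypothetical, assumed for this example rather than taken from any vendor's API.

```python
import time

class KillSwitch:
    """Emergency stop that operations staff can flip to halt all agent activity."""
    def __init__(self):
        self.tripped = False

    def trip(self):
        self.tripped = True

class ScopedCredential:
    """Time-limited, scope-limited access handed to the agent for one task."""
    def __init__(self, scopes: set, ttl_seconds: int):
        self.scopes = scopes
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.monotonic() < self.expires_at

def guarded_call(scope, credential, kill_switch, fn, *args):
    if kill_switch.tripped:
        raise RuntimeError("kill switch engaged: agent actions halted")
    if not credential.allows(scope):
        raise PermissionError(f"credential does not permit '{scope}' (expired or out of scope)")
    return fn(*args)  # in practice this would run inside a sandboxed, isolated container

# Usage: the agent gets five minutes to work on invoicing, and nothing more.
cred = ScopedCredential(scopes={"read_ledger", "create_invoice"}, ttl_seconds=300)
switch = KillSwitch()
guarded_call("create_invoice", cred, switch, lambda: "invoice drafted")
# guarded_call("cross_border_payment", cred, switch, ...) would raise PermissionError.
```

The point of the expiry and the narrow scope is precisely to shrink the blast radius: even a compromised or misbehaving agent can only act on what it was granted, and only for as long as it was granted it.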

The regulatory landscape remains patchy. While comprehensive AI-specific laws are still emerging, existing regulations on data privacy, anti-money laundering and payment processing already apply. McCall warns that companies must not view the current regulatory gap as a green light to innovate without responsibility. GDPR and financial accountability standards must be reinterpreted for the age of AI agents.

The broader industry is undergoing dramatic change. AI is improving customer service, accelerating credit decisions and supporting fraud detection, while also enabling more sophisticated financial crime. Industry experts stress the need for explainability: AI decisions should be transparent and understandable to humans from the outset.

Risks extend beyond finance. Legal liability, cybersecurity, bias and physical or financial harm all increase as AI agents gain autonomy. Experts recommend full risk assessments, clearly defined accountability structures and responsible AI principles built into corporate practice. PwC and others advocate detailed operational guidelines and strong stakeholder engagement to uphold trust in AI-driven systems.

This shift elevates the corporate legal function. McCall argues that legal officers must lead the effort to map AI behaviour to legal exposure and mitigation frameworks, ensuring ethics and recoverability remain core values. Automation may streamline processes, but it must never replace human responsibility.

Despite the risks, autonomous AI offers significant opportunities to improve customer experience, efficiency and compliance. With rigorous, principle-led governance and proactive oversight, the UK can lead in responsible AI innovation. Embedding bounded autonomy, explainability and regulatory foresight into system design is a critical step towards building trusted, resilient AI in financial services.

Created by Amplify: AI-augmented, human-curated content.