As artificial intelligence (AI) systems spread across society, integrating them within the framework of the rule of law has become a pressing challenge. The principles of the rule of law, rooted in documents such as the Magna Carta, which championed fair trials and protection from arbitrary detention, now confront the complexities of AI-driven decision-making. Constitutional theorist Albert Venn Dicey emphasised the supremacy of law, equality before it and the safeguarding of individual rights. Yet the opacity of many AI systems risks undermining transparency, fairness and explainability: the very foundations of legal certainty.
Since the Magna Carta, legal instruments such as the writ of habeas corpus and constitutional milestones like the US Constitution have aimed to protect freedoms and enforce accountability. This tradition has evolved into global commitments, including the Universal Declaration of Human Rights (1948), which enshrines the rule of law as a core international principle. But while AI thrives on global data flows, the rule of law remains largely anchored in national jurisdictions. This mismatch has sparked debate over the need for harmonised, cross-border rules that centre on human rights and accountability.
Recent international efforts show growing recognition of this need. The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in September 2024 and signed by parties including the EU, the United States and the United Kingdom, is the first legally binding international treaty on AI governance. It aims to safeguard democratic values and address risks such as algorithmic bias, misinformation and threats to public institutions. The treaty complements the EU’s Artificial Intelligence Act, which entered into force in August 2024. This legislation classifies AI systems by risk level, mandates transparency and accountability, and supports cooperation through the European Artificial Intelligence Board.
Alongside Europe’s initiatives, the United Nations has called for a global regulatory approach. A September 2024 report by a UN expert group warned of the dangers of unregulated AI, including widening digital inequality, and urged frameworks to ensure the benefits of AI progress are shared fairly. Similar tensions are playing out at the national level, particularly in the United States, where major tech firms have pushed for a 10-year moratorium on state-level AI regulation. They argue that a unified federal approach would support innovation and avoid fragmented state-by-state rules. But bipartisan critics warn the pause could delay vital safeguards and entrench corporate dominance.
In the UK, the Data (Use and Access) Bill is under scrutiny for potentially allowing automated AI decisions without meaningful human oversight. The bill has fuelled concern that AI deployment may outpace legal protections, threatening human rights and judicial safeguards. The recent signing of the international treaty on AI by the US, the EU and the UK marks a move towards binding standards that prioritise accountability, privacy and legal redress for AI-related harms. It reflects a shared intent to uphold fundamental rights while avoiding a patchwork of national laws that could slow innovation.
The evolving regulatory landscape recognises that while AI holds transformative potential, it must operate within frameworks that guarantee transparency and fairness. Policymakers face the dual challenge of enabling innovation while protecting citizens from arbitrary or biased AI-driven decisions. With international collaboration increasing, there is hope that AI governance will mature as a shared global responsibility, preserving democratic values while supporting responsible development. This balance is essential if the UK and its partners are to lead in shaping a future where AI serves society without compromising legal principles.
Created by Amplify: AI-augmented, human-curated content.