Tribunal orders HMRC to reveal if AI used in R&D tax relief decisions

A First-tier Tribunal has ordered HM Revenue & Customs to disclose whether it used artificial intelligence when rejecting research and development (R&D) tax relief claims — a decision that could reshape how taxpayers and advisers interact with the UK’s flagship innovation incentive.

According to the Financial Times, Judge Alexandra Marks found arguments for transparency “compelling” after a freedom-of-information request and directed HMRC to respond by a September deadline. The request, lodged in December 2023 by Tom Elsbury, a tax specialist and co-founder of Novel, asked HMRC to confirm whether it had used large language models or generative AI in decision-making. “The public should know if AI is concluding or forming a decision in tax enquiries,” Mr Elsbury told the FT, warning of particular sensitivity where claims touch on defence-related work.

The tribunal’s order overturns a November 2024 decision by the Information Commissioner’s Office, which had accepted HMRC’s argument that confirming or denying AI use could prejudice tax assessment or collection. That earlier ruling set the legal backdrop to the dispute.

R&D tax relief supports companies developing advances in science and technology by allowing qualifying expenditure to attract enhanced deductions or payable credits. The boundary between routine innovation and qualifying R&D is technical and often fact-specific, which explains HMRC's detailed enquiries. Whether and how AI is used in determining outcomes goes to both process and trust: transparency could strengthen public confidence, improve understanding of decision-making and encourage higher standards of explainability in tax administration. HMRC and the ICO have argued that disclosure could expose enforcement methods and harm operational effectiveness.

The case comes amid a rise in AI-driven R&D claims. In July 2024, a First-tier Tribunal ruled that a project to build an AI-enabled KYC verification and risk-profiling system constituted qualifying R&D, finding it sought an appreciable advance and resolved genuine technological uncertainties. Such decisions underline the value of clarity from HMRC as AI becomes more common in claims.

HMRC said it is "carefully reviewing the decision and is considering our next steps" ahead of the September deadline. The ICO will not appeal. HMRC could comply, add protective caveats, or seek further legal avenues. For technology firms and advisers, greater transparency on automated decision-making would be welcome. Clear disclosure, coupled with explainability, human oversight and robust record-keeping, would help companies manage risk, defend legitimate claims and avoid deterring investment. Firms such as Novel argue that certainty and documented processes reduce the risk of mistakes and of legitimate applicants being disadvantaged.

Operational concerns remain. HMRC's reliance on exemptions to protect assessment methods suggests disclosure must be carefully scoped — for example, confirming whether AI is used in defined decision paths, explaining human oversight, or publishing anonymised case studies and governance standards rather than operational detail. The tribunal's ruling creates scope for such a calibrated approach.

The dispute reflects a wider policy debate over AI in public administration. The UK has an opportunity to set precedents by insisting on transparency where safe and useful, clarifying technical definitions for R&D, and developing guidance that balances innovation, privacy, security and the needs of tax administration.

Key developments to watch include HMRC’s compliance with the September deadline, any legal challenge, the emergence of sectoral standards on automated decision-making in tax, and further tribunal rulings on how R&D rules apply to AI and software projects. If handled well, the case could strengthen the UK’s reputation for responsible AI governance and a predictable environment for innovation.

Created by Amplify: AI-augmented, human-curated content.