UKAI

AI adoption held back by trust and governance concerns, says Forrester

Many organisations are accelerating their adoption of artificial intelligence to drive innovation and gain a competitive edge. But a new study by Forrester Consulting, commissioned by security automation firm Tines, finds that serious barriers continue to slow progress and expose businesses to risk.

Based on a survey of more than 400 IT leaders across North America and Europe, the research highlights that while AI’s potential is widely recognised, challenges around governance, security, trust and organisational alignment are limiting its adoption at scale.

Governance and security emerge as the leading concerns. Thirty-eight per cent of respondents said these challenges are the primary blockers to broader AI adoption. Traditional governance frameworks were not designed to manage AI’s complexity and shifting regulatory demands, leaving businesses vulnerable to compliance failures, data loss and security breaches. These risks are compounded by fears of reputational damage and operational disruption.

Trust and transparency are also key hurdles. Forty per cent of IT leaders reported that employee scepticism around AI-generated outcomes was holding back adoption. Siloed projects and fragmented tools reduce transparency and confidence, undermining investment in AI initiatives. Without explainable, consistent results, AI adoption risks stalling.

Organisational dynamics present another barrier. Despite 86 per cent of IT leaders believing their teams are well positioned to lead AI efforts, many said they are underestimated by other departments. Over a third said IT’s role is seen as reactive rather than innovative. Nearly half cited poor alignment between IT and the wider business as a major obstacle to unified AI strategies.

The report points to orchestration — the integration of systems, tools and teams — as central to overcoming these barriers. By linking fragmented efforts, orchestration can provide secure, transparent and compliant workflows. Almost three-quarters of respondents stressed the need for end-to-end visibility across AI systems, and nearly half said they wanted partners offering centralised orchestration to reduce silos and build trust.

To succeed, the study recommends that IT teams increase visibility into AI initiatives, align across departments and adopt low-code or no-code tools to scale more efficiently. Articulating AI outcomes in terms of return on investment and operational efficiency is key to winning executive support and funding.

The focus on governance reflects growing academic concern around the risks of autonomous and agentic AI systems. Researchers have identified novel threat vectors stemming from AI’s cognitive functions, persistent memory and tool integration — all of which outstrip the capacity of traditional security models. New governance frameworks designed specifically for AI are needed to address these threats and prevent systems from becoming liabilities.

In the public sector, existing compliance models are often too siloed and episodic to oversee AI responsibly. Researchers argue for adaptive institutional designs that combine governance, operational visibility and continuous auditing.

At a global level, proposals for hardware-based guarantees and cryptographically enforced governance are emerging to manage AI’s geopolitical risks, underlining the need for multi-layered international approaches. Today’s fragmented governance landscape — where risk, compliance and ethics are handled in isolation — adds to the difficulty of scaling AI responsibly. Unified frameworks are now being developed to integrate regulatory requirements, risk management and actionable controls into agile systems that can evolve alongside new regulations.

The Forrester study presents a cautiously optimistic outlook. With the right tools, alignment and oversight, IT teams can lead scalable, secure and trusted AI efforts. This not only strengthens the position of the UK and its international partners in responsible AI but also fosters innovation in a sustainable and secure way.

Created by Amplify: AI-augmented, human-curated content.