As AI systems begin to initiate actions, trigger workflows and operate across tools with increasing autonomy, organisations across sectors are encountering a new operational reality. Agentic behaviour is already present inside products, services and internal operations, yet it often emerges gradually and unevenly, shaped by tooling choices and integrations rather than explicit design. In many cases there is limited shared language, unclear delegation boundaries and uncertainty about where accountability sits once systems begin to act.

As systems move from recommendation to execution, the consequences become more tangible. Decisions may be taken faster than traditional oversight models can accommodate. Authority may be delegated implicitly rather than deliberately. Visibility across teams and platforms can weaken. These risks are not abstract: they affect internal confidence, external trust and regulatory scrutiny.

This roundtable will establish a shared view of what agentic behaviour looks like in practice today, surface where systems are already acting within workflows, including where this may be occurring without full organisational visibility, and explore how delegation, oversight and intervention are being approached as autonomy increases. It will identify common pressure points and early warning signals, including the minimum questions organisations should now be asking internally.

The session will help set the direction for more focused work ahead, with the Agentic AI Working Group shaping how these themes evolve across technical, oversight and accountability priorities.