Sage launches AI Trust Label to set new standard for ethical transparency

Sage has launched its AI Trust Label, a new transparency initiative aimed at increasing customer confidence in the ethical use of artificial intelligence across its products. The label explains how Sage’s AI systems work and how they align with global standards, including the NIST AI Risk Management Framework.

Aaron Harris, Sage’s Chief Technology Officer, described the label as both a “quality seal” and an “ingredients label”, offering clarity on data sourcing, model development and training processes. “We’re being transparent with our customers on the facts around AI in each product,” he said.

The label will appear across user interfaces, including settings, dashboards and onboarding screens, to ensure that transparency is embedded throughout the user experience. The rollout will begin later this year in selected AI-powered products in the UK and US, supported by further disclosures on Sage’s Trust and Security Hub.

Sage is also calling for an industry-wide certification framework for ethical AI use. The company hopes its label can serve as a blueprint, particularly for small and medium-sized enterprises navigating inconsistent regulations. “A coordinated effort would establish universally recognised benchmarks for ethical AI development,” said Harris.

The move is part of Sage’s broader commitment to accessible, responsible AI. The company is working with Amazon Web Services to develop AI tools that support the compliance and operational needs of small and medium-sized businesses, combining advanced technology with ethical design.

With scrutiny of AI practices growing, Sage’s initiative signals a shift towards more accountable innovation. By launching the AI Trust Label and advocating for shared standards, the company is helping shape a more transparent and ethical future for AI adoption.