Live facial recognition (LFR) is becoming an everyday presence in UK public life. From high-street shops to large-scale events, AI-assisted vigilance is expanding fast—generating more than 10,000 suspect alerts a week in retail alone, according to new data from Facewatch.
The firm, which provides biometric alerts to subscribing stores, logged more than 43,000 alerts in July 2025, over double the figure for the same month last year. The surge comes as shoplifting and abuse of staff continue to climb, prompting many retailers to turn to rapid, non-confrontational tools to protect employees and deter crime.
“July’s record numbers are a further stark warning that retailers and their employees are facing unprecedented levels of criminal activity,” said Facewatch chief executive Nick Fisher. Alerts, he added, allow staff to act swiftly and calmly. The technology has been welcomed by some frontline workers; one charity shop experimenting with LFR described a renewed sense of safety after a spate of verbal abuse and theft.
The system works by notifying staff when someone on a watchlist enters the store, allowing non-intrusive monitoring and decision-making. But the broader rollout of facial recognition has not gone unchallenged.
At Notting Hill Carnival this August Bank Holiday, the Metropolitan Police will deploy LFR cameras outside the event boundaries to identify people on police watchlists, locate missing persons and enforce public protection orders. The operation will be backed by around 7,000 officers each day, with LFR matches reviewed by police before any action is taken. Non-matches are deleted immediately.
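The workflow described above can be sketched in pseudocode-style Python. This is purely illustrative, not Facewatch's or the Met's actual system: the threshold, class names and routing logic are all assumptions, but the shape matches what is reported, in that candidate matches are held for human review before any action, while non-matches are discarded immediately.

```python
# Illustrative sketch of the reported LFR workflow; all names and
# thresholds here are hypothetical assumptions, not a real system.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.92  # hypothetical similarity cut-off


@dataclass
class Detection:
    face_id: str
    similarity: float  # score against the closest watchlist entry


def process_detection(det: Detection, review_queue: list) -> str:
    """Route a detection: queue matches for human review, delete the rest."""
    if det.similarity >= MATCH_THRESHOLD:
        review_queue.append(det)  # an officer confirms before any action
        return "queued_for_review"
    return "deleted"  # non-matches are discarded immediately


queue: list = []
print(process_detection(Detection("a1", 0.95), queue))  # queued_for_review
print(process_detection(Detection("b2", 0.40), queue))  # deleted
```

The key design point the sketch captures is that the automated match is never the final decision; it only gates whether a human reviewer sees the case at all.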
The Met says the goal is a safer environment with minimal disruption, but civil liberties groups warn the move risks normalising “mass surveillance.” Big Brother Watch has called the Carnival deployment invasive and discriminatory, urging the government to scrap the technology or, at minimum, legislate clear oversight.
The episode reflects a broader debate about the role of AI in policing and public safety. The Met argues the technology has matured since early trials, with greater accuracy and safeguards, and cites hundreds of arrests across 2025. But critics remain concerned about bias, transparency and the speed at which biometric surveillance is spreading.
An FOI disclosure confirms LFR was not used at Notting Hill in 2023—making this year’s deployment a notable shift. The force says data handling now meets stringent standards, but campaigners insist the policy framework remains incomplete.
The picture in retail is no less complex. Facewatch’s July figures show rising adoption, but also raise questions about public consent and oversight. Privacy advocates argue that biometric tools, while useful in deterring theft, should be governed by clear limits and open reporting of performance and error rates.
Meanwhile, the underlying infrastructure powering this expansion is increasingly centralised. Cloud providers such as Amazon, Microsoft and Google supply the majority of European AI compute, prompting new discussions in Brussels about digital sovereignty and dependency.
For the UK, the moment is pivotal. Biometric systems are delivering safety gains—but the country’s ability to lead in responsible AI will depend on embedding public trust. That means independent oversight, transparent safeguards, and ongoing dialogue between law enforcement, industry and civil society.
The direction is clear: facial recognition is here. Whether it can serve both security and rights will depend on how the UK navigates the fast-evolving balance between innovation and accountability.
Created by Amplify: AI-augmented, human-curated content.