
Sir Billy Connolly warns fans over AI impostors as deepfake fraud surges

Sir Billy Connolly has warned fans to beware of impostors using artificial intelligence to replicate his voice and image online. Posting on his official website, the 82-year-old comedian revealed that scammers have created fake social media and email accounts targeting followers — and even reached his wife, Pamela Stephenson. “There could well be criminal intent,” Connolly said, noting that he only uses his website and an official Facebook page, not platforms such as Instagram, X, Snapchat or Telegram.

The episode is a vivid example of how easily accessible AI tools are fuelling a new wave of impersonation fraud. Industry analysts explain that today's deepfake systems can clone a likeness or voice from just a few seconds of audio or video, then embed the result in fake messages, videos or livestreams designed to extract money or personal data. The deception often begins with harmless-seeming messages or posts before escalating to more serious demands.

This technical ease is underpinned by the rapid expansion of cloud and AI infrastructure. Research from Synergy Research Group shows that Amazon, Microsoft and Google now account for roughly 70% of Europe's cloud computing market, which was worth €61 billion in 2024 and is projected to grow 24% this year. That centralisation is triggering concern in Brussels and national capitals about digital sovereignty and dependency.

These structural trends have direct implications for scams like the one targeting Connolly. Centralised services make deepfakes cheaper to produce at scale and harder to trace, yet the same infrastructure underpins the detection tools now being developed to fight back. Governments and industry are exploring certification schemes, sovereign clouds and cross-border safeguards to diversify infrastructure and reduce those risks.

Despite the risks, AI is also being used to improve public services. The Guardian reports that Chelsea and Westminster Hospital NHS Foundation Trust is piloting an AI system that drafts discharge summaries from patient records. Hosted on the NHS Federated Data Platform, the tool is part of the Prime Minister's AI Exemplars programme and has been praised by Health Secretary Wes Streeting for helping clinicians spend more time with patients and less on paperwork.

Experts say technical and policy levers can help tip the balance towards responsible AI use. Platforms should make official accounts easier to verify, and public figures can help by stating clearly which channels they use—just as Connolly has done. Meanwhile, Europe is pushing to expand local cloud capabilities, though analysts warn smaller providers are unlikely to challenge the dominance of hyperscalers soon.

Device manufacturers are also responding. Google's upcoming Pixel 10 will feature upgraded on-device AI, reducing reliance on central servers and improving privacy. Across sectors, industry is grappling with the double-edged nature of AI: its transformative promise on one hand, and the redesigned safeguards and clearer accountability it demands on the other.

For consumers, the advice is simple: treat unsolicited messages with caution, check whether an account is official, ignore requests for money or private information, and report suspicious activity to platforms and police. For regulators and public bodies, Connolly’s case underscores the urgency of investing in fraud detection, public education and diverse AI infrastructure.

The UK already has working examples of responsible AI use. The NHS pilot shows how well-governed systems can enhance services without undermining trust. If policymakers, industry and civil society continue to prioritise transparency, verification and on-device privacy, the UK can lead in ensuring AI works for the public good—not against it.

Created by Amplify: AI-augmented, human-curated content.