Mark Sewards, the Labour MP for Leeds South West and Morley, has unveiled what he and collaborators describe as the UK’s first “virtual MP” — a voice-enabled chatbot modelled on his own voice and manner to give constituents another way to raise issues, ask about policy and leave messages outside normal hours.
According to the BBC, the prototype, developed with local start-up Neural Voice, can answer routine questions, offer advice and forward messages to his team. “Give AI Mark a try,” Mr Sewards told constituents, adding: “The AI revolution is happening and we must embrace it or be left behind.” He stressed it was a supplement to, not a replacement for, his duties: “We have to embrace the opportunities that are represented by AI and what better way to learn about it than to become it.”
The chatbot, dubbed “AI Mark” or “Sewardsbot” in coverage, is voice-enabled and records interactions so staff can analyse recurring themes. Euronews and LBC report that conversations are transcribed and stored for review, with the aim of helping the office identify local priorities and respond more efficiently; the tool also offers a 24/7 channel for constituents unable to reach the office during working hours.
Neural Voice positions the project within a wider product strategy, citing earlier experiments such as “AI Steve” in Brighton Pavilion, and claims the technology can scale engagement and capture citizen feedback for policymakers. Those claims remain untested, and the long-term public value is unproven.
Early trials show mixed results. A Guardian reporter found the avatar used Mr Sewards’s voice and handled broad topics, but stumbled on regional dialect and some practical constituency tasks. The reporter found the cartoon-style interface approachable, though some of the advice it gave was misdirected. The prototype carries a clear disclaimer that responses are AI-generated.
Such performance underlines both the potential and the limits of voice AI in politics. While it could help those who find typing or visiting an office difficult, it must cope with accents, idioms and the specifics of casework to be genuinely useful.
Privacy, data security and the loss of human contact have been key concerns. Dr Susan Oman, a senior lecturer in data, AI and society, told the BBC there was a risk people could feel less listened to: “There is the risk here that as an MP you are trying to be more efficient and more present for your constituents, but the knock-on effect is they feel less listened to.” Victoria Honeyman, a lecturer in British politics at the University of Leeds, warned that routing emotionally fraught or complex cases through a bot could distress vulnerable people and undermine confidence when errors occur.
Media outlets from LBC to the Washington Post have placed the Sewards pilot in the context of a growing trend of politicians trialling AI tools. Social media reaction has been split between those welcoming the increased access and those warning that no bot can replicate human judgement and empathy. Experts say projects like this should be treated as controlled pilots with transparent governance. Suggested safeguards include: clear labelling and consent; robust privacy and retention policies; immediate routes to human advisers for complex matters; testing with older users and regional dialects; and independent evaluation of accuracy, bias and security.
The Sewards experiment illustrates both the utility and the constraints of conversational AI in public life. If managed transparently and improved iteratively, it could help set norms for how MPs and public bodies use such tools — supporting the UK’s ambition to lead in responsible AI. Done poorly, it risks eroding trust; done well, it could widen access and make constituency services more responsive.
Created by Amplify: AI-augmented, human-curated content.