University of Surrey: New report sets out practical steps to protect elections from the risks posed by AI

Spotting deepfakes and questioning the provenance of information is essential to safeguard democracy as millions of voters head to the polls this year, according to a new report by the University of Surrey’s Institute for People-Centred AI.

The report calls for public awareness campaigns to teach people to spot AI-generated content, as well as greater funding for research into detecting deepfakes.

Dr Bahareh Heravi, Reader in AI and the Media at the Surrey Institute for People-Centred AI, said: “Misinformation at election time is nothing new. Yet, AI makes it easier than ever before to sow false information among voters.


“That’s why we must give voters the tools to tell fact from fiction. Greater media literacy can only strengthen our democracy.”

Spotting deepfakes and questioning the provenance of information is essential to safeguard democracy as millions of voters head to the polls this year, according to a new report by the University of Surrey’s Institute for People-Centred AI. Picture by LIONEL BONAVENTURE/AFP via Getty Images

Among the report’s other key recommendations:

Wider use of content verification – including clear labelling for AI-generated material.

A ‘fact-checkers code’ to encourage media companies to investigate and report misinformation.

New laws to make social media companies responsible for content on their platforms.


Funding for UK-based research into AI tools which could help detect misinformation and disinformation.

The report also calls for greater leadership from politicians on all sides.

Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Institute for People-Centred AI, said: “This is a crucial year for the world’s democracies, with AI set to play a critical role whether we like it or not. Yet, so far, politicians have taken a back seat, letting academics and tech firms lead the conversation.

“With so much opportunity arising from AI, it’s unhelpful to let the negative applications like fakery and disinformation grow in use.


“We need our leaders to show up in this debate. They should demand action to help their constituents navigate democracy in the age of digital media and AI.

“They should also show personal leadership. Perhaps by pledging not to use AI to mislead voters in this crucial election year?”