The rise of the “deepfake”
On Tuesday night the final part of the BBC’s gripping drama, The Capture, was aired.
It’s been a series that has baffled and enthralled critics and audiences alike, with its ending, in particular, dividing opinion. Central to the plot, though, has been an examination of the working practices of the counterintelligence services and how adept they have become at manipulating video.
In the words of the show’s writer and director, Ben Chanan, whilst writing the script he was increasingly conscious of “how much better and faster visual effects were becoming – and how it was possible to manipulate video.” Also aware of the current debates around fake news and facial recognition software, Chanan was keen to get his drama broadcast as soon as possible because, he says, he’s talking about what’s happening right now in the real world.
And in that observation, Chanan is undeniably right. As BBC journalist Rory Cellan-Jones wrote this week, recently published research shows that there has been a huge surge in the number of “deepfake” videos (a portmanteau of ‘deep learning’ and ‘fake’) appearing online. The research, carried out by cyber-security company Deeptrace, found that there were now 14,698 deepfake videos online, almost double the 7,964 recorded in December 2018.
Put simply, a deepfake video is one in which artificial intelligence is used to merge fabricated material with existing footage. This makes it possible to present events that never actually happened: people can be made to look as though they are saying and doing things that they never said or did.
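For the technically curious, the approach most often described behind face-swap deepfakes is an autoencoder with a single shared encoder and a separate decoder per identity. The sketch below is illustrative only: the layer sizes, class names and the toy swap at the end are simplified assumptions, not the workings of any real tool.

```python
# Illustrative sketch (assumed architecture, not any production system):
# one shared encoder, one decoder per person, swap at inference time.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A
decoder_b = Decoder()  # would be trained to reconstruct person B

face_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame crop
latent = encoder(face_of_a)           # captures A's pose and expression
fake_frame = decoder_b(latent)        # renders B performing them
```

Because the encoder is shared, it learns the pose, lighting and expression common to both faces, while each decoder learns to render one identity; feeding person A’s latent code into person B’s decoder is what produces the swap.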
Quite obviously, this is of great concern to the world’s politicians, and at present the US government is funding the Defense Advanced Research Projects Agency (DARPA) in its attempt to develop technologies “for the automated assessment of the integrity of an image or video”. The aim is to produce technology that will “automatically detect manipulations and provide detailed information about how these manipulations were performed”.
In the UK, the Online Harms White Paper takes a more holistic approach to the “prevalence of illegal and harmful content online”: the government proposes to “establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.”
Added to this, both Facebook and Google appear to be committed to financing research projects into producing technology that can help all users detect when “AI has been used to alter a video in order to mislead the viewer”.
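To give a sense of what such detection technology might look like in practice, here is a minimal, hypothetical sketch of frame-level analysis: sample frames from a video, score each with a binary real-versus-fake classifier, and average the scores. The ResNet-18 placeholder and the `fake_probability` helper are assumptions for illustration; a real detector would load weights trained on large corpora of genuine and manipulated footage.

```python
# Hypothetical sketch of frame-level deepfake detection: score sampled
# frames with a binary classifier and average the per-frame results.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Placeholder model: a ResNet-18 with a two-class head (real vs. fake).
# In practice you would load weights trained for manipulation detection.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average per-frame P(fake) over sampled frames of a video."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = "fake" class
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Averaging over many frames is one simple way to make a verdict more robust than judging any single image, though research systems also look at temporal cues such as blinking and lip-sync consistency.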
The need for progress in this area appears to be crucial. Speaking in September, Hao Li, Professor of Computer Science at the University of Southern California, said that manipulated images and videos that appear “perfectly real” will be accessible to everyday people in “half-a-year to a year”. Can you imagine the chaos and confusion of a presidential election in which a “created” Donald Trump competes for space with the real thing?
That may be a flippant point to make, but as Tom Van de Weghe of Stanford University has pointed out, the threats of deepfakes are real and imminent: “they can be used to create digital wildfires. They can be used by any autocratic regime to discredit dissidents. They can be used to convince people that a dead leader is still alive. They can generate false statements.” Problems are exacerbated, he argues, in areas where mainstream media is scarce, or where governments are destabilised. In areas, then, where the distinctions between fake and real news are increasingly blurred.
Image credit: RadioTimes.com