Carrington Malin

December 6, 2019
[Image: Donald Trump deepfake video]

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news this week after China criminalised the publication of deepfakes that do not carry a warning to viewers. California also recently passed an anti-deepfake law in an attempt to prevent such content from influencing the 2020 US elections.

Deepfakes are videos, produced by software using machine learning algorithms, that make fabricated content look convincingly real. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be enough to keep pace with the spread of fake information?
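To see why these videos have become so cheap to produce, it helps to know the trick behind most face-swap software: a single shared encoder learns a compressed representation of faces, each person gets their own decoder, and a swap simply routes one person's encoded face through the other person's decoder. The following PyTorch sketch is purely illustrative; the network sizes, the 64x64 input and the omitted training loop are assumptions, not a reconstruction of any particular tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any face into a common representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 RGB -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

# One shared encoder, one decoder per identity. Training (omitted here)
# would teach each decoder to reconstruct its own person's faces.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# The swap: encode a face of person A, decode it as person B.
face_a = torch.randn(1, 3, 64, 64)    # stand-in for a cropped face image
swapped = decoder_b(encoder(face_a))  # A's pose and expression, B's face
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

Because the encoder is shared, it is forced to capture pose, lighting and expression in a way both decoders understand, which is what makes the swapped output track the source performance so convincingly.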

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes them easier to create. Deeptrace, an Amsterdam-based cybersecurity company, found that the number of deepfake videos on the web increased by 84 per cent between December 2018 and July this year, identifying 14,698 deepfakes online during that period.
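As a quick sanity check on those figures (assuming the 84 per cent rise applies directly to that July count), the implied baseline at the end of 2018 was roughly 8,000 videos:

```python
# Back-of-envelope check of Deeptrace's figures: if 14,698 deepfakes in
# July 2019 represents an 84% increase since December 2018, the earlier
# count was approximately 14,698 / 1.84. Figures are taken from the
# article as reported, not independently verified.
july_2019 = 14_698
growth = 1.84
print(round(july_2019 / growth))  # ~7,988 deepfakes in December 2018
```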

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake of former US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was easy to see that it had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes, in which anyone can be made to appear to say anything. While the results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot. The artificial intelligence (AI) used to make deepfakes is improving, making it ever more difficult to distinguish a deepfake from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will rise, and that tens of thousands, perhaps hundreds of thousands, of different deepfakes will soon be in circulation. Experts believe that technology to make deepfake videos that seem perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it is authentic? This is now a subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public, and so to affect every aspect of business, government and society. As the risk of malicious deepfakes increases, they could threaten everyone from celebrities to lawmakers and from scientists to schoolchildren, and perhaps even the world’s legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As the technology continues to develop, it is thought to be only a matter of years before deepfakes can be created in real time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions within moments. With a few more years of development, it is also conceivable that deepfakes could be produced at scale, altering a video to deliver different versions of it to different audiences.

In today’s digital world, deepfakes do not need to fool the mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube per day and another 8 billion through Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to detect deepfakes automatically, but this technology currently lags far behind the development of deepfake tools themselves. Until anti-deepfake software catches up, the average internet user may have no way of knowing whether a video is real or fake.
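Detection work in this space is typically framed as binary classification: given a cropped face from a video frame, estimate the probability that it is synthetic. The sketch below illustrates only that framing; the architecture, layer sizes and the fake_probability helper are assumptions made for illustration, not Facebook’s or Google’s actual systems, and a usable detector would require large-scale training on labelled real and fake footage.

```python
import torch
import torch.nn as nn

# Illustrative binary classifier over 64x64 face crops: real (0) vs fake (1).
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                 # single "fakeness" logit
)

def fake_probability(face: torch.Tensor) -> float:
    """Score one 3x64x64 face crop; after training, >0.5 would suggest fake."""
    with torch.no_grad():
        return torch.sigmoid(detector(face.unsqueeze(0))).item()

# Untrained weights and a random input: the score here is meaningless
# (roughly chance level); it only demonstrates the interface.
print(fake_probability(torch.randn(3, 64, 64)))
```

The hard part is not the classifier itself but keeping it current: each advance in generation erases the statistical artefacts the previous generation of detectors relied on, which is why detection lags behind.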

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National.