Is the Deepfake Phenomenon Your Number One Cyber Risk?

February 19, 2020

What is a deepfake? Deepfake is a linguistic blend of “deep learning” and “fake” and describes synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Creating a deepfake requires three main elements – large datasets, machine learning and computing power – plus one key element for circulation: internet platforms.

The American biologist Paul R. Ehrlich said it best: “To err is human, but to really foul things up you need a computer”.

Popular examples of deepfakes

The earliest deepfakes involved pornography and quickly expanded to include revenge porn, unfortunately becoming one of the latest methods of image-based sexual abuse. Mainstream movie productions have used versions of deepfake technology for many years, whether substituting one actor’s face for another’s, replacing an older actor’s face with a younger likeness or even creating an entirely new digital actor.

FaceApp is an example of an entertainment-style deepfake application, available for Android and iOS. Apps like these use AI-based neural networks to create realistic transformations: they can make someone look older or younger, change facial expressions or even change gender.

A voice deepfake was used in 2019 to scam the CEO of a UK-based energy company. He thought he was on the phone with the chief executive of the German parent company and followed instructions to transfer $243,000 to a Hungarian supplier. The money vanished and the perpetrator has never been caught.

Manipulated YouTube footage of Bill Hader, a comedian with a remarkable talent for celebrity impressions, shows his face morphing into Al Pacino’s and Arnold Schwarzenegger’s as he impersonates them. The seamless transition of faces is quite scary.

Not all deepfakes are created with malicious intent – most circulated deepfakes are parodies – but as with most things we enjoy, they often become tainted and are eventually curtailed in the interests of privacy. Platforms such as Twitter, Google and Facebook have now either banned deepfake content or limited accounts that post it.

What are the risks and how to respond?

We have already seen the effect of deepfakes at a country level, especially in politics. Relatively easy targets are politicians such as Donald Trump and Nancy Pelosi, and a bit closer to home the Gabonese president, Ali Bongo (subject of a strange, possibly deepfaked video released by his government). Disinformation has always been a political risk and fake news has long been used to manipulate elections – deepfakes are simply the next wave. National security and political stability are high-level risks, even more so for developing countries with fragile political environments.

At a company or organisation level, you need to worry about more than the obvious reputational risk – for example, a deepfaked CEO falsely announcing bad news about the company, with disastrous effects on market image and share price. Organisations now also need to worry about fake communications to employees aimed at obtaining login details and thus exposing the corporate network. Employee education is a key element in identifying a deepfake as early as possible: the sooner a deepfake is detected, the sooner the company’s PR team can counter with the correct information and control the narrative.

At a personal level, never assume that seeing is believing. Always check who is posting the video and what their intention is, then, most importantly, do a bit of research. People are quite careless about what they share on social media, so try to find some related information and make an informed decision before believing a video or forwarding it.

Where to from here?

Technology is being developed to spot deepfakes before they become too hard to distinguish from the real thing, but I wouldn’t hold my breath if I were you, as it appears to be a losing battle. To know how to spot a deepfake you need to know how to create one, which presents a new risk in itself. For years we have had computer software that lets us manipulate photos and videos, and now self-learning artificial intelligence systems have increased both the speed of analysis and the quality of the output substantially.
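As a toy illustration only – real detectors train neural networks on large labelled datasets, and nothing here reflects any production tool – the Python sketch below flags video frames whose brightness histogram jumps sharply from the previous frame, a crude stand-in for the temporal flicker that naive face-swap pipelines can introduce. The frame format, bin count and threshold are all invented for this example.

```python
from collections import Counter

def histogram(frame, bins=8):
    """Coarse intensity histogram of a frame (a list of 0-255 grayscale values)."""
    counts = Counter(value * bins // 256 for value in frame)
    total = len(frame)
    return [counts.get(b, 0) / total for b in range(bins)]

def flag_discontinuities(frames, threshold=0.5):
    """Return indices of frames whose histogram jumps sharply from the previous one.

    The L1 distance between consecutive histograms is a crude proxy for the
    frame-to-frame flicker that naive face-swap pipelines can introduce.
    """
    flagged = []
    prev = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = histogram(frame)
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            flagged.append(i)
        prev = cur
    return flagged

# A steady dark scene with one anomalous bright frame at index 2.
frames = [[10] * 100, [12] * 100, [240] * 100, [11] * 100]
print(flag_discontinuities(frames))  # → [2, 3]
```

Note that the heuristic flags both the bright frame and the return to normal – exactly the kind of false-positive behaviour that makes real detection so hard.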

Digital literacy needs to play a more important role. The paradox is that while we should not believe everything we see or hear, we do need to use technology to corroborate information and research responsibly. A balance of healthy distrust and curiosity is needed.

Benjamin Franklin said: “It is the first responsibility of every citizen to question authority”. In the age of deepfakes, I think we can drop the last word of that quote.

Author – Warrick Asher