Soundarya Kubendran

An episode of the latest season of the speculative fiction series Black Mirror explored the mounting risks of advanced technology in the entertainment industry. The episode depicted a pop star being replaced by her digital avatar. However, I wouldn’t call that speculative fiction anymore; recent developments in technology have shown that such scenarios are possible in the near future.

The technology I’m referring to is deepfake, a portmanteau of the terms ‘deep learning’ and ‘fake’. Deepfakes have been splashed across the news since 2017, when an explicit video with celebrities’ faces doctored onto other actors was posted online. This sparked a conversation on the internet about the dangers of deepfakes – they can be used to manipulate facts in politics, propagate fake news and harass individuals. Is this technology as dangerous as it is perceived to be, or does it have limitations like every other technology? To get to the bottom of this, we need to understand how it works and what sort of algorithms are used.

Deepfake technology uses deep learning to fabricate entirely new scenes or alter existing videos. Face-swapping has long been done in movies, but it required skilled editors and CGI experts. For example, after actor Paul Walker died in 2013, his remaining scenes in Furious 7 were completed with the help of his brothers and a VFX team. Deepfake software, on the other hand, uses machine learning systems to make the videos appear genuine, and the results are usually difficult for a layperson to identify. Deepfakes can be created or edited by anybody, even without editing skills.

Deepfake videos are created using Generative Adversarial Networks (GANs), a class of machine learning systems used for unsupervised learning, introduced by Ian J. Goodfellow and his colleagues in 2014. A GAN is made up of two competing neural network models – a generator and a discriminator – which together learn to analyze, capture and reproduce the variations within a dataset. The generator creates fake content and the discriminator tries to detect whether that content is fake. The generator keeps improving its fakes until the discriminator can no longer tell them apart from real data. If the dataset provided to the model is large enough, the generator can create very realistic fake content. FakeApp is one such application that users can easily download to create deepfakes, and GAN-powered websites can even generate a new, realistic facial image of a non-existent person from scratch every time the page is refreshed.
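The adversarial loop between generator and discriminator can be sketched in a few lines. The toy example below is purely illustrative, not code from any deepfake tool: the “real data” is just numbers drawn from a normal distribution around 4, the generator is a linear map g(z) = w·z + b, and the discriminator is a one-feature logistic regression, with the gradients of the standard GAN losses written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # clip to avoid overflow in exp for large negative inputs
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

w, b = 1.0, 0.0   # generator parameters: g(z) = w*z + b
a, c = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(a*x + c)
lr = 0.01

for step in range(2000):
    # --- discriminator step: learn to tell real from fake ---
    x_real = rng.normal(4.0, 1.0, size=32)       # "real" data
    x_fake = w * rng.normal(0.0, 1.0, size=32) + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. a, c
    grad_a = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- generator step: learn to fool the discriminator ---
    z = rng.normal(0.0, 1.0, size=32)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    # gradients of -log D(fake) w.r.t. w, b (non-saturating loss)
    grad_w = np.mean(-(1 - d_fake) * a * z)
    grad_b = np.mean(-(1 - d_fake) * a)
    w -= lr * grad_w
    b -= lr * grad_b

samples = w * rng.normal(0.0, 1.0, size=1000) + b
print(f"fake mean ~ {samples.mean():.2f}, real mean = 4.00")
```

After training, the generator’s output distribution drifts toward the real one – the same tug-of-war that, scaled up to deep convolutional networks and face datasets, produces photo-realistic fakes.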

The potential of this technology is concerning. As Peter Singer, a cybersecurity and defense-focused strategist and senior fellow at New America, said: “The technology can be used to make people believe something is real when it is not.” It can be misused by political parties to manipulate the public and feed them misinformation. It can also become a weapon for online bullying and harassment through the release of doctored videos.

To raise awareness about the risks of misinformation, a video was released in which Barack Obama appears to deliver a warning, his face driven by filmmaker Jordan Peele’s voice and mouth movements. Fake videos of Mark Zuckerberg and Nancy Pelosi have also been doing the rounds on the internet.

Deepfake technology is already on the US government’s radar. California has recently banned the use of deepfakes in politics to stop them from influencing the upcoming election. SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists), which has been at the forefront of battling these technologies, commended the governor for signing the bill. The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to combat deepfakes. DARPA’s MediFor (Media Forensics) program awarded the California-based non-profit research group SRI International three contracts to research new ways of automatically detecting manipulated videos and deepfakes. Researchers at the University at Albany also received DARPA funding to study deepfakes. This team found that analyzing blinks could be one way to tell a deepfake from an unaltered video: because there are few photographs of celebrities with their eyes closed, models trained on those photos produce faces that blink unnaturally rarely.

Some researchers have suggested watermarking deepfakes to avoid misleading people, but as we know, watermarks can be easily removed. Interestingly, blockchain could be part of the solution. Registries of authentic data are being created and stored on blockchains; photos and videos can then be verified against these registries. This is particularly useful for journalists and activists who need to ensure the credibility of what they are sharing.
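The registry idea boils down to comparing cryptographic fingerprints. Here is a minimal sketch, with a plain dictionary standing in for the on-chain registry and made-up footage bytes and provenance strings; a real system would also need perceptual hashing, since merely re-encoding a video changes its exact bytes.

```python
import hashlib

def fingerprint(footage):
    # SHA-256 digest of the raw footage bytes
    return hashlib.sha256(footage).hexdigest()

# at publication time, the authentic clip's fingerprint is registered
# (the dict stands in for a blockchain-backed registry)
registry = {
    fingerprint(b"authentic interview footage"): "Newsroom X, 2019-10-01",
}

def verify(footage):
    # returns the provenance record if the footage is registered, else None
    return registry.get(fingerprint(footage))

print(verify(b"authentic interview footage"))  # provenance record
print(verify(b"authentic interview fo0tage"))  # altered bytes -> None
```

Any change to the footage – even a single byte – yields a different fingerprint, so an unverified lookup is an immediate red flag for a journalist deciding whether to share a clip.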

At present, the technology behind deepfakes has a few limitations. Firstly, GANs require a large dataset to train a model that generates photo-realistic videos – which is probably why politicians and celebrities, with hours of footage publicly available, are targeted more often. Running the model also demands heavy computing power, which can be expensive. At about $0.50 a GPU-hour, it costs around $36 to build a model just for swapping person A to B and vice versa, and that doesn’t include the bandwidth needed to get the training data, or the CPU and I/O to pre-process it. FakeApp uses TensorFlow, a machine learning framework that supports GPU-accelerated computation using NVIDIA graphics cards. Although the application lets users train models without a GPU, the process might then take weeks instead of hours.
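A quick back-of-the-envelope check ties those figures together: at $0.50 per GPU-hour, a $36 bill implies about 72 GPU-hours for one A-to-B swap model. The CPU comparison below uses a hypothetical 20x slowdown factor (not a measured number) just to show how “hours” on a GPU turns into “weeks” without one.

```python
GPU_RATE = 0.50    # dollars per GPU-hour (figure from the article)
TOTAL_COST = 36.0  # dollars (figure from the article)

gpu_hours = TOTAL_COST / GPU_RATE
print(f"GPU time: {gpu_hours:.0f} hours (~{gpu_hours / 24:.0f} days)")

CPU_SLOWDOWN = 20  # hypothetical CPU-vs-GPU slowdown factor
cpu_weeks = gpu_hours * CPU_SLOWDOWN / (24 * 7)
print(f"CPU-only estimate: ~{cpu_weeks:.0f} weeks")
```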

As far as positive applications of this technology go, it could save filmmakers a lot of money by morphing the faces of popular actors onto the bodies of lesser-known ones. Another interesting application, which could result in a unique viewing experience, would be to give viewers a selection of actors to digitally place into a movie. However, this could put actors’ jobs at risk.

As of today, the negatives of this technology far outweigh the positives. If strict regulations are not put in place, it may well turn our lives into a Black Mirror episode.


About the Author:

Soundarya is a developer at GAVS and loves exploring new technologies. Apart from work, she loves her music, memes and board games.