Deepfakes?
Cybercrime evolves in step with advances in technology, and thanks to these advances a new and quite frightening trend, the deepfake, has recently surfaced. A deepfake is synthetic media produced by software that takes one person’s face and places it on another person in an existing video or image. This may sound trivial, and not a source of new risk: the concept is hardly new when we consider computer-generated imagery in movies, for example, where it is common to paste an actor’s digitized face onto a stunt double to maintain the film’s illusion and suspension of disbelief. The perplexing question is: what does this ever-improving technology entail for cybercrime? What does it entail for pornography, politics, and fraud?
Deepfakes pose a great threat to society as we know it, undermining our perception of truth and of what is real and what is fictional. Once we start tampering with what we take to be true because we perceive it directly, a line is crossed that affects credibility on a whole other scale. If we are unable to distinguish between what is real and what is a synthesis of the truth, or to pick out threads of falseness in a fabric of truth, then we, at a societal level, are in danger.
To understand this developing epidemic we must first examine how it came about. What is interesting about the development of deepfakes is that it took place across two fields: the academic and the amateur. While academics were researching and developing the technology for loftier purposes, amateurs were applying it to more mundane ends, such as pornography. Deepfakes belong to computer vision, a subfield of computer science. This interdisciplinary field studies how computers can achieve a greater level of understanding of, and extract information from, digital videos and images. Put more concisely, “this is what happens with deepfake, a set of techniques used to synthesise new visual products” (Floridi, 320). The technology’s origins trace back to 1997 and the Video Rewrite program, described by researcher Christoph Bregler et al. in the paper ‘Video Rewrite: Driving Visual Speech with Audio’. The concept was to take a video of a person speaking and have them say something else, without the mismatch between the spoken words and the facial movements raising doubts: “Video rewrite reorders the mouth images in the training footage to match the phoneme sequence of the new audio track” (Bregler, 1). They succeeded, achieving the first technological breakthrough in the field: a working facial reanimation that matched a new audio track.
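To make the mechanism concrete, here is a minimal sketch in Python of the Video Rewrite idea: reordering stored mouth images to match the phoneme sequence of a new audio track. Every name and frame below is hypothetical, and the sketch is a simplification; the actual system matched triphones (phoneme triples) and morphed between frames rather than doing a simple lookup.

```python
# Minimal sketch of the Video Rewrite idea: reorder mouth images from
# training footage to match the phoneme sequence of a new audio track.
# All names and frames are hypothetical illustrations.

# Mouth-image frames extracted from training footage, indexed by the
# phoneme being spoken in each frame (built offline from labeled video).
phoneme_to_frames = {
    "HH": ["frame_012.png"],
    "EH": ["frame_045.png", "frame_131.png"],
    "L":  ["frame_078.png"],
    "OW": ["frame_090.png"],
}

def rewrite_video(new_phoneme_sequence):
    """Pick one stored mouth frame per phoneme of the new audio track."""
    output_frames = []
    for phoneme in new_phoneme_sequence:
        candidates = phoneme_to_frames.get(phoneme)
        if candidates is None:
            continue  # no footage for this phoneme; the real system interpolates
        output_frames.append(candidates[0])  # the real system picks the best triphone match
    return output_frames

# “Hello” as a phoneme sequence (ARPAbet-style labels)
print(rewrite_video(["HH", "EH", "L", "OW"]))
```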
Working from this technology, today anyone with access to the appropriate software can synthesize images and videos to create something new, something altered: an alternative reality. “Modified or photoshopped images are certainly nothing new, but the difference between deepfakes and these prior image manipulation techniques is that there is no reliable way to detect deepfakes today” (Wang, 2019). “The most popular algorithm capable of generating deepfake images is Generative Adversarial Networks (GAN)” (Wang, 2019); this technique can be used by anyone with basic programming skills, and it is thus widely applied to developing deepfakes. “One network is the ‘generative’ model that aims to generate similar data to the original training set. The other is the ‘discriminatory’ model whose goal is to classify whether a particular data set is synthetic or original” (Wang, 2019). The term ‘deepfake’ itself originated on Reddit, where creators shared their work, typically celebrities’ faces plastered onto pornstars in pornographic videos; in 2017 a trend of putting Nicolas Cage’s face onto other movie stars went viral. By then the accessibility and power of deepfakes were undeniable, although not yet frightening. Applications such as FakeApp let you and your friends play with photographs and videos, swapping each other’s faces, and in 2016 Snapchat launched a face-swap filter that would swap your face with the person standing next to you. What used to be a mere synthesis of code was now being replicated and perfected by both amateurs and industry, and, most importantly, it was being made available to the general public.
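As an illustration of the two adversarial networks Wang describes, below is a minimal GAN training loop sketched in PyTorch. It operates on flat random vectors rather than face images, and every size and hyperparameter is an arbitrary assumption, not a recipe for an actual deepfake model.

```python
# Minimal GAN sketch: a generator learns to produce data that a
# discriminator cannot tell apart from "real" training data.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # noise and data dimensions (arbitrary assumptions)

# "Generative" model: maps random noise to synthetic data.
generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA), nn.Tanh(),
)

# "Discriminatory" model: classifies data as original (1) or synthetic (0).
discriminator = nn.Sequential(
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    # Stand-in for a batch of real training images (random here for brevity).
    real = torch.randn(32, DATA)
    fake = generator(torch.randn(32, LATENT))

    # Train the discriminator to separate real from fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator into outputting "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In a real deepfake pipeline the two models would be convolutional networks trained on face crops, but the adversarial structure, a generator trying to fool a discriminator that is simultaneously learning to catch it, is the same.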
Pornography
Pornography was one of the first domains to be affected by deepfakes. As mentioned above, it became a popular trend to plaster a celebrity’s face onto a pornstar’s body. Needless to say, this was non-consensual; the resulting porn went viral and primarily affected actresses. On the back of this fast-growing trend, a number of websites popped up whose primary purpose was to distribute non-consensual deepfake porn. Although these pornographic deepfakes of famous women may not have been intended to undermine powerful women, that was their effect. Celebrities such as Emma Watson, a spokesperson for the United Nations, were found to be among the most common targets.
Then there is the issue of ‘revenge porn’. The accessibility and simplicity of such software, which by 2016 could run on virtually every platform and operating system, made everyday men and women targets of revenge porn. “Imagine seeing yourself in a sexually explicit video in which you have never participated. This is a distinct possibility today for a female celebrity or a regular woman living in the age of deepfakes” (Wang, 2019). This is a blatant violation of victims’ rights to their own image and privacy, and revenge porn can, as one can imagine, ruin someone’s career and life. The implications of deepfake porn should push lawmakers to ask to whom the intellectual property in synthetic media belongs: the creator, or the person depicted? Deepfakes in general are bound to open new legal territory for many actors to battle over damages and reparations in, possibly adding a new category to the taxonomy of cybercrime.
Politics
“A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event” (Chesney, 147). The biggest and most worrisome issue regarding deepfakes is their impact on politics. “What is the future ahead of us? Digital technologies seem to undermine our confidence in the original, genuine, authentic nature of what we see and hear” (Floridi, 320). If people can no longer trust what they see and hear, what does that entail for democracy, and for the politics that shape countries and societies? Nothing good.
“For an alarming vision of the possible impact of deepfakes on the information landscape, check out a fake video of Barack Obama warning about the dangers of fake videos” (Bates, 64). In 2018, actor and director Jordan Peele used artificial intelligence to warn the public about the dangers of deepfakes. The video, produced by Peele’s production company using a combination of Adobe After Effects and the aforementioned FakeApp, was made to have Barack Obama deliver a public service announcement. Peele, as Obama, states: “It may sound basic, but how we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of fucked-up dystopia” (Video).
The video serves as a grave warning about how banal the distortion and distribution of information has become, and this technology poses a great threat to politics. Imagine a video in which we see “the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran…In a world already primed for violence, such recordings would have a powerful potential for incitement” (Chesney, 147).
Fraud
Deepfakes now serve as a new tool for social engineering. Criminals want to make money, and they want to make it fast. What better way to make money and protect your identity at the same time than by simply pretending to be someone else? It is the perfect crime: the criminal is untouchable and often untraceable. Access to deepfakes enables criminals to commit crimes first hand while masking their identity, changing their voice or face with deepfake software. “The Wall Street Journal reported that some clever attackers built a deepfake voice model used to convince an employee of a UK energy company that they were speaking to the company’s CEO. The employee then authorized a wire transfer of some $243,000 to a secret account” (Eddy, 2). That is an example of a large-scale scam, but criminals can also target ordinary people for smaller payouts. “It’s not hard to imagine a scammer using videos posted to Facebook or Instagram to create deepfake voice models to convince people their family members need a lot of money sent by wire transfer” (Eddy, 4). Nor is it hard to imagine a person falling for such a trick; after all, they are just trusting what they see and hear, and why shouldn’t they?
Deepfakes mean that anyone can become a criminal and anyone can become a target. We cannot tell people to question everything they see and hear; it is not possible, and it undermines everything: every bond, every stability, the element of trust that keeps society together. Living with such distrust would eventually drive people mad, make them fall for every possible conspiracy theory, and leave them in a continuous state of suspicion, fear, and hostility. If we cannot trust what we perceive with our own eyes and ears, what can we rely on?
Implications
Who will be our friends, and who the foes? People would not just distrust ‘fake news’ from named providers; the whole world would be potentially fake, and anything digital deemed unreliable. Any piece of information not experienced first hand, without technological intermediaries, would be dismissed as unproven, unreliable, construed, altered, part of a pervasive deep-state conspiracy perpetrated by others, and who ‘the others’ are would differ depending on whom you are talking to. This, if you think it over for a second, could dismantle all of the assumptions that allow a society to sustain collaboration, trade, relationships, exchange, progress, and ultimately peace and stability.
Deepfakes are everywhere. You might not even know whether you are listening to or looking at a mere synthesis of a twisted truth. The implications of such software, and its accessibility, are far-reaching. We cannot tell people to be more cautious than they already are, and there will always be victims: although some criminals are dimwitted, others are not, and they understand well the potential of this technology to further their crimes and expand their reach. The threats such technology poses are great and will undoubtedly grow as more advances are made by academics and amateurs alike. Having investigated the impact of deepfakes on pornography and their harm to women, on politics and the dangers of fake news that looks real, and on fraud and the simplification and amplification of criminal potential, it is evident that deepfakes pose a wide variety of threats to everyone. Deepfakes do provide, to a great extent, new uncharted ground for cybercrime, enabling endless possibilities for the furtherance of crime behind the shield of anonymity such software provides. Anyone who possesses the power to alter reality as we know it can fool the whole world from the safety of their home, behind a computer, with the ability to reach and manipulate millions.
Works Cited
- Bates, Mary Ellen. “Say What? ‘Deepfakes’ Are Deeply Concerning.” Online Searcher, vol. 42, no. 4, 2018.
- Bregler, Christoph, et al. “Video Rewrite: Driving Visual Speech with Audio.” Proceedings of SIGGRAPH 97, 1997.
- Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs, vol. 98, no. 1, 2019, pp. 147–155.
- Eddy, Max. “Scammers Go Phishing with Deepfakes.” PC Magazine, Oct. 2019, p. 33.
- Floridi, Luciano. “Artificial Intelligence, Deepfakes and a Future of Ectypes.” Philosophy & Technology, vol. 31, no. 3, 2018, pp. 317–321, DOI:10.1007/s13347-018-0325-3.
- “Jordan Peele Uses AI, President Obama in Fake News PSA.” YouTube, http://www.youtube.com/watch?v=bE1KWpoX9Hk.
- Wang, Chenxi. “Deepfakes, Revenge Porn, And The Impact On Women.” Forbes, 1 Nov. 2019, http://www.forbes.com/sites/chenxiwang/2019/11/01/deepfakes-revenge-porn-and-the-impact-on-women/#582c39b41f53.