In 1981, the critically unacclaimed Looker presented the story of a nefarious corporation digitizing and then murdering supermodels. This was, one assumes, to avoid having to pay them royalties. In many ways, this film was a technological pioneer: it was the first commercial film to attempt a realistic computer-generated character and the first to use 3D computer shading (beating out Tron). Most importantly, it seems to be the first film to predict a technology for replacing real people with digital versions, and to predict that it would be used with nefarious intent.
The technology for creating digital versions of real people is not yet perfect, but it is quite good and will continue to improve. While one might think that such creations would require the resources of Hollywood, the software for making deepfakes is readily available, thus opening the door for anyone to create their own digital deceits.
As should be expected, the first use of this technology was to “deepfake” the appearance of celebrities onto the bodies of porn actors. While obviously of concern to the impacted celebrities, the creation of deepfake celebrity porn is probably the least harmful aspect of the technology. Staying within the realm of porn, deepfakes could be made of ordinary people in an effort to humiliate them and damage their reputations (and perhaps get them fired). On the other side of the coin, the existence of deepfakes could enable people to claim that real images or videos of them are not real. One can easily imagine cheaters using the deepfake defense, and the better deepfakes get, the better the defense. This points to the broad problem with the existence of deepfakes: when the technology is good enough and widespread enough, it will be difficult to tell what is real and what is deepfake. This is the core moral problem with the technology, and its potential for abuse is vast. One obvious misuse is the creation of real fake news: videos of events that never occurred and recordings of people saying things they never said.
It can be argued that there are legitimate uses of deepfake-style technology; think of movies and video games. This is certainly a reasonable point: if the people being digitized provide informed consent, this is merely an improved version of the CGI that has already been used to recreate actors in movies and video games. However, this argument misses the point of the concern: the problem is not the technology but the uses to which it is put. To use an analogy, one can defend guns by arguing that there are legitimate uses (such as self-defense, hunting, and target shooting), but this obviously does not defend homicides committed with guns. The same holds for deepfake technology: the technology itself is morally neutral, although it can clearly be used for evil ends. This makes it problematic to control or limit the underlying technology, even if it were possible to do so. One need merely think about how easy it is to acquire software from anywhere in the world to see that it would be almost impossible to control access to this technology; it would be on par, one imagines, with trying to prevent access to pirated movies and software. Because of this, limiting access is not a viable defense against deepfakes.
From a philosophical perspective, deepfakes present an epistemic nightmare worthy of the classic skeptics. While not on the scale of the problem of the external world (how do we know the allegedly real world is really real?), deepfakes present a basic epistemic challenge: how do you know that a video is real and not a deepfake? The problem has two parts. The first is discerning that a fake is a fake. The second is discerning that the real is real. Fortunately, the goal here is practical: we need not be certain, we just need to be sure enough. This, of course, raises the problem of sorting out how confident we need to be in each situation, but this is nothing new; law and critical thinking have long addressed the matter of required levels of proof.
On the philosophical side, the old tools of critical thinking will still serve against deepfakes, although awareness of the technology will obviously be essential to applying them. For example, if a video appears of Mike Pence cruelly torturing his family’s pet rabbit, it would be reasonable to conclude that the video is a deepfake, especially if the real rabbit seems fine. Whatever one might think of his politics, Pence does not seem to be the sort of person who would do such a thing. There is also the general point that deepfakes do not create physical evidence and cannot (yet) be witnessed occurring in the real world, something that can be relevant to sorting the real from the fake. Naturally, fully covering the critical thinking needed to counter deepfakes would require at least a book chapter, so I will not attempt it here. Such a work should, of course, include a discussion of how the possibility of deepfakes can be misused to argue that what is real is fake.
On the technological side, there will be an ongoing arms race between the software used to create deepfakes and the software used to detect them. One obvious concern is that nations will be working hard both to create deepfakes and to defeat them, so there will be plenty of funding for both sides of the race. Actually, I suspect that countries like China and Russia will focus more on creating deepfakes than on defeating them, mainly because they already have considerable control over their own media. In contrast, deepfakes will no doubt infest American social media soon, perhaps in time for the 2020 election.