In 1981, the critically unacclaimed Looker presented the story of a nefarious corporation digitizing and then murdering supermodels. This was, one assumes, to avoid having to pay them royalties. In many ways, this film was a leader in technology: it was the first commercial film to attempt to create a realistic computer-generated character and the first to use 3D computer shading (beating out Tron). Most importantly, it seems to be the first film to predict a technology for replacing people with digital versions and to anticipate that it would be used with nefarious intent.

While the technology for creating digital versions of real people is still a work in progress, it is already quite good and will continue to improve. One might think that such creations would require the resources of Hollywood, but the software for creating deepfakes has been readily available for years, opening the door for anyone to create their own digital deceits.

As should be expected, the first use of this technology was to “deepfake” the faces of celebrities onto the bodies of porn actors. While obviously of concern to the affected celebrities, the creation of deepfake celebrity porn is probably the least harmful aspect of deepfakes. Sticking within the realm of porn, deepfakes could be created of ordinary people in efforts to humiliate them and damage their reputations (and perhaps get them fired). On the other side of the coin, the existence of deepfakes could enable people to claim that real images or videos of them are not real. One can easily imagine cheaters using the deepfake defense, and the better deepfakes get, the better the defense. This points to the broad problem with the existence of deepfakes: when the technology is good enough and widespread enough, it will be difficult to tell what is real and what is a deepfake. This is the core moral problem with the technology, and its potential for abuse is considerable. One obvious misuse is the creation of fake news in the form of videos of events that never occurred and recordings of people saying things they never said.

It can be argued that there are legitimate uses of deepfake-style technology, such as in movies and video games. This is a reasonable point: if those being digitized provide informed consent, this is just an improved version of the CGI that has long been used to recreate the appearance of actors in movies and video games. However, this argument misses the point: it is not the technology that is the problem; it is the use. To use an analogy, one can defend guns by arguing that there are legitimate uses (such as self-defense, hunting, and target shooting), but this does not defend homicides committed with guns. The same holds for deepfake technology: the technology itself is morally neutral, although it can clearly be used for evil ends. This makes it problematic to control or limit the underlying technology, even if doing so were possible. The software is easy to acquire and access to it is almost impossible to control; restricting it would be on par with trying to prevent access to pirated movies and software. Because of this, limiting access is not a viable option.

From a philosophical perspective, deepfakes present an epistemic problem worthy of the skeptics. While not on the scale of the problem of the external world (how do we know the allegedly real world is really real?), the problem of deepfakes presents a basic epistemic challenge: how do you know that a video or audio recording is real and not a deepfake? The problem can be seen as having two parts. The first is discerning that a fake is a fake. The second is discerning that the real is real. Fortunately, the goal here is practical in that we do not need epistemic certainty; we just need to be reasonably confident in our judgments. This does raise the problem of sorting out how confident we need to be in each situation, but this is nothing new; law and critical thinking have long addressed the matter of required levels of proof.

On the philosophical side, the old tools of critical thinking will still serve against deepfakes, although awareness of the technology will be essential. For example, if a video appears of Taylor Swift killing cats, then it would be reasonable to conclude that this is a deepfake. Whatever one might think of Taylor Swift, she does not seem to be a cat killer. There is also the general point that deepfakes do not create physical evidence, and Life Model Decoys (probably) do not exist. Naturally, a full account of the critical thinking needed to deal with deepfakes goes far beyond the scope of this essay.

On the technological side, there will be an ongoing arms race between software used to create deepfakes and software used to detect them. One concern is that nations will be working hard both to create and to defeat deepfakes, so there will be plenty of funding for both. Interestingly, there seems to have been little use of deepfake technology in American politics, perhaps because it was judged either unnecessary or too risky to use.
