The Ethics of Deepfake Pornography
Artificial Intelligence (AI), and in particular so-called Deep Learning algorithms, give technology users the flexibility to edit and manipulate digital video content. Similar technologies are widely used in popular apps such as Snapchat, whose Face Swap feature allows users to switch faces with one another in live video. They have also given Hollywood filmmakers the ability to insert deceased actors into new movies. But Deep Learning is increasingly used for another purpose as well: to generate pornographic content, commonly known as Deepfake pornography.
Due to recent technological developments, even those with modest coding experience can now manipulate pornographic videos so that they appear to feature celebrities who never appeared in pornography, or even a user’s friends or acquaintances. All that is required is a few minutes of video of a person’s face, or a sufficiently large set of photos, which are then “pasted” onto other videos with strikingly realistic results. Given the amount of personal content uploaded by social media users, combined with the rapid advancement of AI technology, the social significance of Deepfake content is growing quickly. This poses an urgent need for ethical analysis: wherein lies the potential harm in generating, distributing, and consuming Deepfake content? Are such technologies inherently tools of enhanced oppression, or do they offer any potential for sexual emancipation? How do their ethical implications extend and complicate the frameworks that feminist scholars have already debated in relation to non-Deepfake pornography?
For decades, pornography has been a contentious topic among feminist scholars. Anti-pornography scholars have argued that it is by its nature oppressive to, and violent against, women. Newer waves of feminist scholars have argued for the potential of a feminist pornography that works toward sexual equality. Undoubtedly, the advent of the Internet and video technologies has complicated these debates, as the barriers to production and distribution of pornography have been lowered. Among other phenomena, amateur pornography and “revenge porn” have proliferated, bringing issues of consent to the foreground.
The advent of Deepfakes adds further complexity to the matter, demanding continued assessment of the ethical dimensions at work. Most notably, it forces us to reconsider questions of personal identity, collective and individual harm, objectification and commodification, and the demarcation between imagined and visualized information. The goal of this project is therefore 1) to outline an ethical framework to unpack these questions, and 2) to examine the role of such a framework in a larger political context. In pursuing this goal, we bring classical feminist scholarship into dialogue with information ethics (IE), enabling new problems to be identified and potential solutions to be realized.