
EU Proposals Will Fail to Curb Nonconsensual Deepfake Porn

by Patrick Grady

Existing and proposed laws will fail to protect EU citizens from nonconsensual pornographic deepfakes—AI-generated images, audio, or videos that use an individual’s likeness to create pornographic material without the individual’s consent. Policymakers should amend current legislative proposals to better protect victims and, in the meantime, encourage soft law approaches.

Although deepfakes can have legitimate commercial uses (for instance, in film or gaming), 96 percent of deepfake videos found online are nonconsensual pornography. Perpetrators superimpose the likeness of an individual, most often an actor or musician and almost always a woman, onto sexual material without permission. Sometimes perpetrators share these deepfakes for purely lewd purposes; other times they do so to harass, extort, offend, defame, or embarrass individuals. With the increasing availability of AI tools, creating and distributing nonconsensual deepfake pornography has become easier.

There are no specific laws protecting victims of nonconsensual deepfake pornography, and new proposals will fall short.

The Digital Services Act (DSA) obliges platforms to establish procedures for reporting and removing illegal content. However, this will have little impact on the spread of nonconsensual pornographic deepfakes, since it does not classify nonconsensual deepfakes as illegal. The DSA also obliges the largest platforms to undertake risk assessments, deploy mitigation measures, and subject themselves to audits to ensure they enforce their terms and conditions. But 94 percent of deepfake pornography is hosted on dedicated pornographic websites, not on mainstream platforms (which have already adopted policies to stop the spread of this content). And during last-minute DSA negotiations, the EU dropped a proposal that would have required porn sites hosting user-generated content to swiftly remove material flagged by victims as depicting them without permission.

The Artificial Intelligence (AI) Act, likely to pass into law in 2023, requires creators to disclose deepfake content. But in the case of well-known individuals, disclosure of pornographic deepfakes would hardly deter perpetrators: the demand for the content does not depend on its authenticity, nor would a disclosure surprise viewers, who may already assume the content is fake.

The Directive on Gender-Based Violence proposed in 2022 is the most promising legislative solution. The bill would criminalize nonconsensual sharing of intimate images, and perpetrators could face jail time. Article 7b brings into scope material that users produce or manipulate, “making it appear as though another person is engaged in sexual activities.” Unfortunately, this phrasing covers neither nudity that is not explicitly sexual nor sexual imagery that is not wholly nude. Given that deepfakes involving either can be harmful if shared, all “intimate material” manipulated and shared without consent should be in scope. Moreover, the bill covers only material “made accessible to a multitude of end-users.” This limitation should be removed, as sharing a nonconsensual pornographic deepfake with even a single person can cause great harm, including significant psychological impacts and reputational injury.

A report by Europol, the EU’s law enforcement agency, argues that the EU should invest in deepfake detection systems. However, the AI Act would require systems used by law enforcement to fulfill costly and burdensome compliance requirements, including conformity assessments, risk management, and data governance processes. Law enforcement AI tools used solely for deepfake detection should be exempted from these burdens to encourage their development and use.

Nimbler soft law approaches should supplement adjustments to the AI Act and the Directive on Gender-Based Violence. Policymakers should encourage sites susceptible to hosting nonconsensual pornographic deepfakes to work with experts on best practices for reporting and taking down content, to develop their own state-of-the-art detection tools, and to use age-estimation tools to remove deepfakes of underage users (which are never consensual). The EU has already worked with industry to produce a self-regulatory code of practice on disinformation; creating a similar code for nonconsensual pornography would be a positive step forward. Furthermore, public awareness campaigns can educate the public about the legality and dangers of nonconsensual deepfake pornography and explain how victims can report such content and seek help.

Curbing nonconsensual pornographic deepfakes will require a combination of hard and soft law approaches. The EU’s move to criminalize online sexual violence is late but laudable. It should amend the Directive on Gender-Based Violence to ensure it protects victims. More immediately, civil society and industry experts should encourage platforms to better self-regulate their services.

Image credit: David J
