Deepfake pornography has sparked controversy because it involves making and sharing realistic videos of non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn.
The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms.
[1] Deepfake pornography was originally created on a small, individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software.
One of the earliest examples occurred in 2017 when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online.
[7][8] In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram.
[15][19] The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre,[20] Microsoft CEO Satya Nadella,[21] the Rape, Abuse & Incest National Network,[22] and SAG-AFTRA.
[24] Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually.
[25] In August 2024, it emerged in South Korea that many teachers and female students had become victims of deepfake images created using AI technology.
[26][27][28] On Telegram, group chats were created specifically for image-based sexual abuse of women, including middle and high school students, teachers, and even family members.
Perpetrators use AI bots to generate fake images, which are then sold or widely shared, along with the victims’ social media accounts, phone numbers, and KakaoTalk usernames.
Investigations revealed numerous chat groups on Telegram where users, mainly teenagers, created and shared explicit deepfake images of classmates and teachers.
[30] On September 21, 6,000 people gathered at Marronnier Park in northeastern Seoul to demand stronger legal action against deepfake crimes targeting women.
[31] On September 26, following widespread outrage over the Telegram scandal, South Korean lawmakers passed a bill criminalizing the possession or viewing of sexually explicit deepfake images and videos, imposing penalties that include prison terms and fines.
One promising approach to detecting deepfakes is through the use of Convolutional Neural Networks (CNNs), which have shown high accuracy in distinguishing between real and fake images.
One such algorithm utilizes a pre-trained CNN to extract features from facial regions of interest, then applies a novel attention mechanism to identify discrepancies between the original and manipulated images.
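The general pipeline described above can be illustrated with a minimal sketch. This is not the cited algorithm and uses no pre-trained network; it only demonstrates, in plain Python, the two stages involved: convolutional feature extraction over image regions, followed by a softmax attention step that weights regions by the magnitude of their responses (where manipulation artifacts tend to produce anomalous activations). All function names here are hypothetical.

```python
import math

def conv2d(image, kernel):
    """Valid 2D cross-correlation over a grayscale image (list of lists).

    This is the core operation a CNN layer performs; real detectors stack
    many learned kernels, but one hand-written kernel shows the mechanics.
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def attention_scores(region_features):
    """Softmax over per-region feature magnitudes.

    Regions whose features respond unusually strongly (candidate
    manipulation artifacts) receive higher attention weight.
    """
    exps = [math.exp(f) for f in region_features]
    total = sum(exps)
    return [e / total for e in exps]

# Toy usage: a Laplacian-style kernel responds strongly to the perturbed
# center pixel, and attention then concentrates on that region.
laplacian = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
smooth_patch = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
odd_patch = [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
features = [abs(conv2d(p, laplacian)[0][0]) for p in (smooth_patch, odd_patch)]
weights = attention_scores(features)  # the anomalous patch dominates
```

In a real detector the kernel weights are learned from labeled real/fake data and the attention module is itself trained, but the flow of information (regions → convolutional features → attention weights → decision) is the same.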
[44] A newer version of the bill was introduced in 2021; it would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure identifying and explaining any altered audio and visual elements.
[54][55] In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results.