What is going on with the Taylor Swift deepfake porn scandal?

AI-generated porn, driven by misogyny, is inundating the internet, and Taylor Swift is among its recent high-profile victims

Credit: Getty

For almost 24 hours last week, deepfake pornographic images of Taylor Swift spread rapidly across X, the social media platform formerly known as Twitter.

One particularly egregious post garnered more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks. The verified user responsible for sharing the explicit content eventually had their account suspended for violating platform policies – but only after 17 hours.

The social media platform’s slow response aside, this episode highlights the escalating issue of AI-generated fake pornography and the challenges associated with preventing its rapid dissemination.

What happened? 

The origin of the images is unclear: some sources suggest they originated in a Telegram group, while others believe they emerged from Celebrity Jihad, a tabloid website.

Subsequently, the images gained considerable traction as they circulated on various social-media platforms, including X, Reddit, and Instagram.

As users began discussing the viral post, the explicit images proliferated, reposted across multiple accounts, and the situation was exacerbated by a wave of additional graphic deepfakes. Notably, in certain regions the term "Taylor Swift AI" became a trending topic, promoting the images to even broader audiences.

A Swift response 

Despite the explicit images violating X's content policy, they still gained substantial traction on the platform. X's response to the spread was notably delayed, with only a few accounts being suspended initially. As the images continued to spread rapidly, Swifties (the pop star’s fanbase) took matters into their own hands.

They flooded X with photos and videos of her concert performances in order to bury the explicit deepfake images, and collaborated to report the accounts responsible for disseminating the unauthorised content. They also took over the hashtags associated with the explicit images, filling them with clips of Swift's performances and effectively diverting attention away from the deepfakes.

X subsequently temporarily blocked searches of "Taylor Swift" in order to curb the spread.

Concerns about X and content moderation

Since Elon Musk acquired the platform (then still Twitter) in 2022, it has struggled with content moderation, marked by staff reductions and a relaxation of rules. Despite the suspension of several accounts, the explicit images persisted, prompting concerns about the platform's capacity to address such incidents.

Last year, Ella Irwin, X's former head of trust and safety, resigned, citing a divergence in principles. In a statement made after her resignation, she noted that “there was an understanding that hate speech, for example, violent graphic content, things like that, were not promoted, advertised, or amplified”.

Broader issue of deepfake abuse and its gendered nature

While not all applications of AI-generated imagery are harmful (case in point: the funny viral deepfakes of Pope Francis in a puffer jacket), a darker side emerges when deepfakes are crafted for malicious intent. 

Women, in particular, bear the brunt of this misuse. According to research firm DeepTrace, a staggering 96 percent of deepfake videos online are non-consensual fake videos of women, primarily targeting well-known actors and musicians. These images are often exploited for malicious ends such as cyberbullying, sexual extortion, and image-based sexual abuse.

This underscores the urgent need for robust measures and legislation – both international and domestic – to address the malicious application of deepfake technology and protect individuals, particularly women, from the consequences of such misuse.

What’s happening next?

Rumours are circulating that Taylor Swift is considering legal action against the dissemination of the deepfake images. 

The gravity of the situation has drawn attention even from the White House, where press secretary Karine Jean-Pierre called the situation alarming and emphasised the need for social media networks to play a more proactive role in preventing the spread of such images.

Several Congressional representatives have also introduced bills aimed at combating deepfake pornography, though it remains to be seen whether these measures are comprehensive enough to address the growing threats posed by deepfake technology and the spread of non-consensual imagery online.
