Recently, there has been a disturbing trend on X (formerly known as Twitter), where explicit and offensive images featuring the famous musician Taylor Swift have been widely circulated. These images have been shared millions of times, prompting criticism from various quarters, including the singer’s fans and the White House.
One particularly widely shared post containing these images garnered over 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks within 17 hours before it was eventually removed from the platform. The incident echoed the Streisand effect: the images gained more attention and spread more widely because of the outrage expressed by fans and other users on X. The hashtag “Taylor Swift AI” trended on the platform for two days.
Fortunately, Swifties, Taylor Swift’s dedicated fanbase, buried the controversial search term by flooding it with unrelated posts using similar keywords. Despite these efforts, reports suggest that some explicit images still linger on the platform.
Where did the Taylor Swift AI images come from?
404 Media discovered that the AI-generated Taylor Swift images were linked to a specific Telegram group focused on distributing abusive images of women. The group utilized a free Microsoft text-to-image generator as one of their tools.
It’s important to note that these images don’t exactly qualify as “deepfakes” based on the term’s original definition. Originally, deepfakes referred to images or videos created with deep-learning models trained on one person’s face, which was then swapped onto another body. In this case, rather than superimposing Taylor Swift’s face onto an existing explicit image, the images were generated entirely from scratch using generative AI.
The Telegram group recommended that its members use Microsoft’s AI image generator, Designer. Additionally, the group shared prompts to help users bypass the protective measures implemented by Microsoft. Before the images gained widespread attention, group members advised others to use “Taylor ‘singer’ Swift” instead of “Taylor Swift” to evade restrictions. While 404 Media couldn’t replicate the specific images posted on X, they found that Microsoft’s Designer successfully generated images of “Taylor ‘singer’ Swift” even though it didn’t work with “Taylor Swift.”
How the White House, SAG-AFTRA and fans reacted to the images
The White House expressed deep concern on Friday over the widespread circulation of the sexually explicit AI-generated images of Taylor Swift on X.
White House Press Secretary Karine Jean-Pierre emphasized the pivotal role social media companies play in enforcing their own rules against misinformation and non-consensual intimate imagery, and urged Congress to consider legislative action.
SAG-AFTRA, the prominent American labor union representing media professionals, also condemned the images and advocated for legislative measures to prevent such exploitation. X, the Elon Musk-owned platform, issued its own statement without directly referencing Taylor Swift or AI deepfakes.