
X takes drastic steps to combat Taylor Swift deepfakes, blocks all searches for the pop star

Searches for Taylor Swift on Elon Musk’s X platform have been temporarily blocked after sexually explicit deepfake images of the pop star surfaced and spread widely across the platform.

The incident sheds light on the ongoing challenges social media platforms face in addressing deepfakes—realistic images and audio generated using artificial intelligence, often used to depict public figures in compromising situations without their consent.

Attempts to search for terms such as “Taylor Swift” or “Taylor AI” on X resulted in an error message over the weekend. This measure aims to restrict access to AI-generated explicit content involving the renowned singer. Joe Benarroch, Head of Business Operations at X, stated that this action is temporary and taken with an abundance of caution to prioritize safety.

Notably, Elon Musk acquired X for $44 billion in October 2022, and since then, he has reportedly reduced resources dedicated to content moderation, citing a commitment to free speech ideals.

The incident highlights the broader challenge faced by platforms such as X, Meta, TikTok, and YouTube in tackling the abuse of increasingly realistic and easily accessible deepfake technology. Tools in the market now enable individuals to create videos or images resembling celebrities or politicians with just a few clicks.

While deepfake technology has been available for years, recent advances in generative AI have made these images more realistic and easier to produce. Experts express concerns about the rise of deepfakes in political disinformation campaigns and the alarming use of fake explicit content.

In response to the circulation of false images, White House Press Secretary Karine Jean-Pierre emphasized the importance of social media companies enforcing their own rules and urged Congress to legislate on the matter.

On Friday, X’s official safety account declared a “zero-tolerance policy” towards “Non-Consensual Nudity (NCN) images” and assured active removal of identified images along with appropriate actions against responsible accounts. However, depleted content moderation resources hindered X from preventing the widespread viewing of the fake Swift images before removal, prompting the platform to take the drastic step of blocking searches for the global star.

A report indicates that these images originated on anonymous platforms like 4chan and a Telegram group dedicated to sharing abusive AI-generated images of women. Telegram and Microsoft, the alleged tool provider, have not yet responded to requests for comment. The incident adds urgency to discussions around the regulation of deepfake technology and its potential consequences.

(With inputs from agencies)

