Prashant Kumar

New Technology Helps Celebrities Fight Back Against AI Deepfakes

In an age where technology advances at a lightning pace, the rise of artificial intelligence has brought both innovation and challenges. One of the most concerning developments in recent years has been the proliferation of AI-generated deepfakes, particularly impacting public figures and celebrities. These sophisticated manipulations, capable of swapping faces and altering voices in videos convincingly, have raised serious concerns about misinformation, privacy, and the potential for damaging reputations.

Scarlett Johansson took legal action against the maker of the Lisa AI app after discovering that her voice and likeness had been used without permission to promote the artificial intelligence app online. The video featuring her has been removed, but numerous similar “deepfakes” persist on the Internet, including one in which MrBeast appears to endorse $2 iPhones, an offer he never authorized.

Advancements in artificial intelligence have made it increasingly challenging to differentiate between real and fabricated content, with approximately half of respondents in recent AI surveys from Northeastern University, Voicebot.ai, and Pindrop admitting difficulty in distinguishing synthetic from human-generated material.

Celebrities, in particular, face a significant challenge in combating AI-generated impersonations, constantly grappling to stay ahead of these misleading digital reproductions.

Fortunately, emerging tools offer hope in detecting such deepfakes, potentially making it tougher for AI systems to produce them. Ning Zhang, an assistant professor of computer science and engineering at Washington University in St. Louis, emphasized the transformative power of generative AI but also stressed the necessity of establishing defensive mechanisms against its misuse.

Scrambling signals

Zhang and the research team are working on a groundbreaking tool named AntiFake, which is designed to combat deepfake abuses.

Their tool disrupts the audio signal so that AI-based synthesis engines cannot produce accurate copies, Zhang explained. The inspiration for AntiFake came from the University of Chicago’s Glaze, a tool that protects visual artists’ creations from being used by generative AI models.

Although still in its early stages, the project is set to be presented at a major security conference in Denmark later this month. Its scalability remains uncertain for now.

The concept is to upload your voice track to the AntiFake platform before sharing a video online. Available as a standalone app or through the web, the platform scrambles the audio signal just enough to confuse AI models: the altered track sounds normal to the human ear, but it disrupts a synthesis system’s attempts to create an accurate voice clone.
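The article does not spell out AntiFake’s actual algorithm, so the following is only a rough sketch of the general adversarial-perturbation idea behind such tools: add a change too small for people to notice but optimized to push a machine’s internal representation of the voice far away from the original. The Python sketch below substitutes a toy random linear projection for a real neural speaker encoder; every name, size, and constant is an illustrative assumption, not AntiFake’s.

```python
# Illustrative sketch only -- NOT AntiFake's real algorithm. We use a toy
# linear "speaker encoder" (a real system would use a neural network) and
# projected gradient ascent to find a tiny perturbation, capped at EPSILON
# per sample, that maximizes how far the perturbed track's embedding drifts
# from the original's.
import numpy as np

rng = np.random.default_rng(0)

SAMPLE_RATE = 16_000                      # one second of toy "speech"
audio = rng.standard_normal(SAMPLE_RATE)
W = rng.standard_normal((128, SAMPLE_RATE)) / np.sqrt(SAMPLE_RATE)  # hypothetical encoder

def embed(x: np.ndarray) -> np.ndarray:
    """Stand-in speaker embedding: a fixed linear projection."""
    return W @ x

EPSILON = 0.002                           # max per-sample change, kept inaudible-scale
STEP = 1e-4
delta = rng.uniform(-EPSILON, EPSILON, size=audio.shape) * 0.1  # small random start

for _ in range(100):
    # For this linear embed(), the objective ||W @ delta||^2 has the exact
    # gradient 2 * W.T @ (W @ delta); take a signed ascent step, then clip
    # back into the L-infinity ball of radius EPSILON.
    grad = 2.0 * (W.T @ (W @ delta))
    delta = np.clip(delta + STEP * np.sign(grad), -EPSILON, EPSILON)

protected = audio + delta
print(f"max per-sample change: {np.abs(delta).max():.4f}")  # stays <= EPSILON
print(f"embedding drift:       {np.linalg.norm(embed(protected) - embed(audio)):.3f}")
```

A real defense would also have to survive compression, resampling, and attackers using different encoders, which is where the hard research lies.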

The tool’s website showcases numerous examples of real voices transformed by this technology and assures users that they retain full rights to their tracks, which AntiFake will not use for any other purpose. However, Zhang cautioned that the tool may not protect people whose voices are already widely available online. AI models can draw on a broad spectrum of voices, from actors to public figures, and need only a few seconds of speech to generate a high-quality clone.

Zhang acknowledged that no defense mechanism is foolproof, but said AntiFake will be available within a few weeks, giving people a proactive means of safeguarding their speech.

Deepfake detection

Meanwhile, alternative solutions have emerged, such as tools that identify deepfakes after they appear.

Some deepfake-detection technologies embed digital markers in video and audio content at creation time, allowing users to verify whether material is AI-generated; Google’s SynthID and Meta’s Stable Signature are examples. Others spot falsified content by analyzing it directly: Intel’s FakeCatcher looks for subtle physiological signals in video, while tools pioneered by companies like Pindrop and Veridas check minute details, such as whether spoken words are synchronized with the movement of a speaker’s lips.

“There are specific nuances in human speech that machines struggle to replicate,” explained Vijay Balasubramaniyan, the founder and CEO of Pindrop.
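The watermarking systems named above are proprietary, so as a minimal illustration of the general principle only, here is the classic spread-spectrum approach in Python: mix a secret, key-derived pseudo-random sequence into the signal at an amplitude too low to hear, then detect it later by correlating against the same sequence. The key, strength, and threshold values below are all hypothetical.

```python
# Minimal spread-spectrum watermark sketch -- an illustration of the general
# idea, not how SynthID, FakeCatcher, or Stable Signature actually work.
import numpy as np

SECRET_KEY = 42      # hypothetical key shared by the embedder and the detector
STRENGTH = 0.02      # watermark amplitude, small relative to the signal

def watermark_sequence(length: int) -> np.ndarray:
    """Pseudo-random +/-1 sequence derived deterministically from the key."""
    return np.random.default_rng(SECRET_KEY).choice([-1.0, 1.0], size=length)

def embed(signal: np.ndarray) -> np.ndarray:
    return signal + STRENGTH * watermark_sequence(len(signal))

def detect(signal: np.ndarray, threshold: float = 0.01) -> bool:
    # Correlation: the hidden sequence reinforces itself (mean ~= STRENGTH),
    # while uncorrelated content averages toward zero.
    return float(np.mean(signal * watermark_sequence(len(signal)))) > threshold

clean = np.random.default_rng(0).standard_normal(160_000)  # ten seconds of toy "audio"
print(detect(clean), detect(embed(clean)))                 # expected: False True
```

Production systems embed their marks in a perceptual or latent domain and must survive compression and editing; the correlation trick above is only the simplest version of the idea.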

However, Siwei Lyu, a computer science professor at the University at Buffalo specializing in AI system security, highlighted a flaw in deepfake detection: it typically functions only on previously disseminated content. At times, unauthorized AI-generated videos can circulate online for days before being recognized as deepfakes.

“Even if the time gap between a deepfake appearing on social media and its identification as AI-generated is mere minutes, the potential for harm remains significant,” remarked Lyu.

“I view this as the natural progression in safeguarding this technology against potential misuse or exploitation,” remarked Rupal Patel, an applied artificial intelligence professor at Northeastern University and a vice president at Veritone, an AI company. “My concern is that we don’t inadvertently eliminate valuable aspects in fortifying these protections.”

Patel emphasized the remarkable capabilities of generative AI, such as aiding people who have lost their voices, like actor Val Kilmer, who has relied on a synthetic voice since his battle with throat cancer.

Producing such results requires extensive, high-quality datasets, Patel noted, and she cautioned against overly stringent restrictions that could cut off developers’ access to them.

“I believe it’s about striking a balance,” Patel concluded.

When it comes to preventing deepfake misuse, obtaining consent is a crucial element.

In October, a bipartisan group of U.S. senators unveiled a discussion draft of a bill known as the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023,” commonly referred to as the “NO FAKES Act of 2023.” The proposed legislation aims to hold accountable those who create deepfakes using people’s likenesses without proper authorization.

Yael Weitz, an attorney at the New York-based art law firm Kaye Spiegler, noted that the bill is intended to establish a single, consistent federal law, addressing the current disparity in right-of-publicity protections across states.

At present, only half of the U.S. states have laws regarding the “right of publicity,” granting individuals the exclusive authority to permit the use of their identity for commercial purposes. These laws differ in the level of protection they offer, creating inconsistencies. However, the implementation of a federal law might still be a considerable time away.

This story was edited by Jennifer Vanasco. Audio production by Isabella Gomez Sarmiento.
