New Technology Helps Celebrities Fight Back Against AI Deepfakes

In an age when technology advances at a lightning pace, the rise of artificial intelligence has brought both innovation and challenges. One of the most concerning developments in recent years has been the proliferation of AI-generated deepfakes, particularly those targeting public figures and celebrities. These sophisticated manipulations, capable of convincingly swapping faces and altering voices in videos, have raised serious concerns about misinformation, privacy, and reputational harm.

Scarlett Johansson took legal action against the maker of the Lisa AI app after discovering that her voice and likeness had been used without permission to promote the app online. The video featuring her has been removed, but numerous similar “deepfakes” persist on the Internet, including one in which MrBeast appears to endorse $2 iPhones without his authorization.

Advances in artificial intelligence have made it increasingly difficult to tell real content from fabricated content. In recent AI surveys from Northeastern University, Voicebot.ai, and Pindrop, roughly half of respondents admitted they have trouble distinguishing synthetic material from human-generated material.

Celebrities, in particular, face a significant challenge in combating AI-generated impersonations, constantly struggling to stay ahead of these misleading digital reproductions.

Fortunately, emerging tools offer hope for detecting such deepfakes and could make it harder for AI systems to produce them. Ning Zhang, an assistant professor of computer science and engineering at Washington University in St. Louis, emphasized the transformative power of generative AI but also stressed the need to build defenses against its misuse.

Scrambling signals

Zhang and the research team are working on a groundbreaking tool named AntiFake, which is designed to combat deepfake abuses.
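The general idea suggested by the name of this approach, scrambling a recording's signal so that voice-cloning models cannot faithfully reuse it, can be sketched in rough form. The toy code below is a hypothetical illustration, not AntiFake's actual method: the stand-in speaker encoder, the perturbation budget (epsilon), and all other parameters are assumptions made for the example. It simply nudges an audio waveform, within a tiny amplitude budget, so that a machine-extracted voice embedding drifts away from the original while the samples themselves barely change.

```python
# Toy illustration (not the AntiFake implementation) of adversarial audio
# protection: perturb a waveform, within a small inaudibility budget, so a
# speaker-embedding model no longer matches the original voice.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToySpeakerEncoder(nn.Module):
    """Stand-in for a real speaker-embedding network (assumed for illustration)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.net(wav), dim=-1)

def protect(wav: torch.Tensor, encoder: nn.Module,
            epsilon: float = 0.002, steps: int = 100, lr: float = 1e-3) -> torch.Tensor:
    """Return a perturbed copy of `wav` whose embedding moves away from the
    original, while every sample stays within +/- epsilon of the source."""
    encoder.eval()
    with torch.no_grad():
        target = encoder(wav)                 # embedding of the clean voice
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((wav + delta).clamp(-1.0, 1.0))
        loss = nn.functional.cosine_similarity(emb, target).mean()
        opt.zero_grad()
        loss.backward()                       # minimizing similarity to the original
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)   # keep the change small
    return (wav + delta.detach()).clamp(-1.0, 1.0)

# Demo on a synthetic one-second "recording" (16 kHz mono).
wav = 0.1 * torch.sin(2 * torch.pi * 220 * torch.linspace(0, 1, 16000)).view(1, 1, -1)
encoder = ToySpeakerEncoder()
protected = protect(wav, encoder)
sim = nn.functional.cosine_similarity(encoder(wav), encoder(protected)).item()
print(f"max sample change: {(protected - wav).abs().max():.4f}, embedding similarity: {sim:.3f}")
```

In a real defense of this kind, the perturbation would be optimized against the encoders used by actual voice-synthesis systems and constrained by perceptual measures rather than a flat amplitude cap, but the sketch conveys the basic trade-off: disrupt the machine's view of the voice while keeping the recording intelligible to human listeners.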
