Prashant Kumar

An AI app cloned Scarlett Johansson’s voice for an ad—but deepfakes aren’t just a problem for celebrities

The line between reality and manipulation continues to blur in the age of rapid technological advancement. An unsettling demonstration of this recently surfaced when an AI application replicated Scarlett Johansson’s voice for an advertisement. While the incident may seem innocuous on the surface, it points to deeper concerns about the spread of deepfake technology, whose implications reach far beyond celebrities.

Actress Scarlett Johansson has initiated legal proceedings against an AI application that utilized her name and an AI-simulated version of her voice in an advertisement without obtaining her consent, as reported by Variety.

The 22-second advertisement appeared on X (formerly known as Twitter) on Oct. 28, posted by the AI image-generating app Lisa AI: 90s Yearbook & Avatar, according to Variety. The ad showed images of Johansson alongside an AI-generated voice resembling hers endorsing the app. A small-print disclaimer beneath the ad, however, stated that the AI-generated content “has nothing to do with this person.”

Representatives for Johansson confirmed to Variety that she has no affiliation with the app and said that legal action is being pursued. The ad has since been removed, and CNBC was unable to view it. Lisa AI and a representative for Johansson did not respond to CNBC Make It’s request for comment.

Although numerous celebrities have been targeted by deepfake technology, its implications extend beyond famous people and can cause problems for ordinary individuals. Here’s what you should know.

Microsoft Chief’s Warning About AI-Generated Content

Microsoft’s president, Brad Smith, issued a cautionary statement regarding AI-generated content following the emergence of deepfakes featuring prominent figures like Pope Francis, Elon Musk, and Donald Trump earlier this year. During a speech in Washington, as reported by Reuters, Smith highlighted his concerns surrounding the proliferation of AI-generated deepfakes.

In his address, Smith emphasized the urgency of addressing the challenges posed by deepfakes. He particularly underscored the risks associated with foreign cyber-influence operations, citing ongoing activities by entities such as the Russian, Chinese, and Iranian governments.

Smith stressed the need for proactive measures to guard against the manipulation of legitimate content intended to deceive or defraud people through AI. He advocated for AI licensing frameworks that impose obligations to protect security in several senses: physical, cyber, and national.

Additionally, Smith proposed developing new or updated export controls to prevent advanced AI models from being stolen or used in ways that would violate a country’s export rules, and to curb the unauthorized use or dissemination of such technologies.

What Are Deepfakes?

Perhaps you’ve witnessed Barack Obama allegedly calling Donald Trump a “complete dipshit,” or Mark Zuckerberg supposedly boasting about having “total control of billions of people’s stolen data.” You may even have seen Jon Snow delivering a heartfelt apology for the disappointing conclusion of Game of Thrones. If so, you’ve encountered a deepfake. The modern-day equivalent of Photoshopping, deepfakes use a form of artificial intelligence called deep learning to fabricate footage of events that never happened, hence the name “deepfake.” Want to put words in a politician’s mouth, star in your favorite movie, or dance like a professional? Then it might be time to delve into creating a deepfake.
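At a technical level, the classic face-swap recipe trains an autoencoder with one shared encoder and two person-specific decoders: each decoder learns to render one identity, and the “swap” happens by routing person A’s encoding through person B’s decoder. The PyTorch sketch below is a minimal, hypothetical illustration of that architecture (the 64x64 inputs, layer sizes, and the FaceSwapAutoencoder name are all assumptions); real tools add face alignment, masking, and more elaborate training losses.

```python
# Minimal sketch of the shared-encoder / twin-decoder design behind
# classic face-swap deepfakes. Shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # One encoder learns a shared representation of "a face".
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )
        # Two decoders, each trained to render one specific identity.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(latent_dim, 128 * 16 * 16),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, identity: str) -> torch.Tensor:
        z = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z)

# Training reconstructs each person with their own decoder; the swap
# happens at inference by decoding A's face with B's decoder.
model = FaceSwapAutoencoder()
face_a = torch.rand(1, 3, 64, 64)       # a dummy frame of person A
swapped = model(face_a, identity="b")   # rendered as person B
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```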

How Do You Spot a Deepfake?

Spotting deepfakes becomes harder as the technology improves. In 2018, US researchers discovered that deepfake faces don’t blink normally. This seemed like a significant weakness at first: most images used to train the algorithms show people with their eyes open, so the networks never learn natural blinking behavior. However, developers quickly addressed this flaw, and deepfakes with blinking soon appeared, highlighting how rapidly the technology adapts once a weakness is revealed.
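For intuition, blink-based detection is often implemented with an eye aspect ratio (EAR) computed from eye landmarks: the ratio collapses toward zero when the eye closes, so counting dips over time yields a blink rate. The sketch below is a hedged illustration, not the researchers’ exact method; landmark extraction (e.g. with dlib or MediaPipe) is assumed to happen upstream, and the 0.21 threshold and five-blinks-per-minute cutoff are illustrative rather than tuned values.

```python
# Toy blink-rate check: per-frame eye aspect ratio (EAR), blink counting,
# and a flag for clips whose blink rate is implausibly low for a human.
import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR for six eye landmarks ordered p1..p6 around the eye contour."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical openings over the horizontal width; near zero when the eye closes.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame: list[float], closed_thresh: float = 0.21) -> int:
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True  # eye just closed
        elif ear >= closed_thresh:
            closed = False                     # eye reopened
    return blinks

def looks_suspicious(ear_per_frame: list[float], fps: float = 30.0) -> bool:
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-9)
    return rate < 5.0  # humans blink roughly 15-20 times per minute
```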

Poor-quality deepfakes are easier to detect. Telltale signs include bad lip synchronization, patchy skin tones, and flickering around the edges of manipulated faces. Fine details such as hair are particularly hard to render convincingly, especially individual strands along the hairline. Badly rendered jewellery and teeth, or strange lighting effects such as inconsistent illumination and reflections in the iris, can also give a deepfake away.
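One of these cues, flicker around the edges of a pasted-in face, can even be approximated numerically: measure frame-to-frame change in a band around a face bounding box and compare it with change in the rest of the frame. The following is a toy sketch under assumed inputs (grayscale frames and an upstream face detector supplying the box), not a production detector; the band width and the interpretation of the score are illustrative.

```python
# Toy boundary-flicker heuristic for a face region pasted into video frames.
import numpy as np

def edge_flicker_score(frames: np.ndarray, box: tuple[int, int, int, int],
                       band: int = 8) -> float:
    """frames: (T, H, W) grayscale video; box: (top, left, bottom, right)."""
    t, l, b, r = box
    # Per-pixel change between consecutive frames: shape (T-1, H, W).
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    # Build a hollow ring of pixels around the face-box boundary.
    ring = np.zeros(frames.shape[1:], dtype=bool)
    ring[max(t - band, 0):b + band, max(l - band, 0):r + band] = True
    ring[t + band:b - band, l + band:r - band] = False  # hollow out the interior
    edge_motion = diffs[:, ring].mean()
    global_motion = diffs[:, ~ring].mean() + 1e-6
    # A score well above 1 suggests the face boundary flickers more
    # than the rest of the scene, one possible sign of a crude composite.
    return float(edge_motion / global_motion)
```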

Various entities, including governments, universities, and tech companies, are investing in research to develop tools for deepfake detection. Recently, the inaugural Deepfake Detection Challenge commenced, supported by major players like Microsoft, Facebook, and Amazon. Teams worldwide are competing to enhance deepfake detection methods.

Facebook banned deepfake videos likely to mislead viewers into thinking someone said words they did not actually say, in the lead-up to the 2020 US election. However, the policy covers only misinformation produced using AI, meaning “shallowfakes” (videos doctored with conventional editing tools rather than AI) are still permitted on the platform.
