Prashant Kumar

Celebs Like Scarlett Johansson Declare War on AI Deep Fakes—Are They Doomed?

In an age where technology’s capabilities seemingly know no bounds, the rise of artificial intelligence (AI) has birthed a troubling phenomenon: deepfakes. These sophisticated, AI-generated videos manipulate facial expressions, voices, and mannerisms to create realistic but entirely fabricated content. Among those leading the charge against this digital menace is acclaimed actress Scarlett Johansson, who has taken a public stand against the proliferation of deepfakes.

Scarlett Johansson’s legal team declared on Wednesday that the actress intends to take legal action against an AI company for unauthorized use of her name and likeness in an advertisement, joining a chorus of celebrities and politicians perturbed by the rise of AI-generated imposters. However, as these public figures strive to combat deceitful online impersonations through legal means, they might encounter growing hurdles due to the internet’s borderless nature.

In Johansson’s case, as Variety reports today, she plans to pursue legal recourse against Lisa AI. The image-generating application recently ran an ad on Twitter featuring a deepfake of Johansson zealously endorsing the product. Lisa AI’s parent company, Convert Yazılım Limited Şirket, operates from Turkey with its headquarters in Istanbul, according to the app’s terms of service. While Hollywood legal experts are no strangers to international disputes, the involvement of AI could add new complexities to the proceedings.

In the United States, policymakers are increasingly focused on establishing a federal legal framework to govern AI-generated deepfakes. Courts in countries like India have already ruled against AI deepfake creators. However, not all governments worldwide have shown the same aggressiveness in regulating this innovative technology. For instance, Japan recently announced that using copyrighted materials to train an AI system does not breach copyright laws.

Although the issue of AI and copyright differs from that of AI and deepfakes, Japan’s stance offers insight into its current approach to AI regulation. According to a Law Library of Congress report, Japan has not proposed or implemented laws specifically governing AI. Turkey has likewise not put similar regulations in place, and none are anticipated in the foreseeable future.

In the incident involving Lisa AI and Johansson, the offending app voluntarily removed the contentious Twitter ad, likely due to the intervention of the actress’s legal team. But if a company operating from a country without AI regulations were to refuse a similar demand in the future, public figures would have little recourse: in most countries, including the United States, protections for publicity rights remain limited.

In this legal gray area, disputes might increasingly center on the online platforms where these deepfakes are disseminated, such as Twitter. However, since Elon Musk assumed control of the platform last year, Twitter has relaxed many of its policies concerning the spread of false information.

While numerous U.S. senators aim to criminalize any AI-generated depiction of a person made without their consent—regardless of context—Twitter’s current policy appears more lenient. According to the company’s updated policy on misleading media from April, posts featuring fake audio, video, or images are eligible for removal only if they are “likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.”

Although Scarlett Johansson is an immensely famous individual, a counterfeit advertisement supposedly featuring her endorsing a yearbook photo app likely wouldn’t be considered a national emergency.

Are deepfakes legal?

Deepfakes generally operate within the bounds of legality, leaving law enforcement with limited recourse despite their significant potential for harm. Their legality hinges on adherence to existing laws, such as those governing child pornography, defamation, or hate speech.

Only three states have specific legislation addressing deepfakes. Texas prohibits deepfakes intended to influence elections, Virginia outlaws the dissemination of deepfake pornography, and California restricts political deepfakes within 60 days of an election, along with nonconsensual deepfake pornography.

Few additional laws target deepfakes, largely because of widespread unfamiliarity with this emerging technology, its applications, and its associated risks. As a result, most victims lack adequate legal protection in cases involving deepfakes.

What is a “deepfake” and why does it matter?

Deepfake AI is a form of artificial intelligence used to fabricate convincing but deceptive images, audio, and video. The term, a blend of “deep learning” and “fake,” refers both to the technology itself and to the fabricated content it produces.

These AI-generated fabrications commonly involve altering existing source material by substituting one individual’s likeness for another’s. Additionally, they can produce entirely new content portraying individuals engaging in actions or making statements they never actually did or said.
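For readers curious about the underlying mechanics, the classic face-swap approach trains a single shared encoder alongside one decoder per identity; the swap happens when a frame of one person is decoded with the other person’s decoder. The sketch below illustrates that idea in PyTorch. The layer sizes, the 64x64 face crops, and all names are illustrative assumptions, not a description of any particular deepfake tool.

import torch
import torch.nn as nn

LATENT = 256  # size of the shared face representation (illustrative)

def down(c_in, c_out):
    # Halve the spatial resolution of a face crop at each step.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

def up(c_in, c_out):
    # Double the spatial resolution on the way back to an image.
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

class Encoder(nn.Module):
    # A single encoder shared by both identities learns identity-agnostic
    # features such as pose, lighting, and expression.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            down(3, 64), down(64, 128), down(128, 256),
            nn.Flatten(), nn.Linear(256 * 8 * 8, LATENT))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder per identity learns to render that specific face.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 256 * 8 * 8)
        self.net = nn.Sequential(
            up(256, 128), up(128, 64),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder for each identity

# Training (omitted here) would reconstruct person A's faces through
# decoder_a and person B's faces through decoder_b, both via the shared
# encoder. The "swap" is simply decoding A's frame with B's decoder, so
# B's face appears with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a 64x64 face crop
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])

In practice, production deepfake tools add face detection, alignment, and blending steps on top of this core idea, but the shared-encoder, per-identity-decoder structure is what makes the substitution of one likeness for another possible.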

The primary threat posed by deepfakes lies in their capacity to disseminate false information appearing to originate from credible sources. An illustrative instance occurred in 2022 when a deepfake video surfaced depicting Ukrainian President Volodymyr Zelenskyy supposedly urging his troops to surrender.

Moreover, concerns have arisen about the potential exploitation of deepfakes to influence elections and spread propaganda. Despite these significant risks, deepfakes also have legitimate applications, including video game audio, entertainment, and customer service support such as call forwarding and receptionist services.
