Prashant Kumar

MrBeast, Tom Hanks, Gayle King Warn of Online Deepfake Ads

In 2023, Hollywood remained abuzz with discussions surrounding artificial intelligence (AI), notably due to growing concerns about AI deepfakes involving numerous celebrities. Tom Hanks, YouTube's MrBeast, and broadcast journalist Gayle King recently stepped up to address the issue, vehemently opposing the misuse of their likenesses in AI-generated content.

Hanks was the first of the three to flag an AI deepfake of himself, alerting his followers on Instagram on October 1. He distanced himself from the video, cautioning his audience and confirming he had no involvement in it.

The AI rendition of Hanks was crafted to promote what he referred to as a “dental plan.” On October 3, MrBeast, also known as Jimmy Donaldson, raised a similar issue on the social media platform X (formerly Twitter), denouncing an AI-generated deepfake that promoted a fraudulent giveaway of an iPhone 15 Pro. His posts emphasized the urgent need for social media platforms to confront the problem.

Users on X reported encountering these deceptive ads on platforms like TikTok as well, further fueling concerns over the spread of AI deepfakes.

Although the United States has no federal legislation specifically addressing AI deepfakes, lawmakers are weighing regulations, particularly around political deepfakes ahead of the 2024 presidential election. At the same time, negotiations between Hollywood studios, actors, and entertainment industry labor unions have been ongoing.

The Screen Actors Guild-American Federation of Television and Radio Artists made AI a prominent issue in its ongoing strike, citing a studio proposal under which background performers would be scanned, paid for a single day's work, and required to hand over full ownership of their scan, image, and likeness to the companies.

Meanwhile, the Writers Guild of America strike concluded with negotiated terms governing the use of AI in written material within the entertainment industry.

In another instance, a Facebook video used the likenesses of BBC journalists to present Elon Musk purportedly pitching an investment opportunity tied to his ownership of X, formerly Twitter. Earlier versions of such videos claimed to show Musk giving away money and cryptocurrency.

After an inquiry from the BBC, Meta, the owner of Facebook, removed the content in question. The videos had previously carried a false-information warning after the independent fact-checking organization Full Fact first highlighted the issue.

A spokesperson from Meta emphasized their zero-tolerance policy towards such misleading content, affirming its swift removal: “We don’t allow this kind of content on our platforms and have removed it.” They further encouraged users to report any similar content violating platform rules for prompt action and investigation.

Considering Applicable Laws

The legal landscape concerning deepfakes in the United States has been evolving swiftly. Individuals and businesses must take into account recent state laws that specifically target synthetic and digitally altered media.

For instance, New York implemented a law in November 2020 that explicitly prohibits the use of a “deceased performer’s digital replica” in audio-visual content for 40 years after the performer’s death if the use is likely to mislead the public into thinking it was authorized. Such a law could potentially bar uses of deepfakes like those in the Anthony Bourdain documentary “Roadrunner,” in which the film’s director controversially employed deepfake technology to produce three lines of dialogue recreating Bourdain’s voice after his death. Bourdain’s widow, Ottavia Bourdain, objected, stating that she had not authorized the usage.

From a political standpoint, Texas passed a law in September 2019 prohibiting the dissemination of deceptive “deepfake videos” intended to harm candidates or sway voter opinion within 30 days of an election. California enacted a comparable law the following month, focused on the 60-day period before an election. Platforms hosting deepfakes will also need to consider their compliance obligations when such content is alleged to be deceptive.

News Anchors Targeted by Deepfake Scammers on Facebook

In a viral Facebook video, CNN’s Wolf Blitzer appears to endorse a diabetes drug, while another clip shows “CBS Mornings” host Gayle King seemingly backing weight loss products. Both videos are manipulated, part of the latest wave of deepfakes that use the images of trusted news figures in false advertisements and erode trust in the media.

Recent social media posts have similarly targeted personalities like Fox News’ Jesse Watters, CBC’s Ian Hanomansing, and BBC’s Matthew Amroliwala and Sally Bundock.

A concerning trend involves voice cloning, in which an audio sample as short as two minutes can be used to replicate a person’s voice in new videos, with manipulated mouth movements synchronized to the cloned audio. Hany Farid, a digital forensics expert at the University of California, Berkeley, has previously highlighted this rise in manipulated content.

While some deepfakes are still easy to spot because of their low quality, experts caution that the underlying technology continues to advance, making detection increasingly difficult.
