Prashant Kumar

Arizona Woman Falls Victim To Deepfake Scam Using Celebrities On Social Media

An Arizona woman fell victim to a deepfake scam leveraging celebrities on social media, highlighting the growing challenge of discerning authentic content online.

Laura Guerra, usually diligent with her online purchases, dropped her usual caution when she spotted what appeared to be Oprah Winfrey’s endorsement of a product. “I saw Oprah Winfrey on that and thought, ‘well, if she backs it, it should be legit.’ So I ordered it,” Guerra explained.

Trusting the apparent endorsement, she purchased a bottle of keto gummies from a website that promised weight-loss help. What seemed like a $49 purchase, however, debited $258.77 from her account, charging her for multiple bottles she never agreed to buy.

Despite Guerra’s attempts to cancel the order and return the unopened package, the company provided evasive responses. Eventually, after persistent efforts, an employee gave her a return code, but the promised refund never materialized. Feeling deceived, Guerra voiced her frustration, calling the company “thieves” due to their lack of communication and unfulfilled promises.

When Guerra sought assistance from the Let Joe Know team, she discovered the company wasn’t legitimate from the start. This scheme echoes a broader trend where celebrities’ likenesses, including Oprah Winfrey, Tom Hanks, and Elon Musk, are manipulated in fake endorsements to promote products or financial schemes without their consent.

Oprah Winfrey has warned consumers through social media and her publication, Oprah Daily, about fraudsters misusing her image in advertisements. Similar warnings have been issued by other celebrities whose identities have been exploited without authorization to endorse products they have no affiliation with.

Consumer protection organizations such as the Federal Trade Commission and the Better Business Bureau caution consumers about the rising use of artificial intelligence to create misleading images and videos of celebrities endorsing products.

Some politicians have taken steps to address the issue. Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis proposed a bipartisan bill, the NO FAKES Act, that would establish a nationwide standard for legal action against those who use AI-generated likenesses without permission. Additionally, Representative Yvette Clarke reintroduced the DEEPFAKES Accountability Act, which aims to criminalize the unauthorized use of a person’s likeness in ways that could harm them.

Reflecting on her experience, Guerra advises fellow consumers to research a product thoroughly rather than rely on a celebrity’s face, a lesson she learned the hard way.

Scammers are using voice cloning tech to trick people, creating fake voices of anyone in seconds

In the swiftly progressing realm of artificial intelligence, the emergence of voice cloning technology has ignited both intrigue and apprehension among legislators and security specialists. Already, scammers are leveraging this innovation to dupe unsuspecting individuals. At the same time, notable figures like New York City Mayor Eric Adams have found inventive applications for AI-generated replicas of their voices.

Bruce Reed, the White House Deputy Chief of Staff overseeing the Biden administration’s AI strategy, has expressed unease about the technology, calling it “alarmingly advanced.” According to a Business Insider report, he warned of its societal repercussions, suggesting that people might soon hesitate to answer phone calls if they cannot tell genuine voices from fabricated ones. A McAfee report found that voice cloning technology can replicate anyone’s voice from just three or four seconds of audio, reaching roughly 85 percent similarity.

Exploiting the strides in voice cloning, scammers have swiftly integrated AI technology into their deceitful endeavors. Instances reported by the Federal Trade Commission (FTC) reveal scammers employing voice cloning to orchestrate family emergency scams, creating highly convincing replicas of distressed family members to deceive victims.

How Do We Stop Malicious Deepfakes?

The United States has implemented various laws concerning deepfakes in the past year. These laws target specific issues, such as criminalizing deepfake pornography and preventing the use of deepfakes in electoral contexts. States like Texas, Virginia, and California have enacted legislation against deepfake porn, and a federal law was signed in December as part of the National Defense Authorization Act. However, each of these laws applies only within its own jurisdiction, leaving conduct elsewhere uncovered.

Internationally, only China and South Korea have taken substantial steps to prohibit deepfake deception. In the United Kingdom, the Law Commission is reviewing existing laws on revenge porn to address the creation of deepfakes. By contrast, the European Union appears less concerned about deepfakes than about other forms of online misinformation.

Although the United States has taken a leading role, there is little evidence yet that these new laws are enforceable or appropriately targeted.

Many research labs have developed methods to detect manipulated videos, such as embedding watermarks or recording provenance on a blockchain. However, building deepfake detectors that can’t themselves be quickly exploited to train more convincing deepfakes remains challenging.
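To make the provenance idea concrete, here is a minimal Python sketch of fingerprint-based verification: a video file is hashed with SHA-256 and compared against a registry of known-authentic fingerprints. The registry, file names, and digest values are all hypothetical stand-ins for what a blockchain record or embedded watermark scheme would actually provide.

```python
import hashlib
from pathlib import Path

# Hypothetical registry mapping video filenames to published SHA-256
# fingerprints. In a real provenance scheme this record would live in a
# tamper-resistant store (e.g., a blockchain entry), not a local dict.
KNOWN_GOOD_HASHES = {
    "press_briefing.mp4": "0f3d2a...",  # placeholder digest, not a real hash
}


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Fingerprint a file, streaming in chunks so large videos fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def is_authentic(path: Path) -> bool:
    """True only if the file exists and matches its published fingerprint.

    A mismatch proves the file was altered after publication; it says
    nothing about how it was altered.
    """
    expected = KNOWN_GOOD_HASHES.get(path.name)
    if expected is None or not path.exists():
        return False
    return sha256_of_file(path) == expected


if __name__ == "__main__":
    clip = Path("press_briefing.mp4")  # hypothetical incoming file
    print("matches published original" if is_authentic(clip) else "flag for review")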

Nevertheless, tech companies are making efforts to address the issue. Facebook has enlisted researchers from reputable institutions to build a deepfake detector that will help enforce its ban. Twitter has also revised its policies and reportedly plans to label deepfakes that aren’t outright removed. Additionally, in February, YouTube reiterated its stance against deepfake videos related to U.S. elections, voting processes, and the 2020 U.S. census.

However, concerns persist about deepfakes circulating beyond these controlled platforms. Initiatives like Reality Defender and Deeptrace aim to guard against them. Deeptrace offers an API that acts as a hybrid antivirus/spam filter, screening incoming media and diverting obvious manipulations to a quarantine zone, much as Gmail diverts spam. Reality Defender, developed by the AI Foundation, aims to identify and flag manipulated images and videos before they cause harm, on the view that it is unfair to burden individuals with authenticating everything they see.
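Neither company’s API is public in detail, but the filter-and-quarantine pattern described above can be sketched as follows. The `score_media` function is a hypothetical stand-in for whatever detection model such a service runs, and the threshold values and directory names are illustrative only.

```python
import shutil
from pathlib import Path

INBOX = Path("inbox")                # hypothetical incoming-media feed
QUARANTINE = Path("quarantine")      # holding area for likely fakes
REVIEW_THRESHOLD = 0.5               # illustrative cutoffs, not vendor values
QUARANTINE_THRESHOLD = 0.9


def score_media(path: Path) -> float:
    """Hypothetical stand-in for a deepfake-detection model.

    Returns an estimated manipulation probability in [0, 1]. A real
    service would run learned detectors here; this stub flags nothing.
    """
    return 0.0


def screen(path: Path) -> str:
    """Route a media file the way a spam filter routes mail:
    deliver it, hold it for human review, or quarantine it."""
    score = score_media(path)
    if score >= QUARANTINE_THRESHOLD:
        QUARANTINE.mkdir(exist_ok=True)
        shutil.move(str(path), QUARANTINE / path.name)
        return "quarantined"
    if score >= REVIEW_THRESHOLD:
        return "held for review"
    return "delivered"


if __name__ == "__main__":
    for item in INBOX.glob("*.mp4"):
        print(f"{item.name}: {screen(item)}")
```

The two-threshold design mirrors how spam filters separate “obvious junk” from “suspicious, needs a human look,” which is the behavior the hybrid antivirus/spam-filter analogy describes.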
