TL;DR
- AI is being used to power a range of scams, including voice cloning, personalized phishing, identity fraud, and deepfake blackmail
- Morgan Freeman has spoken out against unauthorized AI voice imitations of himself
- Celebrities like Scarlett Johansson and Drake have faced issues with AI-generated content imitating their voices
- There is growing concern in the entertainment industry about the misuse of AI technology
- Experts recommend vigilance and cybersecurity best practices to protect against AI-powered scams
As artificial intelligence (AI) technology advances, scammers are finding new ways to exploit it for malicious purposes.
From voice cloning to deepfake blackmail, AI-powered scams are becoming more sophisticated and harder to detect.
At the same time, celebrities are speaking out against unauthorized use of their voices and likenesses in AI-generated content.
One of the most concerning AI scams involves voice cloning of family members and friends. Scammers can now create convincing fake versions of a loved one’s voice from just a few seconds of audio.
These synthetic voices can be used to stage fake distress calls asking for money or help. Experts advise treating any unexpected call of this kind with skepticism and verifying the caller’s identity through a separate, trusted channel, such as calling the person back on their known number.
Personalized phishing and spam emails are another growing threat. AI language models can now generate customized spam messages using personal information obtained from data breaches.
These tailored messages are much more convincing than traditional generic spam. Users are advised to remain vigilant and avoid clicking on links or opening attachments from suspicious sources, even if the message seems personalized.
Identity fraud is also becoming more sophisticated with AI. Scammers can now build an AI persona that mimics a target’s voice and draws on personal facts commonly used for identity verification.
This makes it easier for them to impersonate individuals when contacting customer service. Experts recommend enabling multi-factor authentication and monitoring accounts for suspicious activity.
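To make the multi-factor authentication recommendation above concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps: even if a scammer clones a voice and knows personal facts, they cannot produce the six-digit code without the shared secret. The function name and example secret below are illustrative, not from the article.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: base32-encoded shared secret (as shown in authenticator apps)
    t: Unix time to compute the code for (defaults to now)
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    # HMAC-SHA1 over the big-endian time-step counter (RFC 4226 core)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at t = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

Because the code depends on both the secret and the current time window, a stolen password alone is not enough to take over an account.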
Perhaps the most alarming development is the use of AI-generated deepfakes for blackmail.
Advanced image generation models can now create realistic fake nude images of almost anyone, which scammers may use as leverage for extortion. While this is a disturbing trend, experts note that these fake images often lack distinguishing marks and may have obvious flaws. Victims are advised to report such incidents to the authorities.
The entertainment industry has been particularly affected by unauthorized AI imitations.
Recently, actor Morgan Freeman spoke out against AI voice imitations of himself circulating on social media. In a statement, Freeman thanked his fans for their “vigilance and support in calling out the unauthorized use of an A.I. voice imitating me.” He emphasized the importance of maintaining “authenticity and integrity” in the face of such technology.
> Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me. Your dedication helps authenticity and integrity remain paramount. Grateful. #AI #scam #imitation #IdentityProtection
>
> — Morgan Freeman (@morgan_freeman) June 28, 2024
Freeman is not alone in his concerns. Other celebrities have faced similar issues with AI-generated content.
Scarlett Johansson’s legal team recently challenged OpenAI over an AI voice assistant whose voice sounded remarkably similar to hers, despite her having declined to participate in the project.
Rapper Drake also faced controversy for using AI-generated imitations of Tupac Shakur and Snoop Dogg in a song, prompting a cease-and-desist letter from Shakur’s estate.