Earlier this month, Florida mom Sharon Brightwell received a panicked call from her daughter, who said she had been in a car crash that injured a pregnant woman. But it wasn't Brightwell's daughter, April, on the other end of the line; it was an AI-cloned copy of her voice.
The deepfake spun a story about April texting while driving and claimed that police had confiscated her phone, which explained the unfamiliar number she was supposedly calling from. Then a man came on the line posing as a public defender and demanded $15,000 to bail April out. Brightwell complied, gathering the cash, placing it in a box, and handing it to an Uber package courier at her home. It was only after another call came in, this time asking for $30,000 for the "injured pregnant woman," that Brightwell's grandson realized the family was being scammed.
"To tell you the trauma that my mom and son went through that day makes me nauseous and has made me lose more faith in humanity," April wrote on a GoFundMe page. "Evil is too nice a word for the kind of people that can do this."
Deepfake scams target more than just seniors. This summer, scammers used an AI-faked version of Secretary of State Marco Rubio's voice in unsuccessful attempts to contact foreign and domestic officials. President Trump's chief of staff Susie Wiles was also targeted by deepfake voice scammers this year.
These crimes are happening more often because the technology for creating such voices keeps getting better, says Matthew Wright, PhD, a professor and chair of cybersecurity at the Rochester Institute of Technology.
"Also, this is increasing because the technology is getting easier to use," Wright adds. "I think you could have said a few years ago the technology for this probably exists but would have required more technological skill to access. Now, with [voice cloning company] ElevenLab and other services, it’s very easy."
ElevenLabs is probably the most prominent software company offering realistic voice cloning, which businesses use for applications like customer service. A March report from Consumer Reports cited ElevenLabs, along with AI voice cloning companies like Lovo and Speechify, as too lax in their oversight of non-consensual voice cloning. The companies "simply require checking a box saying that the person whose voice is being cloned had given authorization," the report stated.
There are humans behind these voice cloning scams, Wright says, and they may be using those aforementioned services to create their deepfakes. Where crooks used to just cold-call vulnerable people to try to steal their money or information, they now have a powerful and convincing new tool in their arsenal.
"A lot of it is organized crime," Wright says. "What I’ve read about is there are organizations kidnapping people in one country, taking them to another, like Malaysia, for example, and they got them holed up in a special facility that’s secured from people leaving it, and just turning them into slaves."
The first step for many of these scammers is finding voice samples they can clone, Wright says.
Keeping yourself safe
While anyone can pull Secretary Rubio’s voice off the internet, how are private individuals being cloned by AI? The short answer: Social media.
"Maybe we have videos of us with our friends or family hanging out, having a good time," Wright says. "If you’ve got that type of social presence, all your settings should be set to private. Not only your settings, because this involves anyone else who has posted videos of you. They don’t need a lot of content. You got 30 seconds of someone’s voice and you can make a pretty good deepfake."
Even a hastily crafted cloned voice can deceive, Wright warns.
"When they’ve done studies of whether people can detect if something is fake, usually with fairly shorter snippets, accuracy is low," he says. "It’s not something, even when it’s your friends, your family, it’s just not reliable to count on being able to tell whether it’s that person."
When answering calls from unknown numbers claiming to be friends or family members, be dubious, Wright says.
"All of the normal things when it comes to scams, and dealing with scams, those definitely apply here,” Wright says. “These calls for urgency and signals that something needs to be done really quickly [is a red flag]. Pleas where someone is trying to put you into an emotional state and then they request something — 'we need that money, we need it now'" is a bad sign.
Wright points out that requests specifically for cash should raise alarms. Transactions that go through banks offer a measure of security, he says.
"The banks recognize these things and say, 'This is a scam,'" he says. "They’ll help you recognize it."
It’s always a good idea to create a "password" with friends and family members, especially those who may be susceptible to deepfake scams, like older people or those unfamiliar with technology.
"If you have a special secret that would not be available through social media, but just something the two of you know" that could keep you safe, Wright says. "I would go out of my way to set up something specific, which would essentially be a password between you two, so you know if I call and I’m asking for money, or asking you to give up some sort of information, you need to ask for this [secret]. Then you have some sort of shared assurance."
Have a story to share about a scam or security breach that impacted you? Tell us about it. Email [email protected] with the subject line "Safety Net" or use this form. Someone from Mashable will get in touch.