How AI videos of Steve Burton scammed a fan out of $80k
Can you imagine being romanced by a famous TV star online… only to realize it was all a digital trick? That’s exactly what happened to Abigail Ruvalcaba, a woman from LA who lost her life savings after falling for a deepfake of ‘General Hospital’ star Steve Burton. Fortunately, her daughter spotted the scam before Abigail could transfer even more money to the fraudsters.
This is not the first time scammers have used famous people’s identities to deceive their fans. And as deepfake technology improves, no one can be sure they’ll never fall for such a scheme. isFake.ai can be your protection.
What happened and how does an AI scam work?
It started innocently enough. Abigail first exchanged casual Facebook messages with her Hollywood ‘crush’ in October 2024. The scammer quickly moved the conversation to WhatsApp, where ‘Steve’ sent her video messages featuring Burton’s likeness and voice saying things like, ‘Hello, Abigail. I love you so much, darling. I had to make this video to make you happy, my love.’
The relationship escalated quickly, with the scammer telling Abigail that he wanted to build a life together and offering to buy a ‘beach house’ they would both love. How sweet…
The cost of believing an AI-generated fantasy was more than $80,000, sent through gift cards, cash, Bitcoin, Zelle, and checks.
Even after draining her life savings, the scammer convinced Abigail to sell her Harbor City condo for $350,000. Her daughter Vivian managed to intervene and expose the scam just before an additional $70,000 was sent.
Talk about a close call.
The tools behind the trick
Scammers aren’t running crude, obvious cons anymore. This sophisticated deepfake was built from dozens of Steve Burton’s real images and clips taken from TV, social media, and interviews.
That’s exactly what makes deepfake fraud built on celebrity identities so dangerous: with that much authentic footage to work from, the resulting quality can be strikingly convincing.
Additionally, every message was custom-tailored to Abigail, referencing details from her life that she shared while they talked.
And no, this is not a thriller movie. It’s just another day in 2025.
What could isFake.ai do?
This tragedy might have been prevented with the right detection tools for both audio and video. Analyzing the voice messages would have revealed artifacts common in cloned audio: a synthetic-sounding timbre, inconsistent emotional tone, and suspicious background noise. Such tells won’t always give a fake away to the naked ear, especially when you least expect one.
Our video detection tool is trained on a large set of deepfakes and genuine human videos. It highlights the frames where it believes AI was used. On closer inspection of those frames, you might notice unnatural facial movements, inconsistent shadows, and lip movements that don’t sync with the audio. And there it is. The AI.
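To make the idea concrete, here is a minimal sketch of how per-frame scoring can work in general. It is not isFake.ai’s actual pipeline: `score_frames` and the `detector` callable are hypothetical stand-ins for any binary real-vs-synthetic frame classifier.

```python
# A minimal sketch of per-frame deepfake scoring, NOT isFake.ai's real pipeline.
# `detector` is a hypothetical callable returning P(frame is synthetic).
import cv2


def score_frames(video_path: str, detector, threshold: float = 0.5) -> list[int]:
    """Return the indices of frames the detector flags as likely AI-generated."""
    cap = cv2.VideoCapture(video_path)
    flagged, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of stream
        if detector(frame) >= threshold:
            flagged.append(idx)  # this frame deserves a closer look
        idx += 1
    cap.release()
    return flagged
```

The flagged indices are exactly the frames worth pausing on to check shadows, facial motion, and lip sync with your own eyes.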
Here’s isFake.ai’s verdict on the Steve Burton deepfake audio from this case (clip taken from ABC7). Quick and simple. As it should be.
For a proper test, we used a clip containing both ‘Steve’s’ voice and the real victim’s. The synthetic part of the audio is highlighted separately, because we trained our tool to tell real content from artificial. The 68% result shows that only part of the audio, not all of it, is AI-generated.
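For the curious, here is a rough illustration of how windowed audio analysis can separate real speech from synthetic speech within a single clip. Again, this is a sketch under assumptions, not our production model: `clone_probability` is a hypothetical per-window classifier.

```python
# A rough illustration of windowed audio analysis, NOT isFake.ai's actual model.
# `clone_probability` is a hypothetical classifier: P(window is cloned speech).
import numpy as np
import librosa


def synthetic_share(audio_path: str, clone_probability, win_s: float = 1.0):
    """Classify fixed-length windows; return flagged time spans and overall %."""
    samples, sr = librosa.load(audio_path, sr=16000, mono=True)
    win = int(win_s * sr)
    flags = [
        clone_probability(samples[start:start + win]) >= 0.5
        for start in range(0, len(samples) - win + 1, win)
    ]
    spans = [(i * win_s, (i + 1) * win_s) for i, f in enumerate(flags) if f]
    percent = 100.0 * np.mean(flags) if flags else 0.0
    return spans, percent
```

Under this reading, a 68% score on a mixed recording would simply mean that roughly two-thirds of the windows were flagged as cloned speech, while the rest (the victim’s real voice) passed as genuine.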
Don’t become the next victim
The tech behind these scams only gets sneakier by the day. So if you get a DM from your favorite celeb, don’t just trust your heart (or even your eyes and ears). Run it by isFake.ai before you lose anything more than your trust. If Abigail had checked even one of those videos from ‘Steve Burton’ through isFake.ai, she wouldn’t have lost her life savings.
We hope you’ll stay sharp with isFake.ai!