How Deepfakes of Prominent Figures Are Fuelling Investment Scams in Malaysia
The Securities Commission Malaysia (SC) recently sounded the alarm over deepfake investment scams. These scams reportedly harnessed the power of Artificial Intelligence (AI), unfortunately for all the wrong reasons. Scammers manipulated videos to impersonate high-profile figures and trusted companies, fooling people into believing the endorsements were legitimate. The AI-generated videos looked convincingly real, making it all too easy to deceive unsuspecting viewers.
Let’s be real—if you saw your favourite celebrity promoting an investment, you’d probably pause and take a second look, wouldn’t you?
These scams usually start with a deepfake video that appears on popular social media platforms such as Facebook and then spreads via apps such as WhatsApp or Telegram. They take various forms, with some videos advertising “investment opportunities” supposedly backed by reputable companies. Once their curiosity is piqued by the enticing lure, victims are redirected to a sign-up page that requests their personal details, drawing them straight into the scam. In some cases, victims are tricked into downloading malicious applications that expose them to even greater cyber threats. Others are asked outright to transfer money to a bank account belonging to the fraudster or their courier.
Even though platforms such as Meta’s Facebook are actively taking these videos down, the scams persist. The SC, together with the Malaysian Communications and Multimedia Commission (MCMC), is fighting back, but at the end of the day, it’s up to us to stay alert. If something feels off about an investment offer, it probably is.
Prominent Figures Caught in the Crossfire
Over the years, scammers have increasingly manipulated the likeness of famous figures, and in a recent case, they used Malaysia’s top singer, Datuk Seri Siti Nurhaliza. Scammers created a fake WhatsApp video call featuring her, and it was so convincing that some fans genuinely believed she was involved. Imagine seeing your favourite artist appear in a video call—it’s easy to understand why people were caught off guard and alarmed.
Minister of Communications Fahmi Fadzil didn’t waste any time addressing this. He urged everyone to be more cautious with what they see online, reminding us that with AI, not everything is as real as it looks. While tech like this has great potential, there are bad actors out there who are more than willing to misuse it. The minister is even pushing local media outlets to raise awareness, so hopefully, more people will catch on.
It’s not just Siti who got caught up in this mess: Datuk Fazley Yaakob, the popular singer and chef, also had his voice and image hijacked by scammers. In a TikTok post, Fazley expressed his frustration after discovering that scammers were claiming he needed financial help because of a fabricated flood at his factory. Shockingly, people believed it! Some of his followers even tried to send him money, thinking they were helping him out. Fazley had to step in and set the record straight, urging his fans to be cautious and not to trust everything they see online.
The fact that scammers are misusing AI to this extent is a huge red flag. They’re not just manipulating faces—they’re playing with voices, expressions, and even body language. It’s a whole new level of deception, and it’s scary how convincing it can be.
The Alarming Speed of AI Progress
It’s wild how fast this tech is progressing. According to the Royal Malaysia Police’s Commercial Crime Investigation Department director, Datuk Seri Ramli Mohamed Yoosuf, scammers can now create a convincing deepfake with just a 15-second audio clip. That’s all it takes to generate an entire conversation that never actually took place. It’s no surprise people are getting fooled left and right.
Ramli pointed out that even the person being impersonated might have trouble telling the difference between real and fake videos. It’s that good. Some of Malaysia’s most prominent figures, like the Prime Minister, business tycoon Robert Kuok, and even well-known investment gurus, have been featured in deepfake videos, sparking widespread confusion.
These scams aren’t just a nuisance – they’re causing serious financial damage. As of July 2024, Malaysia’s commercial crime losses reached MYR 1.4 billion, and a huge chunk of that comes from investment fraud tied to deepfakes. While the number of crime cases has dropped slightly, the losses have spiked by 37%, showing just how effective these AI-driven schemes have become.
The use of deepfakes to impersonate high-profile figures isn’t just harming the victims – it’s also shaking public trust. People are now second-guessing everything they see online, and who can blame them? The government is stepping up by working on the National Artificial Intelligence Governance and Ethics Guidelines, which should be ready by the end of 2024. These guidelines aim to regulate the use of AI, preserving the technology’s benefits while minimising its risks.
Public Awareness and Vigilance Against Deepfakes
Even though authorities are doing their part, the real power to stop these scams lies with us. Understanding the risks of deepfakes and staying sceptical of what we see online is our best defence. Cybersecurity experts stress the importance of verifying content, especially when it seems too good — or too real — to be true. AI tools that can detect deepfakes are being developed to help analyse videos and alert people before they fall victim to scams. But no tool can replace the value of awareness and education. The better informed we are, the safer we will be.