AI Deepfake Scams Explode: 87 Rings Busted in Asia

87 deepfake scam rings taken down across Asia in Q1 2025: Bitget report

Advances in artificial intelligence (AI) have not only accelerated but have also fueled a sharp rise in AI-facilitated fraud. In the first quarter of 2025, at least 87 deepfake-based scam networks were dismantled. This troubling figure, reported in the 2025 Joint Anti-Scam Month Research Report compiled by Bitget, SlowMist, and Elliptic, underscores the escalating threat of AI-powered fraud in the cryptocurrency sector.

The report also discloses a roughly 24% year-over-year surge in global crypto fraud losses, which reached $4.6 billion in 2024. Notably, nearly 40% of high-value scams involved advanced deepfake techniques, with scammers frequently deploying convincing imitations of well-known personalities, founders, and platform executives to mislead users.

Gracy Chen, CEO of Bitget, told CryptoMoon that the combination of cheap, fast tools for generating realistic video and the reach of social media makes deepfakes particularly effective in both scale and apparent authenticity.

Defending against AI-driven fraud isn't just a matter of better technology; it requires a shift in mindset. In an era of synthetic media, including deepfakes that can convincingly replicate people and events, trust must be built cautiously through transparency, constant vigilance, and thorough verification at every step.

Deepfakes: An Insidious Threat in Modern Crypto Scams

The report breaks down current cryptocurrency scams into three primary types: impersonations built on AI and deepfake technology, social engineering schemes, and Ponzi-style operations disguised as DeFi or GameFi projects. Deepfakes stand out as exceptionally deceptive.

Advanced AI systems can now generate text, voice messages, and facial expressions, and even mimic a person's actions. Scammers exploit this to produce fraudulent video endorsements of investment platforms featuring prominent figures such as Singapore's Prime Minister or Elon Musk, manipulating public trust. These fakes circulate on Telegram, X, and other social media platforms.

AI can now convincingly mimic real-time interactions, making it harder to distinguish authentic from fraudulent activity. Sandeep Nailwal, co-founder of Polygon, warned in a May 13 post on X that scammers had impersonated him over Zoom: multiple people had contacted him via Telegram asking whether he was on a Zoom call with them and whether he needed them to install a script.

The CEO of SlowMist has likewise cautioned about deepfake threats on Zoom, urging users to carefully scrutinize the domain names of any shared Zoom links before joining a call.
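
To make that advice concrete, here is a minimal sketch of such a domain check. The helper name and the `zoom.us` allowlist are illustrative assumptions on our part, not tooling from the report; the point is that a look-alike hostname fails even when it contains the string "zoom.us":

```python
from urllib.parse import urlparse

# Treat only zoom.us and its subdomains (e.g. us02web.zoom.us) as genuine.
LEGIT_ZOOM_DOMAIN = "zoom.us"

def is_plausible_zoom_link(url: str) -> bool:
    """Return True only if the link's hostname is zoom.us or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == LEGIT_ZOOM_DOMAIN or host.endswith("." + LEGIT_ZOOM_DOMAIN)

# A look-alike domain fails even though the string contains "zoom.us":
print(is_plausible_zoom_link("https://us02web.zoom.us/j/123456789"))     # True
print(is_plausible_zoom_link("https://zoom.us.fake-meeting.com/j/123"))  # False
```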

New Scam Threats Call for Smarter Defenses

As AI-driven scams grow more sophisticated, both users and platforms must develop new defenses. With deepfake videos, fake job interviews, and malicious email links on the rise, spotting fraud has never been harder.

Institutions should prioritize regular security training and robust technical safeguards: run phishing simulations, harden email infrastructure, and monitor code for leaked credentials. Building a culture of verification over blind trust among employees is the most effective way to stop scams at the outset.
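
As a loose illustration of the "monitor code for leaked credentials" point, the sketch below flags lines in a source tree that resemble hard-coded secrets. The regex rules and file walk are simplified assumptions; production teams typically rely on dedicated scanners such as gitleaks or truffleHog with far larger rule sets:

```python
import re
from pathlib import Path

# Illustrative patterns for common secret formats (deliberately incomplete).
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> None:
    """Walk a source tree and report lines matching any secret pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in {".png", ".jpg", ".zip"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

scan_repo(".")
```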

Chen offers regular users a simple method: "Check, separate, and decelerate." She explained:

“Always verify information through official websites or trusted social media accounts—never rely on links shared in Telegram chats or Twitter comments.”

She also stressed the importance of isolating risky activity by using separate wallets when exploring unfamiliar platforms.
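
As a toy example of that wallet-separation habit, the sketch below creates a disposable "burner" address for interacting with an unfamiliar dapp while the main wallet stays untouched. The use of the third-party `eth-account` Python package is our assumption; the report does not prescribe any particular tooling:

```python
from eth_account import Account  # pip install eth-account

# Generate a throwaway wallet for an untrusted site, so a malicious
# approval or drainer can only touch funds deliberately sent to it.
burner = Account.create()
print("Burner address:", burner.address)
# Store the key securely; never reuse your main wallet's key here.
print("Private key:", burner.key.hex())
```

Fund an address like this with only what you are prepared to lose, so a malicious contract approval cannot reach your main holdings.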
