High-profile deepfake scams that were reported here at The Register and elsewhere last year may just be the tip of the iceberg. Attacks relying on spoofed faces in online meetings surged by 300 percent in 2024, it is claimed.
iProov, a firm that just so happens to sell facial-recognition identity verification and authentication services, put that figure in its annual threat intelligence report shared last week, in which it described a thriving ecosystem dedicated to hoodwinking marks and circumventing identity-verification systems through the use of ever-more sophisticated AI-based deepfake technology.
Along with the claimed 300 percent surge in face-swap attacks – where someone uses deepfake tech to swap their face for another's in real time to fool victims, the technique used to trick a Hong Kong-based company out of $25 million last year – iProov also claimed it tracked a 783 percent increase in injection attacks targeting mobile web apps (ie, injecting fake video camera feeds and other data into verification software to bypass [PDF] facial-recognition-based authentication checks) and a 2,665 percent spike in the use of virtual camera software to perpetrate such scams.
Virtual camera software, available from a number of vendors, lets legitimate users replace their built-in laptop camera feed in a video call with a feed from another app – one that improves their appearance, for instance. Miscreants, on the other hand, can abuse the same software for nefarious purposes, such as using AI to pose as someone they are not. Because the video feed is created in a different app and injected via virtual camera software, it’s much harder to detect, iProov chief scientific officer Andrew Newell told us.
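To make the mechanism concrete, here is a minimal illustrative sketch – our own example, not one of the tools iProov tracked – of how application-generated frames can be presented to conferencing software as an ordinary webcam, using the open source pyvirtualcam library. Because the downstream app only ever sees pixel data, it has no inherent way of knowing whether those frames came from a camera sensor or from software such as a real-time face-swap model.

```python
# Illustrative sketch only: presenting software-generated frames to video apps as a "webcam."
# Assumes pyvirtualcam is installed along with a virtual camera backend (eg, OBS Virtual Camera).
# The same plumbing powers legitimate touch-up and screen-sharing apps.
import numpy as np
import pyvirtualcam

WIDTH, HEIGHT, FPS = 1280, 720, 30

def next_frame() -> np.ndarray:
    # Stand-in for any frame source: a beautification filter, a game capture, or -
    # in the abuse case iProov describes - the output of a real-time face-swap model.
    return np.random.randint(0, 256, (HEIGHT, WIDTH, 3), dtype=np.uint8)

with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
    print(f"Virtual camera started: {cam.device}")
    while True:
        cam.send(next_frame())        # conferencing software sees a normal camera feed
        cam.sleep_until_next_frame()  # pace output to the declared frame rate
```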
“The scale of this transformation is staggering,” Newell claimed in the report’s introduction, adding that iProov is tracking more than 120 tools actively being used to swap scammers’ faces on live calls.
“When combined with various injection methods and delivery mechanisms, we’re facing over 100,000 potential attack combinations,” Newell argued. As far as he’s concerned, that’s a serious problem for “traditional security frameworks” that claim they can detect and prevent deepfake attacks – iProov being a company offering a solution, naturally.
Regardless of the report's self-interested nature, which iProov somewhat sidesteps by conceding that organizations shouldn't invest in a single approach and should instead integrate “multiple defensive layers,” there’s cause to worry about a spike in identity-spoofing attacks that rely on real-time video. Real-time deepfakes may have been easy to defeat a few years ago, but powerful new AI tools are making the tried-and-true countermeasure – asking video call participants to turn their heads to the side, which once revealed tell-tale distortions and other signs of face swapping – much less reliable.
Take KnowBe4’s case last year as an example. Despite being a company that trains others to defend against social engineering, KnowBe4 was taken in by a fake IT applicant – actually a North Korean cybercriminal – even after multiple video conference interviews with the scammer.
“This was a real person using a valid but stolen US-based identity,” KnowBe4 admitted. “The picture was AI ‘enhanced.’”
While such tools may have been the preserve of nation states in previous years, online markets catering to black market buyers of the technology are spreading quickly, it is claimed. iProov said it identified 31 new crews selling identity-verification-spoofing tools in 2024 alone.
“This ecosystem encompasses 34,965 total users,” iProov claimed. “Nine groups have over 1,500 users, with the largest reaching 6,400 members.”
“Crime-as-a-service marketplaces are a primary driver behind the deepfake threat, dramatically expanding the attack surface by transforming what was once the domain of high-skilled actors,” Newell told us.
In short, we’re entering a new era in which deepfake spoofs are being democratized and made available to every criminal corner of the internet, just as phishing kits and ready-to-deploy malware were before them. A few years ago, cybersecurity researchers doubted that deepfakes would ever rival common scams such as phishing in scale. That may still hold, but cybercriminals out for a bigger payday are likely to start preferring deepfakes, Newell said.
“Phishing remains a serious threat and is likely to be lower effort for a threat actor than an attack on the identity system,” Newell noted. “However, the potential damage that can be done is likely to be far greater from a successful attack on the identity system.”
And if you thought it was easy to fool the average user with a typo-ridden phishing email, deepfake videos might be even tougher for them to detect.
For another study released last month, iProov created an online quiz that tested users on their ability to detect deepfakes – both static images and live videos. Across ten questions presented to 2,000 people in the UK and US, only 0.1 percent – two people – were correct on all counts, it is claimed. That’s when they knew they were looking for fakes, mind you. In a real-world situation, people confronted with a deepfake may be far less likely to view it critically.
“Even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all,” iProov CEO and founder Andrew Bud said of that work. The company noted that only 25 percent of people said they search for alternative information if they suspect a deepfake, and only 11 percent said they critically analyze sources and context of information to see if it raises red flags.
In other words, now you have yet another thing to train your users to be on the lookout for. ®