Fraud attempts and deepfake cyberattacks are on the rise, new research finds, and organizations need products and services to help them defend against these advanced identity theft attacks.
That’s according to identity verification specialist ID R&D, which says nearly half (42%) of organizations have already experienced deepfake attacks. At the same time, 37% have already experienced injection attacks – incidents where cybercriminals bypass the camera or otherwise insert digital content directly into the data stream. In other words, deepfake technology can be used to bypass biometric authentication in certain scenarios.
For more than half (51%), chatbot fraud is a “credible threat”.
Deepfakes are a growing problem
All of this understandably concerns companies, the researchers found: nine out of ten (91%) organizations and their customers worry about deepfakes, while 84% expressed similar concerns about injection attacks.
Citing a Gartner report, ID R&D said we can expect 20% of successful account takeover attacks this year to use deepfake technology.
Deepfake is an AI-based technology that allows users to create convincing videos of real people. By feeding various videos and other content of a target into the tool, an attacker can generate a new video in which the impersonated person appears to say and do things they have never actually said or done.
Its earliest applications were malicious, with people using the technology to superimpose realistic celebrity faces onto adult video content.
Other uses were more entertainment-oriented, as people began creating fake videos of Donald Trump saying things (most of which he probably would have said anyway).
The videos, while funny, were a striking example of how dangerous and damaging deepfake technology can be, and of how important it is to have a solution that can distinguish deepfake footage from genuine video.
To address this, ID R&D has created a solution called IDLive Face Plus. Whether it does the job well remains to be seen.