As public engagement with digital content continues to rise, consumers and businesses are increasingly reliant on technology platforms.
The anonymity of our digital world makes it difficult to know who is behind the screen. This gray space gives would-be fraudsters an opening to threaten both businesses and consumers directly, especially in the realm of deepfakes — artificially created images, video and audio designed to emulate real human characteristics. In recent years, deepfakes have garnered widespread attention and become an area of growing concern because of their use in fraudulent activities.
How AI deepfake technology works
Deepfake tactics enable fraudsters to distort reality by manipulating existing imagery to replace someone’s likeness. This tactic relies on artificial neural networks — computer systems that recognize patterns in data. Developing a deepfake photo or video involves feeding hundreds of thousands of images into the artificial neural network, which is trained on that data to identify and reconstruct face patterns.
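The training loop described above can be illustrated with a toy autoencoder: a network that learns to compress images and reconstruct them from the compressed form. This is a minimal sketch using NumPy on random stand-in "images"; real deepfake systems use deep convolutional networks, vast face datasets and far more elaborate architectures.

```python
import numpy as np

# Toy autoencoder: learns to encode 64-pixel "face" images into a
# 16-value latent code and decode them back. Illustrative only.
rng = np.random.default_rng(0)
faces = rng.random((200, 64))         # 200 flattened 8x8 stand-in images
W_enc = rng.normal(0, 0.1, (64, 16))  # encoder weights: 64 -> 16
W_dec = rng.normal(0, 0.1, (16, 64))  # decoder weights: 16 -> 64

lr = 0.05
for step in range(500):
    latent = faces @ W_enc            # encode each image
    recon = latent @ W_dec            # decode back to pixel space
    err = recon - faces               # reconstruction error
    # Gradient descent on mean squared reconstruction error
    grad_dec = latent.T @ err / len(faces)
    grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = float(np.mean((faces @ W_enc @ W_dec - faces) ** 2))
print(f"final reconstruction loss: {loss:.4f}")
```

After training, the reconstruction error is far lower than at the start — the same principle that lets a deepfake model reproduce a face it has "learned," only at a vastly larger scale.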
With the increased adoption of more advanced AI, the number of images or videos required to train the artificial neural networks has dropped substantially, making it easier for fraudsters to use these tools at scale. Deepfake videos are often used in financial crimes targeting individuals, businesses and government regulators. The risks can be particularly acute in emerging markets or those experiencing financial unrest.
Best practices to detect deepfake technology
Tools and best practices can help mitigate fraudsters’ efforts. The most important aspect is vigilance: Fraudsters are relentless and always at work, looking to take advantage of every loophole or weak spot.
The first step is to look at the state of deepfake videos themselves. At this stage, it’s often possible to recognize a deepfake video if you know what to look for. A few signs include the following:
- jerky movement;
- shifts in lighting from one frame to the next;
- shifts in skin tone;
- strange blinking or no blinking at all; and
- poor lip sync with the subject’s speech.
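One of the signs above — lighting shifts from one frame to the next — can even be screened for programmatically. The sketch below is illustrative, not a production detector: each "frame" is just a grayscale array, and we flag frames whose mean brightness jumps sharply relative to the previous one. The threshold value is an assumption chosen for the example.

```python
import numpy as np

def lighting_shifts(frames, threshold=0.1):
    """Return indices of frames whose mean brightness jumps by more
    than `threshold` (on a 0-1 scale) versus the previous frame."""
    means = [float(np.mean(f)) for f in frames]
    return [i for i in range(1, len(means))
            if abs(means[i] - means[i - 1]) > threshold]

# Synthetic clip: steady lighting for three frames, then a sudden jump.
rng = np.random.default_rng(1)
clip = [np.full((8, 8), 0.5) + rng.normal(0, 0.01, (8, 8)) for _ in range(3)]
clip += [np.full((8, 8), 0.8) + rng.normal(0, 0.01, (8, 8)) for _ in range(2)]
print(lighting_shifts(clip))  # → [3]
```

Real detection tools combine many such signals (blink rate, lip movement, lighting consistency) rather than relying on any single heuristic.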
Technologies are also emerging to help videomakers authenticate their videos. For example, a cryptographic algorithm can be used to insert hashes at set intervals during the video. If the video in question is altered, the hashes will change.
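The idea behind interval hashing can be sketched in a few lines. This is a simplified illustration, not a specific product's scheme: a real system would embed signed hashes inside the video container itself, whereas here we simply compute a SHA-256 digest for each fixed-size segment of the raw bytes, so that editing any part of the stream changes the hash of the segment it touches.

```python
import hashlib

SEGMENT = 1024  # bytes per segment; real systems would use time intervals

def segment_hashes(data: bytes, segment=SEGMENT):
    """SHA-256 digest for each fixed-size segment of the byte stream."""
    return [hashlib.sha256(data[i:i + segment]).hexdigest()
            for i in range(0, len(data), segment)]

original = bytes(range(256)) * 16      # 4096 bytes of stand-in "video"
tampered = bytearray(original)
tampered[2000] ^= 0xFF                 # flip a single byte mid-stream

orig_h = segment_hashes(original)
tamp_h = segment_hashes(bytes(tampered))
changed = [i for i, (a, b) in enumerate(zip(orig_h, tamp_h)) if a != b]
print(changed)  # → [1]: only the segment containing the edit differs
```

Because a cryptographic hash changes unpredictably with any alteration to its input, comparing the stored hashes against freshly computed ones pinpoints exactly which portion of the video was modified.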
Security procedures can go a long way toward stopping fraudsters. As an emerging threat, deepfakes thrive on the level of technology accessible to fraudsters, especially machine learning and advanced analytics. Businesses can fight fire with fire, using those same capabilities in their defenses.
A layered defense strategy is also key, particularly as it relates to how fraudsters distribute or deploy deepfakes. The threat landscape is constantly evolving, so guarding the front door — the points where users first establish and verify their identity — is essential.
As risks and countermeasures continue to evolve, the specific tools we use now will quickly become obsolete. While the nuances of deepfake technology will continue to shift, organizations’ core best practices should remain the same. With awareness and vigilance, consumers and businesses can stay one step ahead of deepfake technology.
About the author
David Britton leads strategy and thought leadership for Experian’s Global Identity and Fraud group. Britton has more than 20 years of experience in the digital identity and fraud space. He brings a wealth of experience and unique insights on the criminal methodology behind cyber fraud, the evolving digital identity landscape and the operational challenges businesses face.