AI-generated ‘skeleton keys’ fool fingerprint scanners

We’ve had fake videos and fake faces, and now researchers have developed a method for AI systems to create their own fingerprints.

Not only that, but the machines have worked out how to create prints that fool fingerprint readers more than one time in five. The research could present problems for fingerprint-based biometric systems that rely on unique patterns to grant user access.

The research team, working at New York University Tandon and Michigan State University, exploited the fact that fingerprint readers don’t scan a whole finger at once. Instead, they scan parts of fingerprints and match those partial prints against the partial records in the database. Previous research found that some partial prints contain features common to many other partial prints, giving them the potential to act as a kind of skeleton key for fingerprint readers. Such prints are called MasterPrints.
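To see why partial matching creates that risk, here is a toy numerical sketch in Python. Everything in it is an assumption for illustration (random feature vectors, cosine similarity, a made-up threshold; real matchers compare minutiae points, not vectors like these). The structural point it demonstrates: when access is granted if any stored partial matches, a probe built from features common to many partials unlocks far more accounts than a random print does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (pure illustration): each partial print is a feature vector made of
# a component common to everyone plus user-specific detail.
N_USERS, PARTIALS_PER_USER, DIM = 100, 5, 32
common = rng.normal(size=DIM)                        # features shared across prints
enrolled = common + 0.8 * rng.normal(size=(N_USERS, PARTIALS_PER_USER, DIM))

def accepts(user_partials, probe, thresh=0.5):
    # Grant access if ANY stored partial resembles the probe closely enough.
    sims = user_partials @ probe / (
        np.linalg.norm(user_partials, axis=1) * np.linalg.norm(probe))
    return bool((sims > thresh).any())

def unlock_count(probe):
    return sum(accepts(enrolled[u], probe) for u in range(N_USERS))

print("Random print unlocks:        ", unlock_count(rng.normal(size=DIM)), "/", N_USERS)
print("Common-feature print unlocks:", unlock_count(common), "/", N_USERS)
```

In this toy world, the random probe opens almost nothing while the common-feature probe opens nearly every account, which is the skeleton-key effect in miniature.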

The researchers set out to train a neural network to create its own MasterPrints that could be used to fool fingerprint readers into granting access. They succeeded, with a system that they call Latent Variable Evolution (LVE), and published the results in a paper.

They used a common AI tool for creating realistic data, called a Generative Adversarial Network (GAN). A GAN pits two neural networks against each other: a discriminator, fed both real fingerprint images and artificially generated ones, learns to tell the two apart, while a generator learns to produce images that the discriminator mistakes for real. Each network’s progress forces the other to improve, so the generator steadily gets better at producing realistic fingerprints.
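As a rough sketch of that adversarial loop, here is a minimal GAN training step in Python with PyTorch. The network sizes, image shape, and hyperparameters are illustrative assumptions, not the paper’s actual architecture.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100        # size of the random input vector (assumption)
IMG_PIXELS = 64 * 64    # flattened grayscale fingerprint image (assumption)

# Generator: random latent vector -> fake fingerprint image
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: image -> probability that it is real
D = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    """One adversarial round: D learns to spot fakes, G learns to fool D."""
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # Discriminator step: reward correct real/fake classification.
    fake_batch = G(torch.randn(n, LATENT_DIM)).detach()  # no gradients into G here
    d_loss = loss(D(real_batch), real_labels) + loss(D(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: reward making D call a fake image "real".
    g_loss = loss(D(G(torch.randn(n, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After enough rounds over batches of real fingerprint images, `G(torch.randn(1, LATENT_DIM))` yields increasingly plausible synthetic prints.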

The researchers took these generated images and tested them against fingerprint matching algorithms to see which scored best. They then used another algorithm to evolve the latent variables behind the best candidates, making those match scores even better.
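Here is a minimal sketch of that evolve-and-test loop, assuming a trained generator and a matcher that returns a score. Both are toy stand-ins below, and the authors’ actual optimizer (part of their LVE method) is more sophisticated than this simple mutate-and-keep-the-best search.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100

# Stand-ins for illustration: an (untrained) generator and a toy "matcher" that
# scores an image by the fraction of enrolled templates it resembles.
G = nn.Sequential(nn.Linear(LATENT_DIM, 64 * 64), nn.Tanh())
templates = torch.randn(30, 64 * 64)          # pretend enrolled partial prints

def matcher_score(img):
    sims = torch.nn.functional.cosine_similarity(img, templates)
    return (sims > 0.05).float().mean().item()

def evolve_masterprint(generations=200, pop=20, sigma=0.5):
    """Hill-climb the latent space toward images that match many templates."""
    with torch.no_grad():
        best_z = torch.randn(1, LATENT_DIM)
        best_fit = matcher_score(G(best_z))
        for _ in range(generations):
            # Mutate the current best latent vector; keep any improvement.
            for z in best_z + sigma * torch.randn(pop, LATENT_DIM):
                fit = matcher_score(G(z.unsqueeze(0)))
                if fit > best_fit:
                    best_z, best_fit = z.unsqueeze(0), fit
    return G(best_z), best_fit

image, score = evolve_masterprint()
print(f"Best evolved print matches {score:.0%} of templates")
```

The key design idea is that the search never touches pixels directly: it only nudges the generator’s input, so every candidate stays a realistic-looking fingerprint.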

In effect, the AI system is using mathematical algorithms to grow human fingerprints that can outsmart biometric scanners.

The team used two datasets to train its fingerprint generator: a set of traditional rolled ink fingerprints, and a set of fingerprints captured by capacitive readers like those found in smartphones. The capacitive fingerprints produced better results.

Biometric systems like fingerprint readers can be set to different security levels by adjusting their false match rate: the percentage of non-matching fingerprints that the system will wrongly approve. The research team tested fingerprint matching algorithms at a 0.1% false match rate, which should mistakenly approve a wrong fingerprint once in every thousand attempts. The matchers accepted the team’s generated MasterPrints, which they call DeepMasterPrints, 22.5% of the time.
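The gap between those two numbers is the whole result, and it is worth a quick arithmetic check (figures taken from the article):

```python
fmr = 0.001        # configured false match rate: 0.1%
observed = 0.225   # reported DeepMasterPrint acceptance rate: 22.5%

print(f"A random wrong print gets in about 1 time in {1 / fmr:.0f}")
print(f"A DeepMasterPrint gets in about 1 time in {1 / observed:.1f}")
print(f"That is a {observed / fmr:.0f}x advantage over chance")
```

In other words, a print that should succeed once in a thousand tries instead succeeds roughly once in four and a half, a 225-fold improvement over chance.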

The researchers said that the LVE method seemed to be producing partial fingerprint images containing enough common characteristics to fool fingerprint readers at rates far higher than their configured false match rates. They added that these artificial prints could be used to launch a practical attack on fingerprint readers.

Experiments with three different fingerprint matchers and two different datasets showed that the method is robust and not dependent on the artifacts of any particular fingerprint matcher or dataset.

This is all a little worrying: if someone can spoof your fingerprints, they don’t have to steal them (and if they do, you can’t upgrade or change your fingerprints). If someone developed this into a working exploit, perhaps by printing the images with capacitive ink, it could present problems for many fingerprint recognition systems.
