Biometrics are coming to replace your password; we should all hope they do more than merely replace it. Biometrics can inherently eliminate many of the problems with passwords, yet they remain subject to some of the same ones. Like passwords, biometric samples can be copied, both purposefully and unintentionally.
Great care should be taken in storing biometrics, but we should all recognize that protecting any data from the malicious insider is nearly impossible. The best option is to store biometrics that cannot be used for account access.
IT organizations will spend a bundle replacing obsolete password technology in favor of more secure authentication methods. Password technology is fundamentally flawed in that a password can be copied, and therefore stolen and shared. Because of this replay flaw, a password does not actually identify the user. The same flaw applies to many other technologies, including tokens, smartcards and even biometrics. Without a method of determining whether authentication data is genuine, biometrics and other technologies are no more secure than a password.
The Most Innovative and Thorough Technology for Verifying and Identifying a Computer User Authenticates the Authentication Data.
Biometrics identify unique characteristics of a person. Determining whether a biometric sample is live at the moment of login is key to implementing biometrics securely. Many static biometrics, such as fingerprints, iris scans and other image-based modalities, can uniquely identify a person as long as the sample itself can be authenticated as genuine. Unfortunately, attended operation of scanners is not practical. Let's face it: fingerprint and iris scanners are also too expensive to consider on a large scale.
Other biometric technologies, including face and voice recognition, can be captured using inexpensive and widely deployed webcams and microphones. As with the aforementioned scanning technologies, malicious insiders and other criminals can obtain copies of pictures and voice recordings, leaving security engineers with the problem of authenticating the authentication data.
Any ol’ sample of someone’s voice may be fine for accessing your recipes online, but for big money transfers, probably not. Determining that a voice sample was spoken for the sole purpose of the current authentication must be at the heart of your authentication technology. Of course, that goes for all modalities, not just voice.
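One common way to tie a sample to the current authentication is a spoken challenge: the server issues a random, one-time phrase, and the user must speak that exact phrase before it expires. The sketch below is a minimal, hypothetical illustration of that idea; the function names and the separate speaker-verification step (`voice_match`) are assumptions, not a description of any particular product.

```python
import secrets
import time

# Minimal sketch of challenge-based freshness: a replayed recording
# cannot satisfy a future login because each challenge is one-time use.
CHALLENGE_TTL_SECONDS = 60

_issued = {}  # challenge phrase -> time it was issued

def issue_challenge():
    """Issue a random phrase the user must speak for this login only."""
    words = ["amber", "falcon", "seven", "delta", "orchid", "ninety"]
    phrase = " ".join(secrets.choice(words) for _ in range(4))
    _issued[phrase] = time.time()
    return phrase

def verify_sample(spoken_phrase, voice_match):
    """Accept only if the sample speaks a live, unexpired challenge AND
    the speaker's voiceprint matches. voice_match is assumed to come
    from a separate speaker-verification engine (not shown here)."""
    issued_at = _issued.pop(spoken_phrase, None)  # pop => one-time use
    if issued_at is None or time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False
    return voice_match
```

A replayed sample fails twice over: the phrase it speaks has already been consumed, and even a fresh recording of an old phrase has expired.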
Independent of the problem of determining whether biometric samples are genuine, there is the problem of convenience. Users find it easy to submit samples in the quiet of their well-lit office. In the real world, the roar of jet engines and the dark of the Porsche office can pose significant challenges for users.
An approach that includes numerous modalities, combined with algorithms that accept a login when any one modality passes, can mitigate many of these challenges. Implementing “OR” logic can significantly increase the frequency of successful positive identification in the real world compared with strict fusion scores. For instance, allowing genuine voice identification or genuine face recognition lets users be identified in poorly lit or noisy venues.
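The contrast between “OR” logic and a strict fused score can be sketched in a few lines. The scores, weights and thresholds below are made-up illustrative values, not calibrated numbers from any real engine:

```python
def identify_or(scores, thresholds):
    """'OR' logic: accept if ANY single modality independently clears
    its own threshold, so a noisy room (poor voice score) or a dark
    room (poor face score) does not lock the user out."""
    return any(scores.get(m, 0.0) >= t for m, t in thresholds.items())

def identify_fusion(scores, weights, threshold):
    """Strict fusion: a weighted combination of all modalities must
    clear one global threshold, so one degraded modality drags the
    whole score down."""
    fused = sum(weights[m] * scores.get(m, 0.0) for m in weights)
    return fused >= threshold

# A loud venue with good lighting: voice is degraded, face is strong.
noisy_venue = {"voice": 0.3, "face": 0.9}
```

With per-modality thresholds of 0.8, the strong face score alone passes the “OR” check, while an equal-weight fused score of 0.6 fails a 0.8 fusion threshold and rejects a genuine user.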
Perhaps equally important is the notion of running multiple biometric engines against the same sample. Each vendor offers different advantages. For instance, some voice recognition vendors have better noise suppression, while others have focused on removing electronic signatures from samples. Depending on the user’s noisy environment or microphone, different signal processing gives the user a better experience.
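Running several vendor engines over one sample and keeping the strongest match is straightforward to express. The two engines below are hypothetical stand-ins (with made-up scores) for vendors whose signal processing suits different conditions:

```python
def best_engine_score(sample, engines):
    """Run the same sample through several vendor engines (each a
    callable returning a 0-1 match score) and keep the best result,
    so whichever vendor's signal processing suits this sample wins."""
    return max(engine(sample) for engine in engines)

# Hypothetical stand-ins for two vendors' differing strengths:
def noise_suppressing_engine(sample):
    """Scores well on samples captured in noisy environments."""
    return 0.9 if sample.get("noisy") else 0.6

def artifact_removing_engine(sample):
    """Scores well on samples with electronic signatures/artifacts."""
    return 0.9 if sample.get("electronic_artifacts") else 0.6
```

For a sample recorded in a loud venue, the noise-suppressing engine produces the stronger score and carries the identification, without the user needing to know which vendor handled it.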
Adding face recognition to the mix can provide a useful alternative to voice recognition when a good audio sample is difficult to obtain. Facial recognition from a genuine sample is also a significant deterrent to the malicious insider. Having your picture on the six o’clock news is not usually a goal of the criminally minded.
Of course, the key to successful user identification must start with genuine samples, not replays. When deciding on any security system, one must consider the possible actions of the malicious insider. We should assume that insiders can gain access to samples. If those samples can be replayed to gain access to accounts, then the system is not secure. We should also assume a malicious insider will give samples of themselves to co-conspirators, leaving the system with no way to identify the perpetrators. Anonymity continues to invite criminals to the crime scene.