If behavioral authentication could be made to work, it could be a big part of our future.
I was intrigued by an article about behavioral authentication on the Fast Forward Labs blog. Behavioral authentication is a kind of biometric authentication based on aspects of your behavior: timings while typing, for example, but conceivably much more fine-grained, like whether your fingers are centered on the touchscreen’s virtual keys when you press them, how hard you press, how you perform multi-fingered gestures, and much more.
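To make the keystroke-timing idea concrete, here is a minimal sketch of how keystroke dynamics can work, not drawn from the Fast Forward Labs article: it assumes we can capture a press and release timestamp per keystroke, builds a timing profile from a few enrollment samples of the same phrase, and accepts a candidate sample whose timings fall within a tolerance. A real system would use far richer features and a learned model; this is purely illustrative.

```python
# Toy sketch of keystroke-dynamics authentication (illustrative only).
# Each keystroke is a (press_time, release_time) pair in milliseconds.
# Features are dwell times (how long each key is held) and flight times
# (the gap between releasing one key and pressing the next).

def timing_features(events):
    """events: list of (press, release) times, in typing order."""
    dwells = [release - press for press, release in events]
    flights = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return dwells + flights

def enroll(samples):
    """Average the feature vectors of several typings of the same phrase."""
    vectors = [timing_features(s) for s in samples]
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def authenticate(profile, sample, tolerance_ms=40.0):
    """Accept if the mean absolute timing deviation is within tolerance."""
    features = timing_features(sample)
    deviation = sum(abs(a - b) for a, b in zip(profile, features)) / len(profile)
    return deviation <= tolerance_ms
```

In use, you would call `enroll` on a few samples of the owner typing a phrase, then call `authenticate` on later typings of it; an impostor typing the same phrase with very different rhythm would exceed the tolerance and be rejected.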
There’s a compelling argument against most forms of biometric authentication. Consider fingerprints. They’re public: you leave them behind on glasses, they can be captured with photography well within the capabilities of a good mobile phone, and they can’t be changed if they’re compromised. And if the bad guys have you, they can force you to scan your fingerprint (or just remove a few fingers).
It’s more difficult to steal a retinal scan, but my ophthalmologist has mine, and who knows whether her office has good security practices? (There’s a James Bond movie in which someone gets his retinas modified so he can get into Very Secret Places.) And voice: it’s a trivial exercise to get a recording of just about anyone you’re interested in, though voice recognition could be part of a behavioral authentication system.
Behavioral authentication is different, in some odd but interesting ways. Two properties push it much further than anything I can imagine with other authentication mechanisms. First, it can be continuous, as the Fast Forward article points out. It’s not a matter of entering a password or scanning a fingerprint that lets you in. You’re interacting as long as you’re using the device, and the system can (and should) be continuously authenticating you. Second, you don’t really know what it’s using to authenticate you. Is it the force with which you press? Is it timing? Is it something else? Could it be some combination of factors? Could the authentication factors (and their weights) be constantly shifting and changing?
We talk about authentication tokens in terms of “something you know, something you have, or something you are.” Passwords you clearly know. They’re easily forgotten, and surprisingly easy to discover through a variety of attacks. Dongles and other security devices are things that you have; they’re easily lost or stolen. Fingerprints are clearly things that you are, and as I’ve pointed out, they can also be captured.
But behavioral patterns? I don’t know how I type the way I do. My behaviors aren’t something I can give away; I couldn’t tell someone how to reproduce the way I interact with my phone or tablet. I can’t imagine someone asking, “How do I imitate your typing so I can access your phone”—but if they did, I wouldn’t be able to answer. And while I’d agree that my interaction patterns are part of me (they’re something I “am”), I don’t “know” them, nor do I “have” them in a sense that can be taken away.
Furthermore, if the device is constantly authenticating, it can only be used when it’s in my hands. (Fast Forward imagines a scenario in which failing continuous authentication takes you back to a password. In my imaginary future, continuous authentication is all there is: there are no passwords, nothing other than behavioral patterns.) Not only do I not know the password, I can’t unlock my phone or tablet and hand it off to someone else, because they’ll fail authentication immediately. And if the basis for authentication is constantly shifting, if the algorithm is constantly adapting and changing its metrics, I can’t tell anyone anything useful about how it’s authenticating. Even if an attacker installs a next-generation keylogger on my device, one that records all the behavioral data, they still wouldn’t know how to put that data to use.
So, if someone wants to steal my data, they can only do it when I’m present. Sure, coercion is possible. You could force me to unlock my phone and look over my shoulder as I send you my recipe for the secret sauce, though the victim’s presence is usually a disincentive to data theft. Furthermore, a behavioral authentication system could conceivably sense stress, and respond appropriately. I wouldn’t just need to be present; an attacker would need to keep me healthy and comfortable. If I’m shaking with fear or hunger, I’m not likely to be able to authenticate.
One problem with any authentication system is that it has to be reasonably reliable. You don’t want people constantly calling the service desk because they can’t get into their phone; indeed, the social engineering opportunities that would arise from constant help desk calls would quickly become a major attack vector. Fast Forward Labs says that touch-based authentication systems can have an error rate on the order of 5%, and that’s not yet good enough. I have no idea how reliable more exotic forms of behavioral authentication are; I’d guess that this is still a work in progress.
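A quick back-of-the-envelope calculation shows why a 5% per-check error rate is so damaging for *continuous* authentication. Assuming (naively) that each check falsely rejects the legitimate owner 5% of the time and that checks are independent, repeated checks compound the problem:

```python
# How false rejections compound under continuous authentication,
# assuming independent checks with a fixed false-reject rate (FRR).
# This independence assumption is a simplification for illustration.

def p_false_lockout(frr, checks):
    """Probability of at least one false rejection over `checks` checks."""
    return 1 - (1 - frr) ** checks

# With a 5% per-check false-reject rate, roughly 40% of legitimate
# sessions hit a false rejection within 10 checks, and over 90% within 50.
```

Any system re-checking the user every few seconds would need a per-check error rate orders of magnitude lower than 5%, or a way to smooth over individual failed checks, before it could replace passwords outright.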
Power consumption for continuous authentication could also become an issue. A background authentication process that runs whenever the device is in use could be a battery hog.
But perhaps the biggest problem with any authentication system is that you have to get people to use it effectively, and that’s a huge drawback. We’ve all heard the stories about people who never set a password, or set it to “password,” or give their password to their friends. With behavioral authentication, that stops being an issue: it’s always on, running in the background, and you can’t take a shortcut around it.
If it can be made to work, behavioral authentication could be a big part of our future.