Facial recognition is touted as the next big thing for smartphones, but what is it and will it really be worth having?
By the end of this year – 2012 – just under 20% of smartphones being sold will have software capable of performing facial recognition, or so says technology market intelligence firm ABI Research, with the figure rising to 665 million devices annually by 2017. That’s a lot of fancy phones looking at a lot of faces. But what will this mean for users?
Facial recognition involves taking the feed from one of the phone’s cameras, usually the one facing the user, and processing it in software first to detect the presence of a face and then to recognise that face. This sort of image processing takes a lot of number crunching, so it places high demands on the phone’s microprocessor. Only the most recent, high-specification phones can meet those demands, and so far few mobile phone operating systems support the function.
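To give a flavour of the two-stage pipeline described above – detect a face first, then try to recognise it – here is a deliberately simplified Python sketch. The face “signatures”, the enrolled template and the distance threshold are all invented for illustration; real systems extract far richer feature vectors from the camera frame.

```python
import math

# Hypothetical enrolled template: in a real system this would be a feature
# vector extracted from the owner's face during setup, not four numbers.
ENROLLED = {"owner": [0.71, 0.33, 0.52, 0.18]}
THRESHOLD = 0.15  # hypothetical match threshold

def detect_face(frame):
    """Stage 1: return a face signature if a face is present, else None.
    Here 'frame' already is a signature, standing in for real detection."""
    return frame if frame else None

def recognise(signature):
    """Stage 2: compare the signature against enrolled templates
    by Euclidean distance; accept only a close match."""
    for name, template in ENROLLED.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(signature, template)))
        if dist < THRESHOLD:
            return name
    return None  # a face was seen, but it is not a close enough match

# The owner's face under good lighting: signature sits near the template.
print(recognise([0.70, 0.34, 0.51, 0.19]))  # matches "owner"
# The same face under odd lighting or bold makeup: the signature drifts,
# the distance exceeds the threshold, and the phone refuses to unlock.
print(recognise([0.55, 0.45, 0.40, 0.30]))  # no match
```

The sketch also hints at why recognition is fragile: anything that shifts the measured signature – lighting, makeup, angle – pushes the distance past the threshold.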
Android’s Ice Cream Sandwich and Jelly Bean operating systems do support facial recognition, but you still need a pretty powerful phone to actually be able to do anything with it. The Galaxy SIII is one of the first to step up to the challenge and adopt the technology. Apple’s iOS 5.1 on the iPhone 4S is said to have a certain level of facial recognition capability built in, though Apple is not pushing it as aggressively as Google is pushing the technology in Android.
So what will facial recognition actually do for users? One possibility is simply to detect that the user is looking at the phone screen and use that to disable the power-save screen dimming that usually kicks in when no controls are touched for a while. So no more frustration over the screen going dark while you are reading a dense page of text that takes a while.
Another use is to unlock the phone simply by the user looking at the screen (or, to be more exact, the camera above it). This requires the software to be able to tell one face from another, reliably, and this is where current systems fall down. Unusual lighting conditions or a particularly bold makeup scheme can fool the software into failing to recognise its owner. In fact, claimed accuracy hovers around only 90%. That figure might sound good if it were your mark in an exam, but think of it another way: it means that on roughly one attempt in every ten the phone will fail to recognise its owner and won’t unlock. This problem befell Matias Duarte, Google’s director of Android user experience, when he tried to demonstrate the feature at a launch in Hong Kong.
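To put that 90% figure in perspective, a short Python calculation shows how quickly the odds bite over a day of use. The assumptions here are mine for illustration: that each unlock attempt fails independently with 10% probability, and that a typical user unlocks the phone 20 times a day.

```python
# Hypothetical model: each unlock attempt independently succeeds 90% of the time.
p_success = 0.9
unlocks_per_day = 20  # assumed figure for a typical day of use

# Probability that at least one unlock fails at some point during the day.
p_at_least_one_failure = 1 - p_success ** unlocks_per_day
print(f"{p_at_least_one_failure:.0%}")  # roughly 88%
```

Under those assumptions, a face-unlock user would be near-certain to hit at least one embarrassing failure every single day – which is exactly what happened on stage in Hong Kong.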
Knowing Google’s predilection for tying everything into sharing via social media, it might not be too long before faces can be picked out of photos of a crowd or a party, identified using your own social media albums as references, and tagged for all your friends to see. The prospect is fraught with potential for misuse and infringement of privacy.
We’ll just have to see how things develop, I guess, but this is worth keeping an eye on.