Apple Is Setting a New Standard for Security with Face Recognition
By Rae Steinbach
Face recognition is not a new technology. In fact, it has been around in various forms for a few decades. In the past, it was used almost exclusively in settings like research labs, and most people never encountered it in their everyday lives. That is beginning to change with Apple introducing facial recognition as a security feature on the iPhone X. Not only is this feature an upgrade to security; it is also part of Apple's broader customer experience strategy to deepen brand loyalty and heighten engagement.
While Apple may be the first company to offer reliable face recognition technology in a mobile device, it is the result of technology that was developed by many different companies over the course of several years. In this post, we want to take a closer look at some of the developments that helped to make this possible.
The face recognition system on the iPhone X works by building a 3D model of the user's face. This model is then compared to the enrolled template of the owner's appearance. The system assigns a score based on the similarity between the two, and if the score is not close enough, it denies access to the device.
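The score-and-threshold logic described above can be sketched in a few lines. This is a minimal illustration, not Apple's actual matching algorithm: it assumes the face model has been reduced to a numeric embedding vector, uses cosine similarity as the score, and picks an arbitrary threshold of 0.8.

```python
import numpy as np

def match_score(probe: np.ndarray, template: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (higher = more similar)."""
    return float(np.dot(probe, template) /
                 (np.linalg.norm(probe) * np.linalg.norm(template)))

def unlock(probe: np.ndarray, template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Grant access only if the probe scores above the threshold.

    The 0.8 threshold is illustrative; a real system tunes this value
    to balance false accepts against false rejects.
    """
    return match_score(probe, template) >= threshold
```

In a production system the threshold is the security dial: raising it makes the device harder to fool but also more likely to reject the legitimate owner.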
For face recognition to work reliably as a security measure, the device needs to be able to detect depth. Earlier versions of face recognition relied on RGB values to approximate the depth at different points on the user's face, but this method proved to be unreliable under some conditions.
One of the key issues is lighting. In very bright conditions, the pixel data can get washed out; in very dim conditions, there is not enough information for the system to make an accurate estimate.
Apple's answer to this problem was to equip the iPhone X with a structured light depth camera. With this, the phone projects thousands of infrared dots onto the user's face. An infrared camera then captures the dots, and the system measures how the pattern is displaced and deformed by the contours of the face to accurately determine the depth at different points in the image.
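The depth calculation behind a structured light system reduces to triangulation: the farther a surface is, the less each projected dot shifts between its reference position and where the camera actually sees it. The sketch below shows the standard relationship z = f × b / d; the numbers are illustrative and have nothing to do with Apple's actual calibration.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate depth from a projected dot's displacement.

    focal_px     -- camera focal length, in pixels (assumed calibrated)
    baseline_m   -- distance between the dot projector and the IR camera
    disparity_px -- horizontal shift of the dot vs. its reference position
    """
    if disparity_px <= 0:
        raise ValueError("non-positive disparity; depth is unresolvable")
    return focal_px * baseline_m / disparity_px

# Example with made-up calibration values:
# a 600 px focal length, 5 cm baseline, and a 60 px dot shift
# put that point of the face at 0.5 m from the camera.
z = depth_from_disparity(focal_px=600.0, baseline_m=0.05, disparity_px=60.0)
```

Repeating this calculation for every dot in the pattern yields the per-point depth map from which the 3D face model is built.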
Once you solve the problem of determining depth in the image, the next problem is image classification. The face recognition system needs to be able to take the 3D model it constructs and quickly compare it to a reference of the security image to determine whether it is a match.
Image classification has historically been difficult for machines. The technological breakthrough that helped to clear this obstacle is neural networks. A neural network is a computer system loosely modeled on the way the human brain processes information. Compared to other systems, neural networks are better at tasks like language processing, image classification, and decision-making.
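At its core, a neural network classifier is a stack of simple layers: each multiplies its input by learned weights, applies a nonlinearity, and passes the result on, with the final layer's largest output deciding the class. The toy forward pass below illustrates the idea with random (untrained) weights; the layer sizes and the "match" / "no match" labels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network: 64-dim input -> 16 hidden units -> 2 classes.
# Real systems learn these weights from training data; here they are random.
W1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def classify(x: np.ndarray) -> int:
    """Forward pass: linear layer -> ReLU -> linear layer -> argmax."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU nonlinearity
    logits = h @ W2 + b2              # one raw score per class
    return int(np.argmax(logits))     # index of the highest-scoring class
```

Training adjusts the weight matrices so that faces of the owner consistently land in the matching class; the forward pass itself stays this simple, which is what makes it fast enough to run on a phone.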
Finally, the device needs to have the processing power that is required for the underlying tasks that make the face recognition system possible. The phone needs to be able to run complex algorithms to support the face recognition system, and it needs to be able to return results in a very short amount of time.
To provide the iPhone X with the necessary processing capabilities, Apple developed the Neural Engine. The Neural Engine is a pair of processing cores dedicated to accelerating neural network computations on the phone. It is specially designed to handle different deep learning functions on the iPhone X, and it is one of the technologies that help to make the face recognition system possible.
Face recognition is a significant development for mobile device security. Beyond making devices more secure, the technologies that are used to implement face recognition can also be applied to apps. In the future, users can expect to see apps and services that make use of features like the structured light depth cameras and the Neural Engine.