Facial recognition software has been gaining traction lately, edging its way toward becoming the next big thing.
Technology juggernauts like Google and Facebook have been tapping into the power of this image recognition software. Countless hours of research and experimentation have translated into major strides in the recognition space, making software a close second to the human brain at the task of recognition.
Facebook has developed software called DeepFace (not kidding). Despite its unusual name, it can look at two photos and decipher with 97.3% accuracy whether they contain the same face, regardless of the angle or the lighting. Quite impressive when you compare it with human accuracy, which is about 97.5%.
Developed by Facebook’s artificial intelligence research group, DeepFace is built around a simulated learning neural network, which is the essence of the technology. The software incorporates what is called “deep learning,” one of the many methods of machine learning. Deep learning involves looking at a large body of data — in this case, human faces — and developing a high-level abstraction by spotting recurring patterns like eyebrows, cheeks, and ears. The learning process forms some 120 million connections between those simulated neurons, trained on an assemblage of four million face photos.
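The comparison step the article describes can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not DeepFace itself: the `PROJECTION` matrix is a random stand-in for the millions of trained connection weights, and the similarity threshold is invented. What it does show is the shape of the task — map each face photo to a compact descriptor, then compare descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for trained network weights (a real system would learn these
# from millions of face photos; here they are just a random projection).
PROJECTION = rng.normal(size=(128, 1024))

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Map a flattened face image (1024 values) to a 128-d descriptor."""
    v = PROJECTION @ face_pixels
    return v / np.linalg.norm(v)

def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare two photos by cosine similarity of their descriptors."""
    return float(embed(a) @ embed(b)) >= threshold

photo = rng.normal(size=1024)
noisy_copy = photo + 0.05 * rng.normal(size=1024)  # same face, new lighting
stranger = rng.normal(size=1024)                   # an unrelated face

print(same_person(photo, noisy_copy))  # similar inputs score high
print(same_person(photo, stranger))    # unrelated inputs score low
```

The design choice worth noting: once faces are reduced to fixed-length descriptors, “same person?” becomes a simple distance check, which is why such systems can compare photos taken at different angles and under different lighting.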
Complementary to its facial recognition software, which is still a bit ahead of its time, Facebook also plans to release Facebook Search, formerly known as Graph Search, to iPhone users this month. The feature lets users search for content on Facebook using natural language, for instance “places my friends like to travel to,” so they can navigate directly to filtered content rather than leafing through endless pages of posted media.
Similarly, Google and Stanford University each have their own rosters of scientists working, independently and collaboratively, on artificial intelligence software capable of recognizing and describing the content of photos and videos. The intelligence is sophisticated enough to mimic human levels of understanding, teaching itself to identify entire scenes — for example, a group of children playing on a jungle gym. Both groups of researchers weave two types of neural networks together: one focused on recognizing images, the other on recognizing human language.
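That pipeline — an image network feeding a language network — can be illustrated with a toy sketch. To be clear, this is not either team’s actual model: the vocabulary, the hand-set word-to-word weights, and the tiny random image weights are all invented so the end-to-end flow (features in, words out, one greedy pick at a time) is visible in a few lines.

```python
import numpy as np

VOCAB = ["<start>", "children", "playing", "on", "a", "jungle", "gym", "<end>"]

rng = np.random.default_rng(1)

def image_network(pixels: np.ndarray) -> np.ndarray:
    # Stand-in for a convolutional image network: pixels -> feature vector.
    return np.tanh(pixels.reshape(-1))

# Stand-ins for the language network's trained weights. The image weights
# are small and random; the word-to-word weights are hand-set so this toy
# walks through a single caption (a real model would learn both from data).
W_image = 0.01 * rng.normal(size=(len(VOCAB), 16))
W_word = np.zeros((len(VOCAB), len(VOCAB)))
for i in range(len(VOCAB) - 1):
    W_word[i, i + 1] = 1.0  # strongly favor the next word in sequence

def caption(pixels: np.ndarray, max_words: int = 10) -> str:
    feats = image_network(pixels)
    words, current = [], "<start>"
    for _ in range(max_words):
        # Score every vocabulary word from the image features plus the
        # previous word, then greedily pick the highest-scoring one.
        scores = W_image @ feats + W_word[VOCAB.index(current)]
        current = VOCAB[int(np.argmax(scores))]
        if current == "<end>":
            break
        words.append(current)
    return " ".join(words)

print(caption(rng.normal(size=(4, 4))))
```

The point of the sketch is the interface between the two networks: the image network compresses the picture into a feature vector, and the language network consumes that vector one word at a time until it emits an end token.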
The researchers found the software astonishingly accurate when compared with human descriptions of the same collections. Modern advances in technology have made it possible to catalog and search billions of images and hours of video far more efficiently. Thus far, Google has relied heavily on user-supplied descriptions to tag a photo accurately — a requirement that is hard to regulate and keep consistent when users around the world post content to the biggest search engine daily.
While this is a huge technological advancement and a testament to the old saying “if you can conceive it, you can achieve it,” technology of this caliber demands an assessment of risk, privacy, and vulnerability.
As the field evolves, the technology is steering toward a future of behavioural recognition. Although we’re talking about new technologies, we’re also talking about ancient human responses rooted in the oldest part of the brain. By identifying patterns of behavior as they correlate with facial expressions, which are universal, we will be able to collect psychographic data at an unprecedented rate, and at a far lower cost than ever before.
The opportunity for marketers and brands to acquire a deep understanding of their customers’ needs and expectations, and to collect data in an unobtrusive manner, will change market research as we know it.