Face Recognition System:
Humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently begun to show the same ability. In the mid-1960s, scientists began working on using computers to recognize human faces, and facial recognition software has come a long way since then. One of many developers of facial recognition technology offers software called FaceIt®, which can pick someone’s face out of a crowd, extract the face from the rest of the scene, and compare it to a database of stored images. For this software to work, it has to know how to differentiate between a basic face and the rest of the background. Facial recognition software is based on the ability to first recognize a face and then measure its various features.
Every face has numerous distinguishable landmarks: the different peaks and valleys that make up facial features. FaceIt defines these landmarks as nodal points. Each human face has approximately 80 nodal points. Some of the nodal points measured by the software are:
Distance between the eyes
Width of the nose
Depth of the eye sockets
Shape of the cheekbones
Length of the jaw line
These nodal points are measured to create a numerical code, called a faceprint, that represents the face in the database.
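As an illustrative sketch, a faceprint can be thought of as a vector of nodal-point distances. The landmark names and coordinates below are hypothetical, not taken from FaceIt or any particular library; distances are normalized by the inter-eye distance so the code does not depend on image scale:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D landmark coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def faceprint(landmarks):
    """Build a numerical code from a few nodal-point distances,
    normalized by the inter-eye distance for scale invariance."""
    eye_dist = distance(landmarks["left_eye"], landmarks["right_eye"])
    raw = [
        eye_dist,                                                   # distance between the eyes
        distance(landmarks["nose_left"], landmarks["nose_right"]),  # width of the nose
        distance(landmarks["jaw_top"], landmarks["jaw_bottom"]),    # length of the jaw line
    ]
    return [d / eye_dist for d in raw]

# Hypothetical pixel coordinates for one face image.
landmarks = {
    "left_eye": (30, 40), "right_eye": (70, 40),
    "nose_left": (45, 60), "nose_right": (55, 60),
    "jaw_top": (50, 70), "jaw_bottom": (50, 95),
}
print(faceprint(landmarks))  # first entry is always 1.0 after normalization
```

A real system would measure many more of the ~80 nodal points, but matching then reduces to comparing these numeric vectors.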
In the past, facial recognition software relied on a 2D image to compare against or identify another 2D image from the database. To be effective and accurate, the captured image needed to be of a face looking almost directly at the camera, with little variance in lighting or facial expression from the image in the database. This created quite a problem: in most instances the images were not taken in a controlled environment, and even the smallest changes in light or orientation could reduce the effectiveness of the system, so captured faces often could not be matched to any face in the database, leading to a high rate of failure. In the next section, we will look at ways to correct the problem.
Different face recognition techniques can be broadly classified into four categories:
1) Eigenface based
2) Feature based
3) Hidden Markov Model
4) Neural Network based algorithms
Face Recognition using Eigenfaces:
The human capacity to recognize particular individuals solely by observing the human face is quite remarkable. This capacity persists even through the passage of time, changes in appearance and partial occlusion. Because of this remarkable ability to generate near-perfect positive identifications, considerable attention has been paid to methods by which effective face recognition can be replicated on an electronic level. Certainly, if such a complicated process as the identification of a human individual based on a method as non-invasive as face recognition could be electronically achieved then fields such as bank and airport security could be vastly improved, identity theft could be further reduced and private sector security could be enhanced.
Many approaches to the overall face recognition problem have been devised over the years, but one of the fastest and most accurate ways to identify faces is the “eigenface” technique. The eigenface technique uses a combination of linear algebra and statistical analysis to generate a set of basis faces, the eigenfaces, against which inputs are tested. This project seeks to take in a large set of images of a group of known people and, upon inputting an unknown face image, quickly and effectively determine whether or not it matches a known individual.
The following modules walk through exactly how this goal is achieved. Since this was not the first attempt at automated face recognition, it is important to see what other approaches have been tried in order to appreciate the speed and accuracy of eigenfaces.
The Face Recognition Problem and Eigenfaces:
Face recognition is a very interesting quandary. Ideally, a face recognition system should be able to take a new face and return a name identifying that person. Mathematically, what possible approach would be robust and fairly computationally economical? If we have a database of people, every face has special features that define that person. Greg may have a wider forehead, while Jeff has a scar on his right eyebrow from a rugby match as a young buck. One technique would be to go through every person in the database and characterize each face by these small features. Another possible approach would be to take the face image as a whole identity.
Statistically, faces can also be very similar. Walking through a crowd without glasses, blurry vision can often result in misidentifying someone, thus yielding an awkward encounter. The statistical similarities between faces give way to an identification approach that uses the full face. Using standard image sizes and the same initial conditions, a system can be built that looks at the statistical relationships of individual pixels. One person may have a greater distance between his or her eyes than another, so two regions of pixels will be correlated to one another differently for image sets of these two people.
From a signal processing perspective, the face recognition problem essentially boils down to the identification of an individual based on an array of pixel intensities. Using only these input values and whatever information can be gleaned from other images of known individuals, the face recognition problem seeks to assign a name to an unknown set of pixel intensities.
Characterizing the dependencies between pixel values becomes a statistical signal processing problem. The eigenface technique finds a way to create ghost-like faces that represent the majority of variance in an image database. Our system takes advantage of these similarities between faces to create a fairly accurate and computationally “cheap” face recognition system.
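The pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the project's actual implementation: the images are synthetic random vectors standing in for flattened grayscale face photos, and the dimensions are kept tiny for readability.

```python
import numpy as np

# Assumed setup: each face is flattened to a 64-element vector;
# real systems use far larger images and more training samples.
rng = np.random.default_rng(0)
n_people, img_dim, n_components = 5, 64, 4

# Training set: one flattened face image per known person (synthetic here).
faces = rng.random((n_people, img_dim))

# 1) Mean face and mean-centered difference images.
mean_face = faces.mean(axis=0)
diffs = faces - mean_face

# 2) Eigenfaces: principal directions of the difference images via SVD.
#    Rows of Vt are the "ghost-like" basis faces capturing most of the variance.
_, _, Vt = np.linalg.svd(diffs, full_matrices=False)
eigenfaces = Vt[:n_components]

# 3) Each known face is reduced to a small vector of eigenface weights.
known_weights = diffs @ eigenfaces.T

def identify(image):
    """Project an unknown image onto the eigenfaces and return the index
    of the known person with the nearest weight vector."""
    w = (image - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(known_weights - w, axis=1)))

# A slightly noisy copy of person 2 should still match person 2.
query = faces[2] + 0.01 * rng.random(img_dim)
print(identify(query))
```

The computational saving is exactly the point made above: matching happens in the small space of eigenface weights (here 4 numbers per face) rather than over every pixel.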
Features of the Proposed System:
Inter-ocular distance
Distance between the lips and the nose
Distance between the nose tip and the eyes
Distance between the lips and the line joining the two eyes
Eccentricity of the face
Ratio of the dimensions of the bounding box of the face
Width of the lips
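Two of these features, the eccentricity of the face and the bounding-box ratio, are simple to compute once a detector has located the face region. The sketch below uses hard-coded example measurements in place of a real face-detection step:

```python
import math

def eccentricity(major_axis, minor_axis):
    """Eccentricity of the ellipse approximating the face outline:
    e = sqrt(1 - b^2 / a^2), where a and b are the semi-axes."""
    a, b = major_axis / 2, minor_axis / 2
    return math.sqrt(1 - (b * b) / (a * a))

def bounding_box_ratio(width, height):
    """Ratio of the dimensions of the face's bounding box."""
    return width / height

# Example: a detected face region 120 px wide and 160 px tall.
print(bounding_box_ratio(120, 160))  # 0.75
print(eccentricity(160, 120))
```

Like the distance features, both values are dimensionless ratios, so they are unaffected by the scale of the input image.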
Analysis of Face Recognition Methods:
Face recognition can be broadly divided into two major types:
1) Face verification / Authentication
Face verification is a one-to-one match that compares a query face image against a template face image whose identity is being claimed. To evaluate verification performance, the verification rate (the rate at which legitimate users are granted access) is plotted against the false accept rate (the rate at which imposters are granted access); this plot is called the ROC curve. A good verification system should balance these two rates based on operational needs.
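One point on the ROC curve corresponds to one choice of acceptance threshold, and sweeping the threshold traces out the full curve. The similarity scores below are synthetic stand-ins for a real matcher's output:

```python
import numpy as np

# Similarity scores from genuine (same-person) and imposter pairs (synthetic).
genuine = np.array([0.9, 0.8, 0.85, 0.6, 0.95])    # legitimate users
imposter = np.array([0.3, 0.5, 0.2, 0.65, 0.4])    # imposters

def roc_point(threshold):
    """One ROC point: fraction of genuine pairs accepted vs.
    fraction of imposter pairs accepted at this threshold."""
    verification_rate = np.mean(genuine >= threshold)   # legitimate users granted access
    false_accept_rate = np.mean(imposter >= threshold)  # imposters granted access
    return verification_rate, false_accept_rate

# Sweeping the threshold traces out the ROC curve.
for t in (0.4, 0.55, 0.7):
    print(t, roc_point(t))
```

Raising the threshold lowers both rates at once, which is exactly the operational trade-off the text describes.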
2) Face identification / Recognition
Face identification is a one-to-many matching process that compares a query face image against all the template images in a face database to determine the identity of the query face. The test image is identified by locating the image in the database that has the highest similarity with it. The identification process is a “closed” test, which means the sensor takes an observation of an individual who is known to be in the database. The test subject’s features are compared to the other features in the system’s database, and a similarity score is found for each comparison. These similarity scores are then numerically ranked in descending order. The percentage of times that the highest similarity score is the correct match for all individuals is referred to as the “top match score.” If any of the top n similarity scores corresponds to the test subject, it is considered a correct match in terms of the cumulative match, and the percentage of times one of those similarity scores is the correct match for all individuals is referred to as the “cumulative match score.” The cumulative match score curve plots rank n versus the percentage of correct identifications, where rank n is the number of top similarity scores reported.
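These rank-based metrics can be sketched directly. In the toy score matrix below, row i holds test subject i's similarity scores against every database identity, with identity i being the correct match (all values synthetic):

```python
import numpy as np

scores = np.array([
    [0.9, 0.2, 0.3],   # subject 0: the correct identity has the top score
    [0.4, 0.5, 0.6],   # subject 1: the correct identity is ranked 3rd
    [0.1, 0.8, 0.7],   # subject 2: the correct identity is ranked 2nd
])

def cumulative_match_score(scores, rank_n):
    """Fraction of subjects whose correct identity appears among the
    rank_n highest similarity scores (rank_n = 1 is the top match score)."""
    order = np.argsort(-scores, axis=1)          # identities sorted best-first
    correct = np.arange(scores.shape[0])         # true identity of each subject
    hits = [correct[i] in order[i, :rank_n] for i in range(len(correct))]
    return np.mean(hits)

print(cumulative_match_score(scores, 1))  # top match score
print(cumulative_match_score(scores, 2))
print(cumulative_match_score(scores, 3))  # rank 3 always succeeds in a closed test
```

Plotting `cumulative_match_score` against rank n for n = 1, 2, 3 gives the cumulative match score curve described above.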
Beyond verification and identification, the watch list is a third, open-universe test: the test individual may or may not be in the system database. That person is compared to the others in the system’s database, and a similarity score is reported for each comparison. These similarity scores are then numerically ranked so that the highest similarity score comes first. If a similarity score is higher than a preset threshold, an alarm is raised, meaning the system believes the individual is in its database. There are two main quantities of interest for watch-list applications. The first is the percentage of times the system raises the alarm and correctly identifies a person on the watch list; this is called the detection and identification rate. The second is the percentage of times the system raises the alarm for an individual who is not on the watch list or in the database; this is called the false alarm rate.
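The two watch-list metrics reduce to counting alarms over two groups of trials. In this sketch, each trial records the highest similarity score reported for one test individual and, for watch-list subjects, whether the top match was the correct identity (all values synthetic):

```python
import numpy as np

threshold = 0.7  # preset alarm threshold

# (top similarity score, top match is the correct identity)
on_list_trials = [(0.9, True), (0.8, True), (0.6, True)]       # watch-list subjects
off_list_trials = [(0.75, False), (0.3, False), (0.5, False)]  # people not in the database

# Detection and identification rate: alarm raised AND correct identity.
det_id_rate = np.mean([s >= threshold and ok for s, ok in on_list_trials])

# False alarm rate: alarm raised for someone not on the watch list.
false_alarm_rate = np.mean([s >= threshold for s, _ in off_list_trials])

print(det_id_rate, false_alarm_rate)
```

As with verification, the threshold trades the two rates against each other: a higher threshold cuts false alarms but also misses more watch-list subjects.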
To identify a human face, this system must account for some very important sources of variation in the face, which are given below.
1) Head pose
2) Facial Expression
3) Facial Hair
4) Occlusion / Accessories
Finally, there are two major approaches to face recognition image analysis: model-based face recognition and appearance-based face recognition.