Eigenface

An eigenface (/ˈaɪɡən-/ EYE-gən-) is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition.[1]

The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification.[2][3]

The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images.

Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features.

In 1991 M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition.

Face images usually occupy a high-dimensional space, and conventional principal component analysis was intractable on such data sets.

Information is lost by projecting the image on a subset of the eigenvectors, but losses are minimized by keeping those eigenfaces with the largest eigenvalues.

In practical applications, most faces can typically be identified using a projection on between 100 and 150 eigenfaces, so that most of the eigenvectors (10,000 of them for 100 × 100-pixel images) can be discarded.
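The basic decomposition can be sketched with NumPy via the singular value decomposition; the image size, number of images, and the choice of 150 retained eigenfaces below are illustrative assumptions, with random data standing in for real face images.

```python
import numpy as np

# Illustrative assumption: 400 training images of 64x64 pixels,
# each flattened to a 4096-dimensional vector (random stand-in data).
rng = np.random.default_rng(0)
faces = rng.random((400, 64 * 64))

# Center the data by subtracting the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The SVD of the centered data matrix yields the eigenfaces (rows of Vt),
# ordered by decreasing singular value, i.e. by decreasing eigenvalue
# of the covariance matrix.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep only the leading eigenfaces; the rest are discarded.
k = 150
eigenfaces = Vt[:k]                               # shape (150, 4096)

# Project a face onto the eigenface basis and reconstruct it;
# the reconstruction error is minimized for this choice of basis.
weights = eigenfaces @ (faces[0] - mean_face)     # 150 coefficients
reconstruction = mean_face + eigenfaces.T @ weights
```

Keeping the eigenfaces with the largest eigenvalues is exactly the truncation that minimizes the mean squared reconstruction error for a given number of components.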

Note that although the covariance matrix S generates many eigenfaces, only a fraction of those are needed to represent the majority of the faces.

Performing PCA directly on the covariance matrix of the images is often computationally infeasible.

If the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily as follows.
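One way to sketch this trick (not the original authors' code): if the centered data matrix A has N rows (images) and d columns (pixels) with N < d, the eigenvectors of the small N × N matrix A Aᵀ can be computed instead of those of the huge d × d covariance matrix, because if A Aᵀ v = λ v then Aᵀ A (Aᵀ v) = λ (Aᵀ v). The sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_pixels = 50, 10_000          # N << d, illustrative sizes
A = rng.random((n_images, n_pixels))
A -= A.mean(axis=0)                      # center the images

# Eigendecompose the small N x N matrix instead of the d x d covariance.
small = A @ A.T                          # shape (50, 50)
eigvals, V = np.linalg.eigh(small)       # eigenvalues in ascending order

# Map each eigenvector v of A A^T to an eigenvector A^T v of A^T A,
# sort so the largest eigenvalues come first, and normalize.
order = np.argsort(eigvals)[::-1]
eigenfaces = (A.T @ V[:, order]).T       # shape (50, 10000)
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
```

This yields at most N nonzero-eigenvalue eigenfaces, which is all the centered data can support anyway.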

As eigenface is primarily a dimension reduction method, a system can represent many subjects with a relatively small set of data.

When a new face is presented to the system for classification, its own weights are found by projecting the image onto the collection of eigenfaces.

A nearest-neighbour method is a simple approach to this: compute the Euclidean distance between the new face's weight vector and the stored weight vector of each known subject, and classify the face as the subject at minimum distance.

Many modern approaches still use principal component analysis as a means of dimension reduction or to form basis images for different modes of variation.

By discarding the first three eigenfaces, which mainly capture variation in illumination rather than identity, the accuracy of face recognition can be boosted considerably, but other methods such as Fisherface and the linear subspace method still have the advantage.

Figure: Some eigenfaces from AT&T Laboratories Cambridge.