Face Recognition Grand Challenge
The Face Recognition Grand Challenge was conducted in an effort to promote and advance face recognition technology. It was the successor of the Face Recognition Vendor Test.
Overview
The primary goal of the FRGC was to promote and advance face recognition technology designed to support existing face recognition efforts in the U.S. Government. The FRGC developed new face recognition techniques and prototype systems while increasing performance by an order of magnitude. The FRGC was open to face recognition researchers and developers in companies, academia, and research institutions, and it ran from May 2004 to March 2006.

The FRGC consisted of progressively difficult challenge problems. Each challenge problem consisted of a data set of facial images and a defined set of experiments. One of the impediments to developing improved face recognition had been the lack of data; the FRGC challenge problems included sufficient data to overcome this impediment. The defined experiments helped researchers and developers make measurable progress toward the new performance goals.
There were three main contenders for improving face recognition algorithms: high resolution images, three-dimensional face recognition, and new preprocessing techniques. The FRGC pursued all three simultaneously and assessed the merit of each. Face recognition systems of the time were designed to work on relatively small still facial images. The traditional measure of the size of a face in an image is the number of pixels between the centers of the eyes. In typical images of that era there were 40 to 60 pixels between the centers of the eyes; in the FRGC, high resolution images have 250 pixels between the centers of the eyes on average. The FRGC facilitated the development of new algorithms that take advantage of the additional information inherent in high resolution images.
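As an illustration of the face-size measure described above, the sketch below computes the pixel distance between two eye centers. The coordinates are invented for the example, and the 40 to 60 versus 250 pixel figures are simply the values quoted above, not thresholds from the FRGC protocol.

```python
# Illustrative sketch: measure face size as the pixel distance between eye centers.
# Coordinates below are made up; only the general magnitudes come from the text.
import math

def interocular_distance(left_eye, right_eye):
    """Return the distance in pixels between the two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.hypot(dx, dy)

# A typical legacy still image: roughly 40-60 pixels between the eyes.
print(interocular_distance((210, 305), (262, 307)))     # ~52 pixels

# An FRGC high resolution still: roughly 250 pixels between the eyes.
print(interocular_distance((880, 1210), (1130, 1214)))  # ~250 pixels
```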
Three-dimensional face recognition algorithms identify faces from the 3D shape of a person's face. In current face recognition systems, changes in lighting and pose of the face reduce performance. Because the shape of faces is not affected by changes in lighting or pose, 3D face recognition has the potential to improve performance under these conditions.
In the years preceding the FRGC, advances in computer graphics and computer vision produced new methods for modeling lighting and pose changes in facial imagery. These advances led to new algorithms that automatically correct for lighting and pose in a facial image before it is processed by a face recognition system. The preprocessing portion of the FRGC measured the impact of such preprocessing algorithms on recognition performance.
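The sketch below only illustrates where such a preprocessing step sits relative to the recognition system. It uses plain histogram equalization as a crude stand-in for lighting correction; the FRGC evaluated far more sophisticated normalization algorithms, and the function names here are purely illustrative.

```python
# Sketch of the "preprocess, then recognize" pipeline described above.
# Histogram equalization is a simple stand-in for a lighting-correction step,
# not one of the algorithms evaluated in the FRGC.
import numpy as np

def equalize_illumination(image):
    """Histogram-equalize an 8-bit grayscale image (a crude lighting normalization)."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # rescale CDF to [0, 1]
    return (cdf[image] * 255).astype(np.uint8)

def recognize(image):
    ...  # placeholder for whatever face recognition system follows

def recognize_with_preprocessing(raw_image):
    # Correct the image first, then hand it to the recognition system.
    return recognize(equalize_illumination(raw_image))
```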
The FRGC improved the capabilities of automatic face recognition systems through experimentation with clearly stated goals and challenge problems. The challenge problems gave researchers and developers a concrete framework within which to develop new algorithms and systems that meet the FRGC goals.
Structure of the Face Recognition Grand Challenge
The FRGC is structured around challenge problems designed to push researchers toward the FRGC performance goal. Three aspects of the FRGC were new to the face recognition community. The first aspect is the size of the FRGC in terms of data: the FRGC data set contains 50,000 recordings. The second aspect is the complexity of the FRGC: previous face recognition data sets had been restricted to still images, whereas the FRGC consists of three modes:
- high resolution still images
- 3D images
- multiple still images of a person.
The third aspect is the infrastructure, provided by the Biometric Experimentation Environment (BEE), a common framework for performing and scoring the FRGC experiments.
The FRGC Data Set
The FRGC data distribution consists of three parts. The first is the FRGC data set. The second is the Biometric Experimentation Environment (BEE), whose distribution includes all the data sets needed to perform and score the six experiments. The third is a set of baseline algorithms for experiments 1 through 4. With all three components, it is possible to run experiments 1 through 4 end to end, from processing the raw images to producing receiver operating characteristic (ROC) curves.

The data for the FRGC consists of 50,000 recordings divided into training and validation partitions. The training partition is designed for training algorithms; the validation partition is for assessing the performance of an approach in a laboratory setting. The validation partition consists of data from 4,003 subject sessions. A subject session is the set of all images of a person taken each time that person's biometric data is collected, and it consists of four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting and are full frontal facial images taken under two lighting conditions and with two facial expressions. The uncontrolled images were taken under varying illumination conditions, e.g., in hallways, atriums, or outdoors; each set of uncontrolled images contains two expressions, smiling and neutral. The 3D image was taken under controlled illumination conditions and consists of both a range image and a texture image, acquired with a Minolta Vivid 900/910 series sensor.
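As a rough illustration, one subject session as described above could be represented by a record like the following. The class names, field names, and file names are invented for the example and do not reflect the actual layout of the FRGC or BEE distributions.

```python
# Minimal sketch of one FRGC subject session as described in the text.
# All identifiers and file names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class ThreeDImage:
    range_file: str    # range (shape) channel
    texture_file: str  # registered texture channel

@dataclass
class SubjectSession:
    subject_id: str
    controlled_stills: List[str]    # four studio images: two lightings x two expressions
    uncontrolled_stills: List[str]  # two images taken in hallways, atriums, or outdoors
    scan_3d: ThreeDImage            # one 3D image acquired under controlled illumination

session = SubjectSession(
    subject_id="example_subject",
    controlled_stills=["c1.jpg", "c2.jpg", "c3.jpg", "c4.jpg"],
    uncontrolled_stills=["u1.jpg", "u2.jpg"],
    scan_3d=ThreeDImage(range_file="scan_range.abs", texture_file="scan_texture.ppm"),
)
```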
The FRGC defines six experiments. In experiment 1, the gallery consists of a single controlled still image of a person and each probe consists of a single controlled still image. Experiment 1 is the control experiment. Experiment 2 studies the effect of using multiple still images of a person on performance. In experiment 2, each biometric sample consists of the four controlled images of a person taken in a subject session: the gallery is composed of four images of each person, all taken in the same subject session, and likewise each probe consists of four images of a person.
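Experiment 2 specifies the composition of the four-image samples but leaves it to the participant to decide how the images are combined. The sketch below shows one simple, assumed strategy: averaging the pairwise similarity scores between a four-image gallery sample and a four-image probe sample. The cosine score and random feature vectors are placeholders for whatever a real algorithm would compute.

```python
# Sketch: comparing one four-image sample against another, as in experiment 2.
# Averaging pairwise scores is just one possible fusion rule, not the FRGC's.
import numpy as np

def sample_similarity(gallery_images, probe_images, score_fn):
    """Average the pairwise similarity scores between two multi-image samples."""
    scores = [score_fn(g, p) for g in gallery_images for p in probe_images]
    return float(np.mean(scores))

def cosine_score(a, b):
    # Toy score: cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
gallery_sample = [rng.normal(size=128) for _ in range(4)]  # four controlled images
probe_sample = [rng.normal(size=128) for _ in range(4)]    # four controlled images
print(sample_similarity(gallery_sample, probe_sample, cosine_score))
```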
Experiment 3 measures the performance of 3D face recognition. In experiment 3, the gallery and probe set consist of 3D images of a person. Experiment 4 measures recognition performance from uncontrolled images. In experiment 4, the gallery consists of a single controlled still image, and the probe set consists of a single uncontrolled still image.
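All of the experiments share this gallery-versus-probe structure, and the BEE scores them by comparing gallery entries against probe entries. The sketch below computes a full similarity matrix and reads off a verification-rate and false-accept-rate operating point; it mirrors the general idea of similarity-matrix scoring rather than the BEE's exact procedure, so treat the thresholding logic as an assumption.

```python
# Sketch of gallery-versus-probe scoring in the style of the FRGC experiments.
# Every (gallery, probe) pair is scored, then an operating point is read off.
import numpy as np

def similarity_matrix(gallery_feats, probe_feats, score_fn):
    """Score every (gallery, probe) pair."""
    return np.array([[score_fn(g, p) for p in probe_feats] for g in gallery_feats])

def verification_point(sim, gallery_ids, probe_ids, threshold):
    """Verification rate and false accept rate at a single similarity threshold."""
    match = np.equal.outer(np.asarray(gallery_ids), np.asarray(probe_ids))
    accepted = sim >= threshold
    verification_rate = accepted[match].mean()    # true matches accepted
    false_accept_rate = accepted[~match].mean()   # non-matches accepted
    return verification_rate, false_accept_rate
```

Sweeping the threshold over the range of observed scores traces out the ROC curve mentioned earlier.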
Experiments 5 and 6 examine matching 3D images against 2D images. In both experiments, the gallery consists of 3D images. In experiment 5, each probe is a single controlled still image; in experiment 6, each probe is a single uncontrolled still image.