Conference

IEEE International Conference on Automatic Face and Gesture Recognition 

About: IEEE International Conference on Automatic Face and Gesture Recognition is an academic conference. It publishes mainly in the areas of Facial recognition system and Face detection. Over its lifetime, the conference has published 253 papers, which have received 32,404 citations.


Papers
Proceedings ArticleDOI
26 Mar 2000
TL;DR: The problem space for facial expression analysis is described, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior.
Abstract: Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.

2,705 citations

Proceedings ArticleDOI
14 Apr 1998
TL;DR: The results show that it is possible to construct a facial expression classifier with Gabor coding of the facial images as the input stage, and that the Gabor representation shows a significant degree of psychological plausibility, a design feature which may be important for human-computer interfaces.
Abstract: A method for extracting information about facial expressions from images is presented. Facial expression images are coded using a multi-orientation multi-resolution set of Gabor filters which are topographically ordered and aligned approximately with the face. The similarity space derived from this representation is compared with one derived from semantic ratings of the images by human observers. The results show that it is possible to construct a facial expression classifier with Gabor coding of the facial images as the input stage. The Gabor representation shows a significant degree of psychological plausibility, a design feature which may be important for human-computer interfaces.
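A minimal sketch of the Gabor-coding idea described in this abstract: a small bank of Gabor filters at several orientations and wavelengths is convolved with an aligned face image, and pooled response magnitudes form a feature vector that could feed a classifier. The kernel size, wavelengths, pooling, and the random stand-in image are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch: multi-orientation, multi-resolution Gabor coding of a face image.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian-windowed sinusoid at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def gabor_code(face, wavelengths=(4, 8, 16), n_orientations=8):
    """Concatenate mean filter-response magnitudes over a small Gabor bank."""
    features = []
    for lam in wavelengths:                  # multiple spatial resolutions
        for k in range(n_orientations):      # multiple orientations
            theta = k * np.pi / n_orientations
            kern = gabor_kernel(size=31, wavelength=lam, theta=theta, sigma=0.56 * lam)
            response = convolve2d(face, kern, mode="same", boundary="symm")
            features.append(np.abs(response).mean())
    return np.array(features)

# Toy usage: a random image standing in for an aligned face crop.
face = np.random.rand(128, 128)
print(gabor_code(face).shape)   # (24,) = 3 wavelengths x 8 orientations
```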

2,100 citations

Proceedings ArticleDOI
20 May 2002
TL;DR: Between October 2000 and December 2000, a database of over 40,000 facial images of 68 people was collected, using the CMU 3D Room to image each person across 13 different poses, under 43 different illumination conditions, and with four different expressions.
Abstract: Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database.

1,697 citations

Proceedings ArticleDOI
Ming-Hsuan Yang
20 May 2002
TL;DR: Experimental results show that kernel methods provide better representations and achieve lower error rates for face recognition than classical algorithms such as Eigenface, Fisherface, ICA, and Support Vector Machine.
Abstract: Principal Component Analysis and Fisher Linear Discriminant methods have demonstrated their success in face detection, recognition and tracking. The representations in these subspace methods are based on second order statistics of the image set, and do not address higher order statistical dependencies such as the relationships among three or more pixels. Recently Higher Order Statistics and Independent Component Analysis (ICA) have been used as informative representations for visual recognition. In this paper, we investigate the use of Kernel Principal Component Analysis and Kernel Fisher Linear Discriminant for learning low dimensional representations for face recognition, which we call Kernel Eigenface and Kernel Fisherface methods. While Eigenface and Fisherface methods aim to find projection directions based on second order correlation of samples, Kernel Eigenface and Kernel Fisherface methods provide generalizations which take higher order correlations into account. We compare the performance of kernel methods with classical algorithms such as Eigenface, Fisherface, ICA, and Support Vector Machine (SVM) within the context of the appearance-based face recognition problem using two data sets where images vary in pose, scale, lighting and expression. Experimental results show that kernel methods provide better representations and achieve lower error rates for face recognition.
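A hedged sketch of the Kernel Eigenface idea from this abstract, using scikit-learn: faces are projected with kernel PCA rather than linear PCA and then classified by a nearest-neighbour rule in the reduced space. The Olivetti faces dataset, the RBF kernel parameters, and the 1-NN classifier are stand-in assumptions; the paper's own experiments and its Kernel Fisherface variant are not reproduced here.

```python
# Sketch: compare linear "Eigenface" projection with a kernel-PCA projection.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()   # stand-in face dataset (40 subjects)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

for name, reducer in [
    ("Eigenface (linear PCA)", PCA(n_components=60)),
    ("Kernel Eigenface (RBF kernel PCA)", KernelPCA(n_components=60, kernel="rbf", gamma=0.01)),
]:
    model = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```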

786 citations

Proceedings ArticleDOI
20 May 2002
TL;DR: This work describes a representation of gait appearance based on simple features such as moments extracted from orthogonal-view video silhouettes of human walking motion; despite its simplicity, the representation contains enough information to perform well on human identification and gender classification tasks.
Abstract: We describe a representation of gait appearance for the purpose of person identification and classification. This gait representation is based on simple features such as moments extracted from orthogonal view video silhouettes of human walking motion. Despite its simplicity, the resulting feature vector contains enough information to perform well on human identification and gender classification tasks. We explore the recognition behaviors of two different methods to aggregate features over time under different recognition tasks. We demonstrate the accuracy of recognition using gait video sequences collected over different days and times and under varying lighting environments. In addition, we show results for gender classification based on our gait appearance features using a support-vector machine.
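A minimal sketch of the gait-appearance idea in this abstract: per-frame moment features are computed from binary walking silhouettes, aggregated over time, and classified with a support-vector machine. The particular moments, the mean/std aggregation, and the random synthetic silhouettes are illustrative assumptions rather than the paper's exact feature set or data.

```python
# Sketch: silhouette moments aggregated over a walking sequence, then SVM.
import numpy as np
from sklearn.svm import SVC

def silhouette_moments(mask):
    """Area, centroid, and second-order central moments of one binary silhouette."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return np.array([area, cx, cy, mu20, mu02, mu11])

def sequence_features(frames):
    """Aggregate per-frame moments over time with their mean and standard deviation."""
    per_frame = np.stack([silhouette_moments(f) for f in frames])
    return np.concatenate([per_frame.mean(axis=0), per_frame.std(axis=0)])

# Toy usage with random silhouettes standing in for segmented walking sequences.
rng = np.random.default_rng(0)
sequences = [rng.random((30, 64, 48)) > 0.5 for _ in range(20)]   # 20 sequences of 30 frames
labels = np.repeat([0, 1], 10)                                    # e.g. gender labels
X = np.stack([sequence_features(seq) for seq in sequences])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```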

775 citations

Performance Metrics

No. of papers from the Conference in previous years

Year    Papers
2011    1
2004    1
2002    68
2000    86
1998    97