Journal ArticleDOI

Age Classification from Facial Images

01 Apr 1999 - Computer Vision and Image Understanding (Elsevier Science Inc.) - Vol. 74, Iss. 1, pp. 1-21
TL;DR: This is the first work on age classification and the first to successfully extract and use natural wrinkles; it also demonstrates that facial features are sufficient for a classification task, a finding important to the debate about what representations are appropriate for facial analysis.
About: This article was published in Computer Vision and Image Understanding on 1999-04-01. It has received 580 citations to date.
Citations
Journal ArticleDOI
TL;DR: An Automatic Face Analysis (AFA) system is proposed to analyze facial expressions based on both permanent and transient facial features in a nearly frontal-view face image sequence; multistate face and facial component models are proposed for tracking and modeling the various facial features.
Abstract: Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an automatic face analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AU) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AU and 10 lower face AU) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AU and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AU. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.
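
The abstract describes a pipeline in which detailed parametric descriptions of tracked facial features are fed to a classifier that outputs action-unit labels. The sketch below illustrates only that final stage, with a hypothetical feature-parameter vector, a toy AU set, and a scikit-learn multi-output classifier; none of these choices are taken from the AFA system itself.

    # Illustrative only: predict facial action units (AUs) from tracked feature
    # parameters. Feature meanings and AU labels are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    # Each row: one frame's parameters, e.g. [lip_height, lip_width, eye_opening,
    # brow_raise, furrow_depth] extracted by the tracking stage.
    X_train = rng.random((200, 5))                     # placeholder training data
    y_train = rng.integers(0, 2, size=(200, 3))        # presence of AU1, AU12, AU20 (toy)

    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_train, y_train)

    frame_params = np.array([[0.4, 0.7, 0.3, 0.8, 0.1]])   # one new tracked frame
    print(clf.predict(frame_params))                   # e.g. [[1 0 0]]: AU1 present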

1,773 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes a simple convolutional net architecture that can be used even when the amount of learning data is limited, and shows that learning representations with deep convolutional neural networks yields a significant increase in performance on these tasks.
Abstract: Automatic age and gender classification has become relevant to an increasing amount of applications, particularly since the rise of social platforms and social media. Nevertheless, performance of existing methods on real-world images is still significantly lacking, especially when compared to the tremendous leaps in performance recently reported for the related task of face recognition. In this paper we show that by learning representations through the use of deep-convolutional neural networks (CNN), a significant increase in performance can be obtained on these tasks. To this end, we propose a simple convolutional net architecture that can be used even when the amount of learning data is limited. We evaluate our method on the recent Adience benchmark for age and gender estimation and show it to dramatically outperform current state-of-the-art methods.
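
As a rough illustration of the kind of compact network the abstract argues for when training data is limited, the sketch below defines a small Keras CNN with an age-group head and a gender head. The input resolution, layer sizes, number of age groups, and the shared two-head design are assumptions for the sketch, not details reported by the paper.

    # A small CNN for age-group and gender classification (all sizes assumed).
    from tensorflow.keras import layers, Model

    inputs = layers.Input(shape=(227, 227, 3))               # aligned face crop
    x = layers.Conv2D(96, 7, strides=4, activation="relu")(inputs)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Conv2D(256, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Conv2D(384, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(3, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)                               # regularization for small datasets
    age = layers.Dense(8, activation="softmax", name="age_group")(x)   # e.g. 8 age bins
    gender = layers.Dense(2, activation="softmax", name="gender")(x)

    model = Model(inputs, [age, gender])
    model.compile(optimizer="adam",
                  loss={"age_group": "sparse_categorical_crossentropy",
                        "gender": "sparse_categorical_crossentropy"})
    model.summary()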

1,046 citations


Cites background or methods from "Age Classification from Facial Images"

  • ...Past approaches to estimating or classifying these attributes from face images have relied on differences in facial feature dimensions [29] or “tailored” face descriptors (e.g., [10, 15, 32])....

  • ...Before describing the proposed method we briefly review related methods for age and gender classification and provide a cursory overview of deep convolutional networks....

Journal ArticleDOI
TL;DR: This paper proposes an automatic age estimation method named AGES (AGing pattErn Subspace), which models the aging pattern, defined as the sequence of a particular individual's face images sorted in time order, by constructing a representative subspace.
Abstract: While recognition of most facial variations, such as identity, expression, and gender, has been extensively studied, automatic age estimation has rarely been explored. In contrast to other facial variations, aging variation presents several unique characteristics which make age estimation a challenging task. This paper proposes an automatic age estimation method named AGES (AGing pattErn Subspace). The basic idea is to model the aging pattern, which is defined as the sequence of a particular individual's face images sorted in time order, by constructing a representative subspace. The proper aging pattern for a previously unseen face image is determined by the projection in the subspace that can reconstruct the face image with minimum reconstruction error, while the position of the face image in that aging pattern will then indicate its age. In the experiments, AGES and its variants are compared with the limited existing age estimation methods (WAS and AAS) and some well-established classification methods (kNN, BP, C4.5, and SVM). Moreover, a comparison with human perception ability on age is conducted. It is interesting to note that the performance of AGES is not only significantly better than that of all the other algorithms, but also comparable to that of the human observers.
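
The selection-by-minimum-reconstruction-error idea at the core of the abstract can be illustrated with a plain PCA-style subspace in numpy. The sketch below ignores the specifics of AGES (how aging patterns are assembled, how missing ages are handled, how the age is read off from the position in the pattern) and only shows how a query is scored against candidate subspaces; all data and dimensions are toy assumptions.

    # Score a query face against candidate subspaces by reconstruction error and
    # keep the subspace that reconstructs it best.
    import numpy as np

    def fit_subspace(faces, k=10):
        """faces: (n_samples, n_pixels). Returns (mean, top-k principal axes)."""
        mean = faces.mean(axis=0)
        _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
        return mean, vt[:k]

    def reconstruction_error(face, mean, basis):
        coeffs = (face - mean) @ basis.T        # project into the subspace
        recon = mean + coeffs @ basis           # reconstruct from the projection
        return np.linalg.norm(face - recon)

    rng = np.random.default_rng(0)
    pattern_a = rng.normal(size=(40, 256))               # toy "aging pattern" A
    pattern_b = rng.normal(loc=2.0, size=(40, 256))      # toy "aging pattern" B
    subspaces = {"pattern_a": fit_subspace(pattern_a),
                 "pattern_b": fit_subspace(pattern_b)}

    query = rng.normal(loc=2.0, size=256)                # previously unseen face vector
    best = min(subspaces, key=lambda name: reconstruction_error(query, *subspaces[name]))
    print(best)                                          # subspace with the best fit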

912 citations


Cites background from "Age Classification from Facial Images"

  • ...Kwon and da Vitoria Lobo [11] proposed an age classification method based on well-controlled high-quality face images, which can classify faces into one of the three groups (babies, young adults, and senior adults)....

Journal ArticleDOI
TL;DR: A deep learning solution to age estimation from a single face image without the use of facial landmarks is proposed, and the IMDB-WIKI dataset, the largest public dataset of face images with age and gender labels, is introduced.
Abstract: In this paper we propose a deep learning solution to age estimation from a single face image without the use of facial landmarks and introduce the IMDB-WIKI dataset, the largest public dataset of face images with age and gender labels. While real age estimation research spans decades, the study of apparent age estimation, i.e., the age as perceived by other humans from a face image, is a recent endeavor. We tackle both tasks with our convolutional neural networks (CNNs) of VGG-16 architecture, which are pre-trained on ImageNet for image classification. We pose the age estimation problem as a deep classification problem followed by a softmax expected value refinement. The key factors of our solution are: deep learned models from large data, robust face alignment, and expected value formulation for age regression. We validate our methods on standard benchmarks and achieve state-of-the-art results for both real and apparent age estimation.
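
The "softmax expected value refinement" mentioned above amounts to treating the network's per-age probabilities as a distribution and reporting its expectation instead of the most likely class. A minimal numpy illustration, with an assumed 0-100 year class range and random stand-in logits:

    # Expected-value age refinement over softmax class probabilities.
    import numpy as np

    ages = np.arange(0, 101)                             # one class per year (assumed range)
    logits = np.random.default_rng(0).normal(size=101)   # stand-in for CNN outputs
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                 # softmax
    predicted_age = float((probs * ages).sum())          # E[age] rather than argmax
    print(round(predicted_age, 1))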

755 citations

References
Journal ArticleDOI

37,017 citations


"Age Classification from Facial Imag..." refers methods in this paper

  • ...The five ratios were recomputed after dropping the data evaluated as unfavorable due to facial expression or rotation of the head. The bimodal threshold for each ratio is calculated according to Otsu’s method [22]....

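The excerpt above applies Otsu's method [22] to pick a bimodal threshold on each facial ratio. A minimal numpy sketch of Otsu's criterion on a one-dimensional set of ratio values (the histogram binning and the toy data are assumptions):

    # Otsu's threshold for 1-D values: choose the cut that maximizes the
    # between-class variance of the two resulting groups.
    import numpy as np

    def otsu_threshold(values, bins=64):
        hist, edges = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        best_t, best_var = centers[0], -1.0
        for i in range(1, bins):
            w0, w1 = p[:i].sum(), p[i:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (p[:i] * centers[:i]).sum() / w0
            mu1 = (p[i:] * centers[i:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, centers[i]
        return best_t

    rng = np.random.default_rng(0)
    ratios = np.concatenate([rng.normal(0.8, 0.05, 50),   # toy "younger" ratios
                             rng.normal(1.1, 0.05, 50)])  # toy "older" ratios
    print(otsu_threshold(ratios))                         # falls between the two modes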

Journal ArticleDOI
TL;DR: This work uses snakes for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest, and uses scale-space continuation to enlarge the capture region surrounding a feature.
Abstract: A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.
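
For reference, the energy that such an active contour v(s) = (x(s), y(s)) minimizes is commonly written as an internal smoothness term plus image and constraint terms; the weights α(s), β(s) and the particular image energy are application choices:

    E_{\text{snake}} = \int_0^1 \Big[ \tfrac{1}{2}\big(\alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2\big) + E_{\text{image}}(v(s)) + E_{\text{con}}(v(s)) \Big]\, ds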

18,095 citations

Proceedings ArticleDOI
03 Jun 1991
TL;DR: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described.
Abstract: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.
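
The projection onto "face space" described in this abstract is principal component analysis on vectorized face images; below is a minimal numpy sketch of building eigenfaces and matching a probe by nearest neighbour in the projected space. Image size, number of eigenfaces, and the random stand-in data are assumptions.

    # Eigenfaces sketch: PCA on flattened faces, then nearest-neighbour matching
    # in the low-dimensional "face space".
    import numpy as np

    rng = np.random.default_rng(1)
    gallery = rng.normal(size=(30, 64 * 64))        # 30 known faces, 64x64, flattened
    labels = [f"person_{i}" for i in range(30)]

    mean_face = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean_face, full_matrices=False)
    eigenfaces = vt[:16]                            # keep 16 eigenfaces (assumed)

    def project(face):
        return (face - mean_face) @ eigenfaces.T    # coordinates in face space

    gallery_coords = project(gallery)
    probe = gallery[7] + 0.1 * rng.normal(size=64 * 64)    # noisy view of person_7
    distances = np.linalg.norm(gallery_coords - project(probe), axis=1)
    print(labels[int(np.argmin(distances))])        # expected: person_7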

5,489 citations


"Age Classification from Facial Imag..." refers background in this paper

  • ...In an attempt at recognizing facial expressions, Matsuno, Lee, and Tsuji [19] use potential nets, which undergo structural deformations at features such as the eyebrows, nose, and mouth. Based on the pattern of deformations, classification is achieved. Working in the other paradigm, Turk and Pentland [27] convert an N × N image of a face into a single vector of size N² by concatenating scan lines....

  • ...This recognition step uses the eigen analysis approach (see below) of Turk and Pentland and Kirby and Sirovich [15]....

  • ...M. A. Turk and A. P. Pentland, Face recognition using eigenfaces, in Proc....

Journal ArticleDOI
TL;DR: The use of natural symmetries (mirror images) in a well-defined family of patterns (human faces) is discussed within the framework of the Karhunen-Loeve expansion, which results in an extension of the data and imposes even and odd symmetry on the eigenfunctions of the covariance matrix.
Abstract: The use of natural symmetries (mirror images) in a well-defined family of patterns (human faces) is discussed within the framework of the Karhunen-Loeve expansion. This results in an extension of the data and imposes even and odd symmetry on the eigenfunctions of the covariance matrix, without increasing the complexity of the calculation. The resulting approximation of faces projected from outside of the data set onto this optimal basis is improved on average.
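
The mirror-augmentation step that the abstract describes can be sketched directly: reflect each training image left-right, pool both versions, and run the Karhunen-Loeve (PCA) step on the enlarged set, after which each eigenfunction is, up to numerical error, either even or odd under reflection. The image size and random stand-in data below are assumptions.

    # Augment faces with their mirror images before the Karhunen-Loeve / PCA step.
    import numpy as np

    rng = np.random.default_rng(2)
    faces = rng.normal(size=(50, 32, 32))              # toy stand-ins for face images
    mirrored = faces[:, :, ::-1]                       # left-right reflections
    data = np.concatenate([faces, mirrored]).reshape(100, -1)

    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    eigenfaces = vt.reshape(-1, 32, 32)

    # The leading eigenface should be (nearly) even or odd under reflection:
    lead = eigenfaces[0]
    even_part = np.linalg.norm(lead + lead[:, ::-1]) / 2
    odd_part = np.linalg.norm(lead - lead[:, ::-1]) / 2
    print(min(even_part, odd_part))                    # close to zero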

2,686 citations

Journal ArticleDOI
TL;DR: Two new algorithms for computer recognition of human faces, one based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second based on almost-gray-level template matching, are presented.
Abstract: Two new algorithms for computer recognition of human faces, one based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second based on almost-gray-level template matching, are presented. The results obtained for the testing sets show about 90% correct recognition using geometrical features and perfect recognition using template matching.
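
Of the two approaches compared here, the template-matching one can be illustrated with a normalized correlation score between a stored facial template and a probe patch of the same size; the toy arrays below are not the paper's data or pipeline.

    # Normalized correlation between a stored template and a probe patch:
    # values near 1 indicate a close match.
    import numpy as np

    def ncc(template, patch):
        t = template - template.mean()
        p = patch - patch.mean()
        return float((t * p).sum() / (np.linalg.norm(t) * np.linalg.norm(p)))

    rng = np.random.default_rng(3)
    eye_template = rng.normal(size=(16, 24))                   # stored "eye" template (toy)
    same_person = eye_template + 0.2 * rng.normal(size=(16, 24))
    other_person = rng.normal(size=(16, 24))

    print(ncc(eye_template, same_person))    # high, close to 1
    print(ncc(eye_template, other_person))   # near 0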

2,671 citations


Additional excerpts

  • ...R. Brunelli and T. Poggio, Face recognition: Features versus templates, IEEE PAMI 15, No. 10 (1993), 1042–1052....

  • ...Recently, Brunelli and Poggio [4] have compared the utility of the two paradigms described above, in the task of face recognition....
