
Showing papers on "Facial Action Coding System published in 1999"


Journal ArticleDOI
TL;DR: This paper explores and compares techniques for automatically recognizing facial actions in sequences of images and provides converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
Abstract: The Facial Action Coding System (FACS) is an objective method for quantifying facial movement in terms of component actions. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include: analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
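
As a hedged illustration of the best-performing representation above, the following minimal Python sketch (not the authors' implementation; filter sizes, wavelengths, and pooling are assumptions) builds a small Gabor filter bank and pools its magnitude responses into a feature vector:

import cv2
import numpy as np

def gabor_features(gray_face):
    """Pool magnitude responses of a small Gabor filter bank."""
    feats = []
    for lambd in (4, 8, 16):                          # wavelengths in pixels
        for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
            kern = cv2.getGaborKernel((21, 21), sigma=lambd / 2.0,
                                      theta=theta, lambd=lambd,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(np.float32(gray_face), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())  # coarse mean-magnitude pooling
    return np.array(feats)  # one 12-dimensional descriptor per face image

Descriptors computed this way for a labeled image set can then be fed to any standard classifier, e.g. a simple nearest-neighbor rule.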

1,086 citations


Journal ArticleDOI
TL;DR: In this article, the authors applied computer image analysis to the problem of automatically detecting facial actions in sequences of images and compared three approaches: holistic spatial analysis, explicit measurement of features such as wrinkles, and estimation of motion flow fields.
Abstract: Facial expressions provide an important behavioral measure for the study of emotion, cognitive processes, and social interaction. The Facial Action Coding System (Ekman & Friesen, 1978) is an objective method for quantifying facial movement in terms of component actions. We applied computer image analysis to the problem of automatically detecting facial actions in sequences of images. Three approaches were compared: holistic spatial analysis, explicit measurement of features such as wrinkles, and estimation of motion flow fields. The three methods were combined in a hybrid system that classified six upper facial actions with 91% accuracy. The hybrid system outperformed human nonexperts on this task and performed as well as highly trained experts. An automated system would make facial expression measurement more widely accessible as a research tool in behavioral science and investigations of the neural substrates of emotion.
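
A minimal sketch of the late-fusion idea behind such a hybrid system, assuming placeholder feature matrices stand in for the holistic, wrinkle, and motion measurements, with a logistic-regression scorer per stream (neither is the paper's actual choice):

import numpy as np
from sklearn.linear_model import LogisticRegression

def hybrid_classify(train_streams, y_train, test_streams):
    """Fuse feature streams by averaging per-stream class posteriors.

    train_streams / test_streams: lists of (n_samples, n_features)
    arrays, one per measurement type.
    """
    posteriors = []
    for X_train, X_test in zip(train_streams, test_streams):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        posteriors.append(clf.predict_proba(X_test))
    return np.mean(posteriors, axis=0).argmax(axis=1)  # fused decisions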

435 citations


Journal ArticleDOI
TL;DR: An automated method of facial display analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
Abstract: The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results with this automated system with those of manual FACS (Facial Action Coding System, Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
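
The tracking step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker, a modern stand-in for the hierarchical optical-flow algorithm used in the report; the window size and pyramid depth below are assumptions:

import cv2
import numpy as np

def track_points(frames, init_points):
    """Track facial feature points through a grayscale frame sequence."""
    pts = np.float32(init_points).reshape(-1, 1, 2)
    trajectory = [pts.reshape(-1, 2).copy()]
    for prev, nxt in zip(frames, frames[1:]):
        pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev, nxt, pts, None,
            winSize=(15, 15), maxLevel=3)  # 3-level image pyramid
        trajectory.append(pts.reshape(-1, 2).copy())
    return np.stack(trajectory)  # shape: (n_frames, n_points, 2)

The resulting trajectories, normalized for position, orientation, and scale, could then feed a discriminant function analysis (e.g. sklearn.discriminant_analysis.LinearDiscriminantAnalysis) as in the study.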

287 citations


Journal ArticleDOI
TL;DR: The present study demonstrated that the Child Facial Coding System (CFCS) serves as a valid measurement tool for persistent pain in children.
Abstract: Objective: The purposes of the study were threefold: (a) to determine whether a measurement system based on facial expression would be useful in the assessment of post-operative pain in young children; (b) to examine construct validity in terms of structure, consistency, and dynamics of the …

102 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between social motives, emotional feelings, and smiling, with a view to demonstrating that smiling is determined by both factors but in different ways.
Abstract: The goal of the present study was to examine the relationship between social motives, emotional feelings, and smiling, with a view to demonstrating that smiling is determined by both factors but in different ways. To vary social motives, the authors manipulated two aspects of social context. Pairs of friends performed either the same or a different task in either the same or a different room, whereas a control group participated in the experiment alone. To vary emotional feelings, participants viewed each of two film clips that differed with respect to the intensity of positive emotional feelings they evoked. Dependent variables included facial activity, as measured by the Facial Action Coding System (FACS), self-reported emotional feelings, and measures of social motives. As predicted, both emotional feelings and social motives affected facial activity. The relevance of the results for theories of facial displays is discussed.

102 citations


Proceedings ArticleDOI
08 Nov 1999
TL;DR: The Integrated System for Facial Expression Recognition (ISFER), which performs facial expression analysis from a still dual facial view image, demonstrates rather high concurrent validity with human coding of facial expressions using FACS and formal instructions in emotion signals.
Abstract: This paper discusses the Integrated System for Facial Expression Recognition (ISFER), which performs facial expression analysis from a still dual facial view image. The system consists of three major parts: a facial data generator, a facial data evaluator and a facial data analyser. While the facial data generator applies fairly conventional techniques for facial feature extraction, the rest of the system represents a novel way of performing a reliable identification of 30 different face actions and a multiple classification of expressions into the six basic emotion categories. An expert system has been utilised to convert low-level face geometry into high-level face actions, and these in turn into highest-level weighted emotion labels. The system evaluation results demonstrated rather high concurrent validity with human coding of facial expressions using FACS and formal instructions in emotion signals.
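
The geometry-to-actions-to-emotions chain can be illustrated with a toy rule base; the thresholds, action names, and weights below are invented for illustration and are not ISFER's actual rules:

def geometry_to_actions(g):
    """Map low-level geometry (ratios relative to neutral) to face actions."""
    actions = set()
    if g["brow_height"] > 1.1:         # brows noticeably raised
        actions.add("raised_brows")
    if g["mouth_corner_lift"] > 0.05:  # mouth corners pulled upward
        actions.add("mouth_corners_up")
    return actions

EMOTION_RULES = {  # face action -> weighted votes for emotion labels
    "raised_brows": [("surprise", 0.6), ("fear", 0.4)],
    "mouth_corners_up": [("happiness", 1.0)],
}

def actions_to_emotions(actions):
    """Accumulate weighted emotion labels from detected face actions."""
    scores = {}
    for action in actions:
        for emotion, weight in EMOTION_RULES.get(action, []):
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

For example, actions_to_emotions(geometry_to_actions({"brow_height": 1.2, "mouth_corner_lift": 0.0})) returns {"surprise": 0.6, "fear": 0.4}.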

55 citations


01 Jan 1999
TL;DR: In this paper, the authors compared the performance of the Gabor wavelet representation and the independent component representation for detecting facial actions in sequences of images and found that the ICA representation takes 90% less CPU time than the Gabor representation to compute.
Abstract: The Facial Action Coding System (FACS) (10) is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These methods include unsupervised learning techniques for finding basis images such as principal component analysis, independent component analysis and local feature analysis, and supervised learning techniques such as Fisher's linear discriminants. These data-driven bases are compared to Gabor wavelets, in which the basis images are predefined. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying twelve facial actions. Once the basis images are learned, the ICA representation takes 90% less CPU time than the Gabor representation to compute. The results provide evidence for the importance of using local image bases, high spatial frequencies, and statistical independence for classifying facial actions. Measurement of facial behavior at the level of detail of FACS provides information for detection of deceit. Applications to detection of deceit are discussed.
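
A minimal sketch of the data-driven-basis step, using scikit-learn's FastICA as a stand-in for the infomax ICA common in this line of work (the component count is an arbitrary assumption):

import numpy as np
from sklearn.decomposition import FastICA

def learn_ica_basis(face_rows, n_components=30):
    """face_rows: (n_images, n_pixels) matrix of flattened, aligned faces."""
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(face_rows)
    return ica  # ica.components_ holds the learned basis images

Encoding a new image is then a single matrix product (codes = ica.transform(new_rows)), consistent with the CPU-time advantage over convolving a large Gabor filter bank that the abstract reports.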

37 citations


Proceedings Article
29 Nov 1999
TL;DR: Techniques for automatically recognizing facial actions in sequences of images are explored and the best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying 12 facial actions.
Abstract: The Facial Action Coding System (FACS) (9) is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These methods include unsupervised learning techniques for finding basis images such as principal component analysis, independent component analysis and local feature analysis, and supervised learning techniques such as Fisher's linear discriminants. These data-driven bases are compared to Gabor wavelets, in which the basis images are predefined. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying 12 facial actions. The ICA representation employs 2 orders of magnitude fewer basis images than the Gabor representation and takes 90% less CPU time to compute for new images. The results provide converging support for using local basis images, high spatial frequencies, and statistical independence for classifying facial actions.

25 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A framework for embedded recognition of faces and facial expressions is described and a face model is constructed for each person in the database using video segments showing different facial expressions.
Abstract: A framework for embedded recognition of faces and facial expressions is described. Faces are modeled based on the appearances and positions of facial features. Hidden states are used to represent discrete facial expressions. A face model is constructed for each person in the database using video segments showing different facial expressions. Face recognition and facial expression recognition are carried out using Bayesian classification. In our current implementation, the face is divided into nine facial features grouped in four regions which are detected and tracked automatically in video segments. We report results on face and facial expression recognition using a video database of 18 people and six expressions.
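
A simplified sketch of the Bayesian classification described above: each (person, expression) label is modeled as a Gaussian over feature appearance/position vectors, and the maximum-posterior label is returned. Diagonal covariances and uniform priors are simplifying assumptions, and the paper's hidden-state machinery is not reproduced here:

import numpy as np
from scipy.stats import multivariate_normal

def fit_models(X_by_label):
    """X_by_label: {(person, expression): (n_samples, n_dims) array}."""
    return {label: (X.mean(axis=0), X.var(axis=0) + 1e-6)
            for label, X in X_by_label.items()}

def classify(models, x):
    """Return the (person, expression) pair with the highest posterior."""
    scores = {label: multivariate_normal.logpdf(x, mean=mu, cov=np.diag(var))
              for label, (mu, var) in models.items()}
    return max(scores, key=scores.get)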

22 citations


Journal ArticleDOI
TL;DR: Four facial expressions were found to be associated with true myocardial infarction: lowering the brow, pressing the lips, parting the lips, and turning the head left.

17 citations


Proceedings ArticleDOI
12 Oct 1999
TL;DR: A real-time image processing system is developed to calculate the distance changes of various feature points on the face and correlate them with fuzzy rules to infer the degree of six basic emotion factors, and a so-called muscle model process is adopted for effective image synthesis for facial expression.
Abstract: The aim of this study is to develop image processing methodologies for the recognition of human facial expression, to understand human emotion, and for the synthesis of facial expression, to express emotion by computer graphics. For recognition, a real-time image processing system is developed to calculate the distance changes of various feature points on the face and then correlate them with fuzzy rules to infer the degree of six basic emotion factors. For synthesis, a so-called muscle model is adopted: muscle contractions are simulated and animated as facial expressions, based on the Facial Action Coding System.
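
A toy sketch of the fuzzy-inference step, under invented breakpoints and rules: normalized feature-point distance changes are fuzzified with triangular membership functions, and each rule contributes a degree to an emotion factor:

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_emotions(mouth_widening, brow_raise):
    """Map two normalized distance changes to degrees of two emotions."""
    slight = tri(mouth_widening, 0.0, 0.1, 0.2)
    wide = tri(mouth_widening, 0.1, 0.3, 0.5)
    raised = tri(brow_raise, 0.0, 0.2, 0.4)
    return {
        "happiness": max(0.3 * slight, 1.0 * wide),  # wide mouth -> happy
        "surprise": min(wide, raised),  # wide mouth AND raised brows
    }

For example, infer_emotions(0.3, 0.2) yields {"happiness": 1.0, "surprise": 1.0}.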