
Showing papers on "Facial Action Coding System published in 2015"


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work introduces joint patch and multi-label learning (JPML), which leverages group sparsity, and reports that in four of five comparisons on three diverse datasets (CK+, GFT, and BP4D) JPML produced the highest average F1 scores.
Abstract: The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art.
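To make the group-sparsity idea concrete, the following sketch (not the authors' JPML solver) trains a multi-label logistic regression with a group-lasso-style proximal step that shrinks all AU weights of a facial patch together; the patch counts, feature dimensions and data are illustrative assumptions.

```python
# Hedged sketch: patch-level group sparsity for multi-label AU detection.
# This is NOT the authors' JPML solver; it only illustrates selecting a
# sparse subset of facial patches shared across AU labels.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_patches, patch_dim, n_aus = 200, 10, 8, 5   # illustrative sizes
X = rng.normal(size=(n_samples, n_patches * patch_dim))  # stacked patch features
Y = (rng.random(size=(n_samples, n_aus)) < 0.3).astype(float)  # AU labels (0/1)

W = np.zeros((n_patches * patch_dim, n_aus))  # one weight column per AU
lam, lr, n_iter = 0.05, 0.1, 300              # penalty strength, step size, iterations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(n_iter):
    # Gradient of the multi-label logistic loss (shared features, one column per AU).
    G = X.T @ (sigmoid(X @ W) - Y) / n_samples
    W -= lr * G
    # Proximal step: group soft-thresholding, one group per facial patch,
    # shrinking all AU weights of a patch together (group sparsity).
    for p in range(n_patches):
        rows = slice(p * patch_dim, (p + 1) * patch_dim)
        norm = np.linalg.norm(W[rows])
        W[rows] = 0.0 if norm < lr * lam else W[rows] * (1 - lr * lam / norm)

selected = [p for p in range(n_patches)
            if np.linalg.norm(W[p * patch_dim:(p + 1) * patch_dim]) > 0]
print("patches kept by the group-sparse penalty:", selected)
```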

188 citations


Journal ArticleDOI
05 Aug 2015-PLOS ONE
TL;DR: EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.
Abstract: Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high—and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

124 citations


Proceedings ArticleDOI
09 Nov 2015
TL;DR: This paper proposes a pair-wise learning strategy to automatically seek a set of facial image patches that are important for discriminating two particular emotion categories, and finds that these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs); the features extracted from such patches are therefore named AU-aware facial features.
Abstract: The Emotion Recognition in the Wild (EmotiW) Challenge has been held for three years. Previous winner teams primarily focused on designing specific deep neural networks or fusing diverse hand-crafted and deep convolutional features. They all neglected to explore the significance of the latent relations among changing features resulting from facial muscle motions. In this paper, we study this recognition challenge from the perspective of analyzing the relations among expression-specific facial features in an explicit manner. Our method has three key components. First, we propose a pair-wise learning strategy to automatically seek a set of facial image patches which are important for discriminating two particular emotion categories. We found that these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs); the features extracted from such facial patches are therefore named AU-aware facial features. Second, in each pair-wise task, we use an undirected graph structure, which takes learnt facial patches as individual vertices, to encode feature relations between any two learnt facial patches. Finally, a robust emotion representation is constructed by concatenating all task-specific graph-structured facial feature relations sequentially. Extensive experiments on the EmotiW 2015 Challenge testify to the efficacy of the proposed approach. Without using additional data, our final submissions achieved competitive results on both sub-challenges, including image-based static facial expression recognition (we obtained 55.38% recognition accuracy, outperforming the baseline of 39.13% by a margin of 16.25%) and audio-video based emotion recognition (we obtained 53.80% recognition accuracy, outperforming the baseline of 39.33% and the 2014 winner team's final result of 50.37% by margins of 14.47% and 3.43%, respectively).
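As a rough, hedged illustration of the graph-structured relation encoding (not the submission's actual feature pipeline), the sketch below treats a handful of hypothetical patch descriptors as graph vertices and concatenates simple pairwise edge statistics into one representation.

```python
# Hedged sketch: encode relations between learnt facial patches as an
# undirected graph and concatenate edge features into one representation.
# Patch descriptors and the patch count are illustrative assumptions, not
# the challenge submission's actual features.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_patches, dim = 6, 16
patch_feats = rng.normal(size=(n_patches, dim))  # one descriptor per learnt patch

edge_feats = []
for i, j in itertools.combinations(range(n_patches), 2):   # undirected edges
    a, b = patch_feats[i], patch_feats[j]
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    dist = float(np.linalg.norm(a - b))
    edge_feats.extend([cos, dist])   # simple pairwise relation statistics

representation = np.concatenate([patch_feats.ravel(), np.array(edge_feats)])
print("final emotion representation length:", representation.shape[0])
```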

96 citations


Journal ArticleDOI
TL;DR: This paper provides the first-ever evidence that computer software was more accurate in recognizing neutral faces than people were, and posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.
Abstract: Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e. smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.

64 citations


Journal ArticleDOI
TL;DR: A major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers is reported, suggesting automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
Abstract: Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
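For readers who want to reproduce the two reliability criteria on their own data, here is a hedged sketch computing the frame-by-frame Matthews correlation and a session-level intraclass correlation on AU occurrence proportions; the ICC variant (two-way random, single rater) and the simulated codes are assumptions, not the study's exact analysis.

```python
# Hedged sketch: the two reliability criteria reported above for one AU,
# computed from simulated manual vs. automated frame-level codes.
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(2)
n_participants, n_frames = 30, 1000
manual = rng.random((n_participants, n_frames)) < 0.1            # AU present / absent
auto = manual ^ (rng.random((n_participants, n_frames)) < 0.05)  # automated codes with some disagreement

# Frame-by-frame reliability: Matthews correlation pooled over all frames.
mcc = matthews_corrcoef(manual.ravel(), auto.ravel())

# Session-level reliability: ICC(2,1) on the proportion of time the AU occurred.
def icc_2_1(ratings):
    """ratings: (n_targets, k_raters) matrix; Shrout & Fleiss ICC(2,1)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

proportions = np.column_stack([manual.mean(axis=1), auto.mean(axis=1)])
print(f"frame-level MCC = {mcc:.2f}, proportion ICC(2,1) = {icc_2_1(proportions):.2f}")
```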

62 citations


Journal ArticleDOI
19 Mar 2015-PeerJ
TL;DR: It is shown that the production of a primate facial expression can also be sensitive to the attention of the play partner, and it is concluded that the orangutan playface is intentionally produced.
Abstract: Primate facial expressions are widely accepted as underpinned by reflexive emotional processes and not under voluntary control. In contrast, other modes of primate communication, especially gestures, are widely accepted as underpinned by intentional, goal-driven cognitive processes. One reason for this distinction is that production of primate gestures is often sensitive to the attentional state of the recipient, a phenomenon used as one of the key behavioural criteria for identifying intentionality in signal production. The reasoning is that modifying/producing a signal when a potential recipient is looking could demonstrate that the sender intends to communicate with them. Here, we show that the production of a primate facial expression can also be sensitive to the attention of the play partner. Using the orangutan (Pongo pygmaeus) Facial Action Coding System (OrangFACS), we demonstrate that facial movements are more intense and more complex when recipient attention is directed towards the sender. Therefore, production of the playface is not an automated response to play (or simply a play behaviour itself) and is instead produced flexibly depending on the context. If sensitivity to attentional stance is a good indicator of intentionality, we must also conclude that the orangutan playface is intentionally produced. However, a number of alternative, lower level interpretations for flexible production of signals in response to the attention of another are discussed. As intentionality is a key feature of human language, claims of intentional communication in related primate species are powerful drivers in language evolution debates, and thus caution in identifying intentionality is important.

40 citations


Journal ArticleDOI
01 May 2015-Pain
TL;DR: Results suggest that consciously applying emotion regulation strategies during a painful task can moderate both cognitively mediated and automatic expressions of pain.
Abstract: Although emotion regulation modulates the pain experience, inconsistencies have been identified regarding the impact of specific regulation strategies on pain. Our goal was to examine the effects of emotion suppression and cognitive reappraisal on automatic (i.e., nonverbal) and cognitively mediated (i.e., verbal) pain expressions. Nonclinical participants were randomized into either a suppression (n = 58), reappraisal (n = 51), or monitoring control (n = 42) condition. Upon arrival to the laboratory, participants completed the Emotion Regulation Questionnaire, to quantify self-reported suppression and reappraisal tendencies. Subsequently, they completed a thermal pain threshold and tolerance task. They were then provided with instructions to use, depending on their experimental condition, suppression, reappraisal, or monitoring strategies. Afterward, they were exposed to experimentally induced pain. Self-report measures of pain, anxiety, and tension were administered, and facial expressions, heart rate, and galvanic skin response were recorded. The Facial Action Coding System was used to quantify general and pain-related facial activity (i.e., we defined facial actions that occurred during at least 5% of pain stimulation periods as "pain-related actions"). Reappraisal and suppression induction led to reductions in nonverbal and verbal indices of pain. Moreover, self-reported tendencies to use suppression and reappraisal (as measured by the Emotion Regulation Questionnaire) did not interact with experimental condition in the determination of participants' responses. Results suggest that consciously applying emotion regulation strategies during a painful task can moderate both cognitively mediated (e.g., verbal) and automatic (e.g., facial activity) expressions of pain.
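The 5% criterion for labelling an action unit as pain-related translates directly into a thresholding step; the sketch below illustrates it on a simulated coding matrix (the AU list and occurrence rates are assumptions).

```python
# Hedged sketch of the selection rule described above: an AU counts as
# "pain-related" if it was coded during at least 5% of the pain
# stimulation periods. The coding matrix here is simulated, not study data.
import numpy as np

rng = np.random.default_rng(3)
au_labels = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU12", "AU43"]  # illustrative set
n_periods = 120                                                   # pain stimulation periods

# occurred[i, j] = True if AU j was coded at least once during period i
occurred = rng.random((n_periods, len(au_labels))) < rng.uniform(0.01, 0.2, len(au_labels))

rates = occurred.mean(axis=0)                        # fraction of periods with each AU
pain_related = [au for au, r in zip(au_labels, rates) if r >= 0.05]
print({au: round(float(r), 3) for au, r in zip(au_labels, rates)})
print("pain-related AUs (>= 5% of periods):", pain_related)
```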

39 citations


Journal ArticleDOI
TL;DR: The aim of this research is to detect facial expressions by observing the change of key features in an AAM using fuzzy logic; detection accuracy is shown to depend on the kind of emotion itself.

38 citations


Journal ArticleDOI
TL;DR: A framework based on Dynamic Bayesian Network (DBN) is proposed to systematically model the dynamic and semantic relationships among multilevel AU intensities and demonstrates the superiority of this method over single image-driven methods in AU intensity measurement.

36 citations


Journal ArticleDOI
TL;DR: This study is the first to explore the Duchenne smile in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with Anorexia Nervosa.
Abstract: A large body of research has associated Eating Disorders with difficulties in socio-emotional functioning and it has been argued that they may serve to maintain the illness. This study aimed to explore facial expressions of positive emotions in individuals with Anorexia Nervosa (AN) and Bulimia Nervosa (BN) compared to healthy controls (HC), through an examination of the Duchenne smile (DS), which has been associated with feelings of enjoyment, amusement and happiness (Ekman et al., 1990). Sixty participants (AN=20; BN=20; HC=20) were videotaped while watching a humorous film clip. The duration and intensity of DS were subsequently analyzed using the facial action coding system (FACS) (Ekman and Friesen, 2003). Participants with AN displayed DS for shorter durations than BN and HC participants, and their DS had lower intensity. In the clinical groups, lower duration and intensity of DS were associated with lower BMI, and use of psychotropic medication. The study is the first to explore DS in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with AN.

33 citations


Journal ArticleDOI
TL;DR: This paper examines the composition and perception of smiling behavior by Republican presidential candidates during the 2012 preprimary period by coding facial muscle activity at the microlevel using the Facial Action Coding System (FACS) to produce an inventory of politically relevant smile types.
Abstract: The smiles and affiliative expressions of presidential candidates are important for political success, allowing contenders to nonverbally connect with potential supporters and bond with followers. Smiles, however, are not unitary displays; they are multifaceted in composition and signaling intent due to variations in performance. With this in mind, we examine the composition and perception of smiling behavior by Republican presidential candidates during the 2012 preprimary period. In this paper we review literature concerning different smile types and the muscular movements that compose them from a biobehavioral perspective. We then analyze smiles expressed by Republican presidential candidates early in the 2012 primary season by coding facial muscle activity at the microlevel using the Facial Action Coding System (FACS) to produce an inventory of politically relevant smile types. To validate the subtle observed differences between smile types, we show viewers a series of short video clips to differentiate displays on the basis of their perceived reassurance, or social signaling. The discussion considers the implications of our findings in relation to political evaluation and communication efficacy.

Journal ArticleDOI
TL;DR: A new approach is developed by integrating blend shape interpolation (BSI) and facial action coding system (FACS) to create a realistic and expressive computer facial animation design that may contribute towards the development of virtual reality and game environment of computer aided graphics animation systems.
Abstract: The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural way of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors is still a challenging issue. A proposed facial expression carries the signature of happiness, sadness, anger or cheerfulness, etc. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, such as angry, happy, sad and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments of computer-aided graphics animation systems with realistic facial expressions of avatars.
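The core of blend shape interpolation is a weighted sum of per-target offsets from a neutral mesh; the sketch below illustrates that arithmetic with FACS-style activation weights. The mesh, targets and AU-to-blendshape names are illustrative assumptions, not the authors' rig.

```python
# Hedged sketch of blend shape interpolation (BSI) driven by FACS-style
# activations: deformed = neutral + sum_i w_i * (target_i - neutral).
# Mesh sizes and the AU-to-blendshape mapping are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_vertices = 500
neutral = rng.normal(size=(n_vertices, 3))                 # neutral face mesh (x, y, z)
targets = {                                                # sculpted extreme poses
    "AU12_lip_corner_puller": neutral + rng.normal(scale=0.02, size=neutral.shape),
    "AU6_cheek_raiser":       neutral + rng.normal(scale=0.02, size=neutral.shape),
    "AU4_brow_lowerer":       neutral + rng.normal(scale=0.02, size=neutral.shape),
}

def blend(weights):
    """Linear blend of per-target offsets; weights in [0, 1] per blendshape."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * (targets[name] - neutral)
    return face

happy = blend({"AU12_lip_corner_puller": 0.8, "AU6_cheek_raiser": 0.6})
angry = blend({"AU4_brow_lowerer": 0.9})
print("max vertex displacement (happy):", float(np.abs(happy - neutral).max()))
```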

Journal ArticleDOI
15 Sep 2015-PeerJ
TL;DR: Differences in use and form of some movements are found, most likely due to specializations in the communicative repertoire of each species, rather than morphological differences.
Abstract: Human and non-human primates exhibit facial movements or displays to communicate with one another. The evolution of form and function of those displays could be better understood through multispecies comparisons. Anatomically based coding systems (Facial Action Coding Systems: FACS) are developed to enable such comparisons because they are standardized and systematic and aid identification of homologous expressions underpinned by similar muscle contractions. To date, FACS has been developed for humans, and subsequently modified for chimpanzees, rhesus macaques, orangutans, hylobatids, dogs, and cats. Here, we wanted to test whether the MaqFACS system developed in rhesus macaques (Macaca mulatta) could be used to code facial movements in Barbary macaques (M. sylvanus), a species phylogenetically close to the rhesus macaques. The findings show that the facial movement capacity of Barbary macaques can be reliably coded using the MaqFACS. We found differences in use and form of some movements, most likely due to specializations in the communicative repertoire of each species, rather than morphological differences.

Journal ArticleDOI
TL;DR: It is shown that healthy adults can discriminate different negative emotions, including pain, expressed by avatars at varying intensities, and there is evidence that masking part of an avatar's face does not prevent the detection of different levels of pain.
Abstract: Empathy is a multifaceted emotional and mental faculty that is often found to be affected in a great number of psychopathologies, such as schizophrenia, yet it remains very difficult to measure in an ecological context. The challenge stems partly from the complexity and fluidity of this social process, but also from its covert nature. One powerful tool to enhance experimental control over such dynamic social interactions has been the use of avatars in virtual reality (VR); information about an individual in such an interaction can be collected through the analysis of his or her neurophysiological and behavioral responses. We have developed a unique platform, the Empathy-Enhancing Virtual Evolving Environment (EEVEE), which is built around three main components: (1) different avatars capable of expressing feelings and emotions at various levels based on the Facial Action Coding System (FACS); (2) systems for measuring the physiological responses of the observer (heart and respiration rate, skin conductance, gaze and eye movements, facial expression); and (3) a multimodal interface linking the avatar's behavior to the observer's neurophysiological response. In this article, we provide a detailed description of the components of this innovative platform and validation data from the first phases of development. Our data show that healthy adults can discriminate different negative emotions, including pain, expressed by avatars at varying intensities. We also provide evidence that masking part of an avatar's face (top or bottom half) does not prevent the detection of different levels of pain. This innovative and flexible platform provides a unique tool to study and even modulate empathy in a comprehensive and ecological manner in various populations, notably individuals suffering from neurological or psychiatric disorders.

Journal ArticleDOI
TL;DR: This paper investigated the verbal and facial responses of 20 gelotophobes and 20 non-gelotophobes towards videos of people recalling memories of laughter-eliciting positive emotions (amusement, relief, schadenfreude, tactile pleasure).

Journal ArticleDOI
TL;DR: Assessing facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS) delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
Abstract: The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 186 ± 13 and during motion 379 ± 43. Healthy subjects showed a significant AU asymmetry score of 21 ± 11% and there was no significant difference from patients (p = 0.128). At the initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
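The abstract does not spell out how the asymmetry score and the affected-to-healthy ratio are computed, so the sketch below only illustrates one plausible reading (percentage difference in activated AUs between hemifaces, and a simple count ratio); all numbers are made up for illustration.

```python
# Hedged sketch: the abstract does not give the exact formulas, so this
# assumes a simple percentage asymmetry between the two hemifaces and an
# affected/healthy ratio of activated AU counts. Counts are illustrative.
def asymmetry_score(n_left, n_right):
    """Percent difference in activated AUs between facial sides (assumed definition)."""
    return 100.0 * abs(n_left - n_right) / max(n_left, n_right, 1)

def affected_healthy_ratio(n_affected, n_healthy):
    return n_affected / max(n_healthy, 1)

baseline = {"healthy_side": 9, "paralyzed_side": 3}    # activated AUs at first exam
follow_up = {"healthy_side": 9, "paralyzed_side": 7}   # activated AUs months later

for label, exam in [("baseline", baseline), ("follow-up", follow_up)]:
    print(label,
          "asymmetry %.1f%%" % asymmetry_score(exam["healthy_side"], exam["paralyzed_side"]),
          "ratio %.2f" % affected_healthy_ratio(exam["paralyzed_side"], exam["healthy_side"]))
```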

Journal ArticleDOI
TL;DR: In this paper, facial responses and ratings to contempt and joy were investigated in individuals with or without gelotophobia (fear of being laughed at) in a paradigm facilitating smile misattribution.
Abstract: In a paradigm facilitating smile misattribution, facial responses and ratings to contempt and joy were investigated in individuals with or without gelotophobia (fear of being laughed at). Participants from two independent samples (N1 = 83, N2 = 50) rated the intensity of eight emotions in 16 photos depicting joy, contempt, and different smiles. Facial responses were coded by the Facial Action Coding System in the second study. Compared with non-fearful individuals, gelotophobes rated joy smiles as less joyful and more contemptuous. Moreover, gelotophobes showed less facial joy and more contempt markers. The contempt ratings were comparable between the two groups. Looking at the photos of smiles lifted the positive mood of nongelotophobes, whereas gelotophobes did not experience an increase. We hypothesize that the interpretation bias of “joyful faces hiding evil minds” (i.e., being also contemptuous) and exhibiting less joy facially may complicate social interactions for gelotophobes and serve as a maintaining factor of gelotophobia.

Journal ArticleDOI
TL;DR: The facial features that are relevant for the observer in the identification of the expression of pain remain largely unknown despite the strong medical impact that misjudging pain can have on patients’ well‐being.

Proceedings ArticleDOI
04 May 2015
TL;DR: The first-ever ultra large-scale clustering of facial events extracted from over 1.5 million facial videos collected while individuals from over 94 countries responded to one of more than 8,000 online videos is presented.
Abstract: Facial behavior contains rich non-verbal information. However, to date studies have typically been limited to the analysis of a few hundred or thousand video sequences. We present the first-ever ultra large-scale clustering of facial events extracted from over 1.5 million facial videos collected while individuals from over 94 countries responded to one of more than 8,000 online videos. We believe this is the first example of what might be described as "big data" analysis in facial expression research. Automated facial coding was used to quantify eyebrow raise (AU2), eyebrow lowerer (AU4) and smile behaviors in the 700,000,000+ frames. Facial "events" were extracted and defined by a set of temporal features and then clustered using the k-means clustering algorithm. Verifying the observations in each cluster against human-coded data, we were able to identify reliable clusters of facial events with different dynamics (e.g., fleeting vs. sustained smiles, and rapid-offset vs. slow-offset smiles). These events provide a way of summarizing behaviors that occur without prescribing their properties. We examined how these nuanced facial events were tied to consumer behavior. We found that smile events, particularly those with high peaks, were much more likely to occur during viral ads. This data is cross-cultural; we also examine the prevalence of different events across regions of the globe.
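A hedged sketch of the clustering step described above: each facial event is summarized by a few temporal features and grouped with k-means. The feature set, the number of clusters and the simulated events are assumptions, not the study's actual extraction pipeline.

```python
# Hedged sketch of the clustering step: each facial "event" is summarized
# by a few temporal features and grouped with k-means. Feature names,
# event extraction, and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_events = 5000
# Assumed per-event descriptors: duration (s), peak intensity, onset slope, offset slope.
events = np.column_stack([
    rng.gamma(2.0, 1.0, n_events),          # duration
    rng.uniform(0.1, 1.0, n_events),        # peak intensity
    rng.uniform(0.05, 2.0, n_events),       # rise speed (onset)
    rng.uniform(0.05, 2.0, n_events),       # decay speed (offset)
])

scaler = StandardScaler().fit(events)
X = scaler.transform(events)
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Inspect cluster centres to name dynamics, e.g. "fleeting" vs "sustained" smiles.
centres = scaler.inverse_transform(km.cluster_centers_)
for i, c in enumerate(centres):
    print(f"cluster {i}: duration={c[0]:.2f}s peak={c[1]:.2f} onset={c[2]:.2f} offset={c[3]:.2f}")
```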

Journal ArticleDOI
TL;DR: The data suggests that the degree of facial expressiveness is not regulated by inhibitory control in general, but specifically depends on inhibitory mechanisms regulating automatic motor responses.

Journal ArticleDOI
TL;DR: In the proposed algorithm, the eyes and mouth are first detected, features of the eyes and mouth are extracted using Gabor filters and local binary patterns (LBP), PCA is used to reduce the dimensionality of the features, and an SVM is used for classification of expressions and facial action units.
Abstract: Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications, such as human-computer interaction, computer graphic animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (i.e., mouth, eyebrow, etc.), captures detailed face shape information. Second, facial action recognition, i.e., recognition of the facial action units (AUs) defined in FACS, tries to recognize meaningful facial activities (i.e., lid tightener, eyebrow raiser, etc.). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotional states. In the proposed algorithm, the eyes and mouth are first detected; features of the eyes and mouth are extracted using Gabor filters and local binary patterns (LBP), PCA is used to reduce the dimensionality of the features, and finally an SVM is used to classify expressions and facial action units.
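A minimal sketch of the described Gabor + LBP + PCA + SVM pipeline, using scikit-image and scikit-learn; the eye/mouth crop coordinates, placeholder images and labels are assumptions standing in for a real detector and dataset.

```python
# Hedged sketch of the described pipeline: Gabor + LBP features from eye and
# mouth regions, PCA for dimensionality reduction, SVM for classification.
# Random images and labels stand in for a real expression dataset.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def region_features(region):
    """Concatenate a Gabor magnitude summary and a uniform-LBP histogram."""
    real, imag = gabor(region, frequency=0.25)
    magnitude = np.abs(real + 1j * imag)
    region_u8 = (region * 255).astype(np.uint8)
    lbp = local_binary_pattern(region_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([[magnitude.mean(), magnitude.std()], hist])

def image_features(face):
    eye_region = face[10:30, 5:59]     # assumed eye crop for a 64x64 face
    mouth_region = face[40:60, 15:49]  # assumed mouth crop
    return np.concatenate([region_features(eye_region), region_features(mouth_region)])

faces = rng.random((60, 64, 64))                 # placeholder grayscale faces
labels = rng.integers(0, 3, 60)                  # placeholder expression labels

X = np.array([image_features(f) for f in faces])
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy (placeholder data):", clf.score(X, labels))
```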

Journal ArticleDOI
24 Mar 2015-PeerJ
TL;DR: The results demonstrate that familiarity gives rise to more efficient processing of global facial geometry, and are interpreted in terms of increased holistic processing of facial information that is maintained across viewing distances.
Abstract: Identification of personally familiar faces is highly efficient across various viewing conditions. While the presence of robust facial representations stored in memory is considered to aid this process, the mechanisms underlying invariant identification remain unclear. Two experiments tested the hypothesis that facial representations stored in memory are associated with differential perceptual processing of the overall facial geometry. Subjects who were personally familiar or unfamiliar with the identities presented discriminated between stimuli whose overall facial geometry had been manipulated to maintain or alter the original facial configuration (see Barton, Zhao & Keenan, 2003). The results demonstrate that familiarity gives rise to more efficient processing of global facial geometry, and are interpreted in terms of increased holistic processing of facial information that is maintained across viewing distances.

Proceedings ArticleDOI
19 Mar 2015
TL;DR: A comparative study of the different approaches initiated for automatic real-time facial expression recognition is undertaken along with their benefits and flaws which will further help in developing and improving the system.
Abstract: Facial Expression Recognition lies in one of the crucial areas of research for human-computer interaction and human emotion identification. For a system to recognize a facial expression, it needs to cope with multiple sources of variability in the human face, such as color, texture, posture, expression, orientation and so on. The first step in recognizing a person's facial expression is to detect the various facial movements of the muscles beneath the eyes, nose and lips, and then to classify those features by comparing them with a set of trained data values using a good classifier for recognizing the emotion. In this paper, a comparative study of the different approaches proposed for automatic real-time facial expression recognition is undertaken, along with their benefits and flaws, which will further help in developing and improving such systems.

Journal ArticleDOI
TL;DR: HapFACS is described, a free software and API that is developed to provide the affective computing community with a resource that produces static and dynamic facial expressions for three-dimensional speaking characters, and results of multiple experiments are discussed.
Abstract: With the growing number of researchers interested in modeling the inner workings of affective social intelligence, the need for tools to easily model its associated expressions has emerged. The goal of this article is two-fold: 1) we describe HapFACS, a free software and API that we developed to provide the affective computing community with a resource that produces static and dynamic facial expressions for three-dimensional speaking characters; and 2) we discuss results of multiple experiments that we conducted in order to scientifically validate our facial expressions and head animations in terms of the widely accepted Facial Action Coding System (FACS) standard, and its Action Units (AU). The result is that users, without any 3D-modeling nor computer graphics expertise, can animate speaking virtual characters with FACS-based realistic facial expression animations, and embed these expressive characters in their own application(s). The HapFACS software and API can also be used for generating repertoires of realistic FACS-validated facial expressions, useful for testing emotion expression generation theories.

Journal ArticleDOI
TL;DR: It is suggested that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children.
Abstract: "Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters.

Journal ArticleDOI
TL;DR: A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues, and showed that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion.
Abstract: Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interference from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistent...

Journal ArticleDOI
TL;DR: A representation model based on mid-level facial muscular movement features for automatically recognizing basic and subtle emotions is proposed and achieves accuracies close to human perception.

Journal ArticleDOI
TL;DR: Overall quantitative changes in the degree of facial pain expressiveness occurred in PD patients, but qualitative changes were also found, which reflect a strongly affected encoding of the sensory dimension of pain (eye-narrowing) while the affective dimension of pain (contraction of the eyebrows) was preserved.

Journal ArticleDOI
TL;DR: Within the context of facial expression classification using the facial action coding system (FACS), this work addresses the problem of detecting facial action units (AUs) and shows that the main sources of error can be reduced by enhancing error-correcting output coding (ECOC) through bootstrapping and weighted decoding.
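As a point of reference, a plain ECOC baseline (without the paper's bootstrapping and weighted-decoding enhancements) can be set up with scikit-learn's OutputCodeClassifier; the synthetic features below merely stand in for expression descriptors.

```python
# Hedged sketch: a plain ECOC baseline for expression classification with
# scikit-learn's OutputCodeClassifier. The paper's bootstrapping and
# weighted-decoding enhancements are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=12,
                           n_classes=6, random_state=0)   # stand-in for expression features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ecoc = OutputCodeClassifier(estimator=LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("ECOC test accuracy:", ecoc.score(X_te, y_te))
```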

Journal ArticleDOI
TL;DR: The preliminary findings suggest that stimulus-evoked facial expressions from emergency department patients with cardiopulmonary symptoms might be a useful component of gestalt pretest probability assessment.
Abstract: Background and objective The hypothesis of the present work derives from clinical experience that suggests that patients who are more ill have less facial expression variability in response to emotional cues. Methods Prospective study of diagnostic accuracy from a convenience sample of adult patients with dyspnoea and chest pain in an emergency department. Patients viewed three stimulus slides on a laptop computer that were intended to evoke a change in facial affect. The computer simultaneously video recorded patients' facial expressions. Videos were examined by two independent blinded observers who analysed patients' facial expressions using the Facial Action Coding System (FACS). Patients were followed for predefined serious cardiopulmonary diagnosis (Disease+) within 14 days (acute coronary syndrome, pulmonary embolism, pneumonia, aortic or oesophageal disasters or new cancer). The main analysis compared total FACS scores, and action units of smile, surprise and frown, between Disease+ and Disease−. Results Of 50 patients, 8 (16%) were Disease+. The two observers had 92% exact agreement on the FACS score from the first stimulus slide. During stimulus slide 1, the median of all FACS values from Disease+ patients was 3.4 (1st–3rd quartiles 1–6), significantly less than the median of 7 (3–14) from Disease− patients (p=0.019, Mann–Whitney U). Expression of surprise had the largest difference between Disease+ and Disease− (area under the receiver operating characteristic curve 0.75, 95% CI 0.52 to 0.87). Conclusions With a single visual stimulus, patients with serious cardiopulmonary diseases lacked facial expression variability and surprise affect. Our preliminary findings suggest that stimulus-evoked facial expressions from emergency department patients with cardiopulmonary symptoms might be a useful component of gestalt pretest probability assessment.
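The two statistics reported above, a Mann-Whitney U comparison of total FACS scores and an area under the ROC curve for the surprise action unit, can be reproduced on one's own data along these lines; the scores below are simulated, not the study's measurements.

```python
# Hedged sketch of the reported statistics: a Mann-Whitney U test comparing
# total FACS scores between Disease+ and Disease- patients, and the ROC AUC
# of a single action-unit score. The scores below are simulated.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
facs_disease_pos = rng.poisson(3.5, 8)     # Disease+ patients express less
facs_disease_neg = rng.poisson(8.0, 42)

u_stat, p_value = mannwhitneyu(facs_disease_pos, facs_disease_neg, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# ROC AUC for discriminating Disease+ using the (inverted) surprise expression score.
surprise = np.concatenate([rng.uniform(0, 2, 8), rng.uniform(1, 6, 42)])
disease = np.concatenate([np.ones(8), np.zeros(42)])
auc = roc_auc_score(disease, -surprise)    # lower surprise -> higher disease probability
print(f"AUC for surprise (lower = more likely Disease+): {auc:.2f}")
```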