Evaluation of the Intricacies of Emotional Facial Expression of Psychiatric Patients Using Computational Models
01 Jan 2015, pp. 199-226
TL;DR: In this paper, an attempt has been made to provide a systematic review of the following issues: the importance of facial expression as a diagnostic measure in psychiatric disorders, the effectiveness of computational models of the Facial Action Coding System (FACS) in aiding diagnosis, and the usefulness of computational approaches to facial expression analysis as a measure for psychiatric diagnosis.
Abstract: One of the richest avenues for nonverbal expression of emotion is emotional facial expression (EFE), which reflects the inner psychic reality of an individual. It can be developed into an important diagnostic index for psychiatric disorders. In this chapter, an attempt has been made to provide a systematic review of the following issues: the importance of facial expression as a diagnostic measure in psychiatric disorders, the effectiveness of computational models of the Facial Action Coding System (FACS) in aiding diagnosis, and, finally, the usefulness of computational approaches to facial expression analysis as a measure for psychiatric diagnosis. The possibility of bringing objectivity to psychiatric diagnosis through computational models of EFEs is also discussed.
Citations
TL;DR: The review outlines methods and algorithms for visual feature extraction, dimensionality reduction, decision methods for classification and regression, and different fusion strategies for automatic depression assessment utilizing visual cues alone or in combination with vocal or verbal cues.
Abstract: Automatic depression assessment based on visual cues is a rapidly growing research domain. The present exhaustive review of existing approaches as reported in over sixty publications during the last ten years focuses on image processing and machine learning algorithms. Visual manifestations of depression, various procedures used for data collection, and existing datasets are summarized. The review outlines methods and algorithms for visual feature extraction, dimensionality reduction, decision methods for classification and regression approaches, as well as different fusion strategies. A quantitative meta-analysis of reported results, relying on performance metrics robust to chance, is included, identifying general trends and key unresolved issues to be considered in future studies of automatic depression assessment utilizing visual cues alone or in combination with vocal or verbal cues.
123 citations
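The pipeline stages the review surveys (visual feature extraction, dimensionality reduction, a decision method) can be sketched as a minimal example. The data here is synthetic and the PCA-plus-SVR combination is just one illustrative choice of reduction and regressor, not the review's recommendation:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Hypothetical per-video visual feature vectors (stand-ins for extracted
# appearance descriptors) and a depression-severity score to regress.
X = rng.normal(size=(80, 100))
severity = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=80)

# Mirrors the surveyed stages: dimensionality reduction (PCA) feeding a
# regression decision method (SVR) for severity estimation.
model = make_pipeline(PCA(n_components=10), SVR())
model.fit(X, severity)
predictions = model.predict(X)  # one severity estimate per video
```

Fusion with vocal or verbal cues would add further feature blocks before (feature-level) or after (decision-level) the regressor.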
Additional excerpts
...depression assessment [84], [85], [86], [87], [88], [89], [90], [91], [92], [93], [94], [95], [96]....
TL;DR: An ensemble system for FER is proposed that is capable of incremental learning and can therefore learn all possible patterns of expressions that may arise in future or across cultures and ethnicities.
Abstract: Facial Expression Recognition (FER) is inherently data driven. Spontaneous expressions differ substantially from posed expressions and are more challenging to recognize. Any facial expression can be produced by many different patterns of muscle movements, and facial expressions also vary across cultures and ethnicities. A FER system therefore has to learn a huge problem/feature space. A base classifier trained on a sub-region of the feature space cannot perform equally well in other areas of that space, so a base classifier should be assigned a higher voting weight in regions near its training space and a lower voting weight in regions far from it. To maintain the accuracy and robustness of a FER system in space and time, a Dynamic Weight Majority Voting (DWMV) mechanism for base classifiers is introduced. The proposed ensemble system is capable of incremental learning and can thus learn all possible patterns of expressions that may arise in future or across cultures and ethnicities. Speeded-Up Robust Features (SURF) are used to represent the feature space. Since no work in the literature addresses which similarity measure is most appropriate in the SURF descriptor domain for facial expression recognition, different similarity measures are used and their results compared. Extensive experimentation on posed and spontaneous databases demonstrates promising results.
35 citations
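The distance-decayed voting idea above can be sketched in a few lines. The exponential weighting, the centroid summary of each classifier's training region, and the toy base classifiers are all assumptions for illustration, not the paper's exact DWMV formulation:

```python
import numpy as np

def dwmv_predict(x, classifiers, centroids, n_classes, tau=1.0):
    """Weighted vote: a base classifier trained near x gets a larger
    voting weight; one trained in a distant region of feature space
    gets a smaller weight (a sketch of the DWMV idea)."""
    votes = np.zeros(n_classes)
    for clf, centroid in zip(classifiers, centroids):
        w = np.exp(-np.linalg.norm(x - centroid) / tau)  # decays with distance
        votes[clf(x)] += w
    return int(votes.argmax())

# Two toy "base classifiers", each reliable only near its training region.
clf_left = lambda x: 0            # trained around (-2, 0), always says class 0
clf_right = lambda x: 1           # trained around (+2, 0), always says class 1
centroids = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]

x = np.array([1.5, 0.0])          # lies in the right classifier's region
print(dwmv_predict(x, [clf_left, clf_right], centroids, n_classes=2))  # → 1
```

Because the weights depend on the query point, the ensemble's effective committee shifts across the feature space, which is what lets newly added classifiers cover newly seen expression patterns.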
Cites background from "Evaluation of the Intricacies of Em..."
...facial movements in the film Avatar, Cartoons for children), security, HCI, facial image fusion for gender conversion and different age groups’ fusion [8, 17, 42] etc....
TL;DR: The video collection process for depression patients and a control group at Shandong Mental Health Center in China is introduced; key facial features are extracted from the collected facial videos by a person-specific active appearance model and classified, using movement changes of the eyes, eyebrows and mouth corners, with a support vector machine.
Abstract: Emotional state analysis of facial expression is an important part of emotion recognition. At the same time, in the medical field, auxiliary early-screening tools for depression are urgently needed in clinics. Whether there are differences in facial expression changes between depressive patients and normal controls in the same situation, and whether such characteristics can be obtained and recognized from video images of depressive patients so as to help doctors detect and diagnose potential depression early, are the subjects of this study. This paper introduces the video collection process for depression patients and a control group at Shandong Mental Health Center in China. Key facial features are extracted from the collected facial videos by a person-specific active appearance model. On the basis of the located facial features, depression is classified from the movement changes of the eyes, eyebrows and corners of the mouth using a support vector machine. The results show that these features are effective for automatic classification of depression patients.
25 citations
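The classification step can be illustrated with a toy sketch. The three movement features and their group means below are hypothetical stand-ins for the AAM-derived eye, eyebrow and mouth-corner measurements, not the study's actual data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical per-video features in the spirit of the study:
# [eye-movement rate, eyebrow displacement, mouth-corner movement].
# The depressed group is simulated with reduced facial movement.
controls = rng.normal([0.5, 1.0, 1.0], 0.15, size=(30, 3))
depressed = rng.normal([0.3, 0.6, 0.4], 0.15, size=(30, 3))
X = np.vstack([controls, depressed])
y = np.array([0] * 30 + [1] * 30)   # 0 = control, 1 = depressed

# SVM on the movement features, as in the paper's classification stage.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())  # high accuracy: the simulated groups separate well
```

In the real study these features come from tracking AAM landmarks over time rather than from simulated group means.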
TL;DR: A FER system using hybrid texture features (Gabor LBP) and a Random Forest classifier is proposed to predict human expressions and to address discrepancies across cultures and ethnicities.
Abstract: Communication is fundamental to humans. Many scientific studies have shown that 54 to 94 percent of human communication is non-verbal. Facial expressions are the most important part of non-verbal communication and the most promising way for people to convey their feelings, emotions and intentions. Pervasive computing and ambient intelligence require human-centered systems that react actively to the complex human communication that happens naturally, and a Facial Expression Recognition (FER) system is needed for this type of problem. In this paper, a FER system using hybrid texture features is proposed to predict human expressions. Existing FER systems show discrepancies across cultures and ethnicities; the proposed system addresses this problem by using hybrid texture features that are invariant to scale and rotation. Gabor LBP (GLBP) texture features are used to classify expressions with a Random Forest classifier. Experimentation on several facial databases demonstrates promising results.
17 citations
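A minimal sketch of the LBP half of the GLBP pipeline, assuming plain 8-neighbour LBP histograms (the Gabor filtering stage is omitted) and synthetic textures in place of face images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram: each interior
    pixel gets an 8-bit code from thresholding its neighbours against it."""
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        code |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(2)
# Hypothetical stand-ins for two expression classes: horizontally smooth
# versus noisy texture patches instead of real face regions.
smooth = [rng.normal(0, 1, (16, 16)).cumsum(axis=1) for _ in range(20)]
noisy = [rng.normal(0, 1, (16, 16)) for _ in range(20)]
X = np.array([lbp_histogram(img) for img in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

The paper's GLBP variant would first convolve the image with a bank of Gabor filters and compute LBP histograms of the filter responses before feeding the Random Forest.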
Cites background from "Evaluation of the Intricacies of Em..."
...facial movements in the film Avatar, Cartoons for children), security, HCI, facial image fusion for gender conversion and different age groups’ fusion [7]-[9], etc....
TL;DR: In this article, the authors provide a comprehensive review of facial expression recognition under three machine learning problem definitions: Single Label Learning (SLL), Multilabel Learning (MLL), and Label Distribution Learning (LDL), which recovers the distribution of emotion in FER data annotation.
Abstract: Facial Expression Recognition (FER) is presently the area of cognitive and affective computing receiving the most attention and popularity, aided by its vast application areas. Several studies have been conducted on FER, and many review works are available. Existing FER reviews only cover models capable of predicting the basic expressions; none considers intensity estimation of an emotion, nor do they include studies that address data-annotation inconsistencies and correlation among labels. This work first introduces some identified FER application areas and discusses recognised FER challenges. We then provide a comprehensive FER review under three machine learning problem definitions: Single Label Learning (SLL), which presents FER as a multiclass problem; Multilabel Learning (MLL), which resolves the ambiguous nature of FER; and Label Distribution Learning (LDL), which recovers the distribution of emotion in FER data annotation. We also include studies on estimating expression intensity from the face. Furthermore, popularly employed FER models are thoroughly discussed across handcrafted, conventional machine learning and deep learning approaches. We finally itemise some recognised unresolved issues and suggest future research areas in the field.
15 citations
References
01 Dec 2001
TL;DR: A machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates and the introduction of a new image representation called the "integral image" which allows the features used by the detector to be computed very quickly.
Abstract: This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.
18,620 citations
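The integral-image idea can be shown in a few lines: after one cumulative-sum pass, any rectangular pixel sum needs at most four array lookups, which is what makes the detector's rectangle features so cheap to evaluate:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    # Sum over the rectangle rows r0..r1, cols c0..c1 via four lookups.
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# Rectangle rows 1..2, cols 1..2 covers pixels 5 + 6 + 9 + 10:
print(box_sum(ii, 1, 1, 2, 2))  # → 30
```

A Haar-like feature is then just a signed combination of a few such box sums, so its cost is constant regardless of the rectangle size.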
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
11,674 citations
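A small sketch of the Fisher-criterion projection on synthetic "face" vectors, using scikit-learn's LDA, which implements Fisher's linear discriminant. The two identities and the noise model standing in for lighting variation are assumptions for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two hypothetical identities, 20 "images" each, 64 pixels per image;
# within-class noise simulates illumination variation around each face.
base_a = rng.normal(size=64)
base_b = rng.normal(size=64)
X = np.vstack([base_a + 0.3 * rng.normal(size=(20, 64)),
               base_b + 0.3 * rng.normal(size=(20, 64))])
y = np.array([0] * 20 + [1] * 20)

# Fisher's criterion picks the projection maximizing between-class scatter
# relative to within-class scatter; with 2 classes the subspace is 1-D.
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
z = lda.transform(X)  # well-separated 1-D coordinates, one per image
```

The Fisherface method additionally applies PCA first so that the within-class scatter matrix is non-singular on real image data; that step is skipped here.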
01 Jul 1992
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
11,211 citations
Book•
01 Jan 1872
TL;DR: Darwin's The Expression of the Emotions in Man and Animals, presented in an annotated anniversary edition with prefaces and commentaries by Paul Ekman and appendices by Phillip Prodger.
Abstract: Contents of the anniversary edition.
Front matter: Acknowledgments; List of Illustrations (Figures, Plates); Preface to the Anniversary Edition, by Paul Ekman; Preface to the Third Edition, by Paul Ekman; Preface to the Second Edition, by Francis Darwin; Introduction to the Third Edition, by Paul Ekman; Introduction to the First Edition.
Chapters: 1-3. General Principles of Expression; 4. Means of Expression in Animals; 5. Special Expressions of Animals; 6. Special Expressions of Man: Suffering and Weeping; 7. Low Spirits, Anxiety, Grief, Dejection, Despair; 8. Joy, High Spirits, Love, Tender Feelings, Devotion; 9. Reflection, Meditation, Ill-temper, Sulkiness, Determination; 10. Hatred and Anger; 11. Disdain, Contempt, Disgust, Guilt, Pride, Etc., Helplessness, Patience, Affirmation and Negation; 12. Surprise, Astonishment, Fear, Horror; 13. Self-attention, Shame, Shyness, Modesty: Blushing; 14. Concluding Remarks and Summary.
Back matter: Afterword, by Paul Ekman; Appendix I: Charles Darwin's Obituary, by T. H. Huxley; Appendix II: Changes to the Text, by Paul Ekman; Appendix III: Photography and The Expression of the Emotions, by Phillip Prodger; Appendix IV: A Note on the Orientation of the Plates, by Phillip Prodger and Paul Ekman; Appendix V: Concordance of Illustrations, by Phillip Prodger; Appendix VI: List of Head Words from the Index to the First Edition; Notes; Notes to the Commentaries; Index.
9,342 citations