Proceedings ArticleDOI

Statistical moments based facial expression analysis

TL;DR: The paper presents detection of all six universal emotions based on statistical moments, i.e. Zernike moments; simulation-based experimentation shows that average detection accuracy and detection time for rotated images remain on par with those for frontal face images.
Abstract: Facial expression analysis plays a pivotal role in all applications based on emotion recognition. Significant applications include driver alert systems, animation, pain monitoring for patients, and clinical practice. Emotion recognition is carried out in diverse ways, and the facial-expression-based method is one of the most prominent in the non-verbal category. The paper presents detection of all six universal emotions based on statistical moments, i.e. Zernike moments. The features extracted by Zernike moments are classified with a Naive Bayesian classifier. Rotation invariance, one of the important properties of Zernike moments, is also verified experimentally. The simulation-based experimentation yields an average detection accuracy of 81.66% and a recognition time of less than 2 seconds for frontal face images. The average precision with respect to positives is 81.85% and the average sensitivity is 80.60%. Robustness of the system is verified against rotation of images up to 360 degrees in steps of 45 degrees. The detection accuracy varies with the emotion under consideration, but the average accuracy and detection time remain on par with those for frontal face images.
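As an illustration of the rotation-invariance property the paper relies on, here is a minimal NumPy sketch of Zernike moment magnitudes; the moment orders, unit-disk normalization, and preprocessing below are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np
from math import factorial

def zernike_magnitude(img, n, m):
    """Magnitude of the Zernike moment Z_nm of a square grayscale image,
    evaluated over the unit disk. Magnitudes are rotation invariant."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:complex(0, N), -1:1:complex(0, N)]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # Radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)  # conjugate Zernike basis function
    Z = (n + 1) / np.pi * (img[mask] * V[mask]).sum() / mask.sum()
    return abs(Z)
```

Because the magnitude |Z_nm| discards the phase factor introduced by rotation, the same feature value is obtained (up to discretization error) for rotated inputs, which is what the 45-degree-step robustness experiment exploits.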
Citations
Proceedings ArticleDOI
17 May 2019
TL;DR: Facial emotion recognition is the process of detecting and recognizing different types of human emotions from facial expressions; the steps include detection of the face and its landmarks, feature extraction from the facial landmarks, and emotional-state classification.
Abstract: Emotions are identified from verbal and non-verbal cues by analyzing voice and facial expressions. Monitoring a person's emotional patterns is gaining importance for predicting their mood. Facial emotion recognition is the process of detecting and recognizing different types of human emotions from facial expressions. The steps include detection of the face and its landmarks, feature extraction from the facial landmarks, and emotional-state classification. The Haar cascade approach is used to detect facial components such as the eyes, mouth, and nose in an image. Facial features are analyzed using the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP). The resultant feature vector is formed from the feature points. Three emotional states, namely happy, sad, and angry, are classified using a neural network classifier. The feature points of the test data are compared against the trained data and the corresponding labels are displayed as the emotion recognition output, with accuracies of 87% and 64% achieved using the HOG and LBP techniques respectively.
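A pure-NumPy sketch of the basic 3x3 LBP descriptor mentioned above (the citing paper's radius, neighbour count, and uniform-pattern settings are not given here, so this is an assumed minimal variant):

```python
import numpy as np

def lbp_histogram(img):
    """Normalized histogram of basic 3x3 local binary pattern codes.
    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when the neighbour is >= the centre pixel."""
    c = img[1:-1, 1:-1]  # interior pixels (centres)
    # window offsets of the 8 neighbours, clockwise from top-left
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

The 256-bin histogram (or a concatenation of per-region histograms) is the texture feature vector fed to the classifier.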

6 citations


Cites methods from "Statistical moments based facial ex..."

  • ...The features extracted by Zernike moments are classified through Naïve Bayesian classifier [4] and some uses fuzzy logic [6, 20]....


Journal ArticleDOI
01 Oct 2019
TL;DR: Recent research on the contextual modalities and associated machine learning algorithms required to build a resident intention prediction system is surveyed, and a classification taxonomy of contextual modalities is discussed.
Abstract: The Smart Home is an environment that enables the resident to interact with home appliances that provide resident-intended services. In recent years, predicting resident intention from contextual modalities like activity, speech, emotion, object affordances, and physiological parameters has gained importance in the field of pervasive computing. A contextual modality is a feature through which the resident interacts with home appliances such as TVs, lights, doors, and fans. These modalities help the appliances predict resident intentions, enabling them to recommend resident-intended services like opening and closing doors and turning televisions, lights, and fans on and off. Resident-appliance interaction can be achieved by embedding artificial-intelligence-based machine learning algorithms into the appliances. This article surveys recent research on the contextual modalities and associated machine learning algorithms required to build a resident intention prediction system. A classification taxonomy of contextual modalities is also discussed.

4 citations

Proceedings ArticleDOI
01 Feb 2018
TL;DR: A Zernike-moments-based feature extraction method with a support vector machine is proposed to identify 8 expressions (including Disgust and Contempt) on the JAFFE and Radboud Faces databases, using a discriminative multi-manifold analysis technique with a Single Sample Per Person (SSPP), and the Zernike results are compared with Hu moments.
Abstract: For an interactive human-computer interface (HCI) it is important that the computer understands human facial expressions. HCI reduces the gap between computers and humans, as computers can interact with humans more appropriately by judging their expressions. Various techniques for facial expression recognition focus on obtaining good recognition results for human expressions. Most of these works are done on standard databases of foreign origin with six basic expressions (Neutral, Happy, Fear, Anger, Surprise, Sad). We propose a Zernike-moments-based feature extraction method with a support vector machine to identify 8 expressions (including Disgust and Contempt) on the JAFFE and Radboud Faces databases, using a discriminative multi-manifold analysis technique with a Single Sample Per Person (SSPP), and finally compare the Zernike results with Hu moments.
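The classification stage described above can be sketched with scikit-learn's SVC; the synthetic 25-dimensional vectors below stand in for real Zernike/Hu moment features, and the kernel and hyperparameters are assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for moment-based feature vectors: 8 expression
# classes, 25-dimensional features (real dimensionality depends on the
# moment orders chosen, which is an assumption here).
rng = np.random.default_rng(42)
n_classes, per_class, dim = 8, 30, 25
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(per_class, dim))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# Train an RBF-kernel SVM on the moment features
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```

In the actual system, `X` would hold Zernike (or Hu) moment vectors computed from face images and `y` the eight expression labels.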

2 citations

Proceedings ArticleDOI
09 May 2022
TL;DR: This research study presents a facial-expression-based rating system and a recognition-based scoring system using pre-trained convolutional neural network (CNN) models to solve the problem of measuring customer satisfaction at automated restaurants.
Abstract: Lately there has been a surge in the number of automated and unmanned restaurants. Since these restaurants have become automated, there is no proper system in place to collect feedback from customers about their satisfaction and experience at the restaurant. To solve this cardinal problem, this research study presents a rating system based on facial expressions and a recognition-based scoring system using pre-trained convolutional neural network (CNN) models. The computer must interpret human facial expressions to provide interactive human-computer interaction (HCI). HCI bridges the gap between computers and humans: by analyzing human expressions, computers can interact with humans in more acceptable ways. There are numerous approaches to facial expression recognition that emphasize generating good results from human expressions and thereafter rating the meal. The scoring system currently recognizes three expressions (neutral, satisfied, and unhappy).
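The abstract does not say how the three recognized expressions become a rating; one plausible sketch, with entirely hypothetical per-class star values, is a probability-weighted score over the CNN's class probabilities:

```python
def rating_from_profile(p_unhappy, p_neutral, p_satisfied):
    """Map the three expression probabilities to a 1-5 star rating as the
    probability-weighted average of hypothetical per-class scores."""
    scores = (1.0, 3.0, 5.0)  # unhappy -> 1 star, neutral -> 3, satisfied -> 5
    probs = (p_unhappy, p_neutral, p_satisfied)
    total = sum(probs)
    return sum(s * p for s, p in zip(scores, probs)) / total
```

A confident "satisfied" prediction yields 5 stars, a confident "unhappy" one yields 1, and mixed predictions interpolate between them.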

1 citation

Proceedings ArticleDOI
01 Jul 2019
TL;DR: Experiments on the extended CK+ dataset and the JAFFE dataset show the effectiveness of the proposed FER system, in which a point distribution model (PDM) locates the facial landmarks and trained support vector machine (SVM) classifiers predict the facial expressions.
Abstract: The discriminative features between different facial expressions are mostly constrained to some regions of the face when an expression occurs. Extracting these discriminating features is one of the most important aspects of a facial expression recognition (FER) system. Though a significant amount of work has been done on identifying optimum features for expression recognition, it is still a challenging part of FER systems. Identifying the most significant regions and selectively applying higher-order moments to these patches of the face image is found to give good feature vectors with high discriminative ability between the classes (expressions). A point distribution model (PDM) is used in the proposed method to locate the facial landmarks required to extract facial patches. Higher-order Zernike moment (ZM) invariants are evaluated for these facial patches and form the features used for classification. ZMs are rotation-invariant orthogonal moments with a high degree of image representation capability, so they can satisfactorily extract significant amounts of both local and global information content from a facial patch. The final feature vector consists of higher-order Zernike moment invariants from all facial patches. Finally, support vector machine (SVM) classifiers are trained and used to predict the facial expressions. Experiments on the extended CK+ dataset and the JAFFE dataset show the effectiveness of the proposed FER system.
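The landmark-centred patch-extraction step can be sketched as follows; the landmark coordinates are placeholders for the PDM output, and the patch size is an assumption:

```python
import numpy as np

def extract_patches(img, landmarks, size=16):
    """Crop size x size patches centred on (row, col) landmark points,
    shifting the window inward at image borders so every patch is full."""
    half = size // 2
    patches = []
    for r, c in landmarks:
        r0 = min(max(r - half, 0), img.shape[0] - size)
        c0 = min(max(c - half, 0), img.shape[1] - size)
        patches.append(img[r0:r0 + size, c0:c0 + size])
    return patches
```

Each returned patch would then be fed to the Zernike moment computation, and the per-patch invariants concatenated into the final feature vector.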

1 citation


Cites methods from "Statistical moments based facial ex..."

  • ...This paper proposes a method with better positive detection rates than recent works on FER using Zernike based methods ([15] and [22])....


  • ...In [22], Zernike moment based features were extracted from left eye, right eye and lip regions without the use of any accurate landmark localization methods....


References
BookDOI
31 Aug 2011
TL;DR: This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems, as well as offering challenges and future directions.
Abstract: This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems. After a thorough introductory chapter, each of the following chapters focuses on a specific topic, reviewing background information, up-to-date techniques, and recent results, as well as offering challenges and future directions. Features: fully updated, revised and expanded, covering the entire spectrum of concepts, methods, and algorithms for automated face detection and recognition systems; provides comprehensive coverage of face detection, tracking, alignment, feature extraction, and recognition technologies, and issues in evaluation, systems, security, and applications; contains numerous step-by-step algorithms; describes a broad range of applications; presents contributions from an international selection of experts; integrates numerous supporting graphs, tables, charts, and performance data.

1,609 citations


Additional excerpts

  • ...[14] Stan Z....


Journal ArticleDOI
TL;DR: Experimental results show that such a hybrid combination of the HVC structure with a hierarchical classifier significantly improves expression recognition accuracy when applied to wide-ranging databases, and is not only robust to corrupted data and missing information, but can be generalized to cross-database expression recognition.

299 citations

Journal ArticleDOI
TL;DR: Comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
Abstract: Facial expression is an important channel for human communication and can be applied in many real applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches to FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent static and dynamic, as well as geometric and appearance, characteristics of facial expressions. This paper proposes an approach to solve this limitation using "salient" distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the "salient" patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. Comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
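The patch-based Gabor features mentioned above start from a 2-D Gabor kernel; a simplified real-valued version with an isotropic Gaussian envelope (an assumption; the paper's 3D Gabor parameterization differs) looks like this:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a 2-D Gabor filter: an isotropic Gaussian envelope
    times a cosine carrier at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the carrier
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))
```

Convolving a facial patch with a bank of such kernels at several orientations and wavelengths yields the raw Gabor responses from which the "salient" distance features are derived.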

229 citations


"Statistical moments based facial ex..." refers background in this paper

  • ...This study focuses on recognition of emotions through facial expressions....


Journal ArticleDOI
TL;DR: An emotion classification paradigm based on emotion profiles (EPs) interprets the emotional content of naturalistic human expression by providing multiple probabilistic class labels rather than a single hard label.
Abstract: Automatic recognition of emotion is becoming an increasingly important component in the design process for affect-sensitive human-machine interaction (HMI) systems. Well-designed emotion recognition systems have the potential to augment HMI systems by providing additional user state details and by informing the design of emotionally relevant and emotionally targeted synthetic behavior. This paper describes an emotion classification paradigm based on emotion profiles (EPs). This paradigm is an approach to interpreting the emotional content of naturalistic human expression by providing multiple probabilistic class labels rather than a single hard label. EPs provide an assessment of the emotion content of an utterance in terms of a set of simple categorical emotions: anger, happiness, neutrality, and sadness. This method can accurately capture the general emotional label (attaining an accuracy of 68.2% in our experiment on the IEMOCAP data) in addition to identifying underlying emotional properties of highly emotionally ambiguous utterances. This capability is beneficial when dealing with naturalistic human emotional expressions, which are often not well described by a single semantic label.
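The EP idea of emitting a probability per category rather than one hard label can be sketched as a softmax over per-class classifier scores (the score source and scaling here are assumptions, not the paper's exact construction):

```python
import numpy as np

EMOTIONS = ("anger", "happiness", "neutrality", "sadness")

def emotion_profile(scores):
    """Turn raw per-class classifier scores into a probabilistic emotion
    profile (softmax) instead of a single hard label."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())  # shift by the max for numerical stability
    p = e / e.sum()
    return dict(zip(EMOTIONS, p))
```

An ambiguous utterance then shows up as a flat profile across several emotions, rather than being forced into one category.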

215 citations


"Statistical moments based facial ex..." refers methods in this paper

  • ...Performance analysis of the proposed work is done in section V. Section VI provides Conclusion and directions for future work....


Journal ArticleDOI
TL;DR: Experimental results show that the method of combining 2D-LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) outperforms others and takes only 0.0357 second to process one image of size 256 × 256.
Abstract: Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression recognition has recently become a promising research area. Its applications include human-computer interfaces, human emotion analysis, and medical care and cure. In this paper, we investigate various feature representation and expression classification schemes to recognize seven different facial expressions, such as happy, neutral, angry, disgust, sad, fear and surprise, in the JAFFE database. Experimental results show that the method of combining 2D-LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) outperforms others. The recognition rate of this method is 95.71% by using leave-one-out strategy and 94.13% by using cross-validation strategy. It takes only 0.0357 second to process one image of size 256 × 256.
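The winning 2D-LDA + SVM combination can be approximated with scikit-learn, substituting classic LDA for 2D-LDA (a simplification, since 2D-LDA operates on image matrices directly) and synthetic stand-in features for the JAFFE images:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for flattened face features: 7 expression classes
# (happy, neutral, angry, disgust, sad, fear, surprise), 40 dimensions.
rng = np.random.default_rng(1)
n_classes, per_class, dim = 7, 20, 40
X = np.vstack([rng.normal(loc=2 * k, scale=1.0, size=(per_class, dim))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# LDA projects to at most (n_classes - 1) discriminant directions,
# then a linear SVM classifies in that reduced space.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=6),
                      SVC(kernel="linear")).fit(X, y)
```

The LDA step plays the role of the supervised dimensionality reduction, and the SVM the role of the final expression classifier, mirroring the pipeline structure the abstract describes.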

160 citations


"Statistical moments based facial ex..." refers background in this paper

  • ...Emotional aspects have huge impact on Rational intelligence (memory, decision making etc) and social intelligence (Communication, adaption etc) which are further interlinked with Learning capabilities and behavioral aspect of any person....
