Journal ArticleDOI

Facial Emotion Recognition Based on Biorthogonal Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation

TL;DR: A new emotion recognition system based on facial expression images is proposed; it achieved an overall accuracy of 96.77±0.10% and outperformed three state-of-the-art methods.
Abstract: Emotion recognition concerns the position and motion of facial muscles. It contributes significantly to many fields, yet current approaches have not achieved good results. This paper proposes a new emotion recognition system based on facial expression images. We enrolled 20 subjects and asked each subject to pose seven different emotions: happiness, sadness, surprise, anger, disgust, fear, and neutral. Afterward, we employed biorthogonal wavelet entropy to extract multiscale features and used a fuzzy multiclass support vector machine as the classifier. Stratified cross validation was employed as a strict validation model. Statistical analysis showed that our method achieved an overall accuracy of 96.77±0.10%. Moreover, our method is superior to three state-of-the-art methods. Overall, the proposed method is efficient.
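The following Python sketch illustrates the kind of pipeline the abstract describes: biorthogonal wavelet entropy features, a fuzzy-weighted multiclass SVM, and stratified cross validation. The wavelet choice ('bior3.3'), the Shannon-entropy definition per subband, and the distance-based fuzzy membership weights are illustrative assumptions rather than the authors' exact settings, and scikit-learn's SVC only approximates a fuzzy SVM through per-sample weights.

```python
# Sketch of the pipeline described above: biorthogonal wavelet entropy
# features + a fuzzy-weighted multiclass SVM, evaluated with stratified
# cross validation. The wavelet ('bior3.3'), the entropy definition, and the
# fuzzy membership weighting are assumptions, not the authors' exact settings.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

def wavelet_entropy_features(image, wavelet="bior3.3", level=3):
    """Multiscale features: Shannon entropy of each wavelet detail subband."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = []
    for band in coeffs[1:]:                      # detail subbands per level
        for c in band:                           # (cH, cV, cD)
            p = np.abs(c).ravel()
            p = p / (p.sum() + 1e-12)            # normalise to a distribution
            feats.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(feats)

def fuzzy_membership(X, y):
    """Toy fuzzy weights: samples far from their class mean get lower weight."""
    w = np.ones(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        w[idx] = 1.0 - 0.9 * d / (d.max() + 1e-12)
    return w

def evaluate(images, labels, n_splits=10):
    X = np.stack([wavelet_entropy_features(img) for img in images])
    y = np.asarray(labels)
    accs = []
    for tr, te in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(X[tr], y[tr], sample_weight=fuzzy_membership(X[tr], y[tr]))
        accs.append(clf.score(X[te], y[te]))
    return np.mean(accs), np.std(accs)
```

A winner-takes-all decomposition of the 7-class task into binary tasks, as the paper uses, could replace SVC's default one-vs-one scheme without changing the rest of the sketch.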
Citations
Journal ArticleDOI
TL;DR: This paper streamlines machine learning algorithms for effective prediction of chronic disease outbreaks in disease-frequent communities by proposing a new convolutional neural network (CNN)-based multimodal disease risk prediction algorithm using structured and unstructured hospital data.
Abstract: With the growth of big data in the biomedical and healthcare communities, accurate analysis of medical data benefits early disease detection, patient care, and community services. However, analysis accuracy is reduced when the quality of medical data is incomplete. Moreover, different regions exhibit unique characteristics of certain regional diseases, which may weaken the prediction of disease outbreaks. In this paper, we streamline machine learning algorithms for effective prediction of chronic disease outbreaks in disease-frequent communities. We experiment with the modified prediction models on real-life hospital data collected from central China in 2013–2015. To overcome the difficulty of incomplete data, we use a latent factor model to reconstruct the missing data. We experiment on cerebral infarction, a regional chronic disease. We propose a new convolutional neural network (CNN)-based multimodal disease risk prediction algorithm using structured and unstructured data from the hospital. To the best of our knowledge, no existing work in the area of medical big data analytics has focused on both data types. Compared with several typical prediction algorithms, the prediction accuracy of our proposed algorithm reaches 94.8%, and it converges faster than the CNN-based unimodal disease risk prediction algorithm.
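A minimal sketch of latent-factor imputation for an incomplete patient-by-feature matrix is shown below. The rank, regularization, and learning rate are illustrative assumptions rather than the authors' model, and the helper latent_factor_impute is a hypothetical name introduced only for this example.

```python
# Minimal sketch of latent-factor imputation for an incomplete patient-by-
# feature matrix, as a stand-in for the paper's latent factor model. Rank,
# regularisation, and the optimisation schedule are illustrative assumptions.
import numpy as np

def latent_factor_impute(M, mask, rank=10, lam=0.1, lr=0.01, epochs=200):
    """Fill missing entries of M (mask == False) from a low-rank factorisation
    U @ V.T fitted by gradient descent on the observed entries only."""
    n, d = M.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((d, rank))
    for _ in range(epochs):
        # residual on observed entries; missing entries contribute nothing
        R = np.where(mask, U @ V.T - M, 0.0)
        U -= lr * (R @ V + lam * U)
        V -= lr * (R.T @ U + lam * V)
    return np.where(mask, M, U @ V.T)
```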

764 citations


Cites background from "Facial Emotion Recognition Based on..."

  • ...Some advanced features shall be tested in future study, such as fractal dimension [30], biorthogonal wavelet transform [31], [32] etc....


Journal ArticleDOI
28 Jun 2018-Sensors
TL;DR: A comprehensive review on physiological signal-based emotion recognition, including emotion models, emotion elicitation methods, the published emotional physiological datasets, features, classifiers, and the whole framework for emotion recognition based on the physiological signals is presented.
Abstract: Emotion recognition based on physiological signals has been a hot topic and is applied in many areas such as safe driving, health care, and social security. In this paper, we present a comprehensive review of physiological signal-based emotion recognition, including emotion models, emotion elicitation methods, published emotional physiological datasets, features, classifiers, and the whole framework for emotion recognition based on physiological signals. A summary and comparison of recent studies is presented, revealing currently existing problems, and future work is discussed.

484 citations


Cites background from "Facial Emotion Recognition Based on..."

  • ...One is using human physical signals such as facial expression [4], speech [5], gesture, posture, etc....


Journal ArticleDOI
TL;DR: The emotion recognition methods based on multi-channel EEG signals as well as multi-modal physiological signals are reviewed and the correlation between different brain areas and emotions is discussed.

281 citations

Journal ArticleDOI
TL;DR: A novel deep-learning-based approach to detect the emotions Happy, Sad, and Angry in textual dialogues, using semi-automated techniques to gather large-scale training data with diverse ways of expressing emotions to train the model.

244 citations

Journal ArticleDOI
TL;DR: This study provides a new computer-vision-based technique that uses a convolutional neural network to detect Alzheimer's disease efficiently, increasing classification accuracy by approximately 5% compared with state-of-the-art methods.
Abstract: Alzheimer’s disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision-based technique to detect it in an efficient way. Brain-imaging data of 98 AD patients and 98 healthy controls were collected and expanded with a data augmentation method. Then, a convolutional neural network (CNN), the most successful tool in deep learning, was used. An 8-layer CNN was created, with an optimal structure obtained by experiment. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance, achieving a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches; it increased classification accuracy by approximately 5%.
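The sketch below shows an 8-layer CNN (six convolutional blocks plus two fully connected layers) with leaky ReLU activations and max pooling, the combination the study found best. The layer widths, kernel sizes, and the assumed 128×128 single-channel input are illustrative choices, since the abstract reports only the depth, activation, and pooling.

```python
# Illustrative 8-layer CNN (6 conv + 2 fully connected) with leaky ReLU and
# max pooling. Layer widths, kernel sizes, and the 128x128 single-channel
# input are assumptions for this sketch, not the paper's exact architecture.
import torch
import torch.nn as nn

class ADNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.LeakyReLU(0.01),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64),
            block(64, 64), block(64, 128), block(128, 128),
        )  # 128x128 input -> 128 x 2 x 2 feature map
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 2 * 2, 64),
            nn.LeakyReLU(0.01),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# e.g. logits = ADNet()(torch.randn(4, 1, 128, 128))  # -> shape (4, 2)
```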

229 citations


Additional excerpts

  • ...Traditional computer vision methods are composed of three important stages [24, 25]....


References
Journal ArticleDOI
TL;DR: This work reviews feature extraction methods for emotion recognition from EEG based on 33 studies, and results suggest preference to locations over parietal and centro-parietal lobes.
Abstract: Emotion recognition from EEG signals allows the direct assessment of the “inner” state of a user, which is considered an important factor in human-machine interaction. Many methods for feature extraction have been studied, and the selection of both appropriate features and electrode locations is usually based on neuroscientific findings. Their suitability for emotion recognition, however, has been tested using only a small number of distinct feature sets and on different, usually small, data sets. A major limitation is that no systematic comparison of features exists. Therefore, we review feature extraction methods for emotion recognition from EEG based on 33 studies. An experiment is conducted comparing these features using machine learning techniques for feature selection on a self-recorded data set. Results are presented with respect to the performance of different feature selection methods, usage of selected feature types, and selection of electrode locations. Features selected by multivariate methods slightly outperform univariate methods. Advanced feature extraction techniques are found to have advantages over commonly used spectral power bands. Results also suggest a preference for locations over the parietal and centro-parietal lobes.
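For context, the sketch below computes the commonly used spectral band-power features that the review treats as a baseline, using Welch power spectral density estimates integrated over the standard EEG bands; the band boundaries and sampling rate are conventional values rather than figures taken from the review.

```python
# Sketch of classic spectral band-power features: Welch PSD summed over
# standard EEG bands. Band boundaries and sampling rate are conventional
# assumptions, not values taken from the review.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg, fs=256.0):
    """eeg: array of shape (n_channels, n_samples) -> (n_channels * n_bands,)."""
    feats = []
    for ch in eeg:
        f, pxx = welch(ch, fs=fs, nperseg=int(2 * fs))
        for lo, hi in BANDS.values():
            idx = (f >= lo) & (f < hi)
            feats.append(pxx[idx].sum() * (f[1] - f[0]))  # approximate band power
    return np.array(feats)
```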

743 citations


"Facial Emotion Recognition Based on..." refers background in this paper

  • ...The research of FER benefits a massive number of fields, for example, the in-car conversational interface [3], EEG signal [4], the theatre performance [5], speech emotion recognition [6], bipolar disorder [7], adolescents with disabilities [8], depressive symptom [9], Parkinson’s disease [10], etc....


Journal ArticleDOI
TL;DR: It is shown that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

385 citations


"Facial Emotion Recognition Based on..." refers background or result in this paper

  • ...Those two findings support the conclusion in [46]....


  • ...The reason may be that the early phases of the dynamic facial expressions of Anger and Disgust are similar, owing to the common transmission of the nose wrinkle and lip funneler [46]....


  • ...For Fear and Surprise, the upper lid raiser and jaw drop may contribute to the confusion [46]....


Journal ArticleDOI
TL;DR: This paper presents a neural network (NN)-based method to classify a given MR brain image as normal or abnormal; it first employs the wavelet transform to extract features from images and then applies principal component analysis (PCA) to reduce the dimensionality of the features.
Abstract: Automated and accurate classification of MR brain images is important for the analysis and interpretation of these images, and many methods have been proposed. In this paper, we present a neural network (NN)-based method to classify a given MR brain image as normal or abnormal. The method first employs the wavelet transform to extract features from images and then applies principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are sent to a back-propagation (BP) NN, for which scaled conjugate gradient (SCG) is adopted to find the optimal weights. We applied this method to 66 images (18 normal, 48 abnormal). The classification accuracy on both training and test images is 100%, and the computation time per image is only 0.0451 s.
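A rough scikit-learn equivalent of the DWT → PCA → neural-network pipeline is sketched below. scikit-learn offers no scaled-conjugate-gradient trainer, so L-BFGS stands in for SCG, and the 'haar' wavelet, decomposition level, and hidden-layer size are assumptions rather than the paper's settings.

```python
# Sketch of the DWT -> PCA -> neural-network pipeline. scikit-learn has no
# scaled-conjugate-gradient trainer, so L-BFGS substitutes for SCG; the
# wavelet ('haar'), decomposition level, and network size are assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def dwt_features(images, wavelet="haar", level=3):
    """Approximation coefficients of a multi-level 2-D DWT, flattened."""
    return np.stack([pywt.wavedec2(img, wavelet, level=level)[0].ravel()
                     for img in images])

pipeline = make_pipeline(
    FunctionTransformer(dwt_features),
    StandardScaler(),
    PCA(n_components=0.95),                     # keep 95% of the variance
    MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=500),
)
# pipeline.fit(train_images, train_labels); pipeline.score(test_images, test_labels)
```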

318 citations


"Facial Emotion Recognition Based on..." refers methods in this paper

  • ...Our method can be applied not only to facial expression images, but also to MR images [62], [63], CT images, remote-sensing images [64], [65], etc....


Journal ArticleDOI
13 Sep 2012-Sensors
TL;DR: This work proposes a novel classification method based on a multi-class kernel support vector machine (kSVM) with the desirable goal of accurate and fast classification of fruits.
Abstract: Automatic classification of fruits via computer vision is still a complicated task due to the varied properties of numerous types of fruits. We propose a novel classification method based on a multi-class kernel support vector machine (kSVM) with the goal of accurate and fast classification of fruits. First, fruit images were acquired by a digital camera, and the background of each image was removed by a split-and-merge algorithm. Second, the color histogram, texture, and shape features of each fruit image were extracted to compose a feature space. Third, principal component analysis (PCA) was used to reduce the dimensionality of the feature space. Finally, three kinds of multi-class SVMs were constructed (Winner-Takes-All SVM, Max-Wins-Voting SVM, and Directed Acyclic Graph SVM) with three kinds of kernels (linear, homogeneous polynomial, and Gaussian radial basis), and the SVMs were trained using 5-fold stratified cross validation with the reduced feature vectors as input. The experimental results demonstrated that the Max-Wins-Voting SVM with the Gaussian radial basis kernel achieves the best classification accuracy of 88.2%. In terms of computation time, the Directed Acyclic Graph SVM performs fastest.
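The sketch below mirrors the feature extraction, PCA, and multi-class kernel SVM stages under 5-fold stratified cross validation. For simplicity it uses only color-histogram features (omitting texture and shape), and it relies on scikit-learn's SVC, whose built-in one-vs-one voting corresponds to the Max-Wins-Voting scheme; the bin count and kernel parameters are assumptions.

```python
# Sketch of the fruit-classification pipeline: colour-histogram features,
# PCA, and an RBF-kernel multiclass SVM under 5-fold stratified cross
# validation. Texture/shape features are omitted; bin counts and kernel
# parameters are assumptions, not the paper's tuned values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_histogram(image, bins=8):
    """image: HxWx3 uint8 array -> flattened per-channel histogram."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
        for c in range(3)
    ])

def evaluate(images, labels):
    X = np.stack([color_histogram(img) for img in images])
    model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                          SVC(kernel="rbf", gamma="scale"))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(model, X, labels, cv=cv).mean()
```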

253 citations


"Facial Emotion Recognition Based on..." refers methods in this paper

  • ...Hence, we used the winner-takes-all (WTA) technique [35] to break the 7-class task into multiple 2-class tasks [36]....


Journal ArticleDOI
TL;DR: A novel method is presented that can be applied to the field of MR brain image classification and can assist doctors in diagnosing whether a patient is normal or abnormal to a certain degree.
Abstract: Automated and accurate classification of MR brain images is extremely important for medical analysis and interpretation, and over the last decade numerous methods have been proposed. In this paper, we present a novel method to classify a given MR brain image as normal or abnormal. The proposed method first employs the wavelet transform to extract features from images, followed by principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are submitted to a kernel support vector machine (KSVM). The strategy of K-fold stratified cross validation is used to enhance the generalization of the KSVM. We chose seven common brain diseases (glioma, meningioma, Alzheimer’s disease, Alzheimer’s disease plus visual agnosia, Pick’s disease, sarcoma, and Huntington’s disease) as abnormal brains, and collected 160 MR brain images (20 normal and 140 abnormal) from the Harvard Medical School website. We performed the proposed method with four different kernels and found that the GRB kernel achieves the highest classification accuracy of 99.38%; the LIN, HPOL, and IPOL kernels achieve 95%, 96.88%, and 98.12%, respectively. We also compared our method with those from the literature of the last decade, and the results showed that our DWT+PCA+KSVM with the GRB kernel still achieved the most accurate classification. The average processing time for a 256×256 image on an IBM laptop with a P4 3 GHz processor and 2 GB RAM is 0.0448 s. The experimental data show that the method is effective and rapid; it can be applied to the field of MR brain image classification and can assist doctors in diagnosing whether a patient is normal or abnormal to a certain degree.
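The kernel comparison could be reproduced roughly as below, evaluating linear, homogeneous polynomial, and Gaussian radial basis kernels under stratified K-fold cross validation; the kernel parameters are library defaults rather than the paper's tuned values, and X, y are assumed to hold the DWT+PCA features and labels.

```python
# Sketch of the kernel comparison (linear vs. homogeneous polynomial vs.
# Gaussian RBF) under stratified K-fold cross validation, mirroring the
# LIN/HPOL/GRB comparison above. Kernel parameters are library defaults,
# not the paper's tuned values; X, y are assumed DWT+PCA features and labels.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def compare_kernels(X, y, n_splits=5):
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    results = {}
    for name, clf in {
        "LIN": SVC(kernel="linear"),
        "HPOL": SVC(kernel="poly", degree=3, coef0=0.0),   # homogeneous polynomial
        "GRB": SVC(kernel="rbf", gamma="scale"),
    }.items():
        results[name] = cross_val_score(clf, X, y, cv=cv).mean()
    return results
```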

230 citations


"Facial Emotion Recognition Based on..." refers methods in this paper

  • ...It has been successfully applied to identifying brains [29], spatio-temporal activity [30], Alzheimer’s disease [31], online reviews [32], multiple sclerosis [33], etc....
