Author

C. P. Sumathi

Bio: C. P. Sumathi is an academic researcher. The author has contributed to research in the topics of facial expression and facial recognition systems. The author has an h-index of 1 and has co-authored 1 publication receiving 50 citations.

Papers
Journal ArticleDOI
TL;DR: Facial expression recognition is analyzed with various methods of facial detection, facial feature extraction and classification, covering both holistic and feature-based facial recognition approaches.
Abstract: Automatic recognition of facial expressions is an important component of human-machine interfaces and has attracted considerable research interest since the 1990s. Although humans recognize faces without effort or delay, recognition by a machine is still a challenge. Among the challenges are high variability in orientation, lighting, scale, facial expression and occlusion. Applications lie in fields such as user authentication, person identification, video surveillance, information security and data privacy. Approaches to facial recognition fall into two categories: holistic and feature-based. Holistic methods treat the image data as one entity without isolating different regions of the face, whereas feature-based methods identify certain points on the face such as the eyes, nose and mouth. In this paper, facial expression recognition is analyzed with various methods of facial detection, facial feature extraction and classification.

52 citations
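The holistic/feature-based distinction in the abstract above can be illustrated with a minimal sketch. Nothing here comes from the paper itself: the landmark coordinates are hypothetical, and a real system would obtain them from a face and landmark detector.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) for one detected face:
# eye corners, nose tip, and mouth corners. In a real system these
# would come from a landmark detector, not hand-typed values.
landmarks = np.array([
    [30.0, 40.0],   # left eye outer corner
    [70.0, 40.0],   # right eye outer corner
    [50.0, 60.0],   # nose tip
    [38.0, 80.0],   # left mouth corner
    [62.0, 80.0],   # right mouth corner
])

def pairwise_distances(points):
    """Feature-based representation: Euclidean distances between
    every pair of landmarks (10 values for 5 points)."""
    n = len(points)
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(np.linalg.norm(points[i] - points[j]))
    return np.array(feats)

# Holistic representation: the raw image flattened to one vector,
# with no region of the face isolated.
image = np.zeros((100, 100))          # stand-in for a cropped face image
holistic_vector = image.ravel()       # 10000-dim vector

feature_vector = pairwise_distances(landmarks)
print(feature_vector.shape)   # (10,)
print(holistic_vector.shape)  # (10000,)
```

Either representation can then be fed to a classifier; the feature-based vector is far lower-dimensional but depends on reliable landmark localization.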


Cited by
Journal ArticleDOI
TL;DR: This paper presents a multimodal emotion recognition system, which is based on the analysis of audio and visual cues, and defines the current state-of-the-art in all three databases.
Abstract: This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e., distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, and a convolutional neural network is trained on these key-frames to visually discriminate between the emotions. Finally, the confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements by the proposed system in comparison to current alternatives, defining the current state-of-the-art in all three databases.

166 citations
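The late fusion/stacking step described in the abstract can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's implementation: the feature dimensions, base classifiers and class count are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for audio features (e.g. MFCCs/prosody) and
# visual features (e.g. landmark distances/angles) for 200 samples
# and 3 emotion classes, with a class-dependent shift so the
# problem is learnable.
n, classes = 200, 3
y = rng.integers(0, classes, n)
X_audio = rng.normal(size=(n, 13)) + y[:, None] * 0.5
X_visual = rng.normal(size=(n, 20)) + y[:, None] * 0.5

train = np.arange(n) < 150
test = ~train

# One base classifier per modality.
clf_a = LogisticRegression(max_iter=1000).fit(X_audio[train], y[train])
clf_v = RandomForestClassifier(random_state=0).fit(X_visual[train], y[train])

# Late fusion/stacking: the confidence (probability) outputs of the
# modality classifiers define a new feature space for a meta-classifier.
stack_train = np.hstack([clf_a.predict_proba(X_audio[train]),
                         clf_v.predict_proba(X_visual[train])])
stack_test = np.hstack([clf_a.predict_proba(X_audio[test]),
                        clf_v.predict_proba(X_visual[test])])

meta = LogisticRegression(max_iter=1000).fit(stack_train, y[train])
acc = meta.score(stack_test, y[test])
print(stack_train.shape)  # (150, 6): 3 class probabilities per modality
```

The meta-classifier sees only per-class confidences, so it can learn which modality to trust for which emotion.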

Journal ArticleDOI
TL;DR: A systematic and comprehensive survey of current state-of-the-art Artificial Intelligence techniques (datasets and algorithms) that address these issues, together with a brief taxonomy of existing facial sentiment analysis strategies.
Abstract: With the advancements in machine and deep learning algorithms, various critical real-life applications in computer vision become possible. One such application is facial sentiment analysis. Deep learning has made facial expression recognition (FER) one of the most active research fields in computer vision. Recently, deep learning-based FER models have suffered from technological issues such as under-fitting or over-fitting, due to insufficient training or expression data. Motivated by these facts, this paper presents a systematic and comprehensive survey of current state-of-the-art Artificial Intelligence techniques (datasets and algorithms) that address the aforementioned issues. It also presents a brief taxonomy of existing facial sentiment analysis strategies. The paper then reviews novel machine and deep learning networks proposed by researchers specifically for facial expression recognition from static images, presents their merits and demerits, and summarizes their approaches. Finally, it discusses the open issues and research challenges in designing a robust facial expression recognition system.

86 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper compares and analyses the performance of three machine learning algorithms on the task of classifying human facial expressions, using a total of 23 variables calculated from the distances between facial features as input to the classification process.
Abstract: Humans depict their emotions through facial expressions or their way of speech. To make this process possible for a machine, a training mechanism is needed to give the machine the ability to recognize human expressions. This paper compares and analyses the performance of three machine learning algorithms on the task of classifying human facial expressions. A total of 23 variables calculated from the distances between facial features are used as input to the classification process, with seven output categories: angry, disgust, fear, happy, neutral, sad, and surprise. Several test cases were made to test the system, each with a different amount of data, ranging from 165 to 520 training samples. The results for each algorithm are quite satisfying, with accuracies of 75.15% for K-Nearest Neighbors (KNN), 80% for Support Vector Machines (SVM), and 76.97% for the Random Forests algorithm on the test case with the smallest amount of data. On the test case with the largest amount of data, the accuracies are 98.85% for KNN, 90% for SVM, and 98.85% for Random Forests. The training data for each test case were also classified using Discriminant Analysis, with 97.7% accuracy.

48 citations
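The three-classifier comparison described above can be sketched with scikit-learn on synthetic data. The 23-feature/7-class shape matches the paper's setup, but the data, hyperparameters and resulting accuracies here are illustrative assumptions, not the paper's results.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the paper's setup: 23 distance-based
# features per face, 7 expression classes. Real inputs would be
# measured distances between facial features.
n_samples, n_features, n_classes = 350, 23, 7
y = rng.integers(0, n_classes, n_samples)
X = rng.normal(size=(n_samples, n_features)) + y[:, None] * 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(random_state=0),
}
results = {}
for name, model in models.items():
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.2%}")
```

As in the paper, relative rankings can shift with the amount of training data, which is why running several test-case sizes is informative.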

Journal ArticleDOI
TL;DR: The proposed system uses classic Histograms of Oriented Gradients (HOG) along with a facial landmark detection technique; the detected features are then passed through an SVM classifier to predict the mood of the user, which in turn stimulates the creation of a playlist.
Abstract: Increasing and maintaining human productivity on different tasks in a stressful environment is a challenge. Music is a vital mood controller and helps improve the mood and state of a person, which in turn acts as a catalyst to increase productivity. Continuous music play requires creating and managing a personalized song playlist, which is a time-consuming task. It would be very helpful if the music player itself selected a song according to the current mood of the user. The mood of the user can be detected from the facial expression of the person. A facial expression detection system must address three major problems: detection of a face in an image, facial feature extraction, and facial expression classification [1]. The first stage is face detection in an image, for which various techniques are used: model-based face tracking, which includes real-time face detection using edge orientation matching [2] and robust face detection using the Hausdorff distance [3]; weak classifier cascades, which include the Viola-Jones algorithm [4]; and Histograms of Oriented Gradients (HOG) descriptors. The next stage is to extract features from the detected face; two major approaches to feature extraction use Gabor filters [Dennis Gabor] and Principal Component Analysis [Jolliffe]. The final stage is image classification for mood detection, where various classifiers such as BrownBoost [Freund, 2001], AdaBoost [Freund and Schapire, 1995] and Support Vector Machines (SVM) are available. The proposed system uses classic Histograms of Oriented Gradients (HOG) along with a facial landmark detection technique; the detected features are then passed through an SVM classifier to predict the mood of the user. This predicted mood stimulates the creation of a playlist.
General Terms: Pattern Recognition, Image Classification, Pattern Matching, Emotion Recognition.

30 citations
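As a rough illustration of the HOG descriptor mentioned above, the sketch below computes the gradient-orientation histogram for a single cell, the basic building block of HOG. It is a simplification of classic HOG (no cell tiling or block normalization) and is not taken from the proposed system.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Minimal HOG building block: a histogram of gradient
    orientations for one cell, weighted by gradient magnitude.
    A full HOG descriptor tiles the image into cells and
    concatenates block-normalized histograms like this one."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in classic HOG.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    return hist

# Toy 8x8 cell with a vertical edge: the gradient points horizontally,
# so all the mass lands in the orientation bin around 0 degrees.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
hist = hog_cell_histogram(cell)
print(hist.round(2))
```

The concatenated histograms form the feature vector that would then be passed to the SVM classifier.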

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The main contribution of the paper is the feature selection applied, in which high-variance LBP pixels are selected to represent faces; with this selection, the recognition rates were improved significantly.
Abstract: Facial expression, a form of non-verbal communication, is a means through which humans convey their inner emotional state, and it thus plays an important role in social interaction and interpersonal relations. Facial expression recognition plays a significant role in human-computer interaction as well as in various fields of behavioral science. According to Ekman's studies, there are six known classes of emotional state, anger, disgust, fear, happiness, sadness and surprise, each associated with its respective facial expression. Humans recognize facial expressions almost effortlessly and without delay, but this is quite challenging for digital computers. This paper presents facial expression recognition using local binary patterns (LBP). The main contribution of the paper is the feature selection applied, in which the high-variance LBP pixels are selected to represent faces. By selecting the high-variance pixels based on LBPs, the recognition rates were improved significantly. The tests are conducted on the BU-3DFE database. The experiments show that after applying feature selection, the recognition rates improve by 11%.

23 citations
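The paper's idea, computing LBP codes and keeping only the high-variance pixel positions, can be sketched as follows. This is an illustrative reconstruction on synthetic data: the basic 3x3 LBP operator, image sizes and number of selected pixels are assumptions, not the paper's exact configuration.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each interior pixel is
    encoded by thresholding its 8 neighbours against the centre
    value, one bit per neighbour in a fixed clockwise order."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        code |= ((neighbour >= c).astype(np.uint8) << bit)
    return code

rng = np.random.default_rng(2)
# Stack of synthetic 16x16 "face" images, standing in for a real
# database such as BU-3DFE.
images = rng.integers(0, 256, size=(50, 16, 16))
codes = np.stack([lbp_image(im) for im in images])  # (50, 14, 14)

# Feature selection in the spirit of the paper: keep only the
# pixel positions whose LBP codes vary most across the dataset.
variances = codes.reshape(len(codes), -1).var(axis=0)
top_k = 40
selected = np.argsort(variances)[-top_k:]
features = codes.reshape(len(codes), -1)[:, selected]
print(features.shape)  # (50, 40)
```

The reduced feature vectors would then be fed to a classifier; discarding low-variance positions removes pixels that carry little discriminative information.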