Proceedings ArticleDOI

Automatic expression recognition and expertise prediction in Bharatnatyam

TL;DR: In this paper, the intensity values obtained from the iMotions tool for four distinct expressions (Joy, Surprise, Sad and Disgust) are used as the feature set for classification and predictive analysis.
Abstract: Bharatnatyam is an ancient Indian Classical Dance form consisting of complex postures and expressions. One of the main challenges in this dance form is to perform expression recognition and use the resulting data to predict the expertise of a test dancer. In this paper, expression recognition is carried out for the 6 basic expressions in Bharatnatyam using the iMotions tool. The intensity values obtained from this tool for 4 distinct expressions (Joy, Surprise, Sad and Disgust) are used as the feature set for classification and predictive analysis. The recognition was performed on our own dataset consisting of 50 dancers with varied expertise ratings. Logistic Regression performed best for the Joy, Surprise and Disgust expressions, giving an average accuracy of 80.78%, whereas a Support Vector Machine classifier with a Radial Basis Function kernel performed best for the Sad expression, giving an accuracy of 71.36%. A separate analysis of positive and negative emotions is carried out to determine the expertise of each rating on the basis of these emotions.
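To make the reported setup concrete, here is a minimal, hypothetical sketch (not the authors' code): it assumes the per-dancer intensity values exported from iMotions have already been arranged into a feature matrix X with expertise labels y, and it compares the two classifiers the paper reports, Logistic Regression and an SVM with an RBF kernel, under cross-validation.

# Hedged sketch in Python; X and y are random placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((50, 20))          # 50 dancers x 20 intensity features (assumed shape)
y = rng.integers(0, 2, size=50)   # binary expertise label (placeholder)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")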
Citations
Journal ArticleDOI
TL;DR: The obtained results support a general acceptance of ARDTS among users who are interested in exploring the cutting-edge technology of AR for gaining expertise in a dance skill.
Abstract: Computer Vision (CV) has advanced drastically, from image processing to object recognition, video tracking, image restoration, three-dimensional (3D) pose recognition, and emotion analysis. These advancements have eventually led to the birth of Augmented Reality (AR) technology, which means embedding virtual objects into the real-world environment. The primary focus of this research was to solve the long-term learning retention and poor learning efficiency involved in mastering a dance skill through AR technology, based on constructivism learning theory, the Dreyfus model and the Technology Acceptance Model (TAM). The problem analysis carried out in this research found that the retention and learning efficiency of a dance training system were predominantly determined by the type of learning theory adopted, the learning environment, the training tools, the skill acquisition technology and the type of AR technique. Therefore, the influential factors for user acceptance of an AR-based dance training system (ARDTS) were determined through quantitative analysis, to address the knowledge gap on acceptance of AR-based systems for dance education through self-learning. Evaluation and testing were conducted to validate the developed and implemented ARDTS system. TAM was used as the evaluation model, and quantitative analysis was done with a research instrument that encompassed external and internal variables. The instrument consisted of 37 items, in which six factors were used to assess the newly developed ARDTS and its acceptability among 86 subjects. The study investigated the potential use of an AR-based dance training system to promote a particular dance skill among a sample population with various backgrounds and interests. The obtained results support a general acceptance of ARDTS among users who are interested in exploring the cutting-edge technology of AR for gaining expertise in a dance skill.

13 citations

Journal ArticleDOI
TL;DR: In this paper, a convolutional neural network (CNN)-based automatic mudra identification system was proposed for the identification of the asamyukta mudras of bharatanatyam, one of the most popular classical dance forms in India.
Abstract: Mudras in traditional Indian dance forms convey meaningful information when performed by an artist. The subtle changes between the different mudras in a dance form render automatic identification challenging compared to conventional hand gesture identification, where the gestures are uniquely distinct from each other. Therefore, the objective of this study is to build a classifier model for the identification of the asamyukta mudras of bharatanatyam, one of the most popular classical dance forms in India. The first part of the paper provides a comprehensive review of the issues present in bharatanatyam mudra identification and the various studies conducted on the automatic classification of mudras. Based on this review, we observe that the unavailability of a large mudra corpus is a major challenge in mudra identification. Therefore, the second part of the paper focuses on the development of a relatively large database of mudra images consisting of the 29 asamyukta mudras prevalent in bharatanatyam, obtained by incorporating different variabilities, such as subject, artist type (amateur or professional), and orientation. The mudra image database so developed is made available for academic research purposes. The final part of the paper describes the development of a convolutional neural network (CNN)-based automatic mudra identification system. Multistyle training of mudra classes on a conventional CNN showed a 92% correct identification rate. Based on the "eigenface" projection used in face recognition, "eigenmudra" projections of mudra images are proposed for improving CNN-based mudra identification. Although the CNNs trained on the eigenmudra-projected images provide nearly the same identification rates as the CNNs trained on raw mudra grayscale images, the two models provide complementary mudra class information. The presence of complementary class information is confirmed by the improvement in mudra identification performance when the CNN models trained on the raw mudra and eigenmudra-projected images are combined by averaging the scores obtained in the final softmax layers of both models. The same trend of improved mudra identification is observed when the scores of VGG19 CNN models trained on the raw mudra images and the corresponding eigenmudra-projected images are averaged.
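The score-level fusion step described above can be sketched in a few lines. This is a hedged illustration rather than the authors' implementation: the softmax arrays below are random placeholders standing in for the outputs of the raw-image and eigenmudra-projected CNNs.

import numpy as np

def fuse_scores(softmax_raw, softmax_eigen):
    # Average the per-class softmax scores of the two models and take the
    # argmax as the fused prediction. Shapes: (n_samples, n_classes).
    fused = (softmax_raw + softmax_eigen) / 2.0
    return fused.argmax(axis=1)

rng = np.random.default_rng(0)
p_raw = rng.dirichlet(np.ones(29), size=2)   # raw-image CNN scores (placeholder)
p_eig = rng.dirichlet(np.ones(29), size=2)   # eigenmudra CNN scores (placeholder)
print(fuse_scores(p_raw, p_eig))             # fused class indices for 2 images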
References
Proceedings ArticleDOI
21 Mar 2011
TL;DR: The Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, is presented and officially released for free academic use.
Abstract: We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
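Note that the 90.1% figure is a 2AFC score rather than plain classification accuracy. A minimal sketch of how such a score can be computed from detector outputs (the values below are placeholders, not CERT outputs):

import numpy as np

def two_afc(pos_scores, neg_scores):
    # Probability that a randomly drawn positive example outscores a randomly
    # drawn negative one, counting ties as half; equivalent to ROC AUC.
    diff = pos_scores[:, None] - neg_scores[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

pos = np.array([0.9, 0.7, 0.8])   # detector outputs on frames where the AU is present
neg = np.array([0.2, 0.6, 0.4])   # detector outputs on frames where it is absent
print(two_afc(pos, neg))          # 1.0 means perfect discrimination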

553 citations


"Automatic expression recognition an..." refers methods in this paper

  • ...The registered faces are further processed with several stages of Gabor filters to detect the Action Units which code the movement of facial muscles....

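The Gabor-filter stage quoted above can be illustrated with a small filter bank. This is only a sketch: the orientations, wavelengths and the use of OpenCV are assumptions, not details taken from the cited pipeline.

import cv2
import numpy as np

face = np.zeros((96, 96), dtype=np.float32)   # placeholder for a registered face crop

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):   # 8 orientations (assumed)
    for lambd in (4, 8, 16):                   # 3 wavelengths (assumed)
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=lambd, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(face, cv2.CV_32F, kernel))

features = np.stack(responses)   # (24, 96, 96) stack of filter responses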

Journal ArticleDOI
TL;DR: Asymmetries of the smiling facial movement were more frequent in deliberate imitations than in spontaneous emotional expressions; similar findings were obtained for the actions involved in negative emotions, but a small database made those results tentative.
Abstract: Asymmetries of the smiling facial movement were more frequent in deliberate imitations than in spontaneous emotional expressions. When asymmetries did occur they were usually stronger on the left side of the face if the smile was deliberate. Asymmetrical emotional expressions, however, were about equally divided between those stronger on the left side of the face and those stronger on the right. Similar findings were obtained for the actions involved in negative emotions, but a small database made these results tentative.

318 citations


"Automatic expression recognition an..." refers methods in this paper

  • ...These action units are further fed to PCA to reduce dimensionality and then classified using a single neural network [5]....

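A minimal sketch of the PCA-then-neural-network step quoted above, under stated assumptions: the action-unit intensities are random placeholders, and sklearn's MLPClassifier stands in for whichever single neural network the cited work used.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 19))          # 200 frames x 19 action-unit intensities (placeholder)
y = rng.integers(0, 6, size=200)   # 6 expression classes (placeholder)

clf = make_pipeline(
    PCA(n_components=10),                                   # reduce AU dimensionality
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),  # single neural network
)
clf.fit(X, y)
print(clf.predict(X[:5]))          # predicted expression classes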

Posted Content
TL;DR: A time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization, and a detailed summary of the state of the art are presented.
Abstract: The automatic recognition of facial expressions has been an active research topic since the early nineties. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms and the techniques used for expression classification. This paper surveys some of the published work from 2001 till date. The paper presents a time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization, and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and MPEG-4 Facial Animation Parameters (FAPs), and the recent advances in face detection, tracking and feature extraction methods. Notes have also been presented on emotions, expressions and facial features, along with a discussion on the six prototypic expressions and the recent studies on expression classifiers. The paper ends with a note on the challenges and the future work. This paper has been written in a tutorial style with the intention of helping students and researchers who are new to this field.

304 citations


"Automatic expression recognition an..." refers background in this paper

  • ...MPEG-4 metrics are provided for FAPs to model facial expressions [7]....


Journal ArticleDOI
TL;DR: American and Indian college students responded to each of 45 videotaped expressions using either a fixed-response format (10 emotion names and "neutral/no emotion") or a totally free-response format, and were quite accurate in identifying the emotions.
Abstract: Subjects were presented with videotaped expressions of 10 classic Hindu emotions. The 10 emotions were (in rough translation from Sanskrit) anger, disgust, fear, heroism, humor-amusement, love, peace, sadness, shame-embarrassment, and wonder. These emotions (except for shame) and their portrayal were described about 2,000 years ago in the Natyasastra, and are enacted in the contemporary Hindu classical dance. The expressions are dynamic and include both the face and the body, especially the hands. Three different expressive versions of each emotion were presented, along with 15 neutral expressions. American and Indian college students responded to each of these 45 expressions using either a fixed-response format (10 emotion names and "neutral/no emotion") or a totally free response format. Participants from both countries were quite accurate in identifying emotions correctly using both fixed-choice (65% correct, expected value of 9%) and free-response (61% correct, expected value close to zero) methods.
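As a check on the chance levels quoted above: the fixed-response format offers 11 options (10 emotion names plus "neutral/no emotion"), so uniform guessing gives P(correct by chance) = 1/11 ≈ 9.1%, matching the reported expected value of 9%; the free-response format has an essentially unbounded answer space, hence an expected value close to zero.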

83 citations


"Automatic expression recognition an..." refers methods in this paper

  • ...The proposed method lacks the ability to recognize subtle facial gestures found in some of the acts in a choreography....
