Book Chapter

Artificially Intelligent Game Framework Based on Facial Expression Recognition

22 Dec 2019, pp. 312-321
Abstract: During gameplay, a player experiences emotional turmoil. In most cases, these emotions directly reflect the outcome of the game. Adapting game features based on players’ emotions necessitates a way to detect the current emotional state. Researchers in the area of “video game user research” have studied biometric data as a way to address the diverse characteristics of players, their individual preferences, gameplay expertise, and experiences. Identifying the player’s current state is fundamental for designing a game that interacts with the player adaptively. In this paper, we present an artificially intelligent game framework with smart features based on automatic facial expression recognition and adaptive game features based on the gamer’s emotion. The gamer’s emotions are recognized at run-time during gameplay using deep Convolutional Neural Networks (CNNs), and the game is adapted to the player’s emotional state. Once identified, the recognized emotions directly modify critical parameters of the underlying game engine to make the game more exciting and challenging.
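The adaptive loop the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the emotion labels, the `GameParams` fields, and the `adapt_difficulty` rule are hypothetical placeholders for whatever parameters a real game engine would expose.

```python
# Hypothetical sketch: map a recognized emotion label to adjustments of
# game-engine parameters. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GameParams:
    enemy_speed: float = 1.0   # multiplier on enemy movement speed
    spawn_rate: float = 1.0    # multiplier on enemy spawn frequency

def adapt_difficulty(params: GameParams, emotion: str) -> GameParams:
    """Nudge difficulty based on the player's detected emotional state."""
    if emotion in ("angry", "sad", "fear"):    # player struggling: ease off
        params.enemy_speed *= 0.9
        params.spawn_rate *= 0.9
    elif emotion in ("happy", "neutral"):      # player comfortable: ramp up
        params.enemy_speed *= 1.1
        params.spawn_rate *= 1.1
    return params

# One tick of the loop: recognized emotion in, adjusted parameters out.
p = adapt_difficulty(GameParams(), "happy")
```

In a full framework this function would run once per recognition interval, with the emotion label supplied by the CNN classifier described in the paper.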


Topics: Video game (63%), Outcome (game theory) (60%)
References

Journal Article
Paul A. Viola, Michael Jones
Abstract: This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; Rowley et al., 1998; Schneiderman and Kanade, 2000; Roth et al., 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.
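The “Integral Image” contribution above can be illustrated directly: after one pass over the image, the sum of any rectangle costs only four array lookups, which is what makes evaluating thousands of Haar-like features per detection window affordable. A minimal sketch in plain Python (function names are illustrative):

```python
# Build an integral image: ii[y][x] holds the sum of all pixels strictly
# above and to the left of (x, y). The extra row/column of zeros avoids
# boundary checks.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h,
    computed with four lookups regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

ii = integral_image([[1, 2], [3, 4]])
```

A Haar-like feature is then just a signed combination of two or three such `rect_sum` calls.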


12,467 citations


Proceedings Article
Paul A. Viola, Michael Jones
07 Jul 2001
TL;DR: A new image representation called the “Integral Image” is introduced which allows the features used by the detector to be computed very quickly and a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions.




10,155 citations


Proceedings Article
07 Mar 2016
TL;DR: A deep neural network architecture to address the FER problem across multiple well-known standard face datasets is proposed, comparable to or better than the state-of-the-art methods and better than traditional convolutional neural networks in both accuracy and training time.


Abstract: Automated Facial Expression Recognition (FER) has remained a challenging and interesting problem in computer vision. Despite efforts made in developing various methods for FER, existing approaches lack generalizability when applied to unseen images or those captured in the wild (i.e., the results are not significant). Most of the existing approaches are based on engineered features (e.g. HOG, LBPH, and Gabor), where the classifier's hyper-parameters are tuned to give the best recognition accuracies across a single database or a small collection of similar databases. This paper proposes a deep neural network architecture to address the FER problem across multiple well-known standard face datasets. Specifically, our network consists of two convolutional layers, each followed by max pooling, and then four Inception layers. The network is a single-component architecture that takes registered facial images as input and classifies them into one of the six basic expressions or neutral. We conducted comprehensive experiments on seven publicly available facial expression databases, viz. MultiPIE, MMI, CK+, DISFA, FERA, SFEW, and FER2013. The results of our proposed architecture are comparable to or better than the state-of-the-art methods and better than traditional convolutional neural networks in both accuracy and training time.
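As a rough illustration of the topology described (two conv + max-pool stages feeding a stack of Inception layers), the sketch below only propagates spatial sizes through the early stages. The 48x48 input, 3x3 kernels, and padding are assumptions chosen for illustration, not values taken from the paper:

```python
# Spatial-size propagation through a conv/pool stack (illustrative sizes).
def conv2d_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a square convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    """Output spatial size of a square max-pooling layer."""
    return (size - kernel) // stride + 1

size = 48                       # assume 48x48 registered face crops
for _ in range(2):              # two conv (3x3, pad 1) + max-pool stages
    size = conv2d_out(size, 3, pad=1)   # padding preserves size
    size = maxpool_out(size)            # pooling halves it
# In this sketch the Inception layers are padded to preserve spatial size,
# so `size` is the feature-map side entering the Inception stack.
print(size)
```

Each max-pool halves the side length, so the assumed 48x48 input reaches the Inception stack at 12x12.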


572 citations


Journal Article
TL;DR: A simple solution for facial expression recognition that uses a combination of a Convolutional Neural Network and specific image pre-processing steps to extract only expression-specific features from a face image, and explores the presentation order of the samples during training.


Abstract: Facial expression recognition has been an active research area for the past 10 years, with growing application areas including avatar animation, neuromarketing and sociable robots. The recognition of facial expressions is not an easy problem for machine learning methods, since people can vary significantly in the way they show their expressions. Even images of the same person in the same facial expression can vary in brightness, background and pose, and these variations are emphasized when considering different subjects (because of variations in shape, ethnicity, among others). Although facial expression recognition is widely studied in the literature, few works perform a fair evaluation that avoids mixing subjects between training and testing. Hence, facial expression recognition is still a challenging problem in computer vision. In this work, we propose a simple solution for facial expression recognition that uses a combination of a Convolutional Neural Network and specific image pre-processing steps. Convolutional Neural Networks achieve better accuracy with big data; however, there are no publicly available datasets with sufficient data for facial expression recognition with deep architectures. Therefore, to tackle the problem, we apply some pre-processing techniques to extract only expression-specific features from a face image and explore the presentation order of the samples during training. The experiments employed to evaluate our technique were carried out using three widely used public databases (CK+, JAFFE and BU-3DFE). A study of the impact of each image pre-processing operation on the accuracy rate is presented. The proposed method achieves competitive results when compared with other facial expression recognition methods (96.76% accuracy on the CK+ database), is fast to train, and allows for real-time facial expression recognition with standard computers.
Highlights:
- A CNN-based approach for facial expression recognition.
- A set of pre-processing steps allowing for a simpler CNN architecture.
- A study of the impact of each pre-processing step on accuracy.
- A study on lowering the impact of the sample presentation order during training.
- High facial expression recognition accuracy (96.76%) with real-time evaluation.
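A hedged sketch of what such expression-specific pre-processing might look like: the crop coordinates and min-max intensity normalization below are illustrative stand-ins, not the paper's exact pipeline.

```python
# Illustrative pre-processing: crop the face region, then rescale
# intensities to [0, 1] per image to reduce brightness variation
# before the crop is fed to a CNN.
def crop(img, x, y, w, h):
    """Cut a w x h window with top-left corner (x, y) out of a 2-D list."""
    return [row[x:x + w] for row in img[y:y + h]]

def normalize(img):
    """Min-max scale pixel intensities of one image to [0, 1]."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    rng = (hi - lo) or 1          # guard against constant images
    return [[(p - lo) / rng for p in row] for row in img]

# Toy 3x3 "image": keep the right two columns, then normalize.
face = normalize(crop([[0, 50, 100], [0, 100, 200], [0, 150, 250]], 1, 0, 2, 3))
```

Real pipelines of this kind typically add face registration (eye alignment) before the crop, which is what makes the normalized patches comparable across subjects.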


469 citations


Proceedings Article
Junge Zhang, Kaiqi Huang, Yinan Yu, Tieniu Tan
20 Jun 2011
TL;DR: This paper proposes a boosted Local Structured HOG-LBP based object detector to capture the object's local structure, and develop the descriptors from shape and texture information, respectively, and presents a boosted feature selection and fusion scheme for part based object detectors.


Abstract: Object localization is a challenging problem due to variations in an object's structure and illumination. Although existing part-based models have achieved impressive progress in the past several years, their improvement is still limited by low-level feature representation. Therefore, this paper mainly studies the description of object structure at both the feature level and the topology level. Following the bottom-up paradigm, we propose a boosted Local Structured HOG-LBP based object detector. Firstly, at the feature level, we propose a Local Structured Descriptor to capture the object's local structure, and develop the descriptors from shape and texture information, respectively. Secondly, at the topology level, we present a boosted feature selection and fusion scheme for part-based object detectors. All experiments are conducted on the challenging PASCAL VOC2007 dataset. Experimental results show that our method achieves state-of-the-art performance.
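The LBP half of the HOG-LBP descriptor can be illustrated with the basic 3x3 LBP code (the standard formulation, not the paper's structured variant): each pixel is encoded as an 8-bit number by thresholding its eight neighbours against the centre value, giving a compact, illumination-robust texture code.

```python
# Basic local binary pattern for the centre pixel of a 3x3 patch.
def lbp_code(patch):
    """8-bit LBP code: bit i is set when neighbour i >= centre.
    Neighbours are read clockwise starting at the top-left."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << i
    return code
```

A texture descriptor then histograms these codes over a cell, and HOG-LBP concatenates such histograms with gradient-orientation histograms.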


165 citations




Network Information
Related Papers (5)
P.M. Blom, S. Bakkes +1 more (01 Jan 2014)
Mohammad Obaid, Charles Han +1 more (03 Dec 2008)
Tim Tijs, Dirk Brokken +1 more (20 Oct 2008)