
Semi-supervised Facial Expression Recognition Algorithm on The Condition of Multi-pose

01 Jan 2013-Vol. 4, pp 138-146
TL;DR: In the proposed method, transfer learning has been brought into semi-supervised learning to solve the problem of multi-pose facial expression recognition.
Abstract: A major challenge in pattern recognition is the labeling of large numbers of samples. This problem has been addressed by extending supervised learning to semi-supervised learning, which has thus become one of the most important methods in facial expression recognition research. Frontal and un-occluded face images are well recognized by traditional facial expression recognition based on semi-supervised learning. However, pose variations caused by body movement may decrease the facial expression recognition rate. A novel facial expression recognition algorithm based on semi-supervised learning is proposed to improve robustness in multi-pose facial expression recognition. In the proposed method, transfer learning is brought into semi-supervised learning to solve the problem of multi-pose facial expression recognition. Experiments show that the method is competent for semi-supervised facial expression recognition under multi-pose conditions, with recognition rates of 82.68% and 87.71% on the RaFD and BHU databases, respectively.
Citations
Journal ArticleDOI
06 Dec 2013-Sensors
TL;DR: This work develops a smart LED lighting system that is remotely controlled by Android apps on handheld devices via WiFi transmission; a self-learning mode can likewise be enabled from a single handheld device.
Abstract: This work aims to develop a smart LED lighting system which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS-232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. Low variation and high stability in data fusion are experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on a multimeter can be displayed thereon via the server. The proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled, by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
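The abstract names a self-adaptive weighted data fusion algorithm but does not give its rule. A common self-adaptive scheme (an assumption for illustration, not taken from the paper) weights each sensor inversely to its sample variance, so steadier sensors dominate the fused reading:

```python
import statistics

def adaptive_weighted_fusion(sensor_readings):
    """Fuse repeated readings from several sensors.

    Each sensor's weight adapts to its observed sample variance:
    the steadier the sensor, the larger its share of the fused value.
    Returns the fused estimate and the normalized weights.
    """
    variances = [statistics.variance(r) for r in sensor_readings]
    inv = [1.0 / v for v in variances]          # inverse-variance weights
    total = sum(inv)
    weights = [w / total for w in inv]          # normalize to sum to 1
    means = [statistics.fmean(r) for r in sensor_readings]
    fused = sum(w * m for w, m in zip(weights, means))
    return fused, weights
```

Inverse-variance weighting minimizes the variance of the fused estimate when sensor errors are independent, which is consistent with the low-variation, high-stability behavior the paper reports.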

38 citations

Proceedings ArticleDOI
04 May 2015
TL;DR: This paper transforms facial expression recognition under large head rotations into a missing-data classification problem, and the resulting method accurately recognizes facial expressions over a much larger pan and tilt range than state-of-the-art approaches.
Abstract: Most facial expression recognition methods assume frontal or near-frontal head poses, and their accuracy usually decreases sharply when tested on non-frontal poses. Training a 2D pose-specific classifier for a large number of discrete poses can be time consuming due to the need for many samples per pose. On the other hand, 2D and 3D view-point independent approaches are usually not robust to very large head rotations. In this paper we transform the problem of facial expression recognition under large head rotations into a missing data classification problem. 3D data of the face are projected onto a head pose invariant 2D representation, and in this projection the only difference between poses is due to self-occlusions with respect to the depth sensor's position. Once projected, the visible part of the face is split into overlapping patches which are input to independent local classifiers, and a voting scheme gives the final output. Experimental results on common benchmarks show that our method can accurately recognize facial expressions in a much larger pan and tilt range than state-of-the-art approaches, obtaining comparable performance to the best existing systems working only in narrower ranges.
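The patch-voting stage described above can be sketched as follows. The projection, patch splitting, and training of the per-patch classifiers are elided; `patches` and `classifiers` are hypothetical placeholders, with `None` marking a self-occluded patch that abstains from the vote:

```python
from collections import Counter

def patch_vote(patches, classifiers):
    """Majority vote over independent local patch classifiers.

    patches: dict mapping patch id -> feature vector, or None when the
    patch is self-occluded with respect to the depth sensor.
    classifiers: dict mapping patch id -> callable returning a label.
    """
    votes = []
    for pid, feat in patches.items():
        if feat is None:          # occluded patch: abstain from voting
            continue
        votes.append(classifiers[pid](feat))
    return Counter(votes).most_common(1)[0][0]
```

Because occluded patches simply abstain, the same trained classifiers handle any head pose: only the subset of visible patches changes.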

35 citations


Cites methods from "Semi-supervised Facial Expression R..."

  • ...In [13] Jiang and Jia propose a semi-supervised approach based on Transfer AdaBoost [14]...


Journal ArticleDOI
TL;DR: Deep Decision Tree Transfer Boosting is proposed, whose weights are learned and assigned to base learners by minimizing the data-dependent learning bounds across both source and target domains in terms of the Rademacher complexities.
Abstract: Instance transfer approaches consider source and target data together during the training process, and borrow examples from the source domain to augment the training data when there is limited or no labeled data in the target domain. Among them, boosting-based transfer learning methods (e.g., TrAdaBoost) are the most widely used. When dealing with more complex data, we may consider more complex hypotheses (e.g., a decision tree with deeper layers). However, with a fixed, high complexity of the hypotheses, TrAdaBoost and its variants may face overfitting problems. Even worse, in the transfer learning scenario, a decision tree with deep layers may overfit the differently distributed data in the source domain. In this paper, we propose a new instance transfer learning method, Deep Decision Tree Transfer Boosting (DTrBoost), whose weights are learned and assigned to base learners by minimizing the data-dependent learning bounds across both source and target domains in terms of the Rademacher complexities. This guarantees that we can learn decision trees with deep layers without overfitting. Theoretical proofs and experimental results indicate the effectiveness of the proposed method.

24 citations

Journal ArticleDOI
TL;DR: This paper proposes a new representation-based classification method that effectively and simultaneously reduces noise in the test and training samples and then exploits them to determine the label of the test sample.

15 citations


Cites background from "Semi-supervised Facial Expression R..."

  • ...For face recognition, besides noise from the acquisition stage, the variation of the facial pose and expression [44,45] of the same face can also be viewed as generalized noise....


Proceedings ArticleDOI
01 May 2016
TL;DR: Two-class emotion detection and multi-class facial expression classification using a Support Vector Machine (SVM) significantly outperform classical LBP-based algorithms.
Abstract: In this paper, two-class emotion detection and multi-class facial expression classification using a Support Vector Machine (SVM) are presented. Facial feature vectors in dual form are obtained from the Local Binary Pattern (LBP) histogram by tracing the bins in clockwise and anticlockwise directions. The histogram feature descriptors are calculated from LBP images in dual form and then concatenated to obtain features of the complete face image. The proposed algorithm is tested on the standard Japanese Female Facial Expression database and the Taiwanese Facial Expression database, and results are verified using a locally developed Indian face database of students. The proposed algorithm significantly outperforms classical LBP-based algorithms.
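A minimal sketch of the dual-form LBP feature: a standard 8-neighbour LBP histogram is computed twice, tracing the neighbours clockwise and anticlockwise, and the two histograms are concatenated. The neighbour ordering and bin layout here are assumptions; the paper's exact tracing may differ.

```python
import numpy as np

# 8 neighbours of a pixel, traced clockwise starting top-left
OFFSETS_CW = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(img, clockwise=True):
    """Normalized 256-bin histogram of 8-neighbour LBP codes."""
    offsets = OFFSETS_CW if clockwise else OFFSETS_CW[::-1]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def dual_lbp_feature(img):
    """Concatenate clockwise and anticlockwise LBP histograms."""
    return np.concatenate([lbp_histogram(img, clockwise=True),
                           lbp_histogram(img, clockwise=False)])
```

For a constant image every neighbour comparison succeeds, so all codes equal 255 in both tracing directions; real face images yield distinctive histograms that the SVM then separates.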

11 citations


Cites methods from "Semi-supervised Facial Expression R..."

  • ...AdaBoost methods provide a simple and effective approach for stage wise learning of a nonlinear classification function[10]....


References
Journal ArticleDOI
TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.

18,616 citations


"Semi-supervised Facial Expression R..." refers background in this paper

  • ...Transfer learning [10] theory may offer a way to improve the semi-supervised learning....


Journal ArticleDOI
01 Aug 1997
TL;DR: The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
Abstract: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
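The core of the first part of the paper is the multiplicative weight update: each option's weight is multiplied by beta raised to its loss, so consistently poor options decay exponentially. A minimal sketch:

```python
def hedge_weights(losses, beta=0.9):
    """Hedge-style multiplicative-weight update over T rounds.

    losses[t][i] is option i's loss in [0, 1] at round t; each round
    multiplies option i's weight by beta ** loss, and the final
    weights are normalized into a probability distribution.
    """
    n = len(losses[0])
    w = [1.0] * n                 # start with uniform weights
    for round_losses in losses:
        w = [wi * beta ** li for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

The boosting algorithm derived in the second part of the paper applies the same update to training examples, with the roles of the learner and the environment swapped.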

15,813 citations

Journal ArticleDOI
TL;DR: The present article presents the freely available Radboud Faces Database, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design, containing both Caucasian adult and children images.
Abstract: Many research fields concerned with the processing of information contained in human faces would benefit from face stimulus sets in which specific facial characteristics are systematically varied while other important picture characteristics are kept constant. Specifically, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design would be highly useful in many research domains. Furthermore, these stimuli should be standardised in several important, technical aspects. The present article presents the freely available Radboud Faces Database offering such a stimulus set, containing both Caucasian adult and children images. This face database is described both procedurally and in terms of content, and a validation study concerning its most important characteristics is presented. In the validation study, all frontal images were rated with respect to the shown facial expression, intensity of expression, clarity of expression, genuineness of expression, attractiveness, and valence. The results show very high recognition of the intended facial expressions.

2,041 citations

16 Sep 2002
TL;DR: A simple iterative algorithm, label propagation, is proposed to propagate labels through the dataset along high-density areas defined by unlabeled data; the algorithm's solution and its connections to several other algorithms are analyzed.
Abstract: We investigate the use of unlabeled data to help labeled data in classification. We propose a simple iterative algorithm, label propagation, to propagate labels through the dataset along high-density areas defined by unlabeled data. We analyze the algorithm, show its solution, and its connection to several other algorithms. We also show how to learn parameters by a minimum spanning tree heuristic and entropy minimization, and the algorithm's ability to perform feature selection. Experimental results are promising.
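The iterative algorithm can be sketched as follows: labels diffuse through a row-normalized affinity graph, and the labeled points are clamped back to their given labels after every step. The fully connected RBF graph and the kernel width `sigma` are illustrative choices:

```python
import numpy as np

def label_propagation(X, y, n_classes, sigma=1.0, n_iter=100):
    """Iterative label propagation (a sketch).

    X: (n, d) array of points; y: length-n integer labels,
    with -1 marking unlabeled points.
    """
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))        # RBF affinities
    T = W / W.sum(axis=1, keepdims=True)      # row-normalized transitions
    Y = np.zeros((n, n_classes))
    labeled = y >= 0
    Y[labeled, y[labeled]] = 1.0
    for _ in range(n_iter):
        Y = T @ Y                             # diffuse labels one step
        Y[labeled] = 0.0
        Y[labeled, y[labeled]] = 1.0          # clamp the labeled rows
    return Y.argmax(axis=1)
```

Because the affinities decay with distance, labels flow along high-density regions of unlabeled points, which is exactly the behavior the abstract describes.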

1,663 citations


"Semi-supervised Facial Expression R..." refers background in this paper

  • ...Label Propagation is a classical Semi-Supervised Learning method....


  • ...M1, Label Propagation [15], ASSEMBLE [16], RegBoost [17]) and MP-AdaBoost to identify six facial expressions (angry, disgust, sad, happy, fear and surprise) in the RaFD and BHU databases....


Proceedings ArticleDOI
20 Jun 2007
TL;DR: In this paper, the authors propose a transfer learning framework called TrAdaBoost, which allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data.
Abstract: Traditional machine learning makes a basic assumption: the training and test data should be under the same distribution. However, in many cases, this identical-distribution assumption does not hold. The assumption might be violated when a task from one new domain comes, while there are only labeled data from a similar old domain. Labeling the new data can be costly and it would also be a waste to throw away all the old data. In this paper, we present a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms (Freund & Schapire, 1997). TrAdaBoost allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data. We show that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data are not sufficient to train a model alone. We show that TrAdaBoost allows knowledge to be effectively transferred from the old data to the new. The effectiveness of our algorithm is analyzed theoretically and empirically to show that our iterative algorithm can converge well to an accurate model.
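The heart of TrAdaBoost is an asymmetric weight update: misclassified target-domain instances are up-weighted as in ordinary AdaBoost, while misclassified source-domain instances are down-weighted, since they look unrepresentative of the target distribution. A sketch of one round (helper names are illustrative, not from the paper's code):

```python
import math

def tradaboost_round(w_src, w_tgt, miss_src, miss_tgt, n_rounds):
    """One TrAdaBoost weight update.

    w_src, w_tgt: current instance weights for source/target domains.
    miss_src, miss_tgt: booleans, True where the weak learner erred.
    """
    n_src = len(w_src)
    # weighted error is measured on the target domain only
    eps = sum(w for w, m in zip(w_tgt, miss_tgt) if m) / sum(w_tgt)
    eps = min(max(eps, 1e-10), 0.499)       # keep the ratio well defined
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n_src) / n_rounds))
    beta_tgt = eps / (1.0 - eps)
    # misclassified source instances shrink; misclassified target grow
    new_src = [w * (beta_src if m else 1.0) for w, m in zip(w_src, miss_src)]
    new_tgt = [w / beta_tgt if m else w for w, m in zip(w_tgt, miss_tgt)]
    return new_src, new_tgt
```

Over rounds, persistently misclassified source instances fade out of the training set, leaving only the old data that actually helps the new task.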

1,509 citations