Robust Face Recognition via Sparse Representation
TLDR
This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
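The classification scheme the abstract describes — stack training samples of all classes as columns of a dictionary, express a test sample as a sparse linear combination via ℓ1-minimization, then assign it to the class whose columns best reconstruct it — can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the 4-dimensional toy "features", the class names, and the simple ISTA loop standing in for a generic ℓ1 solver are all assumptions made for the example.

```python
# Sketch of Sparse Representation-based Classification (SRC).
# Training samples are the columns of A; a test sample y is expressed as a
# sparse combination x by solving an l1-regularized least-squares problem
# (here with a basic iterative soft-thresholding loop), and y is assigned
# to the class whose columns give the smallest reconstruction residual.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft_threshold(v, t):
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

def ista(A, y, lam=0.01, step=0.1, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]  # residual y - Ax
        grad = matvec(At, r)
        x = soft_threshold([xi + step * gi for xi, gi in zip(x, grad)], step * lam)
    return x

def src_classify(A, labels, y, **kw):
    """Return the label whose training columns best reconstruct y."""
    x = ista(A, y, **kw)
    best, best_err = None, float("inf")
    for c in set(labels):
        # keep only the coefficients belonging to class c
        xc = [xi if li == c else 0.0 for xi, li in zip(x, labels)]
        err = sum((yi - ai) ** 2 for yi, ai in zip(y, matvec(A, xc)))
        if err < best_err:
            best, best_err = c, err
    return best

# Two classes, two training columns each (columns of A are samples).
A = [[1.0, 0.9, 0.0, 0.1],
     [1.0, 1.1, 0.0, 0.0],
     [0.0, 0.1, 1.0, 0.9],
     [0.0, 0.0, 1.0, 1.1]]
labels = ["alice", "alice", "bob", "bob"]
y = [0.95, 1.05, 0.05, 0.0]  # close to the "alice" columns
print(src_classify(A, labels, y))
```

The per-class residual test is what distinguishes SRC from simply thresholding the largest coefficient: even when correlated columns split the weight, the residual computed from each class's coefficients as a group remains discriminative.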
Citations
Journal ArticleDOI
Heterogeneous Feature Selection With Multi-Modal Deep Neural Networks and Sparse Group LASSO
Lei Zhao, Qinghua Hu, Wenwu Wang +2 more
TL;DR: A framework composed of two modules, multi-modal deep neural networks and feature selection with sparse group LASSO, is proposed; experiments show that the approach is effective in selecting relevant feature groups and achieves competitive classification performance compared with several recent baseline methods.
Journal ArticleDOI
Sorted random projections for robust rotation-invariant texture classification
TL;DR: This paper presents a simple, novel, yet very powerful approach for robust rotation-invariant texture classification based on random projection, with significant improvements in classification accuracy, including what the authors believe to be the best reported results for the Brodatz, UMD, and KTH-TIPS datasets.
Proceedings Article
Robust Regression via Hard Thresholding
TL;DR: A simple hard-thresholding algorithm called TORRENT is studied which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables adversarially, i.e., both the support and the entries of b are selected after observing X and w*.
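The alternating idea behind hard-thresholding robust regression can be illustrated in one dimension: fit least squares, then keep only the fraction of points best explained by the current model, and refit. This is a toy sketch in the spirit of TORRENT, not the paper's actual algorithm (which handles the multivariate case with recovery guarantees); the data, the model y = w*x, and the 80% clean fraction are illustrative assumptions.

```python
# Hard-thresholding robust regression, 1-D sketch: alternate between a
# least-squares fit and re-selecting the k points with smallest residuals,
# so adversarially corrupted responses are pushed out of the active set.

def fit_ls(points):
    """Closed-form least squares for the model y = w*x (no intercept)."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

def torrent_1d(points, clean_frac=0.8, iters=20):
    data = list(points)
    k = int(clean_frac * len(data))
    w = fit_ls(data)  # initial fit on all points, corruptions included
    for _ in range(iters):
        # keep the k points best explained by the current model, refit
        data.sort(key=lambda p: abs(p[1] - w * p[0]))
        w = fit_ls(data[:k])
    return w

# True w* = 3; two of ten responses are corrupted arbitrarily.
pts = [(x, 3.0 * x) for x in range(1, 11)]
pts[2] = (3, 100.0)   # corrupted response
pts[7] = (8, -50.0)   # corrupted response
print(torrent_1d(pts))  # recovers w = 3.0 despite the corruptions
```

Once the two corrupted points fall outside the k smallest residuals, the refit lands exactly on the clean model and the active set stops changing.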
Proceedings ArticleDOI
Heterogeneous feature machines for visual recognition
TL;DR: This paper proposes a machinery called the Heterogeneous Feature Machine (HFM), which builds a kernel logistic regression model based on similarities that combine different features and distance metrics to effectively solve visual recognition tasks in need of multiple types of features.
Journal ArticleDOI
Structural damage identification via a combination of blind feature extraction and sparse representation classification
Yongchao Yang, Satish Nagarajaiah +1 more
TL;DR: A two-step CP–SR damage identification method is proposed that, by formulating the tasks of locating damage and assessing its extent as a two-stage procedure and exploiting the robustness of the sparse representation (SR) framework, alleviates the training process required by traditional pattern-recognition-based methods and allows the training set to be of small size.
References
Journal ArticleDOI
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
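The lasso objective summarized above is usually solved in its equivalent penalized form, min 0.5*||y − Xb||² + λ||b||₁, for which coordinate descent with soft-thresholding is a standard approach. The sketch below is a minimal illustration under assumed toy data and an illustrative λ; it is not the original paper's algorithm (which used the constrained formulation) but a common equivalent solver.

```python
# Minimal coordinate-descent lasso for the penalized form
# min_b 0.5*||y - Xb||^2 + lam*||b||_1, cycling soft-threshold updates
# over the coefficients until they stabilize.

def soft(z, t):
    return max(abs(z) - t, 0.0) * (1 if z >= 0 else -1)

def lasso_cd(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    b = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j's contribution
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            b[j] = soft(rho, lam) / col_sq[j]  # soft-thresholded update
    return b

# y depends only on the first feature; the l1 penalty zeroes out the second.
X = [[1.0, 0.2], [2.0, -0.1], [3.0, 0.4], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso_cd(X, y, lam=0.5)
print(b)  # b[0] near 2 (slightly shrunk by the penalty), b[1] exactly 0
```

Driving irrelevant coefficients exactly to zero, rather than merely shrinking them, is what makes the lasso a variable-selection method as well as a shrinkage method.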
Book
The Nature of Statistical Learning Theory
TL;DR: The book covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe +1 more
TL;DR: A comprehensive introduction to convex optimization, with a focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Journal ArticleDOI
Eigenfaces for recognition
Matthew Turk, Alex Pentland +1 more
TL;DR: A near-real-time computer system is described that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Journal ArticleDOI
Eigenfaces vs. Fisherfaces: recognition using class specific linear projection
TL;DR: A face recognition algorithm is developed that is insensitive to large variations in lighting direction and facial expression; based on Fisher's linear discriminant, it produces well-separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expression.