Open Access Journal ArticleDOI

Robust Face Recognition via Sparse Representation

TLDR
This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
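Below is a minimal, hedged sketch of the sparse-representation-based classification (SRC) idea summarized in the abstract: the test image is coded as a sparse linear combination of all training images via ℓ1-minimization, and the class with the smallest reconstruction residual wins. The Lasso-based solver, the `alpha` value, and the data layout are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of sparse-representation-based classification (SRC).
# Assumptions: training images are flattened into columns of A (one class
# label per column in `labels`), and the l1-minimization step is
# approximated with a penalized Lasso solver rather than the exact
# constrained program from the paper.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """Classify test vector y against dictionary A (n_pixels x n_train)."""
    labels = np.asarray(labels)
    # Normalize columns so coefficients are comparable across samples.
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
    y = y / (np.linalg.norm(y) + 1e-12)

    # Sparse coding: x_hat ~ argmin ||x||_1  s.t.  A x = y (penalized form).
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A, y)
    x_hat = lasso.coef_

    # Classify by the smallest class-wise reconstruction residual.
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x_hat, 0.0)  # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)
```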



Citations
Posted Content

A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems

TL;DR: A General Iterative Shrinkage and Thresholding (GIST) algorithm is proposed to solve the non-convex regularized optimization problem for a large class of non-convex penalties, and a detailed convergence analysis of the GIST algorithm is presented.
Proceedings ArticleDOI

A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

TL;DR: By extending the popular soft-thresholding operator, a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding is proposed, which is theoretically more solid and can achieve more accurate solutions.
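As a rough illustration of the shrinkage idea behind such ℓp-norm solvers, the sketch below approximates the scalar proximal problem min_x 0.5(x − y)² + λ|x|^p (0 < p < 1) with a fixed-point iteration and a fallback to zero; the iteration scheme, iteration count, and names are assumptions for illustration, not the exact GISA operator.

```python
# Sketch of a generalized shrinkage operator for lp-norm (0 < p < 1) sparse
# coding: approximately solve min_x 0.5*(x - y)^2 + lam*|x|^p per coordinate.
# The fixed-point iteration and the fallback-to-zero test are an
# illustrative approximation, not the exact operator from the paper.
import numpy as np

def generalized_shrinkage(y, lam, p, iters=50):
    y = np.asarray(y, dtype=float)
    x = np.abs(y).copy()
    for _ in range(iters):
        # Stationarity of 0.5*(x - |y|)^2 + lam*x^p for x > 0:
        #   x = |y| - lam * p * x^(p - 1)
        x = np.maximum(np.abs(y) - lam * p * np.power(np.maximum(x, 1e-12), p - 1), 0.0)
    x_signed = np.sign(y) * x
    # Keep the shrunken value only if it beats the trivial solution x = 0.
    obj_shrunk = 0.5 * (x_signed - y) ** 2 + lam * np.power(np.abs(x_signed), p)
    obj_zero = 0.5 * y ** 2
    return np.where(obj_shrunk < obj_zero, x_signed, 0.0)
```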
Journal ArticleDOI

Inferring Biological Networks by Sparse Identification of Nonlinear Dynamics

TL;DR: In this paper, the authors propose an alternative data-driven method to infer networked nonlinear dynamical systems by using sparsity-promoting optimization to select a subset of nonlinear interactions representing dynamics on a network.
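A hedged sketch of that sparsity-promoting identification idea: build a library of candidate nonlinear terms and select a sparse subset by sequentially thresholded least squares. The library terms, threshold, and data shapes below are illustrative assumptions rather than the authors' exact pipeline.

```python
# Sketch of sparsity-promoting identification of nonlinear dynamics:
# fit dX/dt ~ Theta(X) @ Xi with a sparse coefficient matrix Xi using
# sequentially thresholded least squares. Library terms and the threshold
# are illustrative choices.
import numpy as np

def build_library(X):
    """Candidate terms: constant, linear, and pairwise quadratic interactions."""
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    cols += [(X[:, i] * X[:, j])[:, None] for i in range(d) for j in range(i, d)]
    return np.hstack(cols)

def sparse_dynamics(X, dXdt, threshold=0.1, iters=10):
    Theta = build_library(X)
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0            # prune small coefficients
        for k in range(dXdt.shape[1]):              # refit each state equation
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)
    return Xi
```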
Journal ArticleDOI

Texture Classification from Random Features

TL;DR: The proposed unconventional random feature extraction is simple, yet by leveraging the sparse nature of texture images it outperforms traditional feature extraction methods that involve careful design and complex steps, yielding significant improvements in classification accuracy and reductions in feature dimensionality.
Journal ArticleDOI

Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition

TL;DR: In this paper, a discriminant correlation analysis (DCA) is proposed for feature-level fusion; it maximizes the pairwise correlations across the two feature sets while eliminating between-class correlations, restricting the correlations to be within the classes.
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
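For concreteness, the snippet below illustrates the lasso in its penalized form using scikit-learn (equivalent to the constrained formulation for a suitable regularization weight); the synthetic data and `alpha` value are illustrative.

```python
# Illustration of the lasso: least squares with an l1 penalty that drives
# many coefficients exactly to zero. The data and alpha are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
true_coef = np.zeros(50)
true_coef[:5] = [3.0, -2.0, 1.5, 0.5, 4.0]      # only a few active predictors
y = X @ true_coef + 0.1 * rng.standard_normal(200)

# Penalized form: minimize ||y - Xb||^2 / (2n) + alpha * ||b||_1
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.sum(model.coef_ != 0))
```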
Book

The Nature of Statistical Learning Theory

TL;DR: Topics covered include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book

Convex Optimization

TL;DR: This book gives a comprehensive introduction to convex optimization, with a focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Journal ArticleDOI

Eigenfaces for recognition

TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
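A compact sketch of the eigenfaces idea: project mean-subtracted face images onto the top principal components and recognize by nearest neighbor in that subspace. The number of components and the data layout are illustrative assumptions.

```python
# Sketch of eigenfaces: PCA on flattened face images, then nearest-neighbor
# matching in the low-dimensional eigenface subspace. Shapes and the number
# of components are illustrative.
import numpy as np

def fit_eigenfaces(faces, n_components=20):
    """faces: (n_images, n_pixels) array of flattened training images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are the principal directions ("eigenfaces").
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T            # training projections
    return mean, eigenfaces, weights

def recognize(probe, mean, eigenfaces, weights, labels):
    w = (probe - mean) @ eigenfaces.T            # project probe image
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[np.argmin(distances)]
```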
Journal ArticleDOI

Eigenfaces vs. Fisherfaces: recognition using class specific linear projection

TL;DR: A face recognition algorithm that is insensitive to large variation in lighting direction and facial expression is developed; based on Fisher's linear discriminant, it produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions.
Trending Questions (1)
What is the minimum number of images required for a facial recognition model to sufficiently learn features?

The paper does not provide a specific minimum number of images required for a facial recognition model to sufficiently learn features.