Open Access · Journal ArticleDOI

Robust Face Recognition via Sparse Representation

TLDR
This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
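As a concrete illustration of the classification rule described above, the minimal Python sketch below (not the authors' implementation) solves the ℓ1-minimization with cvxpy and assigns a test image to the class whose training images best reconstruct it. The dictionary matrix `A` (unit-norm training images as columns), the label array, and the noise tolerance `eps` are assumed inputs.

```python
# Minimal sketch of sparse-representation-based classification (SRC).
# Assumed inputs: A is a d x n matrix whose unit-norm columns are training
# images grouped by class, `labels` (length-n array) gives each column's class,
# y is a d-dimensional test image, and `eps` is an assumed noise tolerance.
import numpy as np
import cvxpy as cp

def src_classify(A, labels, y, eps=0.05):
    n = A.shape[1]
    x = cp.Variable(n)
    # Sparse representation of the test image via l1-minimization.
    problem = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                         [cp.norm(A @ x - y, 2) <= eps])
    problem.solve()
    x_hat = x.value
    # Classify by the smallest class-wise reconstruction residual.
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x_hat, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)
```

For occlusion and corruption, the paper extends the dictionary to [A, I] so that sparse pixel-level errors are absorbed by the identity columns; the same residual rule is then applied after subtracting the estimated error.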


Citations
Journal ArticleDOI

Curvelet-wavelet regularized split bregman iteration for compressed sensing

TL;DR: This work proposes reconstruction models with weighted sparsity constraints in two different frames, for images that can be sparsely approximated in expansions over suitable frames such as wavelets, curvelets, and wave atoms.
Journal ArticleDOI

Text detection in images using sparse representation with discriminative dictionaries

TL;DR: A classification-based algorithm for text detection is proposed that uses a sparse representation with discriminative dictionaries and can effectively detect text of various sizes, fonts, and colors in images and videos.
Journal ArticleDOI

Corrupted Sensing: Novel Guarantees for Separating Structured Signals

TL;DR: In this paper, a convex programming approach is used to disentangle signal and corruption, and conditions for exact signal recovery from structured corruption and stable signal recovery with added unstructured noise are provided.
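As a rough illustration of that convex-programming idea, the sketch below jointly recovers a sparse signal and a sparse corruption by minimizing a weighted sum of ℓ1 norms subject to the measurement constraint. The dimensions, the Gaussian sensing matrix `Phi`, and the weight `lam` are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: separating a sparse signal from sparse corruption
# via convex programming (an l1 + l1 demixing program).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 80, 200
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed Gaussian sensing matrix

# Ground truth: sparse signal x0 and sparse corruption v0.
x0 = np.zeros(n); x0[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
v0 = np.zeros(m); v0[rng.choice(m, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ x0 + v0

x, v = cp.Variable(n), cp.Variable(m)
lam = 1.0                                        # illustrative trade-off weight
problem = cp.Problem(cp.Minimize(cp.norm(x, 1) + lam * cp.norm(v, 1)),
                     [Phi @ x + v == y])
problem.solve()
print(np.linalg.norm(x.value - x0), np.linalg.norm(v.value - v0))
```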
Journal ArticleDOI

Constrained Multi-View Video Face Clustering

TL;DR: This paper proposes a constrained multi-view video face clustering method under a unified graph-based model that enforces the pairwise constraints throughout the clustering pipeline, in both the sparse subspace representation and the spectral clustering steps.
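One simple way to see how pairwise constraints can enter a graph-based clustering model is to edit the affinity matrix before spectral clustering: must-link pairs get maximal affinity and cannot-link pairs get zero. The sketch below shows only that generic idea, not the paper's unified multi-view model; the affinity matrix `W` and the constraint lists are assumed inputs.

```python
# Generic sketch: injecting pairwise constraints into spectral clustering
# by editing the affinity matrix (not the paper's unified multi-view model).
from sklearn.cluster import SpectralClustering

def constrained_spectral(W, must_link, cannot_link, n_clusters):
    W = W.copy()
    for i, j in must_link:          # force high affinity for must-link pairs
        W[i, j] = W[j, i] = 1.0
    for i, j in cannot_link:        # remove affinity for cannot-link pairs
        W[i, j] = W[j, i] = 0.0
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(W)
```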
Journal ArticleDOI

Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications

TL;DR: The double nuclear norm and Frobenius/nuclear hybrid norm penalties are defined, and it is proved that they are in essence the Schatten-1/2 and Schatten-2/3 quasi-norms, respectively, which leads to much more tractable and scalable Lipschitz optimization problems.
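For reference, the convex robust PCA baseline that such bilinear factorizations aim to make more scalable decomposes a data matrix into a low-rank part plus a sparse part via nuclear-norm and ℓ1 minimization. The snippet below is a sketch of that standard baseline only, not the paper's bilinear algorithm; the weight λ = 1/√max(m, n) follows the usual RPCA convention.

```python
# Hedged sketch of the convex robust PCA baseline (nuclear norm + entrywise l1),
# the formulation that bilinear factor matrix norms aim to scale beyond.
import numpy as np
import cvxpy as cp

def rpca_convex(M):
    m, n = M.shape
    L = cp.Variable((m, n))
    S = cp.Variable((m, n))
    lam = 1.0 / np.sqrt(max(m, n))    # standard weight from the RPCA literature
    problem = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                         [L + S == M])
    problem.solve()
    return L.value, S.value           # low-rank and sparse components
```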
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
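In its penalized (Lagrangian) form, the same estimator is available off the shelf. The sketch below uses scikit-learn's Lasso on synthetic data purely as an illustration; the data and the regularization strength `alpha` are arbitrary assumptions.

```python
# Quick illustration of the lasso in penalized form using scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
coef_true = np.zeros(20); coef_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ coef_true + 0.1 * rng.standard_normal(100)

model = Lasso(alpha=0.1).fit(X, y)   # minimizes ||y - Xw||^2 / (2n) + alpha * ||w||_1
print(np.nonzero(model.coef_)[0])    # most coefficients are shrunk exactly to zero
```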
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book

Convex Optimization

TL;DR: A comprehensive introduction to convex optimization, with the emphasis on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Journal ArticleDOI

Eigenfaces for recognition

TL;DR: A near-real-time computer system is described that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is straightforward to implement using a neural network architecture.
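The core of the eigenfaces approach is a PCA projection of face images followed by nearest-neighbor matching in the reduced "face space". The following is a minimal PCA-plus-nearest-neighbor sketch (the original system was framed as a neural-network implementation); the flattened image arrays and the number of components are assumed.

```python
# Minimal eigenfaces-style sketch: PCA projection + nearest-neighbor matching.
# Assumed inputs: train_imgs is an (n_samples, n_pixels) array of flattened
# face images, train_labels gives each image's identity.
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def eigenface_recognizer(train_imgs, train_labels, n_components=50):
    pca = PCA(n_components=n_components, whiten=True).fit(train_imgs)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(train_imgs),
                                                  train_labels)
    # Return a predictor that projects test images and matches in face space.
    return lambda test_imgs: knn.predict(pca.transform(test_imgs))
```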
Journal ArticleDOI

Eigenfaces vs. Fisherfaces: recognition using class specific linear projection

TL;DR: A face recognition algorithm that is insensitive to large variation in lighting direction and facial expression is developed; based on Fisher's linear discriminant, it produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expression.
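A compact way to see the difference from eigenfaces is that the projection is chosen by Fisher's linear discriminant, maximizing between-class scatter relative to within-class scatter, rather than by total variance. The sketch below uses scikit-learn's LDA after a PCA step that avoids a singular within-class scatter matrix; the component counts are illustrative, not the paper's settings.

```python
# Fisherfaces-style sketch: PCA to avoid singular within-class scatter,
# then Fisher's linear discriminant for a class-separating projection.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier

def fisherface_recognizer(train_imgs, train_labels, n_pca=100):
    model = make_pipeline(
        PCA(n_components=n_pca),              # dimensionality reduction first
        LinearDiscriminantAnalysis(),         # at most (c - 1) discriminant directions
        KNeighborsClassifier(n_neighbors=1),  # nearest neighbor in the projected space
    )
    return model.fit(train_imgs, train_labels)
```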
Trending Questions (1)
What is the minimum number of images required for a facial recognition model to sufficiently learn features?

The paper does not provide a specific minimum number of images required for a facial recognition model to sufficiently learn features.