Journal ArticleDOI
Beyond sparsity: The role of L1-optimizer in pattern classification
TL;DR: An insight into the newly emerging sparse representation-based classifier (SRC) is given and reasonable support for its effectiveness is sought; it is found that, for pattern recognition tasks, the L1-optimizer provides more classification-meaningful information than the L0-optimizer does.
About: This article was published in Pattern Recognition on 2012-03-01. It has received 194 citations to date. The article focuses on the topic: sparse approximation.
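The SRC idea the paper analyzes can be illustrated with a minimal sketch on synthetic toy data (this is not the paper's exact algorithm or experimental setup; all dimensions and values below are illustrative): express a query as an L1-minimal linear combination of all training samples, solved here as a linear program with SciPy, then assign the class with the smallest class-wise reconstruction residual.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: two classes whose training samples lie near 1-D subspaces.
rng = np.random.default_rng(1)
d, k = 5, 5                                   # feature dim, samples per class
def make_class(base):
    return base[:, None] * rng.uniform(0.5, 1.5, k) \
           + 0.01 * rng.standard_normal((d, k))
D = np.hstack([make_class(rng.standard_normal(d)) for _ in range(2)])
labels = np.repeat([0, 1], k)
y = 0.9 * D[:, 0]                             # query drawn from class 0

# L1-minimal representation: min ||x||_1  s.t.  D x = y,
# cast as a linear program over (x, t) with |x_i| <= t_i.
n = D.shape[1]
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[ np.eye(n), -np.eye(n)],     #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=np.hstack([D, np.zeros((d, n))]), b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x = res.x[:n]

# SRC decision rule: assign the class with the smallest reconstruction residual.
residuals = [np.linalg.norm(y - D[:, labels == cls] @ x[labels == cls])
             for cls in (0, 1)]
predicted = int(np.argmin(residuals))
```

For this toy geometry the L1-minimal coefficients concentrate on class-0 training samples, so the class-0 residual is near zero and the query is assigned to class 0.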
Citations
Journal ArticleDOI
A Survey of Sparse Representation: Algorithms and Applications
TL;DR: A comprehensive overview of sparse representation is provided, together with an experimentally comparative study of sparse representation algorithms that sufficiently reveals the potential nature of sparse representation theory.
Journal ArticleDOI
Deep learning on image denoising: An overview
TL;DR: A comparative study of deep learning techniques for image denoising, classifying deep convolutional neural networks (CNNs) for additive white noisy images, deep CNNs for real noisy images, deep CNNs for blind denoising, and deep CNNs for hybrid noisy images.
Journal ArticleDOI
The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition
TL;DR: It is shown that AGR consistently operationalises gender in a trans-exclusive way, and consequently carries disproportionate risk for trans people subject to it.
Journal ArticleDOI
Nuclear Norm Based Matrix Regression with Applications to Face Recognition with Occlusion and Illumination Changes
TL;DR: This paper presents a two-dimensional image-matrix-based error model, namely, nuclear norm based matrix regression (NMR), for face representation and classification, and develops a fast ADMM algorithm to solve the approximate NMR model.
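As background for the NMR criterion above, the nuclear norm of a matrix is the sum of its singular values; minimizing it favors low-rank error matrices, which suits structured errors such as occlusion or illumination changes. A minimal sketch with an arbitrary synthetic error matrix (illustrative values only, not the paper's model or data):

```python
import numpy as np

# Hypothetical representation-error image (values are illustrative only).
rng = np.random.default_rng(2)
E = rng.standard_normal((8, 6))

# Nuclear norm = sum of singular values.
sigma = np.linalg.svd(E, compute_uv=False)
nuclear = sigma.sum()

# NumPy also exposes it directly via ord='nuc'.
assert np.isclose(nuclear, np.linalg.norm(E, 'nuc'))
```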
Posted Content
Nuclear Norm based Matrix Regression with Applications to Face Recognition with Occlusion and Illumination Changes
TL;DR: Presents a two-dimensional image-matrix-based error model, i.e., matrix regression, for face representation and classification, which uses the minimal nuclear norm of the representation-error image as its criterion and the alternating direction method of multipliers (ADMM) to calculate the regression coefficients.
References
Book
The Nature of Statistical Learning Theory
TL;DR: Covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book
Compressed sensing
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients can be extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Journal ArticleDOI
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that, with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Journal ArticleDOI
Eigenfaces vs. Fisherfaces: recognition using class specific linear projection
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Journal ArticleDOI
Atomic Decomposition by Basis Pursuit
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
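The BP principle minimizes the l1 norm of the coefficients subject to exact reconstruction; a minimal sketch (synthetic Gaussian measurements, illustrative dimensions only, not from the paper) casting it as a linear program with SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Underdetermined system: 20 measurements, 40 unknowns, 3-sparse ground truth.
rng = np.random.default_rng(0)
m, n = 20, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 25]] = [1.5, -2.0, 0.8]
b = A @ x_true

# Basis Pursuit: min ||x||_1  s.t.  A x = b,
# as an LP over (x, t): minimize sum(t) with -t <= x <= t and A x = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[ np.eye(n), -np.eye(n)],     #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
```

With Gaussian measurement matrices at these dimensions, the l1-minimal solution typically coincides with the sparse ground truth, which is the recovery behavior the compressed-sensing results above formalize.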