
Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications on this topic have been published, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that introducing mixed norms in sparse regression goes one step beyond the usual signal models, promoting different, structured forms of sparsity through new group shrinkage operators.
Abstract: Sparse regression often uses $\ell_p$ norm priors (with $p<2$). This paper demonstrates that introducing mixed norms in such contexts allows one to go one step beyond the usual signal models and to promote different, structured forms of sparsity. It is shown that the particular cases of the $\ell_{1,2}$ and $\ell_{2,1}$ norms lead to new group shrinkage operators. Mixed-norm priors are shown to be particularly efficient in a generalized basis pursuit denoising approach, and are also used in a context of morphological component analysis. A suitable version of the Block Coordinate Relaxation algorithm is derived for the latter. The group shrinkage operators are then modified to overcome some limitations of the mixed norms. The proposed group shrinkage operators are tested on simulated signals in specific situations, to illustrate their different behaviors. Results on real data are also used to illustrate the relevance of the approach.

153 citations
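The $\ell_{2,1}$ group shrinkage operator described in the abstract has a simple closed form: each group of coefficients is either shrunk toward zero as a block or removed entirely. A minimal numpy sketch (the function name `group_shrink` and the toy data are my own, not from the paper):

```python
import numpy as np

def group_shrink(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (the l_{2,1} mixed norm).

    Each group is either kept and shrunk toward zero as a block, or
    zeroed out entirely -- sparsity at the group level rather than the
    coefficient level.
    """
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

# Example: the strong group is shrunk as a block, the weak group vanishes.
x = np.array([3.0, 4.0, 0.1, -0.2])
groups = [[0, 1], [2, 3]]
print(group_shrink(x, groups, lam=1.0))  # -> [2.4 3.2 0.  0. ]
```

Compare with plain $\ell_1$ soft thresholding, which would zero the small entries individually; the mixed norm couples coefficients within a group, which is the "structured sparsity" the paper promotes.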

Book ChapterDOI
07 Oct 2012
TL;DR: This work introduces the concept of video-dictionaries for face recognition, which generalizes the work in sparse representation and dictionaries for faces in still images and performs significantly better than many competitive video-based face recognition algorithms.
Abstract: The main challenge in recognizing faces in video is effectively exploiting the multiple frames of a face and the accompanying dynamic signature. One prominent method is based on extracting joint appearance and behavioral features. A second method models a person by temporal correlations of features in a video. Our approach introduces the concept of video-dictionaries for face recognition, which generalizes the work in sparse representation and dictionaries for faces in still images. Video-dictionaries are designed to implicitly encode temporal, pose, and illumination information. We demonstrate our method on the Face and Ocular Challenge Series (FOCS) Video Challenge, which consists of unconstrained video sequences. We show that our method is efficient and performs significantly better than many competitive video-based face recognition algorithms.

153 citations

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of recovering a complete (i.e., square and invertible) dictionary $A_0$ from observations $Y = A_0 X_0$, provided that the coefficient matrix $X_0$ is sufficiently sparse.
Abstract: We consider the problem of recovering a complete (i.e., square and invertible) matrix $A_0$ from $Y \in \mathbb{R}^{n \times p}$ with $Y = A_0 X_0$, provided $X_0$ is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers $A_0$ when $X_0$ has $O(n)$ nonzeros per column, under a suitable probability model for $X_0$. Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. In a companion paper, we showed that with high probability our nonconvex formulation has no "spurious" local minimizers, and that around any saddle point the objective function has a negative directional curvature. In this paper, we take advantage of this particular geometric structure and describe a Riemannian trust region algorithm that provably converges to a local minimizer from arbitrary initializations. Such minimizers give excellent approximations to the rows of $X_0$. The rows are then recovered by linear programming rounding and deflation.

153 citations
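The paper's nonconvex formulation minimizes a smooth sparsity surrogate of $q^\top Y$ over the unit sphere. As an illustration only: the sketch below runs plain Riemannian gradient descent (the paper uses a trust-region method) on a log-cosh smoothing of $|\cdot|$, under an assumed planted model with an orthogonal dictionary; all names and parameter choices here are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, mu = 10, 2000, 0.1

# Planted model (assumption for illustration): Y = A0 X0 with A0 orthogonal
# and X0 Bernoulli-Gaussian sparse (about 20% nonzeros per column).
A0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.2)
Y = A0 @ X0

def f(q):
    # Smooth sparsity surrogate: (1/p) * sum_i mu * logcosh(q^T y_i / mu),
    # using the numerically stable identity logcosh(z) = |z| + log1p(e^{-2|z|}) - log 2.
    z = np.abs(Y.T @ q / mu)
    return mu * np.mean(z + np.log1p(np.exp(-2.0 * z)) - np.log(2.0))

def riem_step(q, lr=0.05):
    g = Y @ np.tanh(Y.T @ q / mu) / p   # Euclidean gradient of f
    g -= (q @ g) * q                    # project onto the sphere's tangent space
    q = q - lr * g
    return q / np.linalg.norm(q)        # retract back onto the sphere

q = rng.standard_normal(n)
q /= np.linalg.norm(q)
f0 = f(q)
for _ in range(500):
    q = riem_step(q)

# If a benign local minimizer is reached, q aligns with a column of A0
# (up to sign), and q^T Y approximates one scaled row of X0.
print(f0, f(q), np.abs(A0.T @ q).max())
```

This only recovers one row at a time; the paper's full pipeline adds the trust-region machinery, LP rounding, and deflation to extract all rows.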

Journal ArticleDOI
TL;DR: This paper describes the design and use of non-convex penalty functions (regularizers) constrained so as to ensure the convexity of the total cost function F to be minimized and demonstrates that optimal parameters can be obtained by semidefinite programming (SDP).
Abstract: This paper addresses the problem of sparsity penalized least squares for applications in sparse signal processing, e.g., sparse deconvolution. This paper aims to induce sparsity more strongly than L1 norm regularization, while avoiding non-convex optimization. For this purpose, this paper describes the design and use of non-convex penalty functions (regularizers) constrained so as to ensure the convexity of the total cost function F to be minimized. The method is based on parametric penalty functions, the parameters of which are constrained to ensure convexity of F. It is shown that optimal parameters can be obtained by semidefinite programming (SDP). This maximally sparse convex (MSC) approach yields maximally non-convex sparsity-inducing penalty functions constrained such that the total cost function F is convex. It is demonstrated that iterative MSC (IMSC) can yield solutions substantially more sparse than the standard convex sparsity-inducing approach, i.e., L1 norm minimization.

153 citations
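The key constraint in the abstract above, keeping the total cost convex while the penalty itself is non-convex, can be illustrated in the scalar case. The paper handles general parametric penalties via SDP; the log penalty below and all names are my own simplification. With $\phi(x; a) = \log(1 + a|x|)/a$, one has $\phi''(x) \ge -a$, so $F(x) = \tfrac{1}{2}(y-x)^2 + \lambda\,\phi(x; a)$ stays convex whenever $a \le 1/\lambda$, and the resulting threshold function shrinks large values less than the soft threshold:

```python
import numpy as np

lam = 1.0
a = 1.0 / lam   # largest a keeping F convex for this penalty (scalar case)

def log_penalty(x, a):
    # Non-convex sparsity penalty phi(x; a) = log(1 + a|x|) / a, with phi'' >= -a.
    return np.log1p(a * np.abs(x)) / a

def prox(y, lam, a):
    # Threshold function: argmin_x 0.5*(y - x)^2 + lam*phi(x; a), by grid search.
    grid = np.linspace(-10.0, 10.0, 200001)
    F = 0.5 * (grid - y) ** 2 + lam * log_penalty(grid, a)
    return grid[np.argmin(F)]

for y in [0.5, 1.5, 4.0]:
    soft = np.sign(y) * max(abs(y) - lam, 0.0)   # l1 (soft) threshold for comparison
    print(y, prox(y, lam, a), soft)
```

For $y = 4$ the soft threshold returns $3.0$, while the convexity-constrained non-convex penalty returns about $3.79$: the same zero-setting behavior near the origin, but less bias on large coefficients, which is the point of the MSC approach.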

ReportDOI
01 Jun 2008
TL;DR: A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust classification is introduced, addressing for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy.
Abstract: A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust classification is introduced in this paper. The dictionary learning problem is solved by a class-dependent supervised simultaneous orthogonal matching pursuit, which learns the intra-class structure while increasing the inter-class discrimination, interleaved with an efficient dictionary update obtained via singular value decomposition. This framework addresses for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy. The work contributes to the understanding of the importance of learned sparse representations for signal classification, showing the relevance of learning dictionaries that are discriminative and at the same time reconstructive in order to achieve accurate and robust classification. The presentation of the underlying theory is complemented with examples on the standard MNIST and Caltech datasets, and with results on the use of the sparse representations obtained from the learned dictionaries as local patch descriptors, replacing commonly used experimental ones.

153 citations
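The SVD-based dictionary update mentioned in the abstract can be sketched as a K-SVD-style atom update. This is a minimal illustration only: the variable names and toy data are mine, and the paper's class-dependent simultaneous OMP coding step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def update_atom(D, X, Y, k):
    """Refit dictionary atom k and its coefficient row by a rank-1 SVD.

    Given data Y ~ D @ X, restrict to the signals that actually use atom k,
    form the residual with atom k's contribution added back, and replace
    (D[:, k], X[k, :]) with the best rank-1 fit of that residual.
    """
    used = np.nonzero(X[k, :])[0]
    if used.size == 0:
        return D, X
    E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                 # unit-norm updated atom
    X[k, used] = s[0] * Vt[0, :]      # updated coefficients on the same support
    return D, X

# Toy check: one sweep of atom updates cannot increase the fit residual,
# because each rank-1 SVD is the optimal replacement for that atom.
n, K, p = 8, 10, 50
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((K, p)) * (rng.random((K, p)) < 0.3)
Y = D @ X + 0.1 * rng.standard_normal((n, p))

err0 = np.linalg.norm(Y - D @ X)
for k in range(K):
    D, X = update_atom(D, X, Y, k)
print(err0, np.linalg.norm(Y - D @ X))
```

A full dictionary learning loop would alternate this update with a sparse coding step (here that step is skipped, so the supports of X stay fixed); the paper's contribution is adding a discrimination term to that energy, which this sketch does not include.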


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (93% related)
- Image segmentation: 79.6K papers, 1.8M citations (92% related)
- Convolutional neural network: 74.7K papers, 2M citations (92% related)
- Deep learning: 79.8K papers, 2.1M citations (90% related)
- Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371