Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published on this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: This work focuses on sinusoidal desired signals with sparse frequency-domain representation but shows that the analysis can be straightforwardly generalized to nonsinusoidal signals with known structures.
Abstract: A compressive sensing (CS) approach for nonstationary signal separation is proposed. This approach is motivated by challenges in radar signal processing, including separations of micro-Doppler and main body signatures. We consider the case where the signal of interest assumes sparse representation over a given basis. Other signals present in the data overlap with the desired signal in the time and frequency domains, disallowing conventional windowing or filtering operations to be used for desired signal recovery. The proposed approach uses linear time-frequency representations to reveal the data local behavior. Using the L-statistics, only the time-frequency (TF) points that belong to the desired signal are retained, whereas the common points and others pertaining only to the undesired signals are deemed inappropriate and cast as missing samples. These samples amount to reduced frequency observations in the TF domain. The linear relationship between the measurement and sparse domains permits the application of CS techniques to recover the desired signal without significant distortion. We focus on sinusoidal desired signals with sparse frequency-domain representation but show that the analysis can be straightforwardly generalized to nonsinusoidal signals with known structures. Several examples are provided to demonstrate the effectiveness of the proposed approach.
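The final recovery step described above, reconstructing a frequency-sparse signal from the retained samples, can be sketched with a generic iterative soft-thresholding (ISTA) solver. This is an illustrative stand-in, not the authors' implementation: the test signal, the 60% observation ratio, and the regularization weight `lam` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
# Sinusoidal signal of interest: sparse in the frequency domain
s = np.cos(2 * np.pi * 30 * t / n) + 0.5 * np.cos(2 * np.pi * 70 * t / n)

# Keep a random 60% of samples; the rest play the role of points discarded
# by the L-statistics and are treated as missing observations
mask = rng.random(n) < 0.6
y = np.where(mask, s, 0.0)

def soft(v, thr):
    # Complex soft-thresholding (proximal operator of the L1 norm)
    mag = np.abs(v)
    return np.where(mag > thr, (1.0 - thr / np.maximum(mag, 1e-12)) * v, 0.0)

# ISTA for min_x 0.5 * ||M F^{-1} x - y||^2 + lam * ||x||_1,
# where x holds the DFT coefficients and M is the sampling mask
lam = 0.05
x = np.zeros(n, dtype=complex)
for _ in range(200):
    r = y - np.where(mask, np.fft.ifft(x, norm="ortho").real, 0.0)
    x = soft(x + np.fft.fft(r, norm="ortho"), lam)

s_hat = np.fft.ifft(x, norm="ortho").real  # recovered time-domain signal
```

With the orthonormal FFT convention the forward operator has spectral norm at most one, so a unit step size is safe and the iteration converges to a sparse spectrum concentrated at the two true frequencies.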

151 citations

Journal ArticleDOI
TL;DR: A unified sparse learning framework is proposed by introducing sparsity, or L1-norm, learning, which extends LLE-based methods to sparse cases and can be viewed as a general model for sparse linear and nonlinear subspace learning.
Abstract: Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity, or $L_{1}$-norm, learning, which further extends LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly with small sample sizes.
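The core computation here, sparse reconstruction weights obtained from an elastic-net penalty, can be illustrated with a small proximal-gradient sketch. This is a generic stand-in, not the paper's iterated elastic-net/SVD algorithm; the toy data and the penalty weights `lam1`, `lam2` are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: reconstruct sample 0 from the other 19 samples (5-D features)
X = rng.normal(size=(20, 5))
target = X[0]
N = X[1:].T  # 5 x 19 matrix of "neighbor" samples as columns

def elastic_net_ista(A, b, lam1, lam2, iters=500):
    """Proximal gradient for 0.5||Aw - b||^2 + lam1||w||_1 + 0.5*lam2||w||^2."""
    L = np.linalg.norm(A, 2) ** 2 + lam2  # Lipschitz constant of the smooth part
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ w - b) + lam2 * w       # gradient of the smooth terms
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
    return w

# Elastic-net weights are sparse; plain least-squares (LLE-style) weights are not
w_sparse = elastic_net_ista(N, target, lam1=0.5, lam2=0.1)
w_dense = np.linalg.lstsq(N, target, rcond=None)[0]
```

The L1 term zeroes out most neighbor weights, which is the mechanism the framework uses to turn LLE-style dense reconstructions into sparse ones.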

151 citations

Journal ArticleDOI
TL;DR: In this paper, fast and adaptive algorithms are developed for numerically solving nonlinear partial differential equations of the form u_t = Lu + N f(u), where L and N are linear differential operators and f(u) is a nonlinear function.

151 citations

Posted Content
TL;DR: This study focuses on the numerical implementation of a sparsity-based classification framework in robust face recognition, where sparse representation is sought to recover human identities from very high-dimensional facial images that may be corrupted by illumination, facial disguise, and pose variation.
Abstract: L1-minimization refers to finding the minimum L1-norm solution to an underdetermined linear system b=Ax. Under certain conditions as described in compressive sensing theory, the minimum L1-norm solution is also the sparsest solution. In this paper, our study addresses the speed and scalability of its algorithms. In particular, we focus on the numerical implementation of a sparsity-based classification framework in robust face recognition, where sparse representation is sought to recover human identities from very high-dimensional facial images that may be corrupted by illumination, facial disguise, and pose variation. Although the underlying numerical problem is a linear program, traditional algorithms are known to suffer poor scalability for large-scale applications. We investigate a new solution based on a classical convex optimization framework, known as Augmented Lagrangian Methods (ALM). The new convex solvers provide a viable solution to real-world, time-critical applications such as face recognition. We conduct extensive experiments to validate and compare the performance of the ALM algorithms against several popular L1-minimization solvers, including interior-point method, Homotopy, FISTA, SESOP-PCD, approximate message passing (AMP) and TFOCS. To aid peer evaluation, the code for all the algorithms has been made publicly available.
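A minimal sketch of an augmented-Lagrangian-style solver for the basis pursuit problem (min ||x||_1 subject to Ax = b) is given below, written as the scaled-form ADMM commonly used in this family of methods. It illustrates the problem class, not the paper's ALM code; the problem sizes, sparsity level, and penalty parameter `rho` are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)   # underdetermined system
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = 3.0 * rng.normal(size=k)     # k-sparse ground truth
b = A @ x_true

# Scaled ADMM for min ||x||_1  s.t.  Ax = b:
#   x-step: project onto the affine constraint set {x : Ax = b}
#   z-step: soft-threshold (prox of the L1 norm)
AAt_inv = np.linalg.inv(A @ A.T)

def project(v):
    # Euclidean projection onto {x : Ax = b}
    return v - A.T @ (AAt_inv @ (A @ v - b))

rho = 1.0
x = z = u = np.zeros(n)
for _ in range(500):
    x = project(z - u)
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / rho, 0.0)
    u = u + x - z
```

Because m = 40 measurements comfortably exceed what is needed for a 5-sparse vector in 100 dimensions, the minimum L1-norm solution coincides with the ground truth, and the iterate `z` converges to it.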

151 citations

Journal ArticleDOI
TL;DR: A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.
Abstract: Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.
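The convolutional sparse coding problem that the dictionary learning builds on, fitting a sparse coefficient map x so that d * x approximates the signal s for a fixed filter d, can be sketched with FFT-domain ISTA. This is a generic illustration, not one of the algorithms compared in the paper; the filter, spike locations, and `lam` are arbitrary.

```python
import numpy as np

n = 128
# One known filter and a sparse coefficient map with three spikes
d = np.array([1.0, 0.6, 0.3, 0.1])
d_pad = np.zeros(n)
d_pad[:4] = d
D = np.fft.fft(d_pad)                      # filter in the frequency domain

x_true = np.zeros(n)
x_true[[20, 50, 90]] = [1.5, -2.0, 1.0]
s = np.fft.ifft(D * np.fft.fft(x_true)).real  # circular convolution d * x

# ISTA for min_x 0.5||d * x - s||^2 + lam||x||_1 (circular convolution)
L = np.max(np.abs(D)) ** 2                 # Lipschitz constant of the data term
lam = 0.05
x = np.zeros(n)
for _ in range(400):
    r = np.fft.ifft(D * np.fft.fft(x)).real - s          # residual d * x - s
    g = np.fft.ifft(np.conj(D) * np.fft.fft(r)).real     # gradient (adjoint conv)
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

s_hat = np.fft.ifft(D * np.fft.fft(x)).real  # reconstruction from the sparse map
```

Dictionary learning alternates a step like this (solving for the coefficient maps) with an update of the filters themselves, which is the substantially harder subproblem the paper compares methods for.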

150 citations


Network Information

Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations, 93% related
- Image segmentation: 79.6K papers, 1.8M citations, 92% related
- Convolutional neural network: 74.7K papers, 2M citations, 92% related
- Deep learning: 79.8K papers, 2.1M citations, 90% related
- Image processing: 229.9K papers, 3.5M citations, 89% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371