Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper addresses the problem of learning over-complete dictionaries for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces, and proposes a Bayesian method using a beta process prior to learn the over-complete dictionaries.
Abstract: This paper addresses the problem of learning over-complete dictionaries for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous coupled feature space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms, with the same sparsity but different values, in coupled feature spaces, thus providing a consistent and accurate mapping between them. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying it to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.

182 citations
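
The pivotal property here is that a beta process sparse code factors into binary atom indicators and real-valued weights, so two coupled spaces can share exactly which atoms fire while keeping their own coefficients. Below is a minimal numpy sketch of that decomposition only; the dictionaries and codes are random stand-ins, whereas the paper learns them jointly under the beta process prior.

```python
import numpy as np

rng = np.random.default_rng(0)

n_atoms = 64          # size of each over-complete dictionary
d_lo, d_hi = 25, 100  # dimensions of the two coupled feature spaces

# Two dictionaries, one per feature space (hypothetical random stand-ins).
D_lo = rng.standard_normal((d_lo, n_atoms))
D_hi = rng.standard_normal((d_hi, n_atoms))

# Beta process property: each sparse code factors into binary atom
# indicators z (which atoms are used) and real values s (how much).
# Coupled spaces share z but keep their own values.
z = (rng.random(n_atoms) < 0.1).astype(float)   # shared support, ~10% active
s_lo = rng.standard_normal(n_atoms)             # space-specific weights
s_hi = rng.standard_normal(n_atoms)

x_lo = D_lo @ (z * s_lo)   # representation in the first feature space
x_hi = D_hi @ (z * s_hi)   # same atoms, different values, in the second

# The two codes use exactly the same dictionary atoms:
assert np.array_equal((z * s_lo) != 0, (z * s_hi) != 0)
print("active atoms:", int(z.sum()))
```

Sharing z while letting the values differ is what allows a code inferred in one space (e.g. low-resolution patches) to be mapped consistently onto the other space's dictionary during super-resolution.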

Proceedings ArticleDOI
01 Jan 2010
TL;DR: An efficient version of SBA for systems where the secondary structure (relations among cameras) is also sparse, which outperforms the current SBA standard implementation on datasets with sparse secondary structure by at least an order of magnitude, while also being more efficient on dense datasets.
Abstract: Sparse Bundle Adjustment (SBA) is a method for simultaneously optimizing a set of camera poses and visible points. It exploits the sparse primary structure of the problem, in which connections exist only between points and cameras. In this paper, we implement an efficient version of SBA for systems where the secondary structure (relations among cameras) is also sparse. The method, which we call Sparse SBA (sSBA), integrates an efficient method for setting up the linear subproblem with recent advances in direct sparse Cholesky solvers. sSBA outperforms the current standard SBA implementation on datasets with sparse secondary structure by at least an order of magnitude, while also being more efficient on dense datasets.

181 citations
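
Concretely, eliminating the block-diagonal point part of the bundle-adjustment normal equations leaves the reduced camera system, whose Schur complement keeps only camera-camera links; when that secondary structure is sparse, a direct sparse factorization is fast. A hedged SciPy sketch of the elimination follows: the matrix names and block layout are illustrative, the Hessian blocks are assumed to be scipy sparse matrices, and splu stands in for the sparse Cholesky solvers the paper uses.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def solve_reduced_camera_system(H_cc, H_cp, H_pp_blocks, g_c, g_p):
    """Eliminate points from the BA normal equations, then solve for cameras.

    H_pp is block diagonal (one small block per 3D point), so inverting it
    is cheap. The Schur complement S = H_cc - H_cp H_pp^-1 H_cp^T keeps only
    camera-camera links (the secondary structure); when those are sparse, a
    direct sparse factorization of S is efficient.
    """
    inv_blocks = np.linalg.inv(H_pp_blocks)          # invert each 3x3 block
    H_pp_inv = sp.block_diag(
        [sp.csc_matrix(b) for b in inv_blocks], format="csc"
    )

    S = (H_cc - H_cp @ H_pp_inv @ H_cp.T).tocsc()    # reduced camera system
    rhs = g_c - H_cp @ (H_pp_inv @ g_p)

    d_c = splu(S).solve(rhs)                         # camera update
    d_p = H_pp_inv @ (g_p - H_cp.T @ d_c)            # back-substitute points
    return d_c, d_p
```

The speedup the paper reports comes from two sides of this computation: assembling S efficiently, and factoring it with a solver that exploits its sparsity rather than treating it as dense.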

Proceedings ArticleDOI
20 Jun 2011
TL;DR: An image classification framework leveraging non-negative sparse coding and low-rank and sparse matrix decomposition techniques (LR-Sc+SPM), which achieves or outperforms state-of-the-art results on several benchmarks.
Abstract: We propose an image classification framework that leverages non-negative sparse coding and low-rank and sparse matrix decomposition techniques (LR-Sc+SPM). First, we propose a new non-negative sparse coding along with max pooling and spatial pyramid matching method (Sc+SPM) to extract local feature information in order to represent images, where non-negative sparse coding is used to encode local features. Max pooling along with spatial pyramid matching (SPM) is then utilized to obtain the feature vectors that represent images. Second, motivated by the observation that images of the same class often contain correlated (or common) items and specific (or noisy) items, we propose to leverage the low-rank and sparse matrix recovery technique to decompose the feature vectors of images per class into a low-rank matrix and a sparse error matrix. To incorporate the common and specific attributes into the image representation, we again adopt the idea of sparse coding to recode the Sc+SPM representation of each image. In particular, we collect the columns of both matrices as the bases and use the coding parameters as the updated image representation, learning them through locality-constrained linear coding (LLC). Finally, a linear SVM classifier is used for the final classification. Experimental results show that the proposed method achieves or outperforms state-of-the-art results on several benchmarks.

181 citations
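
The class-wise recovery step is a robust-PCA-style decomposition: each class's stacked feature vectors X are split into a low-rank part L (common items) plus a sparse error part E (specific or noisy items). The sketch below uses the generic inexact-ALM routine for this problem as an assumed stand-in; the paper does not commit to this exact solver, and the parameters here are standard defaults rather than its settings.

```python
import numpy as np

def rpca_ialm(X, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """Low-rank + sparse decomposition X ~= L + E via inexact ALM (sketch)."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_X = np.linalg.norm(X)
    Y = X / max(np.linalg.norm(X, 2), np.abs(X).max() / lam)  # dual init
    mu = 1.25 / np.linalg.norm(X, 2)
    E = np.zeros_like(X)

    for _ in range(max_iter):
        # Singular value thresholding -> low-rank part L
        U, s, Vt = np.linalg.svd(X - E + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding -> sparse error part E
        T = X - L + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual update and continuation
        R = X - L - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R) / norm_X < tol:
            break
    return L, E

# Toy check: a rank-2 matrix plus sparse gross corruption separates back out.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((80, 2)) @ rng.standard_normal((2, 60))
E0 = np.zeros((80, 60))
mask = rng.random((80, 60)) < 0.05
E0[mask] = 10 * rng.standard_normal(mask.sum())
L_hat, E_hat = rpca_ialm(L0 + E0)
print("relative error on L:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```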

Proceedings ArticleDOI
01 Nov 2009
TL;DR: It is demonstrated that a simple algorithm, which is dubbed Justice Pursuit (JP), can achieve exact recovery from measurements corrupted with sparse noise.
Abstract: Compressive sensing provides a framework for recovering sparse signals of length N from M ≪ N measurements. If the measurements contain noise bounded by ε, then standard algorithms recover sparse signals with error at most Cε. However, these algorithms perform suboptimally when the measurement noise is also sparse. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. We demonstrate that a simple algorithm, which we dub Justice Pursuit (JP), can achieve exact recovery from measurements corrupted with sparse noise. The algorithm handles unbounded errors, has no input parameters, and is easily implemented via standard recovery techniques.

181 citations
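
The claim that JP is "easily implemented via standard recovery techniques" is concrete: it recovers the augmented vector [x; e] by ordinary ℓ1 minimization over the widened matrix [Φ I], so any basis pursuit solver applies. A sketch using SciPy's LP solver via the usual positive/negative variable split; the problem sizes and data below are toy assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def justice_pursuit(Phi, y):
    """Solve  min ||[x; e]||_1  s.t.  Phi x + e = y  as a linear program."""
    M, N = Phi.shape
    A = np.hstack([Phi, np.eye(M)])    # augmented sensing matrix [Phi I]
    n = N + M
    # Variables [u+; u-] >= 0; minimize 1^T (u+ + u-) with A (u+ - u-) = y.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    assert res.success
    u = res.x[:n] - res.x[n:]
    return u[:N], u[N:]                # signal estimate, noise estimate

# Toy example: sparse x, a few grossly (unboundedly) corrupted measurements.
rng = np.random.default_rng(1)
N, M, k = 128, 64, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
e = np.zeros(M)
e[rng.choice(M, 3, replace=False)] = 100 * rng.standard_normal(3)
x_hat, e_hat = justice_pursuit(Phi, Phi @ x + e)
print("recovery error:", np.linalg.norm(x_hat - x))
```

Note how the corruption magnitude (100x the signal scale) does not enter the solver anywhere, matching the abstract's point that JP handles unbounded errors with no input parameters.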

Proceedings Article
Tong Zhang
08 Dec 2008
TL;DR: This work proposes a novel combination that is based on the forward greedy algorithm but takes backward steps adaptively whenever beneficial, and proves strong theoretical results showing that this procedure is effective in learning sparse representations.
Abstract: Consider linear prediction models where the target function is a sparse linear combination of a set of basis functions. We are interested in the problem of identifying those basis functions with non-zero coefficients and reconstructing the target function from noisy observations. Two heuristics that are widely used in practice are forward and backward greedy algorithms. First, we show that neither idea is adequate. Second, we propose a novel combination that is based on the forward greedy algorithm but takes backward steps adaptively whenever beneficial. We prove strong theoretical results showing that this procedure is effective in learning sparse representations. Experimental results support our theory.

180 citations
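
A hedged sketch of the forward-backward idea for least squares: greedily add the feature with the largest error reduction, then delete any feature whose removal costs less than a fraction of the last forward gain. The thresholds and stopping rules below are illustrative choices, not the constants from Zhang's analysis.

```python
import numpy as np

def foba(X, y, eps=1e-4, nu=0.5, max_features=None):
    """Forward-backward greedy feature selection for least squares (sketch).

    Forward: add the feature that most reduces the squared error; stop when
    the gain falls below eps. Backward: after each forward step, drop any
    feature whose removal raises the error by less than nu times the last
    forward gain.
    """
    n, d = X.shape
    max_features = max_features or d

    def fit(support):
        if not support:
            return np.zeros(0), float(y @ y)
        w, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        return w, float(np.sum((y - X[:, support] @ w) ** 2))

    S = []
    _, loss = fit(S)
    for _ in range(4 * max_features):          # safety cap on total steps
        if len(S) >= max_features:
            break
        # Forward step: greedily add the single best new feature.
        best_j, best_loss = None, loss
        for j in range(d):
            if j not in S:
                _, lj = fit(S + [j])
                if lj < best_loss:
                    best_j, best_loss = j, lj
        gain = loss - best_loss
        if best_j is None or gain < eps:
            break
        S.append(best_j)
        loss = best_loss
        # Backward steps: prune while removal is cheap relative to the gain.
        while len(S) > 1:
            worst_j, worst_loss = None, np.inf
            for j in S:
                _, lj = fit([k for k in S if k != j])
                if lj < worst_loss:
                    worst_j, worst_loss = j, lj
            if worst_loss - loss > nu * gain:  # too costly to remove: stop
                break
            S.remove(worst_j)
            loss = worst_loss
    w, _ = fit(S)
    return S, w
```

Tying removal cost to the preceding forward gain is what makes the backward steps adaptive: purely forward greedy can never undo an early mistake, while an uncontrolled backward pass can erase real progress; the coupling bounds how much each deletion may give back.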


Network Information
Related Topics (5)

Topic                           Papers    Citations   Related
Feature extraction              111.8K    2.1M        93%
Image segmentation              79.6K     1.8M        92%
Convolutional neural network    74.7K     2M          92%
Deep learning                   79.8K     2.1M        90%
Image processing                229.9K    3.5M        89%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371