Topic

Sparse approximation

About: Sparse approximation is a research topic concerned with representing signals as combinations of only a few atoms from a dictionary. Over the lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Proceedings ArticleDOI
28 Jun 2009
TL;DR: Describes a new formulation of appearance-only SLAM suitable for very large-scale navigation that naturally incorporates robustness against perceptual aliasing, and demonstrates reliable online appearance mapping and loop-closure detection over a 1,000 km trajectory.
Abstract: We describe a new formulation of appearance-only SLAM suitable for very large scale navigation. The system navigates in the space of appearance, assigning each new observation to either a new or previously visited location, without reference to metric position. The system is demonstrated performing reliable online appearance mapping and loop closure detection over a 1,000 km trajectory, with mean filter update times of 14 ms. The 1,000 km experiment is more than an order of magnitude larger than any previously reported result. The scalability of the system is achieved by defining a sparse approximation to the FAB-MAP model suitable for implementation using an inverted index. Our formulation of the problem is fully probabilistic and naturally incorporates robustness against perceptual aliasing. The 1,000 km data set comprising almost a terabyte of omni-directional and stereo imagery is available for use, and we hope that it will serve as a benchmark for future systems.

354 citations
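The scalability claim above rests on implementing the sparse approximation to FAB-MAP with an inverted index over visual words. As a rough illustration of why an inverted index keeps lookups cheap, here is a minimal sketch; it shows only the indexing idea, not the FAB-MAP probabilistic model, and all names in it are hypothetical:

```python
# Minimal sketch of an inverted index for appearance-based place
# recognition. This is NOT the FAB-MAP probability model; it only
# illustrates why an inverted index makes lookups sparse: a query
# touches just the places that share at least one visual word with it.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.word_to_places = defaultdict(set)  # visual word id -> place ids
        self.num_places = 0

    def add_place(self, words):
        """Register a new place described by its set of visual-word ids."""
        place_id = self.num_places
        self.num_places += 1
        for w in set(words):
            self.word_to_places[w].add(place_id)
        return place_id

    def query(self, words):
        """Count shared words per candidate place; the cost scales with
        the number of matching places, not with the size of the map."""
        votes = defaultdict(int)
        for w in set(words):
            for place in self.word_to_places[w]:
                votes[place] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

index = InvertedIndex()
index.add_place([1, 5, 9])
index.add_place([2, 5, 7])
print(index.query([5, 9]))  # place 0 shares two words, place 1 shares one
```

Because a query only touches places that share a word with it, update cost tracks scene similarity rather than trajectory length, which is what makes the 1,000 km scale feasible.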

Proceedings ArticleDOI
29 Dec 2011
TL;DR: Experimental results on several hyperspectral images (HSIs) show that the proposed technique outperforms the linear sparsity-based classification technique, as well as the classical support vector machine and sparse kernel logistic regression classifiers.
Abstract: In this paper, a new technique for hyperspectral image classification is proposed. Our approach relies on the sparse representation of a test sample with respect to all training samples in a feature space induced by a kernel function. Projecting the samples into the feature space and kernelizing the sparse representation improves the separability of the data and thus yields higher classification accuracy compared to the more conventional linear sparsity-based classification algorithm. Moreover, the spatial coherence across neighboring pixels is also incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are sparsely represented in the feature space by selecting a few common training samples. Two greedy algorithms are also provided in this paper to solve the kernel versions of the pixel-wise and jointly sparse recovery problems. Experimental results show that the proposed technique outperforms the linear sparsity-based classification technique and the classical Support Vector Machine classifiers.

353 citations
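The greedy recovery the abstract refers to can be pictured as an orthogonal-matching-pursuit loop run entirely through the kernel matrix, since feature-space inner products are available via the kernel. The sketch below is an assumed rendering of kernel OMP; the RBF kernel, ridge term, and fixed sparsity level are illustrative choices, not the authors' exact settings:

```python
# A minimal sketch of kernel orthogonal matching pursuit (KOMP):
# greedily pick training samples whose feature-space images best
# explain the test sample, working only with kernel evaluations.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_omp(X_train, y, sparsity=5, gamma=1.0):
    """Returns the selected training-sample indices and coefficients."""
    K = rbf_kernel(X_train, X_train, gamma)              # n x n Gram matrix
    k_y = rbf_kernel(X_train, y[None, :], gamma)[:, 0]   # correlations with test
    support, coef = [], np.zeros(0)
    for _ in range(sparsity):
        # residual correlation with every atom, computed via the kernel:
        # phi(x_i)^T r = k(x_i, y) - K[i, S] @ c
        corr = k_y - (K[:, support] @ coef if support else 0.0)
        corr[support] = 0.0                              # don't re-pick atoms
        support.append(int(np.argmax(np.abs(corr))))
        K_ss = K[np.ix_(support, support)]
        # least squares in feature space: K_SS c = k_Sy (small ridge for safety)
        coef = np.linalg.solve(K_ss + 1e-8 * np.eye(len(support)),
                               k_y[support])
    return support, coef
```

For classification, this loop would be run against each class's training samples and the test pixel assigned to the class with the smallest feature-space residual; the joint sparsity model additionally forces neighboring pixels to share one support.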

Journal ArticleDOI
01 Feb 2004
TL;DR: This paper discusses the optimization of two operations, a sparse matrix times a dense vector and a sparse matrix times a set of dense vectors, and describes the different optimizations and parameter selection techniques, evaluating them on several machines using over 40 matrices.
Abstract: Sparse matrix-vector multiplication is an important computational kernel that performs poorly on most modern processors due to a low compute-to-memory ratio and irregular memory access patterns. Optimization is difficult because of the complexity of cache-based memory systems and because performance is highly dependent on the non-zero structure of the matrix. The SPARSITY system is designed to address these problems by allowing users to automatically build sparse matrix kernels that are tuned to their matrices and machines. SPARSITY combines traditional techniques such as loop transformations with data structure transformations and optimization heuristics that are specific to sparse matrices. It provides a novel framework for selecting optimization parameters, such as block size, using a combination of performance models and search. In this paper we discuss the optimization of two operations: a sparse matrix times a dense vector and a sparse matrix times a set of dense vectors. Our experience indicates that register level optimizations are effective for matrices arising in certain scientific simulations, in particular finite-element problems. Cache level optimizations are important when the vector used in multiplication is larger than the cache size, especially for matrices in which the non-zero structure is random. For applications involving multiple vectors, reorganizing the computation to perform the entire set of multiplications as a single operation produces significant speedups. We describe the different optimizations and parameter selection techniques and evaluate them on several machines using over 40 matrices taken from a broad set of application domains. Our results demonstrate speedups of up to 4X for the single vector case and up to 10X for the multiple vector case.

351 citations
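The two kernels the paper tunes are easy to state in reference form. Below is a plain CSR sketch of both; these naive loops are the kind of baseline SPARSITY optimizes, not its tuned output. The multiple-vector version shows where the reuse comes from: each nonzero of the matrix is loaded once and applied across all vectors.

```python
# Reference CSR sparse-matrix kernels: y = A @ x and Y = A @ X.
# data/indices/indptr follow the standard CSR convention.
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """Sparse matrix times one dense vector."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def spmm_csr(data, indices, indptr, X):
    """Sparse matrix times a set of dense vectors (columns of X).
    Each nonzero data[k] is read once and reused across all columns,
    which raises the compute-to-memory ratio the abstract mentions."""
    n_rows = len(indptr) - 1
    Y = np.zeros((n_rows, X.shape[1]))
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            Y[i, :] += data[k] * X[indices[k], :]
    return Y
```

Register blocking and cache blocking then restructure these loops around the matrix's nonzero pattern, which is why the best parameters are matrix- and machine-dependent.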

Journal ArticleDOI
TL;DR: An easy and efficient sparse-representation-based iterative algorithm for image inpainting that allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.).
Abstract: Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation maximization (EM) algorithm for image inpainting and interpolation. From a statistical point of view, the inpainting/interpolation can be viewed as an estimation problem with missing data. Toward this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered/interpolated based on sparse representations. We first introduce an easy and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also suggest some guidelines to automatically tune the regularization parameter.

350 citations
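The iteration at the heart of the EM scheme alternates between filling in the missing samples from the current estimate and shrinking transform coefficients under the sparsity-promoting prior. Here is a minimal 1-D sketch, assuming an orthonormal DCT dictionary and a fixed threshold; the paper works with richer dictionaries (wavelets, curvelets, etc.) and derives the update formally as an EM algorithm:

```python
# Minimal fill-then-shrink loop for sparsity-based inpainting.
# Observed samples are trusted; missing ones are imputed from the
# current estimate, then the signal is sparsified in the DCT domain.
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def inpaint_1d(observed, mask, lam=0.1, n_iter=100):
    """mask[i] is True where observed[i] is a valid sample."""
    x = np.where(mask, observed, 0.0)          # initial guess
    for _ in range(n_iter):
        # E-step-like fill: keep observed samples, impute the rest
        y = np.where(mask, observed, x)
        # M-step-like shrinkage: sparsify in the transform domain
        coeffs = soft_threshold(dct(y, norm='ortho'), lam)
        x = idct(coeffs, norm='ortho')
    return x
```

Decaying the threshold lam across iterations is the usual way to realize the automatic regularization tuning the abstract alludes to.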

Journal ArticleDOI
TL;DR: A robust subspace separation scheme is developed that deals with practical issues in a unified mathematical framework and gives surprisingly good performance in the presence of grossly mistracked features, missing entries, and corrupted entries.
Abstract: In this paper, we study the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this problem can be cast as the problem of segmenting samples drawn from multiple linear subspaces. In practice, due to limitations of the tracker, occlusions, and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or corrupted entries. In this paper, we develop a robust subspace separation scheme that deals with these practical issues in a unified mathematical framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data. We compare the performance of our methods to state-of-the-art motion segmentation methods based on expectation-maximization and spectral clustering. For data without outliers or missing information, the results of our methods are on par with the state-of-the-art results and, in many cases, exceed them. In addition, our methods give surprisingly good performance in the presence of the three types of pathological trajectories mentioned above. All code and results are publicly available at http://perception.csl.uiuc.edu/coding/motion/.

348 citations
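A common sparse-representation route to this kind of segmentation is self-expression: each trajectory is coded as a sparse combination of the other trajectories, and the coefficient pattern defines an affinity for spectral clustering, since points in the same subspace tend to select one another. The sketch below uses OMP for brevity and is only an assumed illustration of that idea; the paper's actual framework (lossy compression, rank minimization, and l1 recovery with outlier handling) is substantially richer:

```python
# Minimal sparse self-expression step for subspace segmentation:
# code each column of X against the remaining columns, then build a
# symmetric affinity matrix from the coefficients.
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: y ~= D @ c with few nonzeros."""
    residual, support = y.copy(), []
    c = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    c[support] = sol
    return c

def self_expression_affinity(X, sparsity=3):
    """X has one feature-point trajectory per column."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        D = np.delete(X, j, axis=1)      # all trajectories except x_j
        c = omp(D, X[:, j], sparsity)
        C[np.arange(n) != j, j] = c      # forbid self-representation
    return np.abs(C) + np.abs(C).T       # input to spectral clustering
```

Running spectral clustering on the returned affinity recovers the subspace (i.e., motion) labels under the affine camera model's assumption that each rigid motion spans a low-dimensional subspace.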


Network Information
Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 193
2022: 454
2021: 641
2020: 924
2019: 1,208
2018: 1,371