Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a new sparsity-based algorithm for automatic target detection in hyperspectral imagery (HSI) based on the concept that a pixel in HSI lies in a low-dimensional subspace and thus can be represented as a sparse linear combination of the training samples.
Abstract: In this paper, we propose a new sparsity-based algorithm for automatic target detection in hyperspectral imagery (HSI). This algorithm is based on the concept that a pixel in HSI lies in a low-dimensional subspace and can therefore be represented as a sparse linear combination of the training samples. The sparse representation (a sparse vector corresponding to the linear combination of a few selected training samples) of a test sample can be recovered by solving an l0-norm minimization problem. With the recent development of compressed sensing theory, such a minimization problem can be recast as a standard linear programming problem or efficiently approximated by greedy pursuit algorithms. Once the sparse vector is obtained, the class of the test sample can be determined from the characteristics of the sparse vector upon reconstruction. In addition to the constraints on sparsity and reconstruction accuracy, we also exploit the fact that in HSI neighboring pixels have similar spectral characteristics (smoothness). Our proposed algorithm therefore imposes a smoothness constraint as well, forcing the vector Laplacian at each reconstructed pixel to remain minimal throughout the minimization process. The proposed sparsity-based algorithm is applied to several hyperspectral images to detect targets of interest. Simulation results show that our algorithm outperforms classical hyperspectral target detection algorithms, such as the popular spectral matched filters, matched subspace detectors, and adaptive subspace detectors, as well as binary classifiers such as support vector machines.

385 citations
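The greedy-pursuit recovery and residual-based classification described in the abstract above can be sketched in a few lines. The following is a minimal illustration, assuming a NumPy environment; the dictionary sizes, the background/target split, and the omission of the paper's Laplacian smoothness constraint are simplifications for demonstration, not the authors' actual setup.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the residual, refit by least squares, repeat."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy dictionary: unit-norm columns act as "training spectra";
# the first 20 atoms play the background class, the last 20 the target.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
A /= np.linalg.norm(A, axis=0)
truth = np.zeros(40)
truth[[3, 25]] = [0.5, 1.0]      # mixed pixel, target-dominant
y = A @ truth                    # synthetic test pixel

x = omp(A, y, sparsity=2)
# Decide the class by which sub-dictionary reconstructs the pixel better.
r_bg = np.linalg.norm(y - A[:, :20] @ x[:20])
r_tg = np.linalg.norm(y - A[:, 20:] @ x[20:])
print("target" if r_tg < r_bg else "background")   # expect "target"
```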

Journal ArticleDOI
TL;DR: To efficiently find the single sparse vector produced by the last reduction step, this paper suggests an empirical boosting strategy that improves the recovery ability of any given suboptimal method for recovering a sparse vector.
Abstract: The rapidly developing area of compressed sensing suggests that a sparse vector lying in a high-dimensional space can be accurately and efficiently recovered from only a small set of nonadaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common sparsity pattern. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing algorithms to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic model of a single sparse vector by randomly combining the measurements. Our approach is exact for both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently find the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given suboptimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that, when applied to infinite sets, our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm has a fast run time and a much higher recovery rate than known popular methods.

384 citations
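The last reduction step, from many jointly sparse vectors to a single one, can be illustrated as below. This is a rough sketch under assumed dimensions and tolerances: the single-vector solver is a plain OMP, and retrying with fresh random combinations loosely stands in for the paper's boosting strategy.

```python
import numpy as np

def omp_support(A, y, k):
    """Plain orthogonal matching pursuit; returns the k-atom support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

def joint_support(A, Y, k, trials=10, seed=0):
    """Estimate the common support of jointly sparse X in Y = A @ X by
    randomly combining the measurement columns into a single vector and
    running a single-vector solver (the abstract's last reduction)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        y = Y @ rng.standard_normal(Y.shape[1])   # random combination
        support = omp_support(A, y, k)
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        if np.linalg.norm(A[:, support] @ X - Y) < 1e-8 * np.linalg.norm(Y):
            return support                        # explains every column
    return support

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)
X = np.zeros((60, 5))
X[[7, 19, 42], :] = rng.standard_normal((3, 5))   # common sparsity pattern
print(joint_support(A, A @ X, k=3))               # expect [7, 19, 42]
```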

Posted Content
TL;DR: This paper presents a variational approach for fusing hyperspectral and multispectral images and demonstrates the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.
Abstract: This paper presents a variational approach to fusing hyperspectral and multispectral images. The fusion process is formulated as an inverse problem whose solution is the target image, assumed to live in a much lower-dimensional subspace. A sparse regularization term is carefully designed, relying on a decomposition of the scene over a set of dictionaries. The dictionary atoms and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved via alternating optimization with respect to the target image (using the alternating direction method of multipliers, ADMM) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.

384 citations
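The alternating scheme rests on solving sparsity-regularized least-squares subproblems with ADMM. The sketch below shows that generic ingredient on an l1-regularized toy problem; the operator `A`, the lasso form, and all parameter values are illustrative assumptions, not the authors' full dictionary-based fusion model.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (the sparse-coding step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||A @ x - y||^2 + lam*||x||_1, the generic
    splitting that alternating fusion schemes of this kind rely on."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once
    Aty = A.T @ y
    for _ in range(iters):
        # x-update: quadratic subproblem via the cached factorization.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)   # z-update: shrinkage
        u = u + x - z                          # dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 50, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.flatnonzero(np.abs(admm_lasso(A, y, lam=0.5)) > 0.1))  # [ 5 50 77]
```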

Journal ArticleDOI
TL;DR: In this article, a review of state-of-the-art scalable Gaussian process regression (GPR) models is presented, focusing on global approximations that distill the entire data and local approximations that divide the data for subspace learning.
Abstract: The vast quantity of information brought by big data, together with evolving computer hardware, has encouraged success stories in the machine learning community. Meanwhile, it poses challenges for Gaussian process regression (GPR), a well-known nonparametric and interpretable Bayesian model that suffers from cubic complexity in the data size. To improve scalability while retaining desirable prediction quality, a variety of scalable GPs have been presented. However, they have not yet been comprehensively reviewed and analyzed so as to be well understood by both academia and industry. Given the explosion of data size, a review of scalable GPs is timely and important for the GP community. To this end, this article is devoted to reviewing state-of-the-art scalable GPs in two main categories: global approximations that distill the entire data and local approximations that divide the data for subspace learning. For global approximations, we focus mainly on sparse approximations, comprising prior approximations that modify the prior but perform exact inference, posterior approximations that retain the exact prior but perform approximate inference, and structured sparse approximations that exploit specific structures in the kernel matrix; for local approximations, we highlight the mixture/product of experts, which conducts model averaging over multiple local experts to boost predictions. To present a complete review, recent advances in improving the scalability and capability of scalable GPs are also covered. Finally, the extensions and open issues of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research.

381 citations
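For a taste of the prior-approximation family the review covers, the sketch below computes a subset-of-regressors predictive mean with m inducing inputs, which replaces the exact GP's cubic cost with roughly O(n m^2); the kernel, noise level, and inducing-point placement are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sparse_gp_mean(X, y, Xs, Z, noise=0.1):
    """Subset-of-regressors predictive mean: a classic 'prior
    approximation' sparse GP built on m inducing inputs Z."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kzx = rbf(Z, X)
    Kzs = rbf(Z, Xs)
    # Weights over inducing points: (Kzx Kxz + noise^2 Kzz)^-1 Kzx y
    w = np.linalg.solve(Kzx @ Kzx.T + noise**2 * Kzz, Kzx @ y)
    return Kzs.T @ w

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (200, 1))              # n = 200 training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 15)[:, None]           # m = 15 inducing inputs
Xs = np.array([[0.0], [1.5]])
print(sparse_gp_mean(X, y, Xs, Z))            # approx sin(0), sin(1.5)
```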

Journal ArticleDOI
TL;DR: This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance.
Abstract: After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper, we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis model. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.

380 citations
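The contrast the abstract draws can be made concrete in a few lines: synthesis builds a signal from a few dictionary atoms, while analysis characterizes a signal by where an operator applied to it vanishes. The finite-difference operator below is the standard textbook example of an analysis operator, chosen here for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 20, 40

# Synthesis model: the signal is BUILT from a few atoms of a
# dictionary D, i.e. x = D @ alpha with alpha sparse.
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)
alpha = np.zeros(p)
alpha[[3, 17, 31]] = rng.standard_normal(3)
x_syn = D @ alpha                                # dense-looking signal

# Analysis model: the signal is CHARACTERIZED by the zeros of
# Omega @ x (its cosupport). With a finite-difference Omega,
# piecewise-constant signals are highly cosparse.
Omega = np.eye(n - 1, n, 1) - np.eye(n - 1, n)   # row i: x[i+1] - x[i]
x_ana = np.repeat([1.0, -2.0, 0.5], [7, 6, 7])   # piecewise constant

print("synthesis: nonzeros in alpha =", np.count_nonzero(alpha),
      "| nonzeros in x_syn =", np.count_nonzero(np.abs(x_syn) > 1e-12))
print("analysis : zeros in Omega @ x_ana =",
      int(np.sum(np.abs(Omega @ x_ana) < 1e-12)))
```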


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371