
Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: Several computational strategies, both combinatorial and noncombinatorial in nature, are described for computing a sparse basis for the null space of a sparse underdetermined matrix.
Abstract: We present algorithms for computing a sparse basis for the null space of a sparse underdetermined matrix. We describe several possible computational strategies, both combinatorial and noncombinatorial in nature, and we compare their effectiveness for several test problems.
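
The paper's combinatorial and noncombinatorial strategies are not reproduced here, but the object they compute is easy to illustrate. Below is a minimal baseline sketch using SciPy's SVD-based routine on a small hypothetical underdetermined matrix; the resulting orthonormal basis is generally dense, which is exactly why sparsity-seeking algorithms like the paper's are of interest.

```python
# Baseline null-space computation via SVD (dense result); the paper's
# algorithms instead look for a basis with as few nonzeros as possible.
import numpy as np
from scipy.linalg import null_space

# Hypothetical 3 x 5 underdetermined matrix
A = np.array([[1., 0., 2., 0., 0.],
              [0., 1., 0., 3., 0.],
              [0., 0., 0., 0., 1.]])

N = null_space(A)              # columns span the null space of A
print(N.shape)                 # (5, 2): nullity = 5 - rank(A)
print(np.allclose(A @ N, 0))   # True: A N = 0
```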

95 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper proposes a fast orthogonal dictionary learning method for sparse image representation that is much more computationally efficient than over-complete dictionary learning methods.
Abstract: In recent years, how to learn a dictionary from input images for sparse modelling has been a very active topic in image processing and recognition. Most existing dictionary learning methods consider an over-complete dictionary, e.g. the K-SVD method, and typically require solving a minimization problem that is very challenging in terms of computational feasibility and efficiency. Moreover, if the correlations among dictionary atoms are not well constrained, the redundancy of the dictionary does not necessarily improve the performance of sparse coding. This paper proposes a fast orthogonal dictionary learning method for sparse image representation. With comparable performance on several image restoration tasks, the proposed method is much more computationally efficient than over-complete dictionary learning methods.
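
As a hedged sketch (not the authors' exact algorithm), the appeal of an orthogonal dictionary can be shown with a simple alternating scheme: with D orthogonal, the sparse-coding step reduces to hard thresholding of D^T Y, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. All data below are synthetic.

```python
# Alternating orthogonal dictionary learning: cheap hard-threshold
# sparse coding plus a closed-form Procrustes dictionary update.
import numpy as np

def learn_orthogonal_dictionary(Y, lam=0.1, iters=20):
    """Y: d x n data matrix; returns orthogonal D (d x d) and codes X."""
    d = Y.shape[0]
    D = np.eye(d)                             # start from the identity
    for _ in range(iters):
        X = D.T @ Y                           # exact coefficients under D
        X[np.abs(X) < lam] = 0.0              # hard thresholding
        U, _, Vt = np.linalg.svd(Y @ X.T)     # Procrustes: min ||Y - D X||_F
        D = U @ Vt                            # nearest orthogonal matrix
    return D, X

rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 200))
D, X = learn_orthogonal_dictionary(Y)
print(np.allclose(D.T @ D, np.eye(16)))       # True: D stays orthogonal
```

Both steps are closed-form, which is the source of the speedup over over-complete methods such as K-SVD, whose sparse-coding step needs an iterative pursuit.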

95 citations

Journal ArticleDOI
TL;DR: An iterative hard thresholding (IHT) method and its variant are proposed for solving $l_0$ regularized box constrained convex programming, and the sequence generated by these methods is shown to converge to a local minimizer.
Abstract: In this paper we consider $l_0$ regularized convex cone programming problems. In particular, we first propose an iterative hard thresholding (IHT) method and its variant for solving $l_0$ regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an $\epsilon$-local-optimal solution. We then propose a method for solving $l_0$ regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and establish its iteration complexity for finding an $\epsilon$-approximate local minimizer. Finally, we propose a variant of this method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of the problem.
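
For intuition, here is a simplified sketch of an IHT iteration for the special case of $l_0$ regularized, box-constrained least squares, assuming the box contains zero; it illustrates the gradient-step/threshold structure, not the paper's exact updates or complexity analysis.

```python
# IHT sketch for: min 0.5*||A x - b||^2 + lam*||x||_0,  lo <= x <= hi.
import numpy as np

def iht_box(A, b, lam, lo, hi, iters=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - b)) / L  # gradient step
        z = np.clip(z, lo, hi)           # project onto the box
        z[z**2 < 2 * lam / L] = 0.0      # hard threshold: the l0 prox
        x = z
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true                           # noiseless synthetic measurements
x = iht_box(A, b, lam=0.05, lo=-2.0, hi=2.0)
print(np.count_nonzero(x))               # the iterate stays sparse
```

The threshold sqrt(2*lam/L) is the coordinatewise closed form of the $l_0$ proximal step: an entry is kept only when keeping it lowers the quadratic-plus-penalty cost.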

95 citations

Journal ArticleDOI
TL;DR: A methodology for online learning of square sparsifying transforms is developed, and the proposed transform learning algorithms are shown to have a much lower computational cost than online synthesis dictionary learning.
Abstract: Techniques exploiting the sparsity of signals in a transform domain or dictionary have been popular in signal processing. Adaptive synthesis dictionaries have been shown to be useful in applications such as signal denoising and medical image reconstruction. More recently, the learning of sparsifying transforms for data has received interest. The sparsifying transform model allows for cheap and exact computations. In this paper, we develop a methodology for online learning of square sparsifying transforms. Such online learning can be particularly useful when dealing with big data, and for signal processing applications such as real-time sparse representation and denoising. The proposed transform learning algorithms are shown to have a much lower computational cost than online synthesis dictionary learning. In practice, the sequential learning of a sparsifying transform typically converges faster than batch mode transform learning. Preliminary experiments show the usefulness of the proposed schemes for sparse representation and denoising.
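
To illustrate why the transform model's computations are cheap and why online updates are natural, here is a toy sketch restricted to orthonormal square transforms (the paper handles general square transforms): sparse coding is exact hard thresholding, and the transform update is a Procrustes step on a running statistic accumulated batch by batch. All names and data are illustrative.

```python
# Online learning of an orthonormal sparsifying transform W (W y ~ sparse).
import numpy as np

def online_transform(batches, lam=0.1):
    d = batches[0].shape[0]
    W = np.eye(d)
    M = np.zeros((d, d))                 # running statistic sum_t X_t Y_t^T
    for Y in batches:                    # data arrives one mini-batch at a time
        X = W @ Y                        # exact sparse coding: just threshold
        X[np.abs(X) < lam] = 0.0
        M += X @ Y.T                     # accumulate across all batches seen
        U, _, Vt = np.linalg.svd(M)      # Procrustes: min sum ||W Y_t - X_t||_F^2
        W = U @ Vt
    return W

rng = np.random.default_rng(2)
batches = [rng.standard_normal((8, 64)) for _ in range(30)]
W = online_transform(batches)
print(np.allclose(W @ W.T, np.eye(8)))   # True: W remains orthonormal
```

Contrast with synthesis dictionary learning, where the sparse-coding step itself requires an iterative solver for every new sample; here it is a single matrix product and threshold, which is what drives the lower online cost.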

95 citations

Journal ArticleDOI
TL;DR: This article proposes a novel hyperspectral imagery super-resolution method utilizing sparse representation and a spectral mixing model, introducing an adaptive regularization term into the sparse representation framework by combining it with the linear spectral mixing model.
Abstract: Owing to instrument limitations and imperfect imaging optics, it is difficult to acquire high spatial resolution hyperspectral imagery. Low spatial resolution produces many mixed pixels, greatly degrades detection and recognition performance, and affects related applications in civil and military fields. As a powerful statistical image modeling technique, sparse representation can be utilized to analyze hyperspectral images efficiently. Hyperspectral imagery is intrinsically sparse in the spatial and spectral domains, and image super-resolution quality largely depends on whether this prior knowledge is utilized properly. In this article, we propose a novel hyperspectral imagery super-resolution method utilizing sparse representation and a spectral mixing model. Based on the sparse representation model and the hyperspectral image acquisition process model, small patches of hyperspectral observations from different wavelengths can be represented as weighted linear combinations of a small number of atoms in a pre-trained dictionary. Super-resolution is then treated as a least squares problem with sparsity constraints. To maintain spectral consistency, we further introduce an adaptive regularization term into the sparse representation framework by combining it with the linear spectral mixing model. Extensive experiments validate the effectiveness of the proposed method.
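
The coupled-dictionary idea at the core of such methods can be sketched generically: code a low-resolution patch sparsely in a low-resolution dictionary, then reconstruct the high-resolution patch from the paired high-resolution dictionary with the same coefficients. The sketch below uses a plain l1 solver (ISTA) and synthetic dictionaries; the paper's adaptive spectral-mixing regularizer is omitted.

```python
# Generic sparse-representation super-resolution on one patch.
import numpy as np

def ista_lasso(D, y, lam=0.05, iters=300):
    """Solve min_a 0.5*||D a - y||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - (D.T @ (D @ a - y)) / L                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(3)
D_h = rng.standard_normal((64, 128))      # hypothetical high-res dictionary
D_l = D_h[::4]                            # toy "acquisition": keep every 4th row
a_true = np.where(rng.random(128) < 0.05, 1.0, 0.0)
y_low = D_l @ a_true                      # observed low-res patch
a = ista_lasso(D_l, y_low)                # sparse code from low-res data
patch_high = D_h @ a                      # high-res patch estimate
print(patch_high.shape)                   # (64,)
```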

95 citations


Network Information
Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371