Topic

Sparse approximation

About: Sparse approximation is a research topic. Over the lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: Inspired by existing SGA methods, a novel GA termed subspace matching pursuit (SMP) is presented; it uses the low-degree mixed pixels in the hyperspectral image to iteratively find a subspace with which to reconstruct the hyperspectral data, and it can also serve as a dictionary pruning algorithm.
Abstract: Sparse unmixing assumes that each mixed pixel in the hyperspectral image can be expressed as a linear combination of only a few spectra (endmembers) from a spectral library known a priori. It then aims at estimating the fractional abundances of these endmembers in the scene. Unfortunately, because of the usually high correlation of the spectral library, the sparse unmixing problem remains a great challenge. Moreover, most related work focuses on l1 convex relaxation methods, and little attention has been paid to the use of simultaneous sparse representation via greedy algorithms (GAs), hereafter SGA, for sparse unmixing. SGA has the advantages that it can obtain an approximate solution to the l0 problem directly, without smoothing the penalty term and at low computational complexity, and that it can exploit the spatial information of the hyperspectral data. Thus, it is worthwhile to explore the potential of such algorithms for sparse unmixing. Inspired by the existing SGA methods, this paper presents a novel GA termed subspace matching pursuit (SMP) for sparse unmixing of hyperspectral data. SMP makes use of the low-degree mixed pixels in the hyperspectral image to iteratively find a subspace with which to reconstruct the hyperspectral data. It is proved that, under certain conditions, SMP can recover the optimal endmembers from the spectral library. Moreover, SMP can serve as a dictionary pruning algorithm; it can therefore boost other sparse unmixing algorithms, making them more accurate and time efficient. Experimental results on both synthetic and real data demonstrate the efficacy of the proposed algorithm.
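As a rough illustration of the simultaneous-greedy idea in this abstract (not the paper's SMP algorithm itself), the sketch below runs a SOMP-style joint selection of library spectra shared across all pixels, followed by least-squares abundance estimation. The function name, library, and pixel data are invented for illustration.

```python
# Minimal SOMP-style sketch of simultaneous greedy sparse unmixing
# (illustrative only; not the SMP algorithm from the paper).
import numpy as np

def somp_unmix(A, Y, k):
    """Jointly select k library spectra (columns of A) that best explain
    all pixels (columns of Y), then return least-squares abundances."""
    residual = Y.copy()
    support = []
    X = None
    for _ in range(k):
        # Correlate every library spectrum with the shared residual and
        # pick the one with the largest total energy across pixels.
        corr = np.abs(A.T @ residual).sum(axis=1)
        corr[support] = -np.inf                      # do not reselect
        support.append(int(np.argmax(corr)))
        # Re-fit all selected spectra and update the residual.
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X
    return support, X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))      # toy library: 200 spectra, 50 bands
    true = [3, 40, 111]                     # endmembers actually present
    abund = rng.random((3, 30))             # abundances for 30 pixels
    Y = A[:, true] @ abund + 0.01 * rng.standard_normal((50, 30))
    support, X = somp_unmix(A, Y, k=3)
    print(sorted(support))                  # ideally recovers [3, 40, 111]
```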

98 citations

Journal ArticleDOI
TL;DR: This work model the sparsifying Fourier dictionary as a parameterized dictionary, with the sampled frequency grid points treated as the underlying parameters, and develops a novel recovery algorithm for CS of complex sinusoids based on the philosophy of the variational expectation-maximization (EM) algorithm.
Abstract: In the existing compressed sensing (CS) theory, the accurate reconstruction of an unknown signal lies in the awareness of its sparsifying dictionary. For the signal represented by a finite sum of complex sinusoids, however, it is impractical to set a fixed sparsifying Fourier dictionary prior to signal reconstruction due to our ignorance of the signal's component frequencies. To address this, we model the sparsifying Fourier dictionary as a parameterized dictionary, with the sampled frequency grid points treated as the underlying parameters. Consequently, the sparsifying dictionary is refinable during the signal reconstruction process, and its refinement can be accomplished via the adjustment of the frequency grid. Furthermore, based on the philosophy of the variational expectation-maximization (EM) algorithm, we develop a novel recovery algorithm for CS of complex sinusoids. The algorithm achieves joint sparse representation recovery and sparsifying dictionary refinement by successively executing steps of signal coefficient estimation and dictionary parameter optimization. Simulation results under different conditions demonstrate that compared to the state-of-the-art CS recovery methods, the proposed algorithm achieves much higher signal reconstruction accuracy, and yields superior performance both in suppressing additive noise in measurements and in reconstructing signals with closely spaced component frequencies.
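The alternating structure described above can be illustrated with a toy sketch that switches between least-squares coefficient estimation on the current frequency set and a local search that refines each selected frequency. This is a simplified stand-in for the paper's variational-EM algorithm; the grid size, step sizes, and test signal below are invented for illustration.

```python
# Toy sketch of parameterized-dictionary refinement for complex sinusoids:
# alternate coefficient estimation and local frequency-grid refinement.
import numpy as np

def atoms(freqs, n):
    """Columns are unit-norm complex sinusoids exp(j*2*pi*f*t) on n samples."""
    t = np.arange(n)[:, None]
    return np.exp(2j * np.pi * t * np.asarray(freqs)[None, :]) / np.sqrt(n)

def refine_cs(y, k, n_grid=64, iters=20):
    n = len(y)
    grid = np.linspace(0, 0.5, n_grid, endpoint=False)
    # Greedy initialisation: pick the k grid atoms most correlated with y.
    sel = np.argsort(-np.abs(atoms(grid, n).conj().T @ y))[:k]
    freqs = grid[sel].astype(float)
    for _ in range(iters):
        # "E-like" step: coefficients by least squares on current frequencies.
        D = atoms(freqs, n)
        c, *_ = np.linalg.lstsq(D, y, rcond=None)
        # "M-like" step: refine each frequency by a small local grid search.
        for i in range(k):
            cand = freqs[i] + np.linspace(-1, 1, 21) / (4 * n_grid)
            errs = []
            for f in cand:
                trial = freqs.copy()
                trial[i] = f
                Dt = atoms(trial, n)
                ct, *_ = np.linalg.lstsq(Dt, y, rcond=None)
                errs.append(np.linalg.norm(y - Dt @ ct))
            freqs[i] = cand[int(np.argmin(errs))]
    return np.sort(freqs), c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, true_f = 128, np.array([0.1030, 0.1125])   # closely spaced frequencies
    y = atoms(true_f, n) @ np.array([1.0, 0.8]) \
        + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    f_hat, _ = refine_cs(y, k=2)
    print(f_hat)                                  # should approach [0.1030, 0.1125]
```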

98 citations

Journal ArticleDOI
TL;DR: This paper introduces a novel method for single and simultaneous fault location in distribution networks by means of a sparse representation (SR) vector, fuzzy clustering, and machine learning.
Abstract: This paper introduces a novel method for single and simultaneous fault location in distribution networks by means of a sparse representation (SR) vector, fuzzy clustering, and machine learning. The method requires only a few smart meters along the primary feeders to measure the pre- and during-fault voltages. The voltage sag values for the measured buses produce a vector whose dimension is less than the number of buses in the system. By concatenating the corresponding rows of the bus impedance matrix, an underdetermined set of equations is formed and is used to recover the fault current vector. Since the current vector ideally contains only a few nonzero values, corresponding to the fault currents at the faulted points, it is a sparse vector that can be determined by ℓ1-norm minimization. Because the number of nonzero values in the estimated current vector often exceeds the number of fault points, we analyze the nonzero values with fuzzy c-means clustering to estimate four possible faults. Furthermore, the nonzero values are processed by a new machine learning method based on the k-nearest neighbor technique to estimate a single fault location. The performance of our algorithms is validated by their implementation on a real distribution network with noisy and noise-free measurements.
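The ℓ1 recovery step described above can be sketched with a generic iterative soft-thresholding (ISTA) solver on a synthetic underdetermined system; the fuzzy c-means and k-nearest-neighbor stages of the paper's method are not reproduced, and the impedance rows and voltage sags below are toy stand-ins.

```python
# Generic ISTA sketch of the sparse fault-current recovery step
# (toy data; not the full method from the paper).
import numpy as np

def ista(A, b, lam=0.05, iters=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_buses, n_meters = 100, 15
    Z = rng.standard_normal((n_meters, n_buses))   # stand-in for impedance-matrix rows
    i_true = np.zeros(n_buses)
    i_true[[17, 63]] = [2.0, 1.5]                  # two simultaneous faults
    dv = Z @ i_true + 0.01 * rng.standard_normal(n_meters)   # measured voltage sags
    i_hat = ista(Z, dv)
    print(np.argsort(-np.abs(i_hat))[:4])          # largest entries flag candidate fault buses
```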

98 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A new outlier detection method that combines tools from sparse representation with random walks on a graph and establishes a connection between inliers/outliers and essential/inessential states of the Markov chain, which allows us to detect outliers by using random walks.
Abstract: Many computer vision tasks involve processing large amounts of data contaminated by outliers, which need to be detected and rejected. While outlier detection methods based on robust statistics have existed for decades, only recently have methods based on sparse and low-rank representation been developed along with guarantees of correct outlier detection when the inliers lie in one or more low-dimensional subspaces. This paper proposes a new outlier detection method that combines tools from sparse representation with random walks on a graph. By exploiting the property that data points can be expressed as sparse linear combinations of each other, we obtain an asymmetric affinity matrix among data points, which we use to construct a weighted directed graph. By defining a suitable Markov chain from this graph, we establish a connection between inliers/outliers and essential/inessential states of the Markov chain, which allows us to detect outliers by using random walks. We provide a theoretical analysis that justifies the correctness of our method under geometric and connectivity assumptions. Experimental results on image databases demonstrate its superiority with respect to state-of-the-art sparse and low-rank outlier detection methods.
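A simplified sketch of this pipeline is given below: each point is sparsely expressed in terms of the others (a small lasso solved by soft thresholding, not the paper's exact formulation), the coefficients form an asymmetric directed affinity graph, and a random walk on that graph leaves little probability on outliers. The function names and synthetic data are illustrative assumptions.

```python
# Sketch: sparse self-expression + random walk for outlier detection
# (synthetic data; simplified stand-in for the paper's method).
import numpy as np

def lasso_ista(A, b, lam=0.1, iters=300):
    """Soft-thresholded gradient descent for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def outlier_scores(X, lam=0.1, t=100):
    """X: (dim, n) data matrix. Returns per-point probability mass after a
    t-step random walk; low values indicate likely outliers."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        others = [i for i in range(n) if i != j]
        C[others, j] = lasso_ista(X[:, others], X[:, j], lam)   # x_j ≈ X_{-j} c_j
    W = np.abs(C).T                                   # edge j -> i if j uses point i
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)     # row-stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(t):                                # power iteration of the walk
        pi = pi @ P
    return pi

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    U = rng.standard_normal((20, 3))                  # a 3-dim subspace in R^20
    inliers = U @ rng.standard_normal((3, 40))
    outliers = rng.standard_normal((20, 5))           # 5 points off the subspace
    X = np.hstack([inliers, outliers])
    X /= np.linalg.norm(X, axis=0, keepdims=True)     # normalize columns
    scores = outlier_scores(X)
    print(np.argsort(scores)[:5])                     # lowest mass -> outlier candidates
```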

98 citations

Journal ArticleDOI
Bin Dong
TL;DR: A new (constructive) characterization of tight wavelet frames on non-flat domains is introduced, in both the continuum setting, i.e., on manifolds, and the discrete setting, along with fast tight wavelet frame transforms and how they can be computed and effectively used to process graph data.

98 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371