Topic

Sparse approximation

About: Sparse approximation is a research topic. Over the lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal Article (DOI)
TL;DR: Experimental results show that the improved sparse subspace clustering method has the second shortest computational time and also outperforms the other six methods in classification accuracy when using an appropriate band number obtained by the DC plot algorithm.
Abstract: An improved sparse subspace clustering (ISSC) method is proposed to select an appropriate band subset for hyperspectral imagery (HSI) classification. The ISSC assumes that band vectors are sampled from a union of low-dimensional orthogonal subspaces and each band can be sparsely represented as a linear or affine combination of other bands within its subspace. First, the ISSC represents band vectors with sparse coefficient vectors by solving the L2-norm optimization problem using the least square regression (LSR) algorithm. The sparse and block diagonal structure of the coefficient matrix from LSR leads to correct segmentation of band vectors. Second, the angular similarity measurement is presented and utilized to construct the similarity matrix. Third, the distribution compactness (DC) plot algorithm is used to estimate an appropriate size of the band subset. Finally, spectral clustering is implemented to segment the similarity matrix and the desired ISSC band subset is found. Four groups of experiments on three widely used HSI datasets are performed to test the performance of ISSC for selecting bands in classification. In addition, the following six state-of-the-art band selection methods are used to make comparisons: linear constrained minimum variance-based band correlation constraint (LCMV-BCC), affinity propagation (AP), spectral information divergence (SID), maximum-variance principal component analysis (MVPCA), sparse representation-based band selection (SpaBS), and sparse nonnegative matrix factorization (SNMF). Experimental results show that the ISSC has the second shortest computational time and also outperforms the other six methods in classification accuracy when using an appropriate band number obtained by the DC plot algorithm.

152 citations
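
The core of the ISSC pipeline above is compact enough to sketch: least squares regression for the sparse coefficients, angular similarity for the affinity matrix, spectral clustering for the segmentation. The Python sketch below is illustrative rather than a reproduction of the paper: it uses the ridge-regularized LSR closed form with the diagonal zeroed afterward instead of enforced as a constraint, it takes the subset size as an argument instead of estimating it with the DC plot algorithm, and names such as issc_band_select and lam are ours.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def issc_band_select(X, n_clusters, lam=1e-2):
    """X: (n_pixels, n_bands) hyperspectral matrix; columns are band vectors."""
    n = X.shape[1]
    # Ridge-regularized LSR closed form: C = (X^T X + lam*I)^-1 X^T X,
    # so column j holds the coefficients representing band j by the others.
    # Self-representation is suppressed by zeroing the diagonal afterward.
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)
    # Angular similarity between coefficient vectors builds the affinity matrix.
    Cn = C / (np.linalg.norm(C, axis=0, keepdims=True) + 1e-12)
    W = np.abs(Cn.T @ Cn)
    W = 0.5 * (W + W.T)  # symmetrize for spectral clustering
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(W)
    # Keep one representative band per cluster: the most central member.
    selected = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        centrality = W[np.ix_(members, members)].sum(axis=1)
        selected.append(int(members[np.argmax(centrality)]))
    return sorted(selected)
```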

Dissertation
01 Jan 2004

152 citations

Journal Article (DOI)
TL;DR: Simulation results show that the performance of the proposed schemes depends on the degree of sparsity and that, provided suitable intensities of the zero-attracting term are selected, they can outperform the standard diffusion LMS when the considered vector is sparse.
Abstract: We address the problem of in-network distributed estimation for sparse vectors. In order to exploit the underlying sparsity of the vector of interest, we incorporate the l1- and l0-norm constraints into the cost function of the standard diffusion least-mean squares (LMS). This technique is equivalent to adding a zero-attracting term in the iteration of the LMS-based algorithm, which accelerates the convergence rates of the zero or near-zero components. The rules for selecting the intensity of the zero-attracting term are derived and verified. Simulation results show that the performances of the proposed schemes depend on the degree of sparsity. Provided that suitable intensities of the zero-attracting term are selected, they can outperform the standard diffusion LMS when the considered vector is sparse. In addition, a practical application of the proposed sparse algorithms in spectrum estimation for a narrow-band source is presented.

152 citations
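
The zero-attracting idea above amounts to a one-term change to diffusion LMS: each node's adaptation step subtracts rho * sign(w), which pulls near-zero coefficients toward exactly zero while barely disturbing large ones. Below is a minimal adapt-then-combine sketch, assuming a caller-supplied row-stochastic combination matrix and illustrative step sizes; none of these parameter values come from the paper.

```python
import numpy as np

def za_diffusion_lms(X, d, A, mu=0.01, rho=1e-4, n_iter=500):
    """X: (n_nodes, n_samples, m) regressors; d: (n_nodes, n_samples) outputs;
    A: (n_nodes, n_nodes) row-stochastic combination matrix."""
    n_nodes, n_samples, m = X.shape
    w = np.zeros((n_nodes, m))
    for it in range(n_iter):
        i = it % n_samples
        psi = np.empty_like(w)
        for k in range(n_nodes):
            # Adapt: standard LMS step plus the zero-attracting term
            # -rho*sign(w), the l1-induced attractor described above.
            e = d[k, i] - X[k, i] @ w[k]
            psi[k] = w[k] + mu * e * X[k, i] - rho * np.sign(w[k])
        # Combine: each node averages its neighbors' intermediate estimates.
        w = A @ psi
    return w
```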

Journal Article (DOI)
TL;DR: This work implements a new algorithm for listing all maximal cliques in sparse graphs due to Eppstein, Loffler, and Strash (ISAAC 2010) and analyzes its performance on a large corpus of real-world graphs, showing that this algorithm is the first to offer a practical solution to listing all maximal cliques in large sparse graphs.
Abstract: We implement a new algorithm for listing all maximal cliques in sparse graphs due to Eppstein, Loffler, and Strash (ISAAC 2010) and analyze its performance on a large corpus of real-world graphs. Our analysis shows that this algorithm is the first to offer a practical solution to listing all maximal cliques in large sparse graphs. All other theoretically-fast algorithms for sparse graphs have been shown to be significantly slower than the algorithm of Tomita et al. (Theoretical Computer Science, 2006) in practice. However, the algorithm of Tomita et al. uses an adjacency matrix, which requires too much space for large sparse graphs. Our new algorithm opens the door for fast analysis of large sparse graphs whose adjacency matrix will not fit into working memory.

152 citations
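
For context, the Eppstein, Loffler, and Strash algorithm is essentially Bron-Kerbosch with pivoting, run once per vertex in a degeneracy ordering, so that the recursion is bounded by the graph's degeneracy rather than its size. A compact sketch of that structure follows; a serious implementation would use flat adjacency arrays and a bucket-queue ordering, whereas this version favors brevity (the ordering here is O(V^2)).

```python
def degeneracy_order(adj):
    # Repeatedly remove a minimum-degree vertex from a working copy.
    work = {v: set(ns) for v, ns in adj.items()}
    order = []
    while work:
        v = min(work, key=lambda u: len(work[u]))
        order.append(v)
        for u in work.pop(v):
            work[u].discard(v)
    return order

def bron_kerbosch_pivot(adj, R, P, X, out):
    if not P and not X:
        out.append(R)  # R is maximal: nothing can extend it
        return
    # Pivot on the vertex covering the most of P, to prune branches.
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch_pivot(adj, R | {v}, P & adj[v], X & adj[v], out)
        P.remove(v)
        X.add(v)

def maximal_cliques(adj):
    """adj: dict mapping each vertex to its set of neighbors."""
    order = degeneracy_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    out = []
    for v in order:
        later = {u for u in adj[v] if pos[u] > pos[v]}
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        bron_kerbosch_pivot(adj, {v}, later, earlier, out)
    return out

# Example: a triangle plus an isolated vertex yields two maximal cliques.
# maximal_cliques({0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()})
# -> [{0, 1, 2}, {3}]
```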

Journal Article (DOI)
TL;DR: To demonstrate the sparsity of hyperspectral data and to handle the computational cost and time demands of general-purpose linear programming (LP) solvers, this paper proposes a Homotopy-based sparse classification approach that works efficiently when the data is highly sparse.
Abstract: The classification of high-dimensional data with too few labeled samples is a major challenge that is difficult to meet unless some special characteristics of the data can be exploited. In remote sensing, the problem is particularly serious because of the difficulty and cost involved in assigning labels to high-dimensional samples. In this paper, we exploit certain special properties of hyperspectral data and propose an l1-minimization-based sparse representation classification approach to overcome this difficulty in hyperspectral data classification. We assume that the data within each hyperspectral data class lies in a very low-dimensional subspace. Unlike traditional supervised methods, the proposed method does not have separate training and testing phases and, therefore, does not need a training procedure for model creation. Further, to demonstrate the sparsity of hyperspectral data and to handle the computational cost and time demands of general-purpose linear programming (LP) solvers, we propose a Homotopy-based sparse classification approach, which works efficiently when the data is highly sparse. The approach is not only time efficient but also produces results that are comparable to those of traditional methods. The proposed approaches are tested on the difficult classification problem of hyperspectral data with few labeled samples. Extensive experiments on four real hyperspectral data sets show that hyperspectral data is highly sparse in nature and that the proposed approaches are robust across different databases, offer higher classification accuracy, and are more efficient than state-of-the-art methods.

151 citations
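
The decision rule behind this style of sparse representation classifier is simple to state: recover a sparse coefficient vector for the test sample over a dictionary of labeled training samples, then assign the class whose coefficients reconstruct the sample best. The sketch below solves the l1 problem as a linear program with SciPy rather than with the Homotopy solver the paper advocates, so it illustrates the classifier itself, not the paper's efficiency gains; src_classify and its arguments are our naming.

```python
import numpy as np
from scipy.optimize import linprog

def src_classify(A, labels, y):
    """A: (d, n) dictionary of l2-normalized training samples as columns
    (assumed overcomplete, n > d); labels: length-n class label per column;
    y: length-d test sample. Returns the predicted class label."""
    d, n = A.shape
    labels = np.asarray(labels)
    # Basis pursuit, min ||x||_1 s.t. Ax = y, as an LP via the split x = u - v
    # with u, v >= 0: minimize sum(u) + sum(v) subject to A(u - v) = y.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    assert res.success, "l1 recovery failed; dictionary may be infeasible"
    x = res.x[:n] - res.x[n:]
    # Assign the class whose coefficients alone reconstruct y best.
    best, best_class = np.inf, None
    for cls in np.unique(labels):
        xc = np.where(labels == cls, x, 0.0)
        r = np.linalg.norm(y - A @ xc)
        if r < best:
            best, best_class = r, cls
    return best_class
```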


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371