Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: It is shown that the combinatorial problem of finding a low-stretch spanning tree in an undirected graph corresponds to subset selection, and the various implications of this reduction are discussed.
Abstract: We study the following problem of subset selection for matrices: given a matrix $\mathbf{X} \in \mathbb{R}^{n \times m}$ ($m > n$) and a sampling parameter $k$ ($n \le k \le m$), select a subset of $k$ columns from $\mathbf{X}$ such that the pseudoinverse of the sampled matrix has as small a norm as possible. In this work, we focus on the Frobenius and the spectral matrix norms. We describe several novel (deterministic and randomized) approximation algorithms for this problem with approximation bounds that are optimal up to constant factors. Additionally, we show that the combinatorial problem of finding a low-stretch spanning tree in an undirected graph corresponds to subset selection, and discuss various implications of this reduction.
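
To make the objective concrete, here is a minimal numpy sketch of one generic randomized baseline for this problem: sample k columns with probabilities proportional to their leverage scores, then measure the Frobenius norm of the sampled submatrix's pseudoinverse. This illustrates the problem setup only; it is not the paper's near-optimal deterministic or randomized algorithms, and the function name is hypothetical.

```python
import numpy as np

def leverage_score_subset(X, k, rng=None):
    """Pick k columns of X (n x m, m > n) by leverage-score sampling and
    report ||pinv(X_S)||_F. A generic baseline, not the paper's method."""
    rng = np.random.default_rng() if rng is None else rng
    # Column leverage scores: squared column norms of V^T from the thin SVD.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    probs = (Vt ** 2).sum(axis=0)
    probs /= probs.sum()
    cols = rng.choice(X.shape[1], size=k, replace=False, p=probs)
    X_S = X[:, cols]
    return cols, np.linalg.norm(np.linalg.pinv(X_S), "fro")

# Toy usage: a 5 x 20 Gaussian matrix, select 8 columns.
X = np.random.default_rng(0).standard_normal((5, 20))
cols, f_norm = leverage_score_subset(X, k=8)
print(sorted(cols.tolist()), round(f_norm, 3))
```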

98 citations

Journal ArticleDOI
TL;DR: Two new non-convex group sparse optimization methods are proposed in this work, and experimental results show that incorporating group sparsity into the reconstruction problem yields a significant improvement over the ordinary sparse algorithm.
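
Since only the TL;DR survives here, the sketch below shows the convex analogue of what the paper studies: proximal gradient descent for the group lasso, whose block soft-thresholding step zeroes whole groups of coefficients at once. The paper's actual contribution is non-convex group-sparse penalties, which this deliberately does not reproduce; `groups` and the step-size choice are standard assumptions.

```python
import numpy as np

def group_soft_threshold(z, groups, tau):
    """Proximal operator of the convex l2,1 (group-lasso) penalty:
    each group's coefficient block is shrunk toward zero as a unit."""
    x = np.zeros_like(z)
    for g in groups:                      # g: array of indices in one group
        norm_g = np.linalg.norm(z[g])
        if norm_g > tau:
            x[g] = (1.0 - tau / norm_g) * z[g]
    return x

def group_ista(A, y, groups, tau, n_iter=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + tau * sum_g ||x_g||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = group_soft_threshold(x - step * grad, groups, step * tau)
    return x

# Toy usage: 8 coefficients in two groups of 4; only the first group is active.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.r_[rng.standard_normal(4), np.zeros(4)]
x_hat = group_ista(A, A @ x_true, [np.arange(4), np.arange(4, 8)], tau=0.1)
print(np.round(x_hat, 2))
```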

98 citations

Journal ArticleDOI
TL;DR: This paper investigates a new approach to the problem of more accurate ear recognition and verification using the sparse representation of local gray-level orientations, and presents experimental results on the publicly available UND and IITD ear databases that show a significant improvement in performance.
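
As a rough illustration of the sparse-representation decision rule such recognition systems typically use (feature extraction, here the paper's local gray-level orientations, is assumed to happen upstream; all names are hypothetical, not from the paper): code a probe over a dictionary of training features, then pick the class whose atoms reconstruct it best.

```python
import numpy as np

def omp(D, y, k):
    """Plain orthogonal matching pursuit: a k-sparse code of y over D.
    Assumes the columns of D are l2-normalized."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k=10):
    """SRC-style rule: sparsely code y, then choose the class whose
    training columns give the smallest reconstruction residual."""
    x = omp(D, y, k)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        res = np.linalg.norm(y - D @ x_c)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```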

98 citations

Journal ArticleDOI
TL;DR: This paper develops a prototype image coder that has near-optimal asymptotic R-D performance $D(R) \lesssim (\log R)^2 / R^2$ for piecewise smooth $C^2/C^2$ images.
Abstract: The wavelet transform provides a sparse representation for smooth images, enabling efficient approximation and compression using techniques such as zerotrees. Unfortunately, this sparsity does not extend to piecewise smooth images, where edge discontinuities separating smooth regions persist along smooth contours. This lack of sparsity hampers the efficiency of wavelet-based approximation and compression. On the class of images containing smooth $C^2$ regions separated by edges along smooth $C^2$ contours, for example, the asymptotic rate-distortion (R-D) performance of zerotree-based wavelet coding is limited to $D(R) \lesssim 1/R$, well below the optimal rate of $1/R^2$. In this paper, we develop a geometric modeling framework for wavelets that addresses this shortcoming. The framework can be interpreted either as 1) an extension to the "zerotree model" for wavelet coefficients that explicitly accounts for edge structure at fine scales, or as 2) a new atomic representation that synthesizes images using a sparse combination of wavelets and wedgeprints, anisotropic atoms that are adapted to edge singularities. Our approach enables a new type of quadtree pruning for piecewise smooth images, using zerotrees in uniformly smooth regions and wedgeprints in regions containing geometry. Using this framework, we develop a prototype image coder that has near-optimal asymptotic R-D performance $D(R) \lesssim (\log R)^2 / R^2$ for piecewise smooth $C^2/C^2$ images. In addition, we extend the algorithm to compress natural images, exploring the practical problems that arise and attaining promising results in terms of mean-square error and visual quality.
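
The sparsity premise in the abstract's first sentences is easy to see numerically. Below is a self-contained 1D Haar sketch (1D for brevity; it is not the paper's 2D wedgeprint coder): best-k-term wavelet approximation captures the smooth part of a signal with few coefficients, while a single jump (an "edge") forces significant detail coefficients at every scale.

```python
import numpy as np

def haar_1d(x):
    """Orthonormal 1D Haar transform; len(x) must be a power of two."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        a = (out[:n][0::2] + out[:n][1::2]) / np.sqrt(2.0)  # averages
        d = (out[:n][0::2] - out[:n][1::2]) / np.sqrt(2.0)  # details
        out[:n // 2], out[n // 2:n] = a, d
        n //= 2
    return out

def ihaar_1d(c):
    """Inverse of haar_1d."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        x = np.empty(2 * n)
        x[0::2], x[1::2] = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
        c[:2 * n] = x
        n *= 2
    return c

def best_k_term(x, k):
    """Keep the k largest-magnitude Haar coefficients, zero the rest."""
    c = haar_1d(x)
    c[np.argsort(np.abs(c))[:-k]] = 0.0
    return ihaar_1d(c)

# Piecewise-smooth test signal: a sinusoid plus one jump at t = 0.5.
t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * t) + (t > 0.5)
approx = best_k_term(sig, k=20)
print("relative L2 error:", np.linalg.norm(sig - approx) / np.linalg.norm(sig))
```

Dropping the jump from `sig` makes the same 20-term budget far more accurate, which is exactly the gap between smooth and piecewise-smooth images that the paper's wedgeprints are designed to close.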

98 citations

Book ChapterDOI
18 May 2005
TL;DR: A modification of a previously published algorithm is presented to solve the sparsity problem that occurs in text clustering; it is based on a new subspace clustering algorithm that automatically calculates feature weights in the k-means clustering process.
Abstract: This paper presents a new method to solve the problem of clustering large and complex text data. The method is based on a new subspace clustering algorithm that automatically calculates the feature weights in the k-means clustering process. In clustering sparse text data, the feature weights are used to discover clusters from subspaces of the document vector space and to identify key words that represent the semantics of the clusters. We present a modification of the published algorithm to solve the sparsity problem that occurs in text clustering. Experimental results on real-world text data have shown that the new method outperformed the Standard KMeans and Bisection-KMeans algorithms, while still maintaining the efficiency of the k-means clustering process.
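
A compact sketch of feature-weighted k-means in this spirit follows; the exponent beta, the closed-form weight update (a W-k-means-style derivation), and the initialization are generic assumptions, not the authors' exact text-clustering variant.

```python
import numpy as np

def weighted_kmeans(X, k, beta=2.0, n_iter=20, rng=None):
    """Feature-weighted k-means sketch: alternate cluster assignment,
    centroid updates, and a closed-form feature-weight update that
    rewards features with low within-cluster dispersion (beta > 1)."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    w = np.full(m, 1.0 / m)                      # start from uniform weights
    for _ in range(n_iter):
        # 1) assign points under the weighted distance sum_j w_j^beta (x_j - z_j)^2
        d2 = (((X[:, None, :] - centers[None, :, :]) ** 2) * w ** beta).sum(-1)
        labels = d2.argmin(1)
        # 2) recompute centroids of non-empty clusters
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
        # 3) per-feature within-cluster dispersion D_j, then
        #    w_j = 1 / sum_t (D_j / D_t)^(1/(beta-1)); weights sum to 1
        D = sum(((X[labels == c] - centers[c]) ** 2).sum(0) for c in range(k)) + 1e-12
        w = 1.0 / ((D[:, None] / D[None, :]) ** (1.0 / (beta - 1))).sum(1)
    return labels, centers, w
```

For high-dimensional sparse document vectors, step 3 is what down-weights terms that do not discriminate between clusters, which is the sparsity problem the abstract refers to.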

98 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 193
2022: 454
2021: 641
2020: 924
2019: 1,208
2018: 1,371