
Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: Extensive testing on finite element matrices indicates that the algorithm typically produces bandwidth and profile which are comparable to those of the commonly-used reverse Cuthill–McKee algorithm, yet requires significantly less computation time.
Abstract: A new algorithm for reducing the bandwidth and profile of a sparse matrix is described. Extensive testing on finite element matrices indicates that the algorithm typically produces bandwidth and profile which are comparable to those of the commonly-used reverse Cuthill–McKee algorithm, yet requires significantly less computation time.

569 citations
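The reverse Cuthill-McKee ordering that this paper benchmarks against is available in SciPy. Below is a minimal sketch, using an arbitrary toy matrix in place of a real finite element matrix, of measuring bandwidth before and after reordering:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    # Maximum |i - j| over the nonzero entries of A.
    coo = A.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

# Small symmetric sparsity pattern standing in for a finite element matrix.
rows = [0, 0, 1, 1, 2, 3, 4, 5]
cols = [0, 5, 1, 4, 2, 3, 4, 5]
A = csr_matrix((np.ones(8), (rows, cols)), shape=(6, 6))
A = A + A.T                      # symmetrize the pattern

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]         # apply the permutation to rows and columns
print("bandwidth before:", bandwidth(A), "after RCM:", bandwidth(A_rcm))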

Journal ArticleDOI
TL;DR: In this article, the concept of compressed sensing was extended to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary, and it was shown that a matrix which is a composition of a random matrix of a certain type and a deterministic dictionary has small restricted isometry constants.
Abstract: This paper extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix which is a composition of a random matrix of a certain type and a deterministic dictionary has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via basis pursuit (BP) from a small number of random measurements. Further, thresholding is investigated as a recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.

567 citations
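As one illustration of the recovery schemes compared in this paper, here is a minimal sketch of thresholding recovery for a signal that is sparse in a redundant dictionary; the Gaussian measurement matrix, random dictionary, and dimensions below are illustrative assumptions, not the paper's exact construction:

import numpy as np

rng = np.random.default_rng(0)
n, d, m, k = 64, 128, 32, 3           # signal dim, dictionary size, measurements, sparsity

D = rng.standard_normal((n, d))
D /= np.linalg.norm(D, axis=0)        # unit-norm redundant dictionary
M = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
A = M @ D                             # composed sensing matrix

# Synthesize a signal that is k-sparse with respect to the dictionary.
support = rng.choice(d, size=k, replace=False)
x = np.zeros(d)
x[support] = rng.standard_normal(k)
y = A @ x                             # compressed measurements

# Thresholding recovery: keep the k columns most correlated with y,
# then solve least squares on that support.
corr = np.abs(A.T @ y)
est_support = np.argsort(corr)[-k:]
coef, *_ = np.linalg.lstsq(A[:, est_support], y, rcond=None)
x_hat = np.zeros(d)
x_hat[est_support] = coef
print("support recovered:", set(est_support) == set(support))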

Journal ArticleDOI
TL;DR: This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model) and provides a rigorous convergence analysis of the fully discrete problem.
Abstract: This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw–Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen–Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.

552 citations
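To make the collocation idea concrete, here is a minimal one-dimensional sketch: a stand-in deterministic solver is evaluated at Clenshaw-Curtis nodes, and the resulting uncoupled solutions are combined with quadrature weights to estimate a mean. The model problem u(y) = 1/(2+y) is a hypothetical example, not from the paper:

import numpy as np

def clenshaw_curtis(n):
    # Clenshaw-Curtis nodes and weights on [-1, 1], n >= 2 points.
    m = n - 1
    theta = np.pi * np.arange(n) / m
    nodes = np.cos(theta)
    w = np.zeros(n)
    for j in range(n):
        s = 0.0
        for k in range(1, m // 2 + 1):
            b = 1.0 if 2 * k == m else 2.0
            s += b * np.cos(2 * k * theta[j]) / (4 * k * k - 1)
        c = 1.0 if j in (0, m) else 2.0
        w[j] = (c / m) * (1.0 - s)
    return nodes, w

def solve_deterministic(y):
    # Stand-in for one uncoupled deterministic PDE solve at parameter y.
    return 1.0 / (2.0 + y)

nodes, weights = clenshaw_curtis(9)
# For Y uniform on [-1, 1] (density 1/2): E[u(Y)] ~ sum_i w_i u(y_i) / 2.
mean_u = 0.5 * sum(w * solve_deterministic(y) for y, w in zip(nodes, weights))
print("collocation estimate of E[u(Y)]:", mean_u)   # exact: 0.5 * ln 3 ~ 0.5493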

Journal ArticleDOI
01 Jan 2005
TL;DR: An overview of OSKI is provided, which is based on research on automatically tuned sparse kernels for modern cache-based superscalar machines, and the primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine.
Abstract: The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

546 citations
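OSKI itself is a C library; the pure-Python loop below is only a sketch of the core kernel it tunes, sparse matrix-vector multiply in CSR format, checked against SciPy's built-in implementation:

import numpy as np
from scipy.sparse import random as sparse_random

def csr_matvec(indptr, indices, data, x):
    # y = A @ x for a CSR matrix given by (indptr, indices, data).
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y

A = sparse_random(200, 200, density=0.05, format="csr", random_state=0)
x = np.ones(200)
y = csr_matvec(A.indptr, A.indices, A.data, x)
print("matches scipy:", np.allclose(y, A @ x))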

Journal ArticleDOI
TL;DR: Sparse additive models combine ideas from sparse linear modelling and additive non-parametric regression; the authors derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size.
Abstract: Summary. We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model of Lin and Zhang but decouple smoothing and sparsity, enabling the use of arbitrary non-parametric smoothers. We give an analysis of the theoretical properties of sparse additive models and present empirical results on synthetic and real data, showing that they can be effective in fitting sparse non-parametric models in high dimensional data.

542 citations
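Here is a minimal sketch of SpAM-style backfitting, assuming a simple Nadaraya-Watson smoother and an illustrative penalty lambda (both choices are ours, not the paper's): each component is a smoothed partial residual, soft-thresholded at the function level so that irrelevant covariates are zeroed out.

import numpy as np

def kernel_smoother(xj, bandwidth=0.3):
    # Nadaraya-Watson smoother matrix for one covariate (illustrative choice).
    d = xj[:, None] - xj[None, :]
    K = np.exp(-0.5 * (d / bandwidth) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def spam_backfit(X, y, lam=0.1, n_iter=20):
    n, p = X.shape
    S = [kernel_smoother(X[:, j]) for j in range(p)]
    F = np.zeros((n, p))                       # fitted component functions
    for _ in range(n_iter):
        for j in range(p):
            r = y - y.mean() - F.sum(axis=1) + F[:, j]   # partial residual
            pj = S[j] @ r                      # smoothed residual
            norm = np.sqrt(np.mean(pj ** 2))   # estimated function norm
            fj = max(0.0, 1.0 - lam / norm) * pj if norm > 0 else pj
            F[:, j] = fj - fj.mean()           # center each component
    return F

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 5))          # only covariates 0 and 1 matter
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(100)
F = spam_backfit(X, y)
print("component norms:", np.round(np.sqrt((F ** 2).mean(axis=0)), 3))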


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371