Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published on this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: The paper's methods are illustrated for RBF kernels and show how to obtain robust estimates, together with selection of an appropriate number of hidden units, when the data contain outliers or non-Gaussian error distributions with heavy tails.

1,197 citations
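The TL;DR above describes robust estimation with RBF kernels. Purely as an illustration (not the paper's actual algorithm), the minimal NumPy sketch below fits a kernel ridge / LS-SVM-style regressor and then iteratively reweights residuals with Huber-type weights to reduce the influence of outliers; the function names, the weighting rule, and the parameter values (gamma, lam, c) are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def robust_kernel_ridge(X, y, gamma=1.0, lam=1e-2, n_reweight=3, c=1.345):
    """Kernel ridge (LS-SVM-like) fit with Huber-style reweighting of residuals
    to downweight outliers / heavy-tailed errors. Illustrative sketch only."""
    K = rbf_kernel(X, X, gamma)
    w = np.ones(len(y))                              # start with uniform weights
    for _ in range(n_reweight + 1):
        W = np.diag(w)
        # weighted regularized least squares: (W K + lam I) alpha = W y
        alpha = np.linalg.solve(W @ K + lam * np.eye(len(y)), W @ y)
        r = y - K @ alpha                            # residuals
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale estimate
        w = np.minimum(1.0, c / (np.abs(r) / s + 1e-12))  # Huber-type weights
    return alpha

# usage: noisy sine with a few gross outliers
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 80)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(80)
y[::17] += 3.0                                       # inject outliers
alpha = robust_kernel_ridge(X, y, gamma=30.0)
y_hat = rbf_kernel(X, X, 30.0) @ alpha
```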

Book
26 Dec 2008
TL;DR: The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.
Abstract: "Mallat's book is the undisputed reference in this field - it is the only one that covers the essential material in such breadth and depth." - Laurent Demanet, Stanford University

The new edition of this classic book gives all the major concepts, techniques and applications of sparse representation, reflecting the key role the subject plays in today's signal processing. The book clearly presents the standard representations with Fourier, wavelet and time-frequency transforms, and the construction of orthogonal bases with fast algorithms. The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.

Features:
* Balances presentation of the mathematics with applications to signal processing
* Algorithms and numerical examples are implemented in WaveLab, a MATLAB toolbox
* Companion website for instructors and selected solutions and code available for students

New in this edition:
* Sparse signal representations in dictionaries
* Compressive sensing, super-resolution and source separation
* Geometric image processing with curvelets and bandlets
* Wavelets for computer graphics with lifting on surfaces
* Time-frequency audio processing and denoising
* Image compression with JPEG 2000
* New and updated exercises

A Wavelet Tour of Signal Processing: The Sparse Way, third edition, is an invaluable resource for researchers and R&D engineers wishing to apply the theory in fields such as image processing, video processing and compression, bio-sensing, medical imaging, machine vision and communications engineering.

Stéphane Mallat is Professor in Applied Mathematics at École Polytechnique, Paris, France. From 1986 to 1996 he was a Professor at the Courant Institute of Mathematical Sciences at New York University, and between 2001 and 2007 he co-founded and became CEO of an image processing semiconductor company.

Companion website: A Numerical Tour of Signal Processing. Includes all the latest developments since the book was published in 1999, including its application to JPEG 2000 and MPEG-4.

1,168 citations
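The central idea of sparsity the book describes, keeping only a few significant transform coefficients for compression or denoising, can be illustrated with a small nonlinear approximation sketch. The book works primarily with wavelet bases; the snippet below substitutes an orthonormal DCT basis purely for brevity, and the test signal, the number of retained coefficients, and the helper name sparse_approx are illustrative assumptions rather than anything from the book.

```python
import numpy as np
from scipy.fft import dct, idct

def sparse_approx(x, m):
    """Nonlinear M-term approximation: transform to an orthonormal basis
    (DCT here, for simplicity), keep the m largest-magnitude coefficients,
    zero the rest, and transform back."""
    c = dct(x, norm="ortho")
    keep = np.argsort(np.abs(c))[-m:]          # indices of the m largest coefficients
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return idct(c_sparse, norm="ortho"), c_sparse

# usage: a fairly smooth signal is well approximated by few coefficients
t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 5 * t) + (t > 0.5)      # smooth part plus a jump
x_hat, c = sparse_approx(x, m=40)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative error with 40 of 512 coefficients: {err:.3f}")
```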

Proceedings ArticleDOI
12 May 2008
TL;DR: This work proposes iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian plus the original sparsity-inducing regularizer, and proves convergence of the proposed iterative algorithm to a minimum of the objective function.
Abstract: Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex, sparsity-inducing function. We propose iterative methods in which each step is an optimization subproblem involving a separable quadratic term (diagonal Hessian) plus the original sparsity-inducing term. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2-ℓ1 case, our approach handles other problems, e.g., ℓp regularizers with p ≠ 1, or group-separable (GS) regularizers. Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2-ℓ1 problem, and is also efficient on problems with GS regularizers.

1,154 citations
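For the standard ℓ2-ℓ1 case mentioned in the abstract above, the separable quadratic subproblem with a diagonal Hessian has a closed-form solution given by soft thresholding. The sketch below shows a plain iterative shrinkage/thresholding loop of that kind; it is a simplified stand-in rather than the authors' algorithm, and the fixed step size, the regularization weight lam, and the problem dimensions are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam, alpha=None, n_iter=200):
    """Iterative shrinkage/thresholding for  min_x 0.5*||y - A x||_2^2 + lam*||x||_1.
    Each iteration solves the separable quadratic subproblem (diagonal Hessian
    alpha*I plus the l1 term), whose closed-form solution is soft thresholding."""
    if alpha is None:
        alpha = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                   # gradient of the smooth term
        z = x - g / alpha                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / alpha, 0.0)  # soft threshold
    return x

# usage: recover a sparse vector from underdetermined measurements
rng = np.random.default_rng(1)
n, p, k = 60, 200, 8
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ista(A, y, lam=0.02)
```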

Journal ArticleDOI
TL;DR: A non-intrusive method that builds a sparse polynomial chaos (PC) expansion, which may be obtained at a reduced computational cost compared to the classical "full" PC approximation.

1,112 citations
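The TL;DR above refers to a non-intrusive, regression-based construction of a sparse polynomial chaos expansion. The sketch below illustrates the general idea under simplifying assumptions: a one-dimensional standard-normal input, a probabilists' Hermite basis, and an off-the-shelf ℓ1-penalized least-squares fit (scikit-learn's Lasso) standing in for the paper's own sparsity-selection procedure; the toy model, polynomial degree, and penalty value are all illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

# Non-intrusive sparse PC sketch: evaluate the (expensive) model on a small
# experimental design, regress the outputs on a probabilists' Hermite basis,
# and let an l1 penalty select the few active terms.

def model(xi):
    """Stand-in for an expensive computational model of a standard-normal input."""
    return np.exp(0.3 * xi) + 0.5 * xi ** 3

rng = np.random.default_rng(2)
xi = rng.standard_normal(60)                 # experimental design (model runs)
y = model(xi)

degree = 10
Psi = hermevander(xi, degree)                # basis matrix, columns He_0 .. He_10
fit = Lasso(alpha=1e-3, max_iter=50_000).fit(Psi, y)
coeffs = fit.coef_
print("active PC terms:", np.flatnonzero(np.abs(coeffs) > 1e-8))
```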

Proceedings Article
08 Dec 2008
TL;DR: A novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models is proposed, with results on standard handwritten digit and texture classification tasks.
Abstract: It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.

1,108 citations
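The abstract above describes signals coded over a shared dictionary together with discriminative per-class models. The sketch below shows only the test-time side of such a setup, under the assumption that a shared dictionary D and per-class linear models (W, b) have already been learned by some procedure: a sparse code is computed with an ℓ1 solve and each class's model scores that code. The dictionary, class models, and parameters here are random placeholders, not the paper's learned components or its training objective.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, x, lam=0.1):
    """Sparse code of signal x over shared dictionary D (columns = atoms):
    argmin_a 0.5*||x - D a||_2^2 + lam*||a||_1 (via scikit-learn's Lasso)."""
    fit = Lasso(alpha=lam / len(x), fit_intercept=False, max_iter=10_000).fit(D, x)
    return fit.coef_

def classify(D, W, b, x, lam=0.1):
    """Score the sparse code with each class's linear model and pick the best."""
    a = sparse_code(D, x, lam)
    return int(np.argmax(W @ a + b))

# usage with random placeholders for a learned model (illustrative only)
rng = np.random.default_rng(3)
n, k, n_classes = 64, 128, 10                    # signal dim, atoms, classes
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
W = rng.standard_normal((n_classes, k))
b = np.zeros(n_classes)
x = rng.standard_normal(n)
print("predicted class:", classify(D, W, b, x))
```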


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371