Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: The usefulness of a segmentation step for improving sparsity-based image reconstruction algorithms is demonstrated, and the proposed SSR method is shown to be effective for both denoising and interpolation of OCT images.
Abstract: We demonstrate the usefulness of utilizing a segmentation step for improving the performance of sparsity-based image reconstruction algorithms. Specifically, we focus on retinal optical coherence tomography (OCT) reconstruction and propose a novel segmentation-based reconstruction framework with sparse representation, termed segmentation based sparse reconstruction (SSR). The SSR method uses automatically segmented retinal layer information to construct layer-specific structural dictionaries. In addition, the SSR method efficiently exploits patch similarities within each segmented layer to enhance the reconstruction performance. Our experimental results on clinical-grade retinal OCT images demonstrate the effectiveness and efficiency of the proposed SSR method for both denoising and interpolation of OCT images.
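The abstract describes the pipeline only at a high level; the following is a minimal sketch of the core idea, layer-specific dictionaries with per-layer sparse coding, using scikit-learn. Function and parameter names here are ours for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def ssr_denoise(patches, layer_labels, n_atoms=64, sparsity=5):
    """Denoise patches with a separate dictionary per segmented layer.

    patches      : (n_patches, patch_dim) float array of vectorized noisy patches
    layer_labels : (n_patches,) integer layer index from the segmentation step
    """
    denoised = np.empty_like(patches)
    for layer in np.unique(layer_labels):
        idx = layer_labels == layer
        X = patches[idx]
        # Learn a layer-specific structural dictionary from this layer's patches.
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256)
        D = dico.fit(X).components_
        # Sparse-code each patch over its own layer's dictionary (OMP), then
        # reconstruct: the sparse approximation suppresses the noise.
        codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=sparsity)
        denoised[idx] = codes @ D
    return denoised
```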

114 citations

Proceedings ArticleDOI
07 Nov 2009
TL;DR: This work models the analysis coefficients of typical images obtained using typical pyramidal frames as a strictly sparse vector plus a Gaussian correction term, which allows for an elegant iterated marginal optimization and provides state-of-the-art performance in standard deconvolution tests.
Abstract: Sparse optimization in overcomplete frames has been widely applied in recent years to ill-conditioned inverse problems. In particular, analysis-based sparse optimization consists of achieving a certain trade-off between fidelity to the observation and sparsity in a given linear representation, typically measured by some ℓp quasi-norm. Whereas the most popular choice for p is 1 (the convex optimization case), there is increasing evidence of both the computational feasibility and the higher performance potential of non-convex approaches (0 ≤ p < 1). The extreme case p = 0 is special, because analysis coefficients of typical images obtained using typical pyramidal frames are not strictly sparse, but rather compressible. Here we model the analysis coefficients as a strictly sparse vector plus a Gaussian correction term. This statistical formulation allows for an elegant iterated marginal optimization. We also show that it provides state-of-the-art performance, in a least-squares error sense, in standard deconvolution tests.
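In symbols (our notation; the paper's exact formulation may differ), the fidelity/sparsity trade-off and the p = 0 coefficient model described above can be written as:

```latex
% Analysis-based sparse restoration of x from observation y = Hx + n:
% balance data fidelity against sparsity of the analysis coefficients Wx,
% measured by an \ell_p quasi-norm (non-convex for 0 <= p < 1).
\hat{x} = \arg\min_{x}\ \|y - Hx\|_2^2 + \lambda\,\|Wx\|_p^p

% For p = 0, the analysis coefficients are modeled as a strictly sparse
% vector plus a Gaussian correction term (our reading of the abstract):
Wx = u + \varepsilon, \qquad \|u\|_0 \le s, \qquad
\varepsilon \sim \mathcal{N}(0, \sigma^2 I)
```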

113 citations

Journal ArticleDOI
TL;DR: This paper introduces a novel non-parametric, exemplar-based method for reconstructing clean speech from noisy observations, based on techniques from the field of Compressive Sensing; the method can impute missing features using larger time windows, such as entire words.
Abstract: An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing), and to replace (impute) the missing ones by clean speech estimates. Conventional imputation techniques employ parametric models and impute the missing features on a frame-by-frame basis. At low signal-to-noise ratios (SNRs), these techniques fail, because too many time frames may contain few, if any, reliable features. In this paper, we introduce a novel non-parametric, exemplar-based method for reconstructing clean speech from noisy observations, based on techniques from the field of Compressive Sensing. The method, dubbed sparse imputation, can impute missing features using larger time windows, such as entire words. Using an overcomplete dictionary of clean speech exemplars, the method finds the sparsest combination of exemplars that jointly approximates the reliable features of a noisy utterance. That linear combination of clean speech exemplars is used to replace the missing features. Recognition experiments on noisy isolated digits show that sparse imputation outperforms conventional imputation techniques at SNR = -5 dB when using an ideal 'oracle' mask. With error-prone estimated masks, sparse imputation performs slightly worse than the best conventional technique.
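As a rough sketch of the idea (our code and names, not the authors' system): fit a sparse code over the clean-exemplar dictionary using only the rows marked reliable, then read the missing entries off the reconstruction. Orthogonal matching pursuit is used here as a stand-in for whichever sparse solver the paper employs.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_impute(y_noisy, reliable, exemplars, n_nonzero=10):
    """Replace unreliable features by a sparse combination of clean exemplars.

    y_noisy   : (d,) noisy feature vector (e.g., a whole-word window, flattened)
    reliable  : (d,) boolean mask marking features judged reliable
    exemplars : (d, n_exemplars) overcomplete dictionary of clean speech exemplars
    """
    # Fit the sparse code using only the reliable rows of the dictionary,
    # so unreliable observations never influence the coefficients.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(exemplars[reliable], y_noisy[reliable])
    # Reconstruct the full vector; keep estimates only where data was missing.
    estimate = exemplars @ omp.coef_
    return np.where(reliable, y_noisy, estimate)
```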

113 citations

Journal ArticleDOI
TL;DR: A parts-based 2D DDL scheme is introduced and evaluated for simultaneous denoising and interpolation of seismic data, and a special case of versatile non-negative matrix factorization (VNMF) is used to learn the dictionary.
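The TL;DR names the mechanism, a non-negative matrix factorization that learns a parts-based dictionary. A minimal, hypothetical illustration with scikit-learn's generic NMF, standing in for the paper's VNMF variant; all names and parameters here are ours:

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for vectorized 2D seismic data patches (NMF needs non-negative input).
patches = np.abs(np.random.randn(500, 64))

nmf = NMF(n_components=32, init="nndsvda", max_iter=500)
codes = nmf.fit_transform(patches)      # non-negative coefficients per patch
dictionary = nmf.components_            # non-negative, parts-based atoms
reconstruction = codes @ dictionary     # low-rank approximation of the patches
```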

113 citations

Journal ArticleDOI
TL;DR: This paper presents results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising, and establishes that the alternating algorithms are globally convergent to the set of local minimizers of the nonconvex transform learning problems.
Abstract: Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary. Synthesis sparsifying dictionaries that are directly adapted to data have been popular in applications such as image denoising, inpainting, and medical image reconstruction. In this work, we focus instead on the sparsifying transform model, and study the learning of well-conditioned square sparsifying transforms. The proposed algorithms alternate between an ℓ0 "norm"-based sparse coding step and a non-convex transform update step. We derive the exact analytical solution for each of these steps. The proposed solution for the transform update step achieves the global minimum in that step, and also provides speedups over iterative solutions involving conjugate gradients. We establish that our alternating algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, the algorithms are insensitive to initialization. We present results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising. The transform model is not only more general in its modeling capabilities than the analysis models, it is also much more efficient and scalable than both the synthesis and noisy signal analysis models. We briefly review the main distinctions between these sparse models (cf. (5) for a more detailed review and the relevant references) in this and the following paragraphs. One key difference is in the process of finding a sparse representation for data given the model, or dictionary. For the transform model, given the signal y and transform W, the transform sparse coding problem (5) minimizes ‖Wy − x‖₂² subject to ‖x‖₀ ≤ s, where s is a given sparsity level. The solution is obtained exactly and cheaply by zeroing out all but the s coefficients of largest magnitude in Wy. In contrast, for the synthesis or noisy analysis models, the process of sparse coding is NP-hard (Non-deterministic Polynomial-time hard). While some of the approximate algorithms that have been proposed for synthesis or analysis sparse coding are guaranteed to provide the correct solution under certain conditions, in applications, especially those involving learning the models from data, these conditions are often violated. Moreover, the various synthesis and analysis sparse coding algorithms tend to be computationally expensive for large-scale problems. Recently, the data-driven adaptation of sparse models has received much interest. The adaptation of synthesis dictionaries based on training signals (6)-(12) has been shown to be useful in various applications (13)-(15). The learning of analysis dictionaries, employing either the analysis model or its noisy signal extension, has also received some recent attention (3), (16)-(18).
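The transform sparse coding step quoted above has a closed form: keep the s largest-magnitude coefficients of Wy and zero out the rest. A minimal sketch (our code, not the authors'):

```python
import numpy as np

def transform_sparse_code(W, y, s):
    """Transform-model sparse coding: min_x ||W y - x||_2^2  s.t.  ||x||_0 <= s.

    The exact solution keeps the s largest-magnitude entries of W @ y and
    zeroes the rest -- no iterative solver is needed.
    """
    z = W @ y
    x = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]   # indices of the s largest |z_i|
    x[keep] = z[keep]
    return x
```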

113 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2023 | 193
2022 | 454
2021 | 641
2020 | 924
2019 | 1,208
2018 | 1,371