Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published on this topic, receiving 497,739 citations.


Papers
Proceedings ArticleDOI
18 Dec 1997
TL;DR: The data locality characteristics of the compressed sparse row representation are examined, improvements in locality through matrix permutation are considered, and modified sparse matrix representations are evaluated.
Abstract: We analyze single-node performance of sparse matrix-vector multiplication by investigating issues of data locality and fine-grained parallelism. We examine the data locality characteristics of the compressed sparse row representation and consider improvements in locality through matrix permutation. Motivated by potential improvements in fine-grained parallelism, we evaluate modified sparse matrix representations. The results lead to general conclusions about improving single-node performance of sparse matrix-vector multiplication in parallel libraries of sparse iterative solvers.
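The compressed sparse row (CSR) format analyzed above stores a matrix as three arrays: the nonzero values, their column indices, and per-row offsets into those arrays. A minimal Python sketch of the CSR matrix-vector product (the example matrix is hypothetical, not taken from the paper):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Nonzeros of row i occupy values[row_ptr[i]:row_ptr[i+1]];
        # this contiguous access pattern drives the locality behavior
        # the paper analyzes (x is accessed irregularly via col_idx).
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Hypothetical 3x3 example:  [[4, 0, 1],
#                             [0, 2, 0],
#                             [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```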

168 citations

Journal ArticleDOI
Onur G. Guleryuz
TL;DR: This work shows that constructing estimates based on nonlinear approximants is fundamentally a nonconvex problem and proposes a progressive algorithm designed to deal with this issue directly; the algorithm is applied to images through an extensive set of simulation examples.
Abstract: We combine the main ideas introduced in Part I with adaptive techniques to arrive at a powerful algorithm that estimates missing data in nonstationary signals. The proposed approach operates automatically based on a chosen linear transform that is expected to provide sparse decompositions over missing regions such that a portion of the transform coefficients over missing regions are zero or close to zero. Unlike prevalent algorithms, our method does not necessitate any complex preconditioning, segmentation, or edge detection steps, and it can be written as a progression of denoising operations. We show that constructing estimates based on nonlinear approximants is fundamentally a nonconvex problem and we propose a progressive algorithm that is designed to deal with this issue directly. The algorithm is applied to images through an extensive set of simulation examples, primarily on missing regions containing textures, edges, and other image features that are not readily handled by established estimation and recovery methods. We discuss the properties required of good transforms, and in conjunction, show the types of regions over which well-known transforms provide good predictors. We further discuss extensions of the algorithm where the utilized transforms are also chosen adaptively, where unpredictable signal components in the progressions are identified and not predicted, and where the prediction scenario is more general.
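The "progression of denoising operations" described above can be illustrated with a simple iterative transform-domain thresholding loop: denoise the current estimate by discarding small coefficients of a sparsifying transform, then re-impose the known samples. The sketch below is a generic version of that idea, assuming the DCT as the chosen linear transform and a linearly decaying threshold; neither is the paper's exact algorithm.

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint_by_denoising(y, known, n_iters=100, t0=2.0):
    """Estimate missing samples of y (where known is False) by a
    progression of transform-domain hard-thresholding steps.
    Illustrative sketch: the DCT and the linear threshold decay
    are assumptions, not the paper's exact choices."""
    x = np.where(known, y, 0.0)          # initial fill with zeros
    for i in range(n_iters):
        thr = t0 * (1.0 - i / n_iters)   # slowly decreasing threshold
        c = dct(x, norm='ortho')
        c[np.abs(c) < thr] = 0.0         # keep only large (sparse) coefficients
        x = idct(c, norm='ortho')
        x[known] = y[known]              # re-impose the observed samples
    return x

# Smooth test signal with a missing gap.
t = np.linspace(0, 1, 256)
y = np.cos(2 * np.pi * 3 * t)
known = np.ones_like(y, dtype=bool)
known[100:130] = False
x_hat = inpaint_by_denoising(y, known)
print(np.max(np.abs(x_hat[100:130] - y[100:130])))  # reconstruction error in the gap
```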

167 citations

Journal ArticleDOI
TL;DR: DeepDenoiser, as discussed by the authors, uses a deep neural network to learn a sparse representation of data in the time-frequency domain and a nonlinear function that maps this representation into masks that decompose input data into a signal of interest and noise.
Abstract: Denoising and filtering are widely used in routine seismic data processing to improve the signal-to-noise ratio (SNR) of recorded signals and, by doing so, to improve subsequent analyses. In this paper we develop a new denoising/decomposition method, DeepDenoiser, based on a deep neural network. This network is able to simultaneously learn a sparse representation of data in the time-frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise (defined as any non-seismic signal). We show that DeepDenoiser achieves impressive denoising of seismic signals even when the signal and noise share a common frequency band. Our method properly handles a variety of colored noise and non-earthquake signals. DeepDenoiser can significantly improve the SNR with minimal changes in the waveform shape of interest, even in the presence of high noise levels. We demonstrate the effect of our method on improving earthquake detection. There are clear applications of DeepDenoiser to seismic imaging, micro-seismic monitoring, and preprocessing of ambient noise data. We also note that potential applications of our approach are not limited to these tasks or even to earthquake data, and that our approach can be adapted to diverse signals and applications in other settings.
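The decomposition mechanics can be sketched as follows: compute a time-frequency representation, apply a mask to separate signal from noise, and invert. In DeepDenoiser the mask is produced by a trained network; the toy mask below is a hand-crafted magnitude threshold, used only to show the pipeline (the test signal and threshold value are assumptions, not the paper's):

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 5) ** 2))  # transient "event"
x = signal + 0.5 * rng.standard_normal(t.size)                # noisy record

# Time-frequency representation of the record.
f, tt, Z = stft(x, fs=fs, nperseg=64)

# Crude stand-in for the learned mask: keep large-magnitude cells.
mask = (np.abs(Z) > 0.15).astype(float)

# Masked inversion decomposes the record into signal and noise estimates.
_, x_signal = istft(Z * mask, fs=fs, nperseg=64)
_, x_noise = istft(Z * (1 - mask), fs=fs, nperseg=64)
print(x_signal.shape, x_noise.shape)
```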

167 citations

Journal ArticleDOI
TL;DR: A probabilistic analysis is used to show that data characterized by remarkably selective, invariant, and explicit responses to images of famous individuals or landmark buildings are consistent with a sparse code in which neurons respond in a selective manner to a small fraction of stimuli.
Abstract: Recent experiments characterized individual neurons in the human medial temporal lobe with remarkably selective, invariant, and explicit responses to images of famous individuals or landmark buildings. Here, we used a probabilistic analysis to show that these data are consistent with a sparse code in which neurons respond in a selective manner to a small fraction of stimuli.
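A toy version of such a probabilistic analysis: if a neuron responds to each stimulus independently with probability a (the code's sparseness), the number of responses k among N tested images follows a binomial law, and a small observed count favors small a. The counts below are illustrative, not the study's data.

```python
from scipy.stats import binom

# Hypothetical observation: a neuron responded to k = 2 of N = 100 images.
# Compare the likelihood of that observation across candidate sparseness
# values a; the likelihood peaks near a = k/N, i.e. a sparse code.
N, k = 100, 2
for a in (0.01, 0.02, 0.05, 0.2, 0.5):
    print(f"a = {a:4.2f}  P(k = 2 | a) = {binom.pmf(k, N, a):.4f}")
```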

167 citations

Journal ArticleDOI
TL;DR: In this paper, the first-order optimality conditions for sparse approximation problems, in which the $l_0$ norm of a vector is part of the constraints or the objective function, are studied, and penalty decomposition (PD) methods are proposed for solving them.
Abstract: In this paper we consider sparse approximation problems, that is, general $l_0$ minimization problems with the $l_0$-"norm" of a vector being a part of constraints or objective function. In particular, we first study the first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them in which a sequence of penalty subproblems are solved by a block coordinate descent (BCD) method. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of the problems. Furthermore, for the problems in which the $l_0$ part is the only nonconvex part, we show that such an accumulation point is a local minimizer of the problems. In addition, we show that any accumulation point of the sequence generated by the BCD method is a block coordinate minimizer of the penalty subproblem. Moreover, for the problems in which the $l_0$ part is the only nonconvex part, we e...
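A hedged sketch of the PD-plus-BCD scheme, here applied to the constrained variant min ||Ax - b||^2 subject to ||x||_0 <= K: introduce a copy y of x, penalize ||x - y||^2 with an increasing weight rho, and alternate a ridge-like x-step with a hard-thresholding y-step. Problem sizes, the rho schedule, and iteration counts are illustrative choices, not the paper's.

```python
import numpy as np

def penalty_decomposition(A, b, K, rho=1.0, rho_growth=2.0, outer=20, inner=50):
    """Sketch of a penalty decomposition (PD) method for
    min ||Ax - b||^2  s.t.  ||x||_0 <= K, with each penalty
    subproblem solved by block coordinate descent (BCD)."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(outer):
        for _ in range(inner):  # BCD on the penalty subproblem
            # x-step: minimize ||Ax - b||^2 + rho*||x - y||^2 (closed form)
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-step: closest point to x with at most K nonzeros
            y = np.zeros(n)
            idx = np.argsort(-np.abs(x))[:K]
            y[idx] = x[idx]
        rho *= rho_growth  # tighten the penalty
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 50, 77]] = (1.5, -2.0, 1.0)
b = A @ x_true
x_hat = penalty_decomposition(A, b, K=3)
print(np.nonzero(x_hat)[0])  # ideally recovers the support {3, 50, 77}
```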

166 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371