Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal Article
TL;DR: A novel iterative EEG source imaging algorithm, the Lp-norm iterative sparse solution (LPISS), is applied to a real evoked potential collected in a study of inhibition of return (IOR); the result is consistent with the previously suggested activated areas involved in an IOR process.
Abstract: How to localize neural electric activities effectively and precisely from scalp EEG recordings is a critical issue for clinical neurology and cognitive neuroscience. In this paper, based on the assumption that brain activity is spatially sparse, a novel iterative EEG source imaging algorithm is proposed: the Lp-norm iterative sparse solution (LPISS). In LPISS, an lp-norm (p ≤ 1) sparsity constraint is integrated into the iterative weighted minimum-norm solution of the underdetermined EEG inverse problem, and it is this constraint, together with the iteratively renewed weights, that drives the inverse problem to converge to a sparse solution. Simulation studies comparing LPISS with LORETA and FOCUSS for various dipole configurations confirmed the validity of LPISS for sparse EEG source localization. Finally, LPISS was applied to a real evoked potential collected in a study of inhibition of return (IOR), and the result was consistent with the previously suggested activated areas involved in an IOR process.

102 citations
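The reweighted minimum-norm iteration that LPISS builds on can be sketched in a few lines. Below is a minimal, hypothetical FOCUSS-style reweighting loop, not the authors' LPISS code: the function name, toy dimensions, and the choice p = 0.8 are illustrative assumptions.

```python
import numpy as np

def reweighted_min_norm(A, b, p=1.0, n_iter=50, eps=1e-8):
    # Hypothetical FOCUSS-style sketch of an iteratively reweighted
    # minimum-norm solver for the underdetermined system b = A @ x;
    # the weights push the fixed point toward an lp (p <= 1) sparse solution.
    x = np.linalg.pinv(A) @ b               # start from the minimum l2-norm solution
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)    # weights renewed from the current estimate
        Aw = A * w                          # A @ diag(w), via column broadcasting
        x = w * (np.linalg.pinv(Aw) @ b)    # weighted minimum-norm update
    x[np.abs(x) < eps] = 0.0                # prune sources that vanished numerically
    return x

# Toy problem: 10 sensors, 40 candidate sources, 3 truly active
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 40))
x_true = np.zeros(40)
x_true[[3, 17, 28]] = [1.0, -2.0, 1.5]
x_hat = reweighted_min_norm(A, A @ x_true, p=0.8)
print(np.nonzero(x_hat)[0])  # ideally the true support, {3, 17, 28}
```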

Proceedings Article
01 Dec 1992
TL;DR: The cache behavior of one of the most frequent primitives, SpMxV (sparse matrix-vector multiply), is analyzed, and a blocking technique that takes into account the specifics of sparse codes is proposed.
Abstract: A methodology is presented for modeling the irregular references of sparse codes using probabilistic methods. The cache behavior of one of the most frequent primitives, SpMxV (sparse matrix-vector multiply), is analyzed. A model of its references is built, and performance bottlenecks of SpMxV are analyzed using the model and simulations. The main parameters are identified and their role is explained and quantified. This analysis is then used to discuss optimizations of SpMxV. A blocking technique which takes into account the specifics of sparse codes is proposed.

102 citations
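To make the analyzed kernel concrete, here is a minimal sketch of SpMxV for a matrix in compressed sparse row (CSR) format; the variable names and toy matrix are illustrative, not from the paper. The data-dependent reads of x are what make the cache behavior irregular; the blocking the paper proposes restricts the column indices touched in each pass so the active segment of x stays cache-resident.

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    # y = A @ x for A in CSR form: row i's nonzeros live in
    # data[indptr[i]:indptr[i+1]], with columns indices[indptr[i]:indptr[i+1]].
    y = np.zeros(len(indptr) - 1)
    for i in range(len(indptr) - 1):
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]   # irregular, data-dependent read of x
        y[i] = acc
    return y

# 3x4 toy matrix with five nonzeros
data    = np.array([10., 20., 30., 40., 50.])
indices = np.array([0, 3, 1, 2, 3])        # column of each nonzero
indptr  = np.array([0, 2, 3, 5])           # row boundaries into data/indices
x = np.array([1., 2., 3., 4.])
print(spmv_csr(data, indices, indptr, x))  # [ 90.  60. 320.]
```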

Journal Article
TL;DR: Using a geometrically intuitive framework, this paper provides basic insights for understanding useful lasso screening tests and their limitations, and provides illustrative numerical studies on several datasets.
Abstract: This paper is a survey of dictionary screening for the lasso problem. The lasso problem seeks a sparse linear combination of the columns of a dictionary to best match a given target vector. This sparse representation has proven useful in a variety of subsequent processing and decision tasks. For a given target vector, dictionary screening quickly identifies a subset of dictionary columns that will receive zero weight in a solution of the corresponding lasso problem. These columns can be removed from the dictionary prior to solving the lasso problem without impacting the optimality of the solution obtained. This has two potential advantages: it reduces the size of the dictionary, allowing the lasso problem to be solved with fewer resources, and it may speed up obtaining a solution. Using a geometrically intuitive framework, we provide basic insights for understanding useful lasso screening tests and their limitations. We also provide illustrative numerical studies on several datasets.

102 citations
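As a concrete instance of the kind of test the survey covers, the sketch below implements a basic SAFE-style sphere test for the lasso, one simple member of the screening family; the function name and interface are assumptions, not the survey's code.

```python
import numpy as np

def safe_screen(A, b, lam):
    # Basic SAFE sphere test for min_x 0.5*||b - A @ x||^2 + lam*||x||_1:
    # returns True for columns guaranteed to receive zero weight at this lam,
    # so they can be removed from the dictionary before solving.
    c = A.T @ b                          # correlations of columns with the target
    lam_max = np.max(np.abs(c))          # at or above this, the solution is all-zero
    radius = np.linalg.norm(b) * (lam_max - lam) / lam_max
    return np.abs(c) < lam - np.linalg.norm(A, axis=0) * radius

# Usage sketch: screen at lam = 0.5 * lam_max, then solve on the survivors
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 500))
b = rng.standard_normal(50)
lam = 0.5 * np.max(np.abs(A.T @ b))
keep = ~safe_screen(A, b, lam)
print(A[:, keep].shape)  # the reduced dictionary passed to the lasso solver
```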

Proceedings Article
30 Sep 2009
TL;DR: The uncertainty principle is a quantification of the notion that a matrix cannot be sparse while having diffuse row/column spaces and forms the basis for the decomposition method and its analysis.
Abstract: We consider the following fundamental problem: given a matrix that is the sum of an unknown sparse matrix and an unknown low-rank matrix, is it possible to exactly recover the two components? Such a capability enables a considerable number of applications, but the goal is both ill-posed and NP-hard in general. In this paper we develop (a) a new uncertainty principle for matrices, and (b) a simple method for exact decomposition based on convex optimization. Our uncertainty principle is a quantification of the notion that a matrix cannot be sparse while having diffuse row/column spaces. It characterizes when the decomposition problem is ill-posed, and forms the basis for our decomposition method and its analysis. We provide deterministic conditions — on the sparse and low-rank components — under which our method guarantees exact recovery.

101 citations
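A common convex-optimization route to this sparse-plus-low-rank decomposition is principal component pursuit, solvable with an augmented-Lagrangian loop that alternates singular-value thresholding (for the low-rank part) with entrywise soft thresholding (for the sparse part). The sketch below is a generic version of that idea, not the specific method or recovery conditions of this paper; the step-size and weighting heuristics are standard defaults from the robust-PCA literature.

```python
import numpy as np

def rpca_pcp(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    # Generic sketch: min ||L||_* + lam*||S||_1  s.t.  L + S = M,
    # via an augmented-Lagrangian / ADMM-style loop.
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weighting
    if mu is None:
        mu = 0.25 * m * n / np.sum(np.abs(M))   # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # L-update: singular-value thresholding of M - S + Y/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)                   # dual ascent on the constraint
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy check: rank-2 matrix plus sparse corruptions
rng = np.random.default_rng(2)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
S0[rng.random((60, 60)) < 0.05] = 10.0
L_hat, S_hat = rpca_pcp(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # ideally small
```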

Journal Article
TL;DR: The proposed joint dynamic sparsity prior promotes shared sparsity patterns among the multiple sparse representation vectors at the class level, while allowing distinct sparsity patterns at the atom level within each class to facilitate a flexible representation.

101 citations


Network Information

Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations (93% related)
Image segmentation: 79.6K papers, 1.8M citations (92% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Deep learning: 79.8K papers, 2.1M citations (90% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371