
Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: This paper proposes an efficient algorithm for solving a general class of sparsifying formulations and, for several common types of sparsity, provides applications, details on how to apply the algorithm, and experimental results.
Abstract: The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.

235 citations

Journal ArticleDOI
TL;DR: This paper develops a near real-time algorithm for identifying multiple line outages at the affordable complexity of solving a sparse signal reconstruction problem via either greedy steps or coordinate descent iterations.
Abstract: Fast and accurate unveiling of power-line outages is of paramount importance not only for preventing faults that may lead to blackouts, but also for routine monitoring and control tasks of the smart grid, including state estimation and optimal power flow. Existing approaches either are challenged by the combinatorial complexity involved, and are thus limited to identifying single and double line outages, or invoke less pragmatic assumptions such as conditionally independent phasor angle measurements available across the grid. Using only a subset of voltage phasor angle data, the present paper develops a near real-time algorithm for identifying multiple line outages at the affordable complexity of solving a sparse signal reconstruction problem via either greedy steps or coordinate descent iterations. Recognizing that the number of line outages is a small fraction of the total number of lines, the novel approach relies on reformulating the DC linear power flow model as a sparse overcomplete expansion and leveraging contemporary advances in compressive sampling and variable selection. This sparse representation can also be extended to incorporate available information on the internal system and more general line-parameter faults. Analysis and simulated tests on 118-, 300-, and 2383-bus systems confirm the effectiveness of identifying sparse power line outages.

233 citations
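The "greedy steps" option mentioned above can be sketched with orthogonal matching pursuit (OMP), which adds one dictionary column per step and refits by least squares. This toy version uses a random matrix rather than the paper's DC power-flow dictionary; the `omp` helper, sizes, and indices are illustrative assumptions:

```python
import numpy as np

def omp(A, b, k):
    # Orthogonal matching pursuit: greedily add the column most correlated
    # with the current residual, then refit by least squares on the support.
    residual, support = b.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy stand-in for outage identification: find which 2 of 60 candidate
# "lines" explain the observed measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 40]] = [2.0, -1.5]
b = A @ x_true
x_hat = omp(A, b, 2)
```

Because outages are a small fraction of all lines, a few greedy iterations suffice, which is what keeps the approach near real-time.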

Journal Article
TL;DR: In this paper, a multi-layer model, ML-CSC, is proposed, in which signals are assumed to emerge from a cascade of Convolutional Sparse Coding (CSC) layers.
Abstract: Convolutional neural networks (CNN) have led to many state-of-the-art results spanning through various fields. However, a clear and profound theoretical understanding of the forward pass, the core algorithm of CNN, is still lacking. In parallel, within the wide field of sparse approximation, Convolutional Sparse Coding (CSC) has gained increasing attention in recent years. A theoretical study of this model was recently conducted, establishing it as a reliable and stable alternative to the commonly practiced patch-based processing. Herein, we propose a novel multi-layer model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This is shown to be tightly connected to CNN, so much so that the forward pass of the CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNN, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above pursuit scheme, we propose an alternative to the forward pass, which is connected to deconvolutional and recurrent networks, and also has better theoretical guarantees.

233 citations
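The paper's central observation, that the CNN forward pass acts as a layered thresholding pursuit, can be seen in one identity: a ReLU with a negative bias is exactly a nonnegative soft-thresholding operator applied to the linear filter outputs. A minimal numeric check (the weights, input, and threshold below are arbitrary, and a full ML-CSC model would stack several such layers):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def soft_threshold_nonneg(v, t):
    # One-sided (nonnegative) soft-thresholding with threshold t.
    return np.maximum(v - t, 0.0)

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))   # one layer's filters, as a matrix
x = rng.standard_normal(16)        # input signal
t = 0.5                            # threshold, i.e. a bias of -t

cnn_out = relu(W @ x - t)                      # CNN layer: ReLU(Wx + bias)
pursuit_out = soft_threshold_nonneg(W @ x, t)  # one thresholding-pursuit step
# The two are identical, and the threshold promotes sparsity in the output.
```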

Journal ArticleDOI
01 Feb 2011
TL;DR: The results show that the GSNMF algorithm provides better facial representations and achieves higher recognition rates than nonnegative matrix factorization and is also more robust to partial occlusions than other tested methods.
Abstract: In this paper, a novel graph-preserving sparse nonnegative matrix factorization (GSNMF) algorithm is proposed for facial expression recognition. The GSNMF algorithm is derived from the original NMF algorithm by exploiting both sparse and graph-preserving properties. The latter may contain the class information of the samples. Therefore, GSNMF can be used as an unsupervised or a supervised dimension reduction method. A sparse representation of the facial images is obtained by minimizing the ℓ1-norm of the basis images. Furthermore, according to the graph embedding theory, the neighborhood of the samples is preserved by retaining the graph structure in the mapped space. The GSNMF decomposition transforms the high-dimensional facial expression images into a locality-preserving subspace with sparse representation. To guarantee convergence, we use the projected gradient method to calculate the nonnegative solution of GSNMF. Experiments are conducted on the JAFFE database and the Cohn-Kanade database with unoccluded and partially occluded facial images. The results show that the GSNMF algorithm provides better facial representations and achieves higher recognition rates than nonnegative matrix factorization. Moreover, GSNMF is also more robust to partial occlusions than other tested methods.

233 citations
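The projected gradient method the abstract relies on can be sketched on plain NMF, without GSNMF's graph-preserving and sparsity terms: take a gradient step on the Frobenius reconstruction loss, then project back onto the nonnegative orthant by clipping at zero. The `pg_nmf` helper, step size, and matrix sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pg_nmf(V, r, n_iters=1000, lr=5e-3):
    # Alternating projected gradient descent for V ~= W @ H with W, H >= 0:
    # a gradient step on 0.5*||V - WH||_F^2 for each factor, followed by
    # projection onto the nonnegative orthant (clipping at zero).
    rng = np.random.default_rng(0)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iters):
        W = np.maximum(W - lr * (W @ H - V) @ H.T, 0.0)
        H = np.maximum(H - lr * W.T @ (W @ H - V), 0.0)
    return W, H

# Factor a small nonnegative matrix of known rank 3.
rng = np.random.default_rng(2)
V = rng.random((10, 3)) @ rng.random((3, 12))
W, H = pg_nmf(V, 3)
```

GSNMF adds graph-embedding and sparsity penalties to the same loss, but the project-after-each-step structure, which is what guarantees nonnegativity throughout, is unchanged.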

Proceedings ArticleDOI
28 Jan 2014
TL;DR: This paper extends SSC to non-linear manifolds by using the kernel trick, and shows that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations.
Abstract: Subspace clustering refers to the problem of grouping data points that lie in a union of low-dimensional subspaces. One successful approach for solving this problem is sparse subspace clustering (SSC), which is based on a sparse representation of the data. In this paper, we extend SSC to non-linear manifolds by using the kernel trick. We show that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations. Various experiments on synthetic as well as real datasets show that non-linear mappings lead to sparse representations that give better clustering results than state-of-the-art methods.

233 citations
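The ADMM machinery referred to above can be sketched on the ℓ1-regularized least-squares problem written purely in terms of the Gram matrix K = AᵀA and Aᵀb; since the solver touches the data only through inner products, K can be replaced by a kernel Gram matrix, which is what makes the kernel extension possible. This is a generic ADMM sketch under that Gram-matrix assumption, not the paper's exact kernel-SSC updates (names and sizes are illustrative):

```python
import numpy as np

def admm_lasso_gram(K, Atb, lam, rho=1.0, n_iters=500):
    # ADMM for min_x 0.5*x^T K x - Atb^T x + lam*||x||_1, with K = A^T A
    # (or a kernel Gram matrix) and Atb = A^T b. The x-update is a cached
    # linear solve, the z-update is soft-thresholding, u is the scaled dual.
    n = K.shape[0]
    M = np.linalg.inv(K + rho * np.eye(n))   # cached inverse; fine at toy sizes
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iters):
        x = M @ (Atb + rho * (z - u))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z
    return z

# Linear-kernel example: recover a 3-sparse code from 25 measurements.
rng = np.random.default_rng(4)
A = rng.standard_normal((25, 60)) / np.sqrt(25)
x_true = np.zeros(60)
x_true[[7, 21, 55]] = [1.2, -1.8, 0.9]
b = A @ x_true
x_hat = admm_lasso_gram(A.T @ A, A.T @ b, lam=0.01)
```

Swapping `A.T @ A` for a non-linear kernel matrix leaves every update unchanged, which is the sense in which the kernel trick extends SSC to non-linear manifolds.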


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (93% related)
- Image segmentation: 79.6K papers, 1.8M citations (92% related)
- Convolutional neural network: 74.7K papers, 2M citations (92% related)
- Deep learning: 79.8K papers, 2.1M citations (90% related)
- Image processing: 229.9K papers, 3.5M citations (89% related)
Performance Metrics
No. of papers in the topic in previous years:
- 2023: 193
- 2022: 454
- 2021: 641
- 2020: 924
- 2019: 1,208
- 2018: 1,371