Topic

Sparse approximation

About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published on this topic, receiving 497,739 citations.


Papers
Journal ArticleDOI
TL;DR: This work suggests SELL-$C$-$\sigma$, a variant of Sliced ELLPACK, as a SIMD-friendly data format which combines long-standing ideas from general-purpose graphics processing units and vector computer programming, and shows its suitability on a variety of hardware platforms.
Abstract: Sparse matrix-vector multiplication (spMVM) is the most time-consuming kernel in many numerical algorithms and has been studied extensively on all modern processor and accelerator architectures. However, the optimal sparse matrix data storage format is highly hardware-specific, which could become an obstacle when using heterogeneous systems. Also, it is as yet unclear how the wide single instruction multiple data (SIMD) units in current multi- and many-core processors should be used most efficiently if there is no structure in the sparsity pattern of the matrix. We suggest SELL-$C$-$\sigma$, a variant of Sliced ELLPACK, as a SIMD-friendly data format which combines long-standing ideas from general-purpose graphics processing units and vector computer programming. We discuss the advantages of SELL-$C$-$\sigma$ compared to established formats like Compressed Row Storage and ELLPACK and show its suitability on a variety of hardware platforms (Intel Sandy Bridge, Intel Xeon Phi, and Nvidia Tesla K20) for a wide range of test matrices.
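Since the format description above is abstract, the following is a minimal NumPy sketch of the SELL-$C$-$\sigma$ idea: rows are sorted by length inside windows of $\sigma$ rows, grouped into chunks of $C$ rows, and each chunk is zero-padded and stored column-major so that a SIMD unit (here mimicked by a vectorized NumPy sweep) can process $C$ rows in lockstep. The function names, CSR input convention, and simplified padding are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def csr_to_sell_c_sigma(indptr, indices, data, C=4, sigma=8):
    """Convert a CSR matrix (indptr, indices, data) to a simplified SELL-C-sigma layout.

    Rows are sorted by nonzero count inside windows of `sigma` rows, then grouped
    into chunks of `C` rows; each chunk is zero-padded to the length of its longest
    row and stored column-major, which is the SIMD-friendly part of the format.
    """
    n_rows = len(indptr) - 1
    row_len = np.diff(indptr)

    # Sort rows by descending length, but only within each sigma-window,
    # so that rows of similar length end up in the same chunk.
    perm = np.arange(n_rows)
    for start in range(0, n_rows, sigma):
        stop = min(start + sigma, n_rows)
        order = np.argsort(-row_len[start:stop], kind="stable")
        perm[start:stop] = start + order

    chunks = []
    for cstart in range(0, n_rows, C):
        rows = perm[cstart:min(cstart + C, n_rows)]
        width = int(row_len[rows].max())
        vals = np.zeros((width, len(rows)))             # entry j of every row in the chunk
        cols = np.zeros((width, len(rows)), dtype=int)  # padded slots point at column 0, value 0
        for k, r in enumerate(rows):
            nnz = row_len[r]
            vals[:nnz, k] = data[indptr[r]:indptr[r] + nnz]
            cols[:nnz, k] = indices[indptr[r]:indptr[r] + nnz]
        chunks.append((rows, vals, cols))
    return chunks

def sell_spmv(chunks, x, n_rows):
    """Reference SpMV over the SELL-C-sigma chunks; each inner step mimics one SIMD sweep."""
    y = np.zeros(n_rows)
    for rows, vals, cols in chunks:
        for j in range(vals.shape[0]):
            y[rows] += vals[j] * x[cols[j]]
    return y
```

The chunk-local padding is what distinguishes this from plain ELLPACK: only rows in the same chunk of $C$ rows pay for each other's fill-in, and the $\sigma$-window sorting keeps that fill-in small without destroying data locality globally.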

202 citations

Journal ArticleDOI
TL;DR: Investigating conditions under which the solution of an underdetermined linear system with minimal $\ell_p$ norm, $0 < p \le 1$, is guaranteed to also be the sparsest one shows that there is limited room for improving over the best known positive results of Foucart and Lai.
Abstract: This paper investigates conditions under which the solution of an underdetermined linear system with minimal $\ell_p$ norm, $0 < p \le 1$, is guaranteed to also be the sparsest one. Matrices are constructed with restricted isometry constants (RIC) $\delta_{2m}$ arbitrarily close to $1/\sqrt{2} \approx 0.707$ for which sparse recovery with $p = 1$ fails for at least one $m$-sparse vector, as well as matrices with $\delta_{2m}$ arbitrarily close to one for which $\ell_1$ minimization succeeds for every $m$-sparse vector. This highlights the pessimism of sparse recovery prediction based on the RIC, and indicates that there is limited room for improving over the best known positive results of Foucart and Lai, which guarantee that $\ell_1$ minimization recovers all $m$-sparse vectors for any matrix with $\delta_{2m} < 2(3 - \sqrt{2})/7 \approx 0.4531$. These constructions are a by-product of tight conditions for $\ell_p$ recovery ($0 \le p \le 1$) with matrices of unit spectral norm, which are expressed in terms of the minimal singular values of $2m$-column submatrices. Compared to $\ell_1$ minimization, $\ell_p$ minimization recovery failure is shown to be only slightly delayed in terms of the RIC values. Furthermore, in this case the minimization is nonconvex and it is important to consider the specific minimization algorithm being used. It is shown that when $\ell_p$ optimization is attempted using an iterative reweighted $\ell_1$ scheme, failure can still occur for $\delta_{2m}$ arbitrarily close to $1/\sqrt{2}$.
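To make the iterative reweighted $\ell_1$ scheme mentioned above concrete, here is a minimal sketch using SciPy's linear-programming solver: each iteration solves a weighted $\ell_1$ problem (basis pursuit cast as an LP) with weights derived from the previous solution, a standard surrogate for $\ell_p$ minimization with $p < 1$. The function names, the $\varepsilon$ smoothing parameter, and the fixed iteration count are illustrative assumptions, not the constructions or failure analysis of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min sum_i w_i |x_i| subject to A x = b, as a linear program.

    Variables are z = [x; t] with constraints -t <= x <= t, A x = b, t >= 0.
    """
    m, n = A.shape
    c = np.hstack([np.zeros(n), w])                     # objective: weighted sum of the t_i
    A_eq = np.hstack([A, np.zeros((m, n))])
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),               #  x - t <= 0
                      np.hstack([-I, -I])])             # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

def reweighted_l1(A, b, p=0.5, iters=5, eps=1e-3):
    """Iteratively reweighted l1: a common surrogate for minimal-l_p-norm recovery."""
    n = A.shape[1]
    x = weighted_l1_min(A, b, np.ones(n))               # plain l1 (basis pursuit) start
    for _ in range(iters):
        w = (np.abs(x) + eps) ** (p - 1.0)              # small entries receive large weights
        x = weighted_l1_min(A, b, w)
    return x

# Tiny demo: recover a 2-sparse vector from 8 random measurements in dimension 20.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20)) / np.sqrt(8)
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
x_hat = reweighted_l1(A, A @ x_true)
print(np.round(x_hat, 3))
```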

201 citations

Journal ArticleDOI
TL;DR: In this paper, a semi-supervised sparse representation-based classification method is proposed to deal with non-linear nuisance variations between labeled and unlabeled samples; faces are represented over a gallery dictionary, consisting of one or more examples of each person, and a variation dictionary, representing linear nuisance variables (e.g., different lighting conditions and different glasses).
Abstract: This paper addresses the problem of face recognition when there are only a few labeled examples of the face that we wish to recognize, or even only a single one. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables, such as bad lighting and wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables, such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem, we propose a method called semi-supervised sparse representation-based classification. This is based on recent work on sparsity, where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions and different glasses). The main idea is that: 1) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework and 2) prototype face images are estimated as a gallery dictionary via a Gaussian mixture model, with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have done experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method is able to deliver significantly improved performance over existing methods.
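As a point of reference for the two-dictionary structure described above, the following is a minimal sketch of the classic sparse representation-based classification (SRC) rule with a gallery dictionary and a variation dictionary: a test face is coded jointly over both dictionaries with an $\ell_1$-regularized solver, and the class residual is computed after subtracting the variation component. This is only the generic SRC building block (here via scikit-learn's Lasso); the paper's semi-supervised gallery estimation with a Gaussian mixture model is not reproduced, and all names and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, gallery, gallery_labels, variation, alpha=1e-3):
    """Classify test vector y by sparse coding over [gallery | variation] dictionaries.

    gallery:   (d, n_g) matrix, one or more columns (face images) per person
    variation: (d, n_v) matrix of linear nuisance atoms (lighting, glasses, ...)
    Returns the label whose gallery atoms best explain y after removing
    the estimated variation component.
    """
    D = np.hstack([gallery, variation])
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, y)                                   # sparse code of y over both dictionaries
    coef = coder.coef_
    a, e = coef[:gallery.shape[1]], coef[gallery.shape[1]:]
    y_clean = y - variation @ e                       # strip the linear nuisance part

    best_label, best_res = None, np.inf
    for label in np.unique(gallery_labels):
        mask = gallery_labels == label
        res = np.linalg.norm(y_clean - gallery[:, mask] @ a[mask])
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```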

201 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting sparse representations in another general dictionary.
Abstract: We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets.
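A minimal sketch of the recovery setup described above: the signal dictionary and the noise dictionary are stacked into one matrix, the observation is sparsely coded over the stack with an $\ell_1$ penalty, and the two coefficient blocks yield the signal and noise estimates separately. The use of scikit-learn's Lasso and the specific dictionaries (DCT atoms for the signal, the identity basis for impulse noise) are illustrative assumptions, not the algorithms or uncertainty-relation guarantees derived in the paper.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

n = 128
rng = np.random.default_rng(1)

# Signal sparse in a DCT dictionary, corrupted by sparse impulse noise (identity dictionary).
D_sig = idct(np.eye(n), norm="ortho", axis=0)     # columns are DCT atoms
D_noise = np.eye(n)                               # impulse noise is sparse in the identity basis

a_true = np.zeros(n)
a_true[[5, 17, 40]] = [2.0, -1.0, 1.5]            # 3 active DCT atoms
e_true = np.zeros(n)
e_true[rng.choice(n, 6, replace=False)] = 3.0 * rng.standard_normal(6)  # 6 corrupted samples

y = D_sig @ a_true + D_noise @ e_true             # observed impulse-corrupted signal

# Jointly sparse-code over the stacked dictionary [D_sig | D_noise].
coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000)
coder.fit(np.hstack([D_sig, D_noise]), y)
a_hat, e_hat = coder.coef_[:n], coder.coef_[n:]

signal_estimate = D_sig @ a_hat                   # recovered signal with the impulses removed
print("residual on clean signal:", np.linalg.norm(signal_estimate - D_sig @ a_true))
```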

200 citations

Journal ArticleDOI
TL;DR: Various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches, are discussed.
Abstract: Sparse representation can efficiently model signals in different applications to facilitate processing. In this article, we will discuss various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches. With the help of the sparsity property, CS is able to enhance the spectrum efficiency (SE) and energy efficiency (EE) of fifth-generation (5G) and Internet of Things (IoT) networks.
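To illustrate the mechanism the article builds on, here is a minimal compressive sensing sketch in the spirit of wideband spectrum sensing: a spectrum that is sparse (few occupied sub-bands) is observed through far fewer random linear measurements than direct sampling would need, and recovered with orthogonal matching pursuit. The sub-band model, matrix sizes, and use of scikit-learn's OMP are illustrative assumptions, not a scheme from the article.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n_bands, n_meas = 256, 64                 # 256 candidate sub-bands, only 64 measurements
rng = np.random.default_rng(42)

# Sparse spectrum occupancy: only 5 of the 256 sub-bands carry power.
s_true = np.zeros(n_bands)
active = rng.choice(n_bands, 5, replace=False)
s_true[active] = rng.uniform(0.5, 2.0, size=5)

# Random measurement matrix (a stand-in for a sub-Nyquist acquisition front end).
Phi = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)
y = Phi @ s_true + 0.01 * rng.standard_normal(n_meas)   # noisy compressed measurements

# Recover the occupied sub-bands from 4x fewer samples than unknowns.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(Phi, y)
s_hat = omp.coef_

print("true occupied bands:     ", np.sort(active))
print("recovered occupied bands:", np.sort(np.flatnonzero(np.abs(s_hat) > 1e-6)))
```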

200 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371