Topic
Sparse approximation
About: Sparse approximation is a research topic. Over its lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.
[Chart: papers published on a yearly basis]
Papers
20 Jun 2011
TL;DR: Real-time Compressive Sensing Tracking (RTCST), as mentioned in this paper, exploits the signal recovery power of compressive sensing (CS) and adopts dimensionality reduction and a customized Orthogonal Matching Pursuit (OMP) algorithm to accelerate CS tracking.
Abstract: The l1 tracker obtains robustness by seeking a sparse representation of the tracking object via l1-norm minimization. However, the high computational complexity involved in the l1 tracker may hamper its application in real-time processing scenarios. Here we propose Real-time Compressive Sensing Tracking (RTCST), which exploits the signal recovery power of Compressive Sensing (CS). Dimensionality reduction and a customized Orthogonal Matching Pursuit (OMP) algorithm are adopted to accelerate the CS tracking. As a result, our algorithm achieves a real-time speed that is up to 5,000 times faster than that of the l1 tracker. Meanwhile, RTCST still produces competitive (sometimes even superior) tracking accuracy compared to the l1 tracker. Furthermore, for a stationary camera, a refined tracker is designed by integrating a CS-based background model (CSBM) into tracking. This CSBM-equipped tracker, termed RTCST-B, outperforms most state-of-the-art trackers in terms of both accuracy and robustness. Finally, our experimental results on various video sequences, verified by a new metric, Tracking Success Probability (TSP), demonstrate the excellence of the proposed algorithms.
317 citations
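As a point of reference for the paper above: the greedy recovery routine that RTCST customizes is Orthogonal Matching Pursuit. Below is a minimal sketch of plain, textbook OMP in Python; it illustrates only the generic CS recovery step, not the paper's customized variant, and all sizes and names are illustrative assumptions.

```python
# A minimal sketch of textbook Orthogonal Matching Pursuit (OMP) for
# recovering a k-sparse x from y = A x. The paper uses a *customized* OMP
# inside a tracker; none of those customizations are reproduced here, and
# all sizes below are illustrative.
import numpy as np

def omp(A, y, k):
    """Greedily pick k columns of A that best explain y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Refit on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy usage: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)          # x_hat should match x_true closely
```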
19 Dec 2014
TL;DR: The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing, focusing on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
314 citations
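To make the monograph's sparse-coding setting concrete, here is a minimal sketch of dictionary learning with scikit-learn: a dictionary D is fit to data and each sample is then coded as a linear combination of a few atoms. The data shape, atom count, and sparsity level are illustrative assumptions, not values from the monograph.

```python
# A minimal sketch of dictionary learning for sparse coding: learn a
# dictionary D from data X, then represent each sample with a few atoms.
# Uses scikit-learn; all sizes and parameter values are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))       # e.g. 200 vectorized 8x8 image patches

learner = DictionaryLearning(
    n_components=32,                     # number of dictionary atoms to learn
    transform_algorithm='omp',           # sparse-coding step via OMP
    transform_n_nonzero_coefs=5,         # at most 5 atoms per sample
    max_iter=20,
    random_state=0,
)
codes = learner.fit_transform(X)         # sparse codes, shape (200, 32)
D = learner.components_                  # learned dictionary, shape (32, 64)
reconstruction = codes @ D               # each row approximates a row of X
```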
TL;DR: A class of inverse problem estimators is introduced, computed by adaptively mixing a family of linear estimators corresponding to different priors, providing state-of-the-art numerical results.
Abstract: We introduce a class of inverse problem estimators computed by mixing adaptively a family of linear estimators corresponding to different priors. Sparse mixing weights are calculated over blocks of coefficients in a frame providing a sparse signal representation. They minimize an l1 norm taking into account the signal regularity in each block. Adaptive directional image interpolations are computed over a wavelet frame with an O(N log N) algorithm, providing state-of-the-art numerical results.
313 citations
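As a hedged sketch of the construction the abstract above describes, the mixed estimate and its sparse weights could be written as follows; the notation here is illustrative, not the paper's own:

```latex
% Illustrative notation: F_theta are linear estimators for different priors,
% P_b projects onto block b of the frame coefficients, a_theta[b] are the
% sparse mixing weights, and lambda trades data fit against block sparsity.
\hat f \;=\; \sum_{\theta}\sum_{b} a_{\theta}[b]\, P_b F_{\theta} y,
\qquad
\{a_{\theta}[b]\} \;=\; \arg\min_{a}\;
\Bigl\| y - \sum_{\theta,b} a_{\theta}[b]\, P_b F_{\theta} y \Bigr\|_2^2
\;+\; \lambda \sum_{\theta,b} \bigl| a_{\theta}[b] \bigr|
```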
TL;DR: The recoverability analysis shows that this two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix can deal with the situation in which the sources overlap to some degree in the analyzed domain.
Abstract: In this letter, we analyze a two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) in which fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coefficient matrix) with minimum l1 norm is unique with probability one, and it can be obtained using a standard linear programming algorithm. The equivalence of the l1-norm solution and the l0-norm solution is also analyzed within a probabilistic framework: if the obtained l1-norm solution is sufficiently sparse, then it is equal to the l0-norm solution with high probability. Furthermore, the l1-norm solution is robust to noise, whereas the l0-norm solution is not, showing that the l1 norm is a good sparsity measure. These results can be used as a recoverability analysis of BSS, as discussed. The basis matrix in this letter is estimated using a clustering algorithm followed by normalization, in which the matrix columns are the cluster centers of normalized data column vectors. Zibulevsky, Pearlmutter, Bofill, and Kisilev (2000) used this kind of two-stage approach in underdetermined BSS. Our recoverability analysis shows that this approach can deal with the situation in which the sources overlap to some degree in the analyzed domain and with the case in which the number of sources is unknown. It is also robust to additive noise and estimation error in the mixing matrix. Finally, four simulation examples and an EEG data analysis example are presented to illustrate the algorithm's utility and demonstrate its performance.
312 citations
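A minimal sketch of the two-stage approach analyzed above, under illustrative assumptions (sizes, sparsity level, and helper names are mine, not the paper's): stage 1 estimates the basis (mixing) matrix by clustering normalized data columns, and stage 2 recovers the minimum-l1-norm coefficients by linear programming with the standard split s = u - v, u, v >= 0.

```python
# Hedged sketch of cluster-then-l1: k-means on normalized data columns
# estimates the basis matrix; a linear program then finds the coefficient
# vector with minimum l1 norm for each data column.
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

def estimate_basis(X, n_sources):
    """Stage 1: cluster centers of normalized data columns estimate the basis."""
    norms = np.linalg.norm(X, axis=0)
    cols = X[:, norms > 1e-9] / norms[norms > 1e-9]
    cols = cols * np.where(cols[0] >= 0, 1.0, -1.0)   # fold antipodal directions
    km = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit(cols.T)
    centers = km.cluster_centers_.T
    return centers / np.linalg.norm(centers, axis=0, keepdims=True)

def min_l1_solution(A, x):
    """Stage 2: min ||s||_1 subject to A s = x, as a linear program."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),                   # minimize sum(u) + sum(v)
                  A_eq=np.hstack([A, -A]), b_eq=x,    # A (u - v) = x
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

# Toy underdetermined BSS: 2 sensors, 3 sparse sources.
rng = np.random.default_rng(1)
A_true = rng.standard_normal((2, 3))
S = rng.standard_normal((3, 500)) * (rng.random((3, 500)) < 0.15)
X = A_true @ S
A_hat = estimate_basis(X, n_sources=3)
s_hat = min_l1_solution(A_hat, X[:, 0])               # coefficients for one column
```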