Topic

Sparse approximation

About: Sparse approximation is a research topic. Over the lifetime, 18,037 publications have been published within this topic, receiving 497,739 citations.


Papers
Journal Article
TL;DR: In this article, the authors combine the variational approach to sparse approximation and the spectral representation of Gaussian processes to obtain an approximation with the representational power and computational scalability of spectral representations.
Abstract: This work brings together two powerful concepts in Gaussian processes: the variational approach to sparse approximation and the spectral representation of Gaussian processes. This gives rise to an approximation that inherits the benefits of the variational approach but with the representational power and computational scalability of spectral representations. The work hinges on a key result that there exist spectral features related to a finite domain of the Gaussian process which exhibit almost-independent covariances. We derive these expressions for Matérn kernels in one dimension, and generalize to more dimensions using kernels with specific structures. Under the assumption of additive Gaussian noise, our method requires only a single pass through the data set, making for very fast and accurate computation. We fit a model to 4 million training points in just a few minutes on a standard laptop. With non-conjugate likelihoods, our MCMC scheme reduces the cost of computation from O(NM^2) (for a sparse Gaussian process) to O(NM) per iteration, where N is the number of data points and M is the number of features.

123 citations
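To make the single-pass, spectral-feature idea concrete, here is a minimal weight-space sketch that regresses with M random Fourier features under additive Gaussian noise. It is only an assumption-laden stand-in for the paper's variational Matérn spectral features: the RBF-approximating features, lengthscale, noise level, and toy data below are illustrative choices, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff(M, lengthscale):
    """Random Fourier features approximating an RBF kernel -- a simple
    stand-in for the paper's spectral features (an assumption, not its method)."""
    omega = rng.normal(0.0, 1.0 / lengthscale, size=M)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=M)

    def phi(x):  # x: (N,) -> feature matrix (N, M)
        return np.sqrt(2.0 / M) * np.cos(np.outer(x, omega) + phase)

    return phi

# toy 1-D regression data
N, M, noise = 5000, 50, 0.1
x = np.sort(rng.uniform(0.0, 10.0, N))
y = np.sin(x) + noise * rng.normal(size=N)

phi = make_rff(M, lengthscale=1.0)
Phi = phi(x)                             # single pass over the data set
A = Phi.T @ Phi + noise**2 * np.eye(M)   # M x M statistics accumulated from that pass
w = np.linalg.solve(A, Phi.T @ y)        # posterior mean weights under Gaussian noise

x_test = np.linspace(0.0, 10.0, 200)
f_mean = phi(x_test) @ w                 # approximate GP posterior mean
```

Because the data enter only through the M x M statistics Phi.T @ Phi and Phi.T @ y, the fit needs just one pass over the N points, which is the scalability property the abstract highlights.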

Proceedings Article
05 Jul 2009
TL;DR: This paper develops CS algorithms for time-varying signals, based on the least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems, and proposes two algorithms: the Group-Fused Lasso and the Dynamic Lasso.
Abstract: Compressed sensing (CS) lowers the number of measurements required for reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal with time-invariant sparse signals, meaning that the signal of interest does not vary during the measurement process. However, many signals encountered in practice vary with time as the observation window increases (e.g., video imaging, where the signal is sparse and varies between frames). The present paper develops CS algorithms for time-varying signals, based on the least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems. The Lasso here is tailored for smoothing time-varying signals, which are modeled as vector-valued discrete time series. Two algorithms are proposed: the Group-Fused Lasso, for the case where the unknown signal support is time-invariant but the signal samples are allowed to vary with time; and the Dynamic Lasso, for the general class of signals with time-varying amplitudes and support. Performance of these algorithms is compared with a sparsity-unaware Kalman smoother, a support-aware Kalman smoother, and the standard Lasso, which does not account for time variations. The numerical results amply demonstrate the practical merits of the novel CS algorithms.

123 citations
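A rough sketch of the time-varying idea, under simplifying assumptions: at each time step a standard Lasso is solved by iterative soft thresholding (ISTA) and warm-started from the previous estimate, so the estimate tracks a slowly drifting sparse signal. This is not the paper's Group-Fused or Dynamic Lasso formulation (which couples time steps explicitly in the objective); the dimensions, drift model, and regularization weight below are illustrative choices.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_lasso(A, y, lam, x0, n_iter=300):
    """ISTA for min_x 0.5*||y - A x||_2^2 + lam*||x||_1, started from x0."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
n, p, T = 40, 100, 10                      # measurements, signal length, time steps
A = rng.normal(size=(n, p)) / np.sqrt(n)

support = rng.choice(p, 5, replace=False)  # fixed support, drifting amplitudes
x_true = np.zeros(p)
x_true[support] = rng.normal(size=5)

x_hat = np.zeros(p)
for t in range(T):
    x_true[support] += 0.05 * rng.normal(size=5)    # slow amplitude variation
    y_t = A @ x_true + 0.01 * rng.normal(size=n)
    x_hat = ista_lasso(A, y_t, lam=0.05, x0=x_hat)  # warm start tracks the signal
    print(t, round(float(np.linalg.norm(x_hat - x_true)), 4))
```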

Journal Article
TL;DR: This paper presents a discriminative subspace learning method, the supervised regularization-based robust subspace (SRRS) approach, which incorporates a low-rank constraint, learns a low-dimensional subspace from the recovered data, and explicitly incorporates supervised information.
Abstract: In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to learn a robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, which incorporates a low-rank constraint. SRRS seeks low-rank representations from the noisy data and jointly learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of the class label information and thereby enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem, and we design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from the recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

123 citations
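The low-rank machinery that the abstract relies on can be illustrated with its core primitive, singular value thresholding, which is the proximal step used inside inexact-ALM-style nuclear-norm solvers. The snippet below only denoises a synthetic low-rank matrix; the full SRRS objective (joint subspace learning plus supervised regularization) is not reproduced, and the matrix sizes, noise level, and threshold are assumptions for the toy example.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (nuclear norm),
    the basic step inside augmented-Lagrange low-rank solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(2)
L_true = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))   # rank-3 ground truth
X = L_true + 0.5 * rng.normal(size=L_true.shape)                # heavy additive noise

L_hat = svt(X, tau=10.0)                                        # shrink noise directions away
print("recovered rank:", np.linalg.matrix_rank(L_hat, tol=1e-8))
print("relative error:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```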

Journal Article
TL;DR: This work explores the structure of the error incurred by occlusion from two aspects, the error morphology and the error distribution, and proposes a morphological graph model to describe the morphological structure of the error.
Abstract: Face recognition with occlusion is common in the real world. Inspired by work on structured sparse representation, we explore the structure of the error incurred by occlusion from two aspects: the error morphology and the error distribution. Since human beings recognize occlusion mainly from its region shape or profile, without knowing accurately what the occlusion is, we argue that the shape of the occlusion is also an important feature. We propose a morphological graph model to describe the morphological structure of the error. Due to the uncertainty of the occlusion, the distribution of the error it incurs is also uncertain. However, we observe that the unoccluded part and the occluded part of the error, measured by the correntropy-induced metric, each follow an exponential distribution. Incorporating these two aspects of the error structure, we propose structured sparse error coding for face recognition with occlusion. Our extensive experiments demonstrate that the proposed method is more stable and has a higher breakdown point in dealing with occlusion in face recognition than related state-of-the-art methods, especially in extreme situations such as high-level occlusion and low feature dimension.

123 citations
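A common baseline that the structured approach builds on can be sketched as follows: model the occluded test image as a sparse combination of training faces plus a sparse error, y = D x + e, and recover both by running an l1 solver on the stacked dictionary [D, I]. This is the plain (unstructured) sparse error coding baseline, not the paper's morphological-graph or correntropy-weighted model; the dimensions, occlusion pattern, and regularization strength are made-up toy values.

```python
import numpy as np
from sklearn.linear_model import Lasso   # generic l1 solver, used purely for illustration

rng = np.random.default_rng(3)
d, n = 64, 30                         # feature (pixel) dimension, number of training faces
D = rng.normal(size=(d, n))
D /= np.linalg.norm(D, axis=0)        # unit-norm training samples as dictionary atoms

x_true = np.zeros(n)
x_true[[3, 7]] = [1.0, 0.5]           # test face built from two training faces
e_true = np.zeros(d)
e_true[:16] = 2.0 * rng.normal(size=16)   # contiguous occlusion on part of the pixels
y = D @ x_true + e_true

# Stack an identity next to D so the occlusion error is coded by extra sparse atoms.
B = np.hstack([D, np.eye(d)])
coef = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(B, y).coef_
x_hat, e_hat = coef[:n], coef[n:]

print("selected training atoms:", np.argsort(-np.abs(x_hat))[:2])   # ideally 3 and 7
print("occluded pixels flagged:", np.count_nonzero(np.abs(e_hat) > 1e-3))
```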

Journal Article
TL;DR: Theoretical analysis and numerical examples show how many simultaneous signals can be separated by W-CMSR on typical array geometries, and that the half-wavelength spacing restriction for avoiding ambiguity can be relaxed from the highest to the lowest frequency of the incident wideband signals.
Abstract: This paper focuses on direction-of-arrival (DOA) estimation of wideband signals, and a method named wideband covariance matrix sparse representation (W-CMSR) is proposed. In W-CMSR, the lower-left triangular elements of the covariance matrix are aligned to form a new measurement vector, and DOA estimation is then realized by representing this vector on an over-complete dictionary under a sparsity constraint. A priori information on the number of incident signals is not needed in W-CMSR, and no spectral decomposition or focusing is introduced. Simulation results demonstrate the satisfactory performance of W-CMSR in wideband DOA estimation in various settings. Moreover, theoretical analysis and numerical examples show how many simultaneous signals can be separated by W-CMSR on typical array geometries, and that the half-wavelength spacing restriction for avoiding ambiguity can be relaxed from the highest to the lowest frequency of the incident wideband signals.

123 citations
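A narrowband, single-frequency simplification of the covariance sparse representation idea is sketched below: strictly lower-triangular sample-covariance elements form the measurement vector, a dictionary is built from the corresponding elements of a(theta) a(theta)^H on a dense angle grid, and a nonnegative l1 solver yields a sparse spatial spectrum. The wideband aggregation across frequency bins that defines W-CMSR is omitted, and the array size, grid, noise level, and regularization weight are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso   # generic l1 solver, used here for illustration

rng = np.random.default_rng(4)
m = 8                                     # half-wavelength uniform linear array
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

# sample covariance from snapshots of two uncorrelated narrowband sources
T = 500
A = np.stack([steering(t) for t in true_doas], axis=1)
S = (rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(m, T)) + 1j * rng.normal(size=(m, T)))
R = X @ X.conj().T / T

# measurement vector: strictly lower-triangular covariance elements
# (the diagonal, and hence the noise power, is excluded)
idx = np.tril_indices(m, k=-1)
r = R[idx]

# over-complete dictionary: the same elements of a(theta) a(theta)^H on an angle grid
grid = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
Psi = np.stack([np.outer(steering(g), steering(g).conj())[idx] for g in grid], axis=1)

# sparse nonnegative representation; real and imaginary parts stacked so a real solver applies
Psi_r = np.vstack([Psi.real, Psi.imag])
r_r = np.concatenate([r.real, r.imag])
p = Lasso(alpha=0.01, fit_intercept=False, positive=True,
          max_iter=100000).fit(Psi_r, r_r).coef_

print("estimated DOAs (deg):", np.rad2deg(grid[np.argsort(-p)[:2]]))   # near -20 and 35
```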


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 93% related
Image segmentation: 79.6K papers, 1.8M citations, 92% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    193
2022    454
2021    641
2020    924
2019    1,208
2018    1,371