
Showing papers on "Sparse approximation published in 2010"


Journal ArticleDOI
TL;DR: This paper presents a new approach to single-image superresolution, based upon sparse signal representation, which generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods.
Abstract: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, substantially reducing the computational cost. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.
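The core pipeline is easy to sketch: sparse-code each low-resolution patch over the LR dictionary, then apply the same coefficients to the coupled HR dictionary. A minimal sketch with random stand-in dictionaries (the paper trains D_l and D_h jointly on patch pairs) and scikit-learn's OMP as a stand-in sparse solver (the paper uses an ℓ1-regularized formulation):

```python
# Hypothetical stand-in dictionaries; the paper trains D_l, D_h jointly.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 512, 25, 100            # 5x5 LR and 10x10 HR patches

D_l = rng.standard_normal((lr_dim, n_atoms))      # low-resolution dictionary
D_l /= np.linalg.norm(D_l, axis=0)                # unit-norm atoms
D_h = rng.standard_normal((hr_dim, n_atoms))      # coupled high-resolution dictionary

def super_resolve_patch(y_lr, n_nonzero=5):
    """Sparse-code the LR patch over D_l, reuse the code with D_h."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D_l, y_lr)
    return D_h @ omp.coef_                        # shared sparse representation

print(super_resolve_patch(rng.standard_normal(lr_dim)).shape)   # (100,)
```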

4,958 citations


Journal ArticleDOI
TL;DR: In this paper, a new online optimization algorithm based on stochastic approximations is proposed to solve the large-scale matrix factorization problem, which scales up gracefully to large data sets with millions of training samples.
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large data sets.
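A condensed sketch of the online update as described: each incoming sample is sparse-coded over the current dictionary, sufficient statistics are accumulated, and the atoms are refined by block coordinate descent. Mini-batching, the scaling of past information, and the paper's exact lasso solver are omitted; scikit-learn's Lasso stands in here:

```python
# Condensed online dictionary learning: sufficient statistics A, B
# plus block coordinate descent on the atoms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
dim, n_atoms, lam = 64, 100, 0.1
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)
A = np.zeros((n_atoms, n_atoms))      # running sum of alpha alpha^T
B = np.zeros((dim, n_atoms))          # running sum of x alpha^T

def online_step(x):
    global A, B                       # accumulated in place across calls
    # 1) sparse-code the new sample over the current dictionary
    alpha = Lasso(alpha=lam, fit_intercept=False).fit(D, x).coef_
    # 2) update the sufficient statistics
    A += np.outer(alpha, alpha)
    B += np.outer(x, alpha)
    # 3) block coordinate descent: refresh each atom, project to unit ball
    for j in range(n_atoms):
        if A[j, j] > 1e-10:
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)

for _ in range(20):                   # stream of training samples
    online_step(rng.standard_normal(dim))
```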

2,348 citations


Journal ArticleDOI
29 Apr 2010
TL;DR: This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
Abstract: Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.

1,871 citations


Proceedings Article
21 Jun 2010
TL;DR: Both theoretical and experimental results show that low-rank representation is a promising tool for subspace segmentation from corrupted data.
Abstract: We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.
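In the noiseless case, the LRR program min ||Z||_* s.t. X = XZ has a closed-form solution, the shape-interaction matrix VV^T from the skinny SVD of X. A small sketch of that special case (the paper's ALM solver for corrupted data is not reproduced here):

```python
# Noiseless special case only; corrupted data needs the paper's ALM solver.
import numpy as np

rng = np.random.default_rng(0)
# 60 points drawn from a union of two independent 3-dim subspaces of R^20
X = np.hstack([rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30)),
               rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-8))            # numerical rank (here 6)
Z = Vt[:r].T @ Vt[:r]                # lowest-rank representation, X = X @ Z

# |Z| is block-diagonal w.r.t. the two subspaces, so spectral clustering
# on the affinity |Z| + |Z|.T segments the data.
print(np.abs(Z[:30, 30:]).max())     # cross-subspace block is ~1e-15
```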

1,542 citations


Proceedings Article
21 Jun 2010
TL;DR: Two versions of a very fast algorithm are proposed that produce approximate estimates of the sparse code, which can be used to compute good visual features or to initialize exact iterative algorithms.
Abstract: In Sparse Coding (SC), input vectors are reconstructed using a sparse linear combination of basis vectors. SC has become a popular method for extracting features from data. For a given input, SC minimizes a quadratic reconstruction error with an L1 penalty term on the code. The process is often too slow for applications such as real-time pattern recognition. We propose two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms. The main idea is to train a non-linear, feed-forward predictor with a specific architecture and a fixed depth to produce the best possible approximation of the sparse code. A version of the method, which can be seen as a trainable version of Li and Osher's coordinate descent method, is shown to produce approximate solutions with 10 times less computation than Li and Osher's for the same approximation error. Unlike previous proposals for sparse code predictors, the system allows a kind of approximate "explaining away" to take place during inference. The resulting predictor is differentiable and can be included into globally-trained recognition systems.
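The construction is easy to illustrate: unrolling ISTA to a fixed depth already gives a feed-forward encoder with exactly the proposed architecture; the learned variant (LISTA) then trains the two matrices and thresholds instead of deriving them from the dictionary. A sketch with ISTA-derived (untrained) parameters:

```python
# Unrolled ISTA = the LISTA architecture with analytically set weights.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
dim, n_atoms, lam = 64, 128, 0.1
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
W_e = D.T / L                         # encoder weights (learned in LISTA)
S = np.eye(n_atoms) - (D.T @ D) / L   # lateral weights  (learned in LISTA)

def encoder(x, depth=3):
    """Fixed-depth feed-forward approximation of the sparse code."""
    z = soft(W_e @ x, lam / L)
    for _ in range(depth - 1):
        z = soft(W_e @ x + S @ z, lam / L)   # 'explaining away' via S
    return z

print(np.count_nonzero(encoder(rng.standard_normal(dim))), "nonzeros")
```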

1,533 citations


Journal ArticleDOI
22 Apr 2010
TL;DR: This paper surveys the various options such training has to offer, from the MOD, the K-SVD, and the Generalized PCA to the most recent contributions and structures.
Abstract: Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.

1,345 citations


Journal ArticleDOI
TL;DR: The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Abstract: We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1 optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
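A sketch of the block-OMP selection rule described above, with a random unit-norm dictionary; block identification is only guaranteed under the paper's block-coherence condition:

```python
# Sketch of block orthogonal matching pursuit (BOMP): at each step
# pick the block whose atoms correlate most with the residual (in l2
# norm), then refit all selected blocks jointly by least squares.
import numpy as np

def bomp(D, y, block_size, n_steps):
    n_blocks = D.shape[1] // block_size
    chosen, r = [], y.copy()
    for _ in range(n_steps):
        scores = [np.linalg.norm(D[:, b*block_size:(b+1)*block_size].T @ r)
                  for b in range(n_blocks)]
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b*block_size, (b+1)*block_size)
                               for b in chosen])
        x_sub, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        r = y - D[:, cols] @ x_sub
    x = np.zeros(D.shape[1])
    x[cols] = x_sub
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 60)); D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(60); x0[12:16] = rng.standard_normal(4)   # one active block (size 4)
print(np.allclose(bomp(D, D @ x0, 4, 1), x0))           # exact if block found
```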

1,289 citations


Journal ArticleDOI
29 Apr 2010
TL;DR: This paper surveys the major practical algorithms for sparse approximation with specific attention to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available.
Abstract: The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
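As a concrete instance of the greedy family the survey covers, a from-scratch orthogonal matching pursuit (OMP): pick the atom most correlated with the residual, refit by least squares, repeat:

```python
# From-scratch OMP: greedy atom selection + least-squares refit.
import numpy as np

def omp(D, y, k):
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))  # best-matching atom
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x_s                      # orthogonal residual
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64)); D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(64); x0[[3, 17, 42]] = [1.0, -2.0, 1.5]
print(np.round(omp(D, D @ x0, 3)[[3, 17, 42]], 3))       # ~[ 1. -2.  1.5]
```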

1,003 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A clustering framework within the sparse modeling and dictionary learning setting is introduced, using a novel measurement for the quality of the sparse representation, inspired by the robustness of the ℓ1 regularization term in sparse coding.
Abstract: A clustering framework within the sparse modeling and dictionary learning setting is introduced in this work. Instead of searching for the set of centroids that best fits the data, as in k-means-type approaches that model the data as distributions around discrete points, we optimize for a set of dictionaries, one for each cluster, for which the signals are best reconstructed in a sparse coding manner. Thereby, we are modeling the data as a union of learned low dimensional subspaces, and data points associated with subspaces spanned by just a few atoms of the same learned dictionary are clustered together. An incoherence promoting term encourages dictionaries associated with different classes to be as independent as possible, while still allowing for different classes to share features. This term directly acts on the dictionaries, thereby being applicable both in the supervised and unsupervised settings. Using learned dictionaries for classification and clustering makes this method robust and well suited to handle large datasets. The proposed framework uses a novel measurement for the quality of the sparse representation, inspired by the robustness of the ℓ1 regularization term in sparse coding. In the case of unsupervised classification and/or clustering, a new initialization based on combining sparse coding with spectral clustering is proposed. This initialization clusters the dictionary atoms and therefore only requires solving a low-dimensional eigen-decomposition problem, making it applicable to large datasets. We first illustrate the proposed framework with examples on standard image and speech datasets in the supervised classification setting, obtaining results comparable to the state-of-the-art with this simple approach. We then present experiments for fully unsupervised clustering on extended standard datasets and texture images, obtaining excellent performance.
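A sketch of the assignment step only, with random stand-in dictionaries: each signal joins the cluster whose dictionary yields the smallest sparse reconstruction error (the full method alternates this with per-cluster dictionary updates and the incoherence-promoting term):

```python
# Hypothetical per-cluster dictionaries; the paper learns these.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((20, 40)) for _ in range(3)]   # one per cluster
dicts = [D / np.linalg.norm(D, axis=0) for D in dicts]

def assign(y, n_nonzero=3):
    errs = []
    for D in dicts:
        a = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, y).coef_
        errs.append(np.linalg.norm(y - D @ a))              # sparse fit error
    return int(np.argmin(errs))                             # cluster label

print(assign(rng.standard_normal(20)))
```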

822 citations


Journal ArticleDOI
TL;DR: A new unsupervised DR method called sparsity preserving projections (SPP) is proposed, which aims to preserve the sparse reconstructive relationship of the data, achieved by minimizing an L1 regularization-related objective function.

Journal ArticleDOI
25 Feb 2010
TL;DR: The role of this recent model in image processing, its rationale, and models related to it are reviewed; ways to employ these tools for various image-processing tasks are discussed and several applications in which state-of-the-art results are obtained.
Abstract: Much of the progress made in image processing in the past decades can be attributed to better modeling of image content and a wise deployment of these models in relevant applications. This path of models spans from simple l2-norm smoothness, through robust and thus edge-preserving measures of smoothness (e.g., total variation), up to the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image-processing tasks and present several applications in which state-of-the-art results are obtained.

Journal ArticleDOI
TL;DR: A non-intrusive method is proposed that builds a sparse polynomial chaos (PC) expansion, together with an adaptive regression-based algorithm for automatically detecting the significant coefficients of the expansion in a suitable PC basis.

Journal ArticleDOI
TL;DR: The advantages of sparse dictionaries are discussed, an efficient algorithm for training them is presented, and the advantages of the proposed structure are demonstrated for 3-D image denoising.
Abstract: An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this way, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but inefficient and costly to deploy. In this paper, we discuss the advantages of sparse dictionaries and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
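A small sketch of the structure, assuming an orthogonal DCT as the base dictionary Φ: the learned dictionary is never formed explicitly, and forward/adjoint products cost one fast transform plus one sparse multiply:

```python
# Hypothetical base dictionary: an orthogonal DCT. D = Phi @ A is never
# formed; both operators use one fast transform + one sparse multiply.
import numpy as np
from scipy.fft import dct, idct
from scipy import sparse

n, n_atoms, atom_sparsity = 64, 128, 6
rng = np.random.default_rng(0)

# Sparse A: each learned atom combines a few base atoms.
rows = rng.integers(0, n, size=n_atoms * atom_sparsity)
cols = np.repeat(np.arange(n_atoms), atom_sparsity)
A = sparse.csc_matrix((rng.standard_normal(rows.size), (rows, cols)),
                      shape=(n, n_atoms))

def D_apply(alpha):            # D @ alpha = Phi @ (A @ alpha)
    return idct(A @ alpha, norm='ortho')

def D_adjoint(x):              # D.T @ x = A.T @ (Phi.T @ x)
    return A.T @ dct(x, norm='ortho')

print(D_adjoint(D_apply(rng.standard_normal(n_atoms))).shape)   # (128,)
```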

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence.
Abstract: This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of l1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.
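The two proximal operators behind such ℓ1-plus-nuclear-norm programs are easy to state; a sketch of these building blocks only (not the full alignment loop, which also updates the domain transformations):

```python
# Building blocks only: the prox operators of the two penalties.
import numpy as np

def soft_threshold(X, tau):
    """prox of tau * ||.||_1 : entrywise shrinkage (sparse error term)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """prox of tau * ||.||_* : shrink the singular values (low-rank term)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))   # rank 8
M_corrupt = M + 10.0 * (rng.random((50, 50)) < 0.02)              # sparse errors
print(np.linalg.matrix_rank(svd_threshold(M_corrupt, tau=5.0), tol=1e-6))
```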

Proceedings ArticleDOI
14 Jun 2010
TL;DR: Two new algorithms to efficiently solve convex optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family, are introduced and are shown to outperform off-the-shelf methods in terms of speed and accuracy.
Abstract: Convex optimization problems are common in hyperspectral unmixing. Examples are the constrained least squares (CLS) problem used to compute the fractional abundances in a linear mixture of known spectra, the constrained basis pursuit (CBP) to find sparse (i.e., with a small number of terms) linear mixtures of spectra, selected from large libraries, and the constrained basis pursuit denoising (CBPDN), which is a generalization of BP to admit modeling errors. In this paper, we introduce two new algorithms to efficiently solve these optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family. The algorithms are termed SUnSAL (sparse unmixing by variable splitting and augmented Lagrangian) and C-SUnSAL (constrained SUnSAL). C-SUnSAL solves the CBP and CBPDN problems, while SUnSAL solves CLS as well as a more general version thereof, called constrained sparse regression (CSR). C-SUnSAL and SUnSAL are shown to outperform off-the-shelf methods in terms of speed and accuracy.
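A simplified ADMM sketch in the spirit of SUnSAL, for min ½||Ax − y||² + λ||x||₁ s.t. x ≥ 0; the published algorithm also handles the sum-to-one constraint and the CBP/CBPDN variants:

```python
# Simplified SUnSAL-style ADMM (nonnegativity only, no sum-to-one).
import numpy as np

def sunsal_sketch(A, y, lam=0.05, mu=1.0, n_iter=200):
    n = A.shape[1]
    inv = np.linalg.inv(A.T @ A + mu * np.eye(n))   # cached system solve
    Aty = A.T @ y
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = inv @ (Aty + mu * (z - u))              # quadratic subproblem
        z = np.maximum(x + u - lam / mu, 0.0)       # prox: l1 + x >= 0
        u = u + x - z                               # dual update
    return z

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((50, 120)))          # stand-in spectral library
x0 = np.zeros(120); x0[[5, 40]] = [0.7, 0.3]        # two-member mixture
# approximately recovers [0.7, 0.3], slightly shrunk by the l1 penalty
print(np.round(sunsal_sketch(A, A @ x0)[[5, 40]], 2))
```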

Journal ArticleDOI
Bin Yang, Shutao Li
TL;DR: A sparse representation-based multifocus image fusion method that can simultaneously resolve the image restoration and fusion problem by changing the approximate criterion in the sparse representation algorithm is proposed.
Abstract: To obtain an image with every object in focus, we always need to fuse images taken from the same viewpoint with different focal settings. Multiresolution transforms, such as pyramid decomposition and the wavelet transform, are usually used to solve this problem. In this paper, a sparse representation-based multifocus image fusion method is proposed. In the method, first, the source image is represented with sparse coefficients using an overcomplete dictionary. Second, the coefficients are combined with the choose-max fusion rule. Finally, the fused image is reconstructed from the combined sparse coefficients and the dictionary. Furthermore, the proposed fusion scheme can simultaneously resolve the image restoration and fusion problem by changing the approximate criterion in the sparse representation algorithm. The proposed method is compared with spatial gradient (SG)-, morphological wavelet transform (MWT)-, discrete wavelet transform (DWT)-, stationary wavelet transform (SWT)-, curvelet transform (CVT)-, and nonsubsampled contourlet transform (NSCT)-based methods on several pairs of multifocus images. The experimental results demonstrate that the proposed approach performs better in both subjective and objective qualities.
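A sketch of the fusion step, with a random stand-in dictionary (the paper uses a learned overcomplete one): code both source patches, keep the coefficient vector with the larger ℓ1 activity level, and reconstruct:

```python
# Hypothetical random dictionary; the paper uses a learned one.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
dim, n_atoms = 64, 256                              # 8x8 patches
D = rng.standard_normal((dim, n_atoms)); D /= np.linalg.norm(D, axis=0)

def code(y, k=8):
    return OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(D, y).coef_

def fuse(y_a, y_b):
    a, b = code(y_a), code(y_b)
    alpha = a if np.abs(a).sum() >= np.abs(b).sum() else b   # choose-max rule
    return D @ alpha                                         # fused patch

print(fuse(rng.standard_normal(dim), rng.standard_normal(dim)).shape)
```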

Journal Article
TL;DR: In this paper, a new approach to sparse principal component analysis (sparse PCA) is proposed, which is based on the maximization of a convex function on a compact set.
Abstract: In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.
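Not the paper's GPower scheme, but a much simpler related heuristic (power iteration with soft-thresholding) that conveys the single-unit idea of trading explained variance against sparsity of the loading vector:

```python
# A related heuristic, not the paper's exact algorithm.
import numpy as np

def sparse_pc(X, gamma=0.2, n_iter=100):
    """One sparse loading vector of data matrix X (samples x variables)."""
    S = X.T @ X / len(X)                        # sample covariance
    v = np.linalg.eigh(S)[1][:, -1]             # dense leading eigvec init
    for _ in range(n_iter):
        v = S @ v                               # power step
        v = np.sign(v) * np.maximum(np.abs(v) - gamma * np.abs(v).max(), 0)
        v /= np.linalg.norm(v)                  # back to the unit sphere
    return v

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
X[:, :5] += 3 * rng.standard_normal((500, 1))   # shared factor on 5 variables
v = sparse_pc(X)
print(np.nonzero(np.round(v, 3))[0])            # loading concentrates on 0..4
```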

Journal ArticleDOI
TL;DR: A novel exemplar-based inpainting algorithm is introduced by investigating the sparsity of natural image patches; structure sparsity enables better discrimination of structure and texture, and the patch sparse representation forces the newly inpainted regions to be sharp and consistent with the surrounding textures.
Abstract: This paper introduces a novel exemplar-based inpainting algorithm through investigating the sparsity of natural image patches. Two novel concepts of sparsity at the patch level are proposed for modeling the patch priority and patch representation, which are two crucial steps for patch propagation in the exemplar-based inpainting approach. First, patch structure sparsity is designed to measure the confidence of a patch located at the image structure (e.g., the edge or corner) by the sparseness of its nonzero similarities to the neighboring patches. The patch with larger structure sparsity will be assigned higher priority for further inpainting. Second, it is assumed that the patch to be filled can be represented by the sparse linear combination of candidate patches under the local patch consistency constraint in a framework of sparse representation. Compared with the traditional exemplar-based inpainting approach, structure sparsity enables better discrimination of structure and texture, and the patch sparse representation forces the newly inpainted regions to be sharp and consistent with the surrounding textures. Experiments on synthetic and natural images show the advantages of the proposed approach.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper proposes to use a histogram-intersection-based kNN method to construct a Laplacian matrix, which can well characterize the similarity of local features, and incorporates it into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features.
Abstract: Sparse coding, which encodes the original signal in a sparse signal space, has shown its state-of-the-art performance in the visual codebook generation and feature quantization process of BoW-based image representation. However, in the feature quantization process of sparse coding, some similar local features may be quantized into different visual words of the codebook due to the sensitiveness of quantization. In this paper, to alleviate the impact of this problem, we propose a Laplacian sparse coding method, which exploits the dependence among the local features. Specifically, we propose to use a histogram-intersection-based kNN method to construct a Laplacian matrix, which can well characterize the similarity of local features. In addition, we incorporate this Laplacian matrix into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features. Comprehensive experimental results show that our method achieves or outperforms existing state-of-the-art results, and exhibits excellent performance on the Scene 15 data set.
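A sketch of the graph construction described: histogram-intersection similarities, sparsified to each feature's k nearest neighbours, give the Laplacian L = diag(W1) − W that enters the coding objective as a trace penalty on the codes:

```python
# kNN Laplacian from histogram-intersection similarities.
import numpy as np

def hist_intersection(H):
    """Pairwise histogram-intersection similarity of rows of H."""
    n = H.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        W[i] = np.minimum(H, H[i]).sum(axis=1)
    return W

def knn_laplacian(H, k=5):
    W = hist_intersection(H)
    np.fill_diagonal(W, 0.0)
    # keep only each feature's k most similar neighbours, then symmetrize
    keep = np.argsort(W, axis=1)[:, -k:]
    M = np.zeros_like(W)
    rows = np.repeat(np.arange(W.shape[0]), k)
    M[rows, keep.ravel()] = W[rows, keep.ravel()]
    M = np.maximum(M, M.T)
    return np.diag(M.sum(axis=1)) - M           # graph Laplacian

H = np.random.default_rng(0).random((100, 32))  # 100 SIFT-like histograms
L = knn_laplacian(H)
print(L.shape, np.allclose(L.sum(axis=1), 0))   # Laplacian rows sum to 0
```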

Book ChapterDOI
05 Sep 2010
TL;DR: The number of atoms is significantly reduced in the computed Gabor occlusion dictionary, which greatly reduces the computational cost of coding the occluded face images while greatly improving the SRC accuracy.
Abstract: By coding the input testing image as a sparse linear combination of the training samples via l1-norm minimization, sparse representation based classification (SRC) has been recently successfully used for face recognition (FR). Particularly, by introducing an identity occlusion dictionary to sparsely code the occluded portions in face images, SRC can lead to robust FR results against occlusion. However, the large amount of atoms in the occlusion dictionary makes the sparse coding computationally very expensive. In this paper, the image Gabor-features are used for SRC. The use of Gabor kernels makes the occlusion dictionary compressible, and a Gabor occlusion dictionary computing algorithm is then presented. The number of atoms is significantly reduced in the computed Gabor occlusion dictionary, which greatly reduces the computational cost of coding the occluded face images while greatly improving the SRC accuracy. Experiments on representative face databases with variations of lighting, expression, pose and occlusion demonstrate the effectiveness of the proposed Gabor-feature based SRC (GSRC) scheme.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A comprehensive review of five representative ℓ1-minimization methods, i.e., gradient projection, homotopy, iterative shrinkage-thresholding, proximal gradient, and augmented Lagrange multiplier, for face recognition is provided.
Abstract: We provide a comprehensive review of five representative ℓ1-minimization methods, i.e., gradient projection, homotopy, iterative shrinkage-thresholding, proximal gradient, and augmented Lagrange multiplier. The repository is intended to fill in a gap in the existing literature to systematically benchmark the performance of these algorithms using a consistent experimental setting. The experiments focus on the application of face recognition, where a sparse representation framework has recently been developed to recover human identities from facial images that may be affected by illumination change, occlusion, and facial disguise. The paper also provides useful guidelines to practitioners working in similar fields.

Journal ArticleDOI
TL;DR: The achievable trade-offs between predictive accuracy and computational requirements are compared, and it is shown that these are typically superior to existing state-of-the-art sparse approximations.
Abstract: We present a new sparse Gaussian Process (GP) model for regression. The key novel idea is to sparsify the spectral representation of the GP. This leads to a simple, practical algorithm for regression tasks. We compare the achievable trade-offs between predictive accuracy and computational requirements, and show that these are typically superior to existing state-of-the-art sparse approximations. We discuss both the weight space and function space representations, and note that the new construction implies priors over functions which are always stationary, and can approximate any covariance function in this class.
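A stripped-down sketch of the construction: sample spectral points for an RBF-like spectrum, build trigonometric features, and do Bayesian linear regression on them; the paper additionally learns the spectral points and hyperparameters by marginal likelihood:

```python
# Sparse-spectrum GP sketch (RBF-like spectrum assumed, hypers fixed).
import numpy as np

rng = np.random.default_rng(0)
m, lengthscale, sf2, sn2 = 50, 1.0, 1.0, 0.01   # spectral points, hypers

X = rng.uniform(-3, 3, size=(40, 1))            # training inputs
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(40)

S = rng.standard_normal((m, X.shape[1])) / lengthscale   # spectral samples

def features(X):
    proj = X @ S.T
    return np.hstack([np.cos(proj), np.sin(proj)]) * np.sqrt(sf2 / m)

Phi = features(X)
A = Phi.T @ Phi + sn2 * np.eye(2 * m)           # posterior precision (scaled)
w = np.linalg.solve(A, Phi.T @ y)               # posterior mean weights

X_test = np.linspace(-3, 3, 5)[:, None]
print(np.round(features(X_test) @ w, 2))        # approximate GP predictive mean
```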

Journal ArticleDOI
TL;DR: In the context of canonical sparse estimation problems, uniform superiority of this method over the minimum l1 solution is proved: 1) it can never do worse when implemented with reweighted l1, and 2) for any dictionary and sparsity profile, there will always exist cases where it does better.
Abstract: A variety of practical methods have recently been introduced for finding maximally sparse representations from overcomplete dictionaries, a central computational task in compressive sensing applications as well as numerous others. Many of the underlying algorithms rely on iterative reweighting schemes that produce more focal estimates as optimization progresses. Two such variants are iterative reweighted l1 and l2 minimization; however, some properties related to convergence and sparse estimation, as well as possible generalizations, are still not clearly understood or fully exploited. In this paper, we make the distinction between separable and non-separable iterative reweighting algorithms. The vast majority of existing methods are separable, meaning the weighting of a given coefficient at each iteration is only a function of that individual coefficient from the previous iteration (as opposed to dependency on all coefficients). We examine two such separable reweighting schemes: an l2 method from Chartrand and Yin (2008) and an l1 approach from Candès (2008), elaborating on convergence results and explicit connections between them. We then explore an interesting non-separable alternative that can be implemented via either l2 or l1 reweighting and maintains several desirable properties relevant to sparse recovery despite a highly non-convex underlying cost function. For example, in the context of canonical sparse estimation problems, we prove uniform superiority of this method over the minimum l1 solution in that, 1) it can never do worse when implemented with reweighted l1, and 2) for any dictionary and sparsity profile, there will always exist cases where it does better. These results challenge the prevailing reliance on strictly convex (and separable) penalty functions for finding sparse solutions. We then derive a new non-separable variant with similar properties that exhibits further performance improvements in empirical tests. Finally, we address natural extensions to group sparsity problems and non-negative sparse coding.
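A sketch of the separable reweighted ℓ1 scheme discussed (Candès-style weights 1/(|x_i| + ε)); each weighted lasso is reduced to a plain lasso by column rescaling and solved with a basic ISTA loop:

```python
# Separable iterative reweighted l1 via column rescaling + ISTA.
import numpy as np

def ista(D, y, lam, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - (D.T @ (D @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
    return x

def reweighted_l1(D, y, lam=0.01, eps=0.1, n_rounds=4):
    w = np.ones(D.shape[1])
    for _ in range(n_rounds):
        x = ista(D / w, y, lam) / w            # weighted lasso via rescaling
        w = 1.0 / (np.abs(x) + eps)            # sharper weights near zero
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((25, 80)); D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(80); x0[[7, 33, 60]] = [1.0, -1.0, 0.5]
# approximately recovers the true support and amplitudes
print(np.round(reweighted_l1(D, D @ x0)[[7, 33, 60]], 2))
```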

Proceedings Article
21 Jun 2010
TL;DR: This work considers a tree-structured sparse regularization to learn dictionaries embedded in a hierarchy, thus providing a competitive alternative to probabilistic topic models.
Abstract: We propose to combine two approaches for modeling data admitting sparse representations: on the one hand, dictionary learning has proven effective for various signal processing tasks. On the other hand, recent work on structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a tree-structured sparse regularization to learn dictionaries embedded in a hierarchy. The involved proximal operator is computable exactly via a primal-dual method, allowing the use of accelerated gradient techniques. Experiments show that for natural image patches, learned dictionary elements organize themselves in such a hierarchical structure, leading to an improved performance for restoration tasks. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
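For tree-structured ℓ2 groups (every child group contained in its parent), the proximal operator can be computed exactly by composing block soft-thresholdings from the leaves to the root, a known closed-form route; the paper reaches the same operator via a primal-dual method. A toy sketch:

```python
# Exact prox of sum_g w_g ||x_g||_2 for nested (tree) groups,
# applied leaves-to-root.
import numpy as np

def block_soft(x, idx, t):
    nrm = np.linalg.norm(x[idx])
    x[idx] *= max(0.0, 1.0 - t / nrm) if nrm > 0 else 0.0

def tree_prox(v, groups, lam):
    """groups: list of (index_array, weight), ordered leaves first."""
    x = v.copy()
    for idx, w in groups:
        block_soft(x, idx, lam * w)
    return x

# toy hierarchy over 4 coefficients: two leaf groups nested in one root
groups = [(np.array([0, 1]), 1.0),
          (np.array([2, 3]), 1.0),
          (np.array([0, 1, 2, 3]), 1.0)]       # root last (leaves-to-root)
print(tree_prox(np.array([3.0, 0.1, 0.2, 0.1]), groups, lam=0.5))
```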

Journal ArticleDOI
TL;DR: The recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation, is presented; a forgetting factor can be introduced and easily implemented in the algorithm.
Abstract: We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is a continuous update of the dictionary as each training vector is being processed. The core of the algorithm is compact and can be effectively implemented. The algorithm is derived very much along the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor λ can be introduced and easily implemented in the algorithm. Adjusting λ in an appropriate way makes the algorithm less dependent on the initial dictionary and improves both the convergence properties of RLS-DLA and the representation ability of the resulting dictionary. Two sets of experiments are done to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup, and that of the second is the reconstruction of a true underlying dictionary. The first experiment confirms the conjectural properties from the derivation part, while the second demonstrates excellent performance.

Journal ArticleDOI
TL;DR: A review on the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, and its basic theory and discrete algorithm is presented.
Abstract: Multiresolution methods are deeply related to image processing, biological and computer vision, and scientific computing. The curvelet transform is a multiscale directional transform that allows an almost optimal nonadaptive sparse representation of objects with edges. It has generated increasing interest in the community of applied mathematics and signal processing over the years. In this article, we present a review on the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, its basic theory and discrete algorithm. Further, we consider recent applications in image/video processing, seismic exploration, fluid mechanics, simulation of partial differential equations, and compressed sensing.

Journal ArticleDOI
10 May 2010
TL;DR: In this paper, the authors survey algorithms for sparse recovery problems that are based on sparse random matrices and discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
Abstract: In this paper, we survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
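A sketch of such a matrix: every coordinate hits d random rows with ±1 entries (a count-sketch-like construction, used here as a stand-in for the family surveyed), so measuring costs O(d) per coordinate and single-coordinate updates to the signal are incremental:

```python
# Sparse random measurement matrix with d nonzeros per column.
import numpy as np
from scipy import sparse

def sparse_measurement_matrix(m, n, d, seed=0):
    rng = np.random.default_rng(seed)
    rows = np.vstack([rng.choice(m, size=d, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), d)
    vals = rng.choice([-1.0, 1.0], size=n * d)
    return sparse.csc_matrix((vals, (rows.ravel(), cols)), shape=(m, n))

Phi = sparse_measurement_matrix(m=60, n=1000, d=8)
x = np.zeros(1000); x[[10, 500]] = [3.0, -1.0]
y = Phi @ x                        # cheap: touches only 2*d entries
# incremental update when one coordinate of x changes:
x[10] += 1.0
y = y + Phi[:, 10].toarray().ravel() * 1.0
print(np.allclose(y, Phi @ x))     # True
```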

Book ChapterDOI
05 Sep 2010
TL;DR: KSR is essentially the sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function; it outperforms sparse coding and EMK, and achieves state-of-the-art performance for image classification and face recognition on publicly available datasets.
Abstract: Recent research has shown the effectiveness of using sparse coding (SC) to solve many computer vision problems. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which may reduce the feature quantization error and boost the sparse coding performance, we propose Kernel Sparse Representation (KSR). KSR is essentially the sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to both image classification and face recognition. By incorporating KSR into Spatial Pyramid Matching (SPM), we propose KSRSPM for image classification. KSRSPM can further reduce the information loss in the feature quantization step compared with Spatial Pyramid Matching using Sparse Coding (ScSPM). KSRSPM can be regarded both as a generalization of the Efficient Match Kernel (EMK) and as an extension of ScSPM. Compared with sparse coding, KSR can learn more discriminative sparse codes for face recognition. Extensive experimental results show that KSR outperforms sparse coding and EMK, and achieves state-of-the-art performance for image classification and face recognition on publicly available datasets.
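A sketch of the kernel-space coding step: the feature-space data-fit term expands into kernel evaluations only, so a plain ISTA loop needs just K(D, D) and K(D, y). The RBF kernel and the ISTA solver are assumptions here; the paper's solver may differ:

```python
# Kernel sparse coding via ISTA on the kernel-space quadratic.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_sparse_code(D, y, lam=0.05, n_iter=300):
    K = rbf(D, D)                       # K(D, D), n_atoms x n_atoms
    k = rbf(D, y[None]).ravel()         # K(D, y)
    L = np.linalg.norm(K, 2)            # Lipschitz constant of the gradient
    a = np.zeros(len(D))
    for _ in range(n_iter):
        g = a - (K @ a - k) / L         # gradient step on the kernel quadratic
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 16))       # note: 50 atoms stored as rows here
a = kernel_sparse_code(D, rng.standard_normal(16))
print(np.count_nonzero(a), "active atoms")
```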

Journal ArticleDOI
TL;DR: Under a very mild condition on the sparsity and on the dictionary characteristics, it is shown that the probability of recovery failure decays exponentially in the number of channels, demonstrating that most of the time, multichannel sparse recovery is indeed superior to single channel methods.
Abstract: This paper considers recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. Here, the main focus is on analysis of convex relaxation techniques. In particular, the mixed l2,1-norm approach to multichannel recovery is investigated. Under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, it is shown that the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. The probability bounds are valid and meaningful even for a small number of signals. Using the tools developed to analyze the convex relaxation technique, also previous bounds for thresholding and SOMP recovery are tightened.
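A sketch of SOMP, one of the analyzed joint-recovery baselines: at each step the atom with the largest aggregate correlation across channels is selected, and all channels are refit on the shared support:

```python
# Simultaneous OMP (SOMP) for jointly sparse channels.
import numpy as np

def somp(D, Y, k):
    """Y has one column per channel; returns the shared support and codes."""
    R, support = Y.copy(), []
    for _ in range(k):
        scores = np.linalg.norm(D.T @ R, axis=1)     # aggregate over channels
        support.append(int(np.argmax(scores)))
        X_s, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ X_s                  # joint residual
    return support, X_s

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100)); D /= np.linalg.norm(D, axis=0)
X0 = np.zeros((100, 5))                              # 5 channels
X0[[4, 50], :] = rng.standard_normal((2, 5))         # shared 2-sparse support
support, _ = somp(D, D @ X0, 2)
print(sorted(support))                               # [4, 50]
```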