
Showing papers on "Sparse approximation published in 2014"


Journal ArticleDOI
TL;DR: The proposed group-based sparse representation (GSR) is able to sparsely represent natural images in the domain of group, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework.
Abstract: Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity for dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationship among patches, resulting in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of the group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the group domain, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning a dictionary from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to solve the proposed GSR-driven ℓ0 minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring, and image compressive sensing recovery show that the proposed GSR modeling outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception.

597 citations
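A minimal sketch of the grouping and self-adaptive dictionary idea described in the abstract above, assuming a grayscale image stored as a NumPy array: nonlocal patches similar to a reference patch are stacked into a group, and the group's own SVD basis serves as its dictionary, with hard thresholding of the singular values standing in for ℓ0 sparse coding (the split Bregman restoration machinery of the paper is omitted). Function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def extract_patches(img, psize=8, stride=4):
    """Collect all overlapping patches (flattened) and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - psize + 1, stride):
        for j in range(0, W - psize + 1, stride):
            patches.append(img[i:i + psize, j:j + psize].ravel())
            coords.append((i, j))
    return np.asarray(patches), coords

def group_sparse_restore_patch(patches, ref_idx, k=32, tau=10.0):
    """Form a group of the k patches most similar to the reference patch, then
    sparse-code the group over its own SVD basis (a self-adaptive dictionary)
    by hard-thresholding the singular values (an l0-style proxy)."""
    ref = patches[ref_idx]
    dists = np.sum((patches - ref) ** 2, axis=1)          # block matching
    members = np.argsort(dists)[:k]                       # nonlocal similar patches
    group = patches[members].T                            # columns are patches
    U, s, Vt = np.linalg.svd(group, full_matrices=False)  # group-adaptive dictionary
    s_sparse = np.where(s > tau, s, 0.0)                  # hard threshold
    return U @ np.diag(s_sparse) @ Vt, members

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(128, 20, size=(64, 64))              # stand-in for a noisy image
    patches, coords = extract_patches(img)
    restored_group, members = group_sparse_restore_patch(patches, ref_idx=0)
    print(restored_group.shape, len(members))
```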


Proceedings Article
27 Jul 2014
TL;DR: This paper proposes a novel Markov chain method for Robust Multi-view Spectral Clustering (RMSC), which has a flavor of low-rank and sparse decomposition, and shows superior performance over several state-of-the-art methods for multi-view clustering.
Abstract: Multi-view clustering, which seeks a partition of the data in multiple views that often provide complementary information to each other, has received considerable attention in recent years. In real-life clustering problems, the data in each view may have considerable noise. However, existing clustering methods blindly combine the information from multi-view data with possibly considerable noise, which often degrades their performance. In this paper, we propose a novel Markov chain method for Robust Multi-view Spectral Clustering (RMSC). Our method has a flavor of low-rank and sparse decomposition, where we first construct a transition probability matrix from each single view, and then use these matrices to recover a shared low-rank transition probability matrix as a crucial input to the standard Markov chain method for clustering. The optimization problem of RMSC has a low-rank constraint on the transition probability matrix, and simultaneously a probabilistic simplex constraint on each of its rows. To solve this challenging optimization problem, we propose an optimization procedure based on the Augmented Lagrangian Multiplier scheme. Experimental results on various real-world datasets show that the proposed method has superior performance over several state-of-the-art methods for multi-view clustering.

576 citations
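The first step of RMSC, building a row-stochastic transition probability matrix per view, is easy to sketch; the low-rank plus sparse recovery via the augmented Lagrangian scheme is the paper's core contribution and is not reproduced here, so the sketch below simply averages the per-view matrices as a placeholder before spectral clustering. All names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import SpectralClustering

def transition_matrix(X, sigma=1.0):
    """Row-stochastic transition probabilities from a Gaussian affinity (one view)."""
    A = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    return A / A.sum(axis=1, keepdims=True)

def rmsc_like_clustering(views, n_clusters):
    """Combine per-view transition matrices and cluster.
    NOTE: the paper recovers a shared low-rank transition matrix via an
    augmented-Lagrangian low-rank + sparse decomposition; the plain average
    used here is only a simplified placeholder for that step."""
    P = sum(transition_matrix(X) for X in views) / len(views)
    affinity = (P + P.T) / 2                      # symmetrize for spectral clustering
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(affinity)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(5, 1, (30, 5))])
    views = [base + rng.normal(0, 0.3, base.shape) for _ in range(3)]  # noisy views
    print(rmsc_like_clustering(views, n_clusters=2))
```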


Journal ArticleDOI
TL;DR: The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.
Abstract: The employed dictionary plays an important role in sparse representation or sparse coding based image reconstruction and classification, while learning dictionaries from the training data has led to state-of-the-art results in image classification tasks. However, many dictionary learning models exploit only the discriminative information in either the representation coefficients or the representation residual, which limits their performance. In this paper we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms have correspondences to the subject class labels, is learned, with which not only the representation residual can be used to distinguish different classes, but also the representation coefficients have small within-class scatter and big between-class scatter. The classification scheme associated with the proposed Fisher discrimination dictionary learning (FDDL) model is consequently presented by exploiting the discriminative information in both the representation residual and the representation coefficients. The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.

474 citations


Journal ArticleDOI
TL;DR: This work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection, and investigates how these methods are solved and whether incremental algorithms and real-time implementations can be achieved.

453 citations


Posted Content
TL;DR: In this article, the Least Squares Regression (LSR) method was proposed for subspace segmentation, which takes advantage of data correlation and encourages a grouping effect which tends to group highly correlated data together.
Abstract: This paper studies the subspace segmentation problem, which aims to segment data drawn from a union of multiple linear subspaces. Recent works using sparse representation, low-rank representation, and their extensions have attracted much attention. If the subspaces from which the data are drawn are independent or orthogonal, these methods are able to obtain a block-diagonal affinity matrix, which usually leads to a correct segmentation. The main differences among them are their objective functions. We theoretically show that if the objective function satisfies certain conditions and the data are sufficiently drawn from independent subspaces, the obtained affinity matrix is always block diagonal. Furthermore, the data sampling can be insufficient if the subspaces are orthogonal. Several existing methods are special cases of this result. We then present the Least Squares Regression (LSR) method for subspace segmentation. It takes advantage of data correlation, which is common in real data. LSR encourages a grouping effect that tends to group highly correlated data together. Experimental results on the Hopkins 155 database and the Extended Yale Database B show that our method significantly outperforms state-of-the-art methods. Beyond segmentation accuracy, all experiments demonstrate that LSR is much more efficient.

428 citations
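LSR admits a closed-form solution, which is what makes it efficient: with data points as columns of X, the coefficient matrix is Z = (XᵀX + λI)⁻¹XᵀX, and a symmetrized |Z| feeds spectral clustering. A small sketch under these assumptions (names are illustrative):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def lsr_affinity(X, lam=1e-2):
    """Least Squares Regression coefficients in closed form:
    Z = (X^T X + lam*I)^{-1} X^T X, where columns of X are data points.
    The symmetrized |Z| serves as the affinity for spectral clustering."""
    n = X.shape[1]
    Z = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    return (np.abs(Z) + np.abs(Z.T)) / 2

def lsr_segmentation(X, n_subspaces, lam=1e-2):
    W = lsr_affinity(X, lam)
    model = SpectralClustering(n_clusters=n_subspaces, affinity="precomputed")
    return model.fit_predict(W)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two 2-dimensional subspaces of R^10, 40 points each (columns of X).
    U1, U2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
    X = np.hstack([U1 @ rng.normal(size=(2, 40)), U2 @ rng.normal(size=(2, 40))])
    print(lsr_segmentation(X, n_subspaces=2))
```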


Journal ArticleDOI
TL;DR: The proposed aerial scene classification method can be highly effective in developing a detection system that can be used to automatically scan large-scale high-resolution satellite imagery for detecting large facilities such as a shopping mall.
Abstract: The rich data provided by high-resolution satellite imagery allow us to directly model aerial scenes by understanding their spatial and structural patterns. While pixel- and object-based classification approaches are widely used for satellite image analysis, these approaches often exploit the high-fidelity image data in a limited way. In this paper, we explore an unsupervised feature learning approach for scene classification. Dense low-level feature descriptors are extracted to characterize the local spatial patterns. These unlabeled feature measurements are exploited in a novel way to learn a set of basis functions. The low-level feature descriptors are encoded in terms of the basis functions to generate a new sparse representation for the feature descriptors. We show that the statistics generated from the sparse features characterize the scene well, producing excellent classification accuracy. We apply our technique to several challenging aerial scene data sets: the ORNL-I data set, consisting of 1-m spatial resolution satellite imagery with diverse sensor and scene characteristics representing five land-use categories; the UCMERCED data set, representing twenty-one different aerial scene categories at sub-meter resolution; and the ORNL-II data set, for large-facility scene detection. Our results are highly promising, and on the UCMERCED data set we outperform the previous best results. We demonstrate that the proposed aerial scene classification method can be highly effective in developing a detection system that can be used to automatically scan large-scale high-resolution satellite imagery for detecting large facilities such as a shopping mall.

415 citations
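A hedged sketch of the pipeline described above, using raw image patches as a stand-in for the paper's dense low-level descriptors and scikit-learn's dictionary learner for the basis functions; pooling the absolute sparse activations gives per-scene statistics fed to a linear classifier. This illustrates the approach, not the authors' implementation, and all names are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def dense_patches(img, psize=8, stride=8):
    """Raw patches as a stand-in for the paper's dense low-level descriptors."""
    H, W = img.shape
    return np.array([img[i:i + psize, j:j + psize].ravel()
                     for i in range(0, H - psize + 1, stride)
                     for j in range(0, W - psize + 1, stride)])

def scene_features(images, coder):
    """Encode each image's descriptors over the learned basis and pool statistics
    (mean absolute activation per atom) into one feature vector per scene."""
    return np.array([np.abs(coder.transform(dense_patches(im))).mean(axis=0)
                     for im in images])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.random((64, 64)) for _ in range(20)]      # toy "scenes"
    labels = np.array([0] * 10 + [1] * 10)
    descriptors = np.vstack([dense_patches(im) for im in train])
    coder = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                        transform_algorithm="omp",
                                        transform_n_nonzero_coefs=5).fit(descriptors)
    clf = LinearSVC().fit(scene_features(train, coder), labels)
    print(clf.predict(scene_features(train[:3], coder)))
```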


Posted Content
TL;DR: Zhang et al. as discussed by the authors exploited the concept of group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and established a novel sparse representation modeling of natural images, called group-based sparse representation (GSR).
Abstract: Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity for dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationship among patches, resulting in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of the group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the group domain, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. Moreover, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning a dictionary from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to solve the proposed GSR-driven ℓ0 minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring, and image compressive sensing recovery show that the proposed GSR modeling outperforms many current state-of-the-art schemes in both PSNR and visual perception.

389 citations


Posted Content
TL;DR: This paper presents a variational-based approach for fusing hyperspectral and multispectral images and demonstrates the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.
Abstract: This paper presents a variational-based approach for fusing hyperspectral and multispectral images. The fusion process is formulated as an inverse problem whose solution is the target image, assumed to live in a much lower dimensional subspace. A sparse regularization term is carefully designed, relying on a decomposition of the scene on a set of dictionaries. The dictionary atoms and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved via alternating optimization with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.

384 citations


Journal ArticleDOI
TL;DR: This work proposes a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude which it refers to as GESPAR: GrEedy Sparse PhAse Retrieval, which does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large scale problems such as images.
Abstract: We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings.

337 citations


Journal ArticleDOI
TL;DR: This paper designs a patch-based nonlocal operator (PANO) to sparsify magnetic resonance images by making use of the similarity of image patches to achieve lower reconstruction error and higher visual quality than conventional CS-MRI methods.

329 citations


Journal ArticleDOI
TL;DR: This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery.
Abstract: This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool, which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while yielding sharp results. It is shown that for any given constant t ≥ 4/3, in compressed sensing the condition δ_tk(A) < √((t−1)/t) guarantees the exact recovery of all k-sparse signals in the noiseless case, whereas for any ε > 0, δ_tk(A) < √((t−1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. Similar results also hold for matrix recovery. In addition, the conditions δ_tk(A) < √((t−1)/t) and δ_tr(M) < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.

Book
19 Dec 2014
TL;DR: The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing, focusing on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection - that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.

Journal ArticleDOI
TL;DR: Considering that regions of different scales incorporate complementary yet correlated information for classification, a multiscale adaptive sparse representation (MASR) model is proposed; experiments demonstrate its qualitative and quantitative superiority when compared to several well-known classifiers.
Abstract: Sparse representation has been demonstrated to be a powerful tool in classification of hyperspectral images (HSIs). The spatial context of an HSI can be exploited by first defining a local region for each test pixel and then jointly representing pixels within each region by a set of common training atoms (samples). However, the selection of the optimal region scale (size) for different HSIs with different types of structures is a nontrivial task. In this paper, considering that regions of different scales incorporate the complementary yet correlated information for classification, a multiscale adaptive sparse representation (MASR) model is proposed. The MASR effectively exploits spatial information at multiple scales via an adaptive sparse strategy. The adaptive sparse strategy not only restricts pixels from different scales to be represented by training atoms from a particular class but also allows the selected atoms for these pixels to be varied, thus providing an improved representation. Experiments on several real HSI data sets demonstrate the qualitative and quantitative superiority of the proposed MASR algorithm when compared to several well-known classifiers.

Journal ArticleDOI
TL;DR: A multimodal sparse representation method that represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations, thereby simultaneously taking into account correlations as well as coupling information among biometric modalities.
Abstract: Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

Journal ArticleDOI
TL;DR: The simultaneous orthogonal matching pursuit technique is used to solve the nonlocal weighted joint sparsity model (NLW-JSM) and the proposed classification algorithm performs better than the other sparsity-based algorithms and the classical support vector machine hyperspectral classifier.
Abstract: As a powerful and promising statistical signal modeling technique, sparse representation has been widely used in various image processing and analysis fields. For hyperspectral image classification, previous studies have shown the effectiveness of the sparsity-based classification methods. In this paper, we propose a nonlocal weighted joint sparse representation classification (NLW-JSRC) method to improve the hyperspectral image classification result. In the joint sparsity model (JSM), different weights are utilized for different neighboring pixels around the central test pixel. The weight of one specific neighboring pixel is determined by the structural similarity between the neighboring pixel and the central test pixel, which is referred to as a nonlocal weighting scheme. In this paper, the simultaneous orthogonal matching pursuit technique is used to solve the nonlocal weighted joint sparsity model (NLW-JSM). The proposed classification algorithm was tested on three hyperspectral images. The experimental results suggest that the proposed algorithm performs better than the other sparsity-based algorithms and the classical support vector machine hyperspectral classifier.
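The joint sparsity model above is solved with simultaneous orthogonal matching pursuit (SOMP); a minimal unweighted SOMP is sketched below (the nonlocal weighting of neighboring pixels that gives NLW-JSRC its name is omitted). All names are illustrative.

```python
import numpy as np

def somp(D, Y, n_nonzero):
    """Simultaneous Orthogonal Matching Pursuit: all columns of Y (pixels in a
    neighborhood) are represented over dictionary D with a shared support.
    Returns the coefficient matrix C with at most n_nonzero active rows."""
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    residual = Y.copy()
    support = []
    for _ in range(n_nonzero):
        # Atom with the largest total correlation across all signals.
        scores = np.sum(np.abs(D.T @ residual), axis=1)
        scores[support] = -np.inf                       # do not reselect atoms
        support.append(int(np.argmax(scores)))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, Y, rcond=None)  # joint least-squares fit
        residual = Y - sub @ coef
    C = np.zeros((D.shape[1], Y.shape[1]))
    C[support, :] = coef
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(50, 200))                      # training atoms
    Y = rng.normal(size=(50, 9))                        # a 3x3 pixel neighborhood
    C = somp(D, Y, n_nonzero=5)
    print(np.count_nonzero(np.any(C != 0, axis=1)), "shared atoms selected")
```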

Journal ArticleDOI
TL;DR: Under compatibility conditions on the design matrix, the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector, and to give optimal prediction of the response vector.
Abstract: We study full Bayesian procedures for high-dimensional linear regression under sparsity constraints. The prior is a mixture of point masses at zero and continuous distributions. Under compatibility conditions on the design matrix, the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector, and to give optimal prediction of the response vector. It is also shown to select the correct sparse model, or at least the coefficients that are significantly different from zero. The asymptotic shape of the posterior distribution is characterized and employed in the construction and study of credible sets for uncertainty quantification.

Journal ArticleDOI
TL;DR: A half-quadratic (HQ) framework to solve the robust sparse representation problem is developed, and it is shown that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation.
Abstract: Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to performing both error correction and error detection. More specifically, by using the additive form of HQ, we propose an ℓ1-regularized error correction method that iteratively recovers corrupted data from errors incurred by noise and outliers; by using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method that learns from uncorrupted data iteratively. We also show that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
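The soft-thresholding operator mentioned above, together with a simplified alternating scheme for ℓ1-regularized error correction (block coordinate descent on a least-squares data term plus an ℓ1 penalty on the error), is sketched below. This is a toy stand-in under stated assumptions, not the paper's half-quadratic algorithm; names are illustrative.

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def l1_error_correction(D, y, lam=0.1, n_iter=100):
    """Alternately fit coefficients x by least squares and a sparse error e by
    soft-thresholding, i.e. block coordinate descent on
        0.5*||y - D x - e||^2 + lam*||e||_1 .
    This mirrors the idea of recovering data corrupted by outliers, but is a
    simplified sketch rather than the paper's additive-HQ algorithm."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(D, y - e, rcond=None)   # update representation
        e = soft_threshold(y - D @ x, lam)              # update sparse error
    return x, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(100, 20))
    x_true = rng.normal(size=20)
    y = D @ x_true
    y[::10] += 20.0                                     # gross corruption (outliers)
    x_hat, e_hat = l1_error_correction(D, y, lam=1.0)
    print("coefficient error:", np.linalg.norm(x_hat - x_true))
```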

Journal ArticleDOI
TL;DR: This paper presents a method that derives a discrete tight frame system from the input image itself to provide a better sparse approximation to the input image and thus perform better in image denoising.

Book ChapterDOI
06 Sep 2014
TL;DR: Comparison of the proposed approach with the state-of-the-art methods on both ground-based and remotely-sensed public hyperspectral image databases shows that the presented method achieves the lowest error rate on all test images in the three datasets.
Abstract: Existing hyperspectral imaging systems produce low spatial resolution images due to hardware constraints. We propose a sparse representation based approach for hyperspectral image super-resolution. The proposed approach first extracts distinct reflectance spectra of the scene from the available hyperspectral image. Then, the signal sparsity, non-negativity and the spatial structure in the scene are exploited to explain a high-spatial but low-spectral resolution image of the same scene in terms of the extracted spectra. This is done by learning a sparse code with an algorithm G-SOMP+. Finally, the learned sparse code is used with the extracted scene spectra to estimate the super-resolution hyperspectral image. Comparison of the proposed approach with the state-of-the-art methods on both ground-based and remotely-sensed public hyperspectral image databases shows that the presented method achieves the lowest error rate on all test images in the three datasets.

Proceedings ArticleDOI
23 Jun 2014
TL;DR: Two coupled dictionaries that relate to the gallery and probe cameras are jointly learned in the training phase from both labeled and unlabeled images, and experimental results on publicly available datasets demonstrate the superiority of this method.
Abstract: The desirability of being able to search for specific persons in surveillance videos captured by different cameras has increasingly motivated interest in the problem of person re-identification, which is a critical yet under-addressed challenge in multi-camera tracking systems. The main difficulty of person re-identification arises from the variations in human appearances from different camera views. In this paper, to bridge the human appearance variations across cameras, two coupled dictionaries that relate to the gallery and probe cameras are jointly learned in the training phase from both labeled and unlabeled images. The labeled training images carry the relationship between features from different cameras, and the abundant unlabeled training images are introduced to exploit the geometry of the marginal distribution for obtaining robust sparse representation. In the testing phase, the feature of each target image from the probe camera is first encoded by the sparse representation and then recovered in the feature space spanned by the images from the gallery camera. The features of the same person from different cameras are similar following the above transformation. Experimental results on publicly available datasets demonstrate the superiority of our method.

Proceedings ArticleDOI
28 Jan 2014
TL;DR: This paper extends sparse subspace clustering (SSC) to non-linear manifolds by using the kernel trick, and shows that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations.
Abstract: Subspace clustering refers to the problem of grouping data points that lie in a union of low-dimensional subspaces. One successful approach for solving this problem is sparse subspace clustering (SSC), which is based on a sparse representation of the data. In this paper, we extend SSC to non-linear manifolds by using the kernel trick. We show that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations. Various experiments on synthetic as well as real datasets show that non-linear mappings lead to sparse representations that give better clustering results than state-of-the-art methods.

Journal ArticleDOI
TL;DR: A statistical analysis of the properties of LcR is given together with experimental results on some public face databases and surveillance images to show the superiority of the proposed scheme over state-of-the-art face hallucination approaches.
Abstract: Recently, position-patch based approaches have been proposed to replace the probabilistic graph-based or manifold learning-based models for face hallucination. In order to obtain the optimal weights for face hallucination, these approaches represent one image patch through other patches at the same position in the training faces by employing least squares estimation or sparse coding. However, they cannot provide unbiased approximations or satisfy rational priors, and thus the obtained representation is not satisfactory. In this paper, we propose a simpler yet more effective scheme called Locality-constrained Representation (LcR). Compared with Least Square Representation (LSR) and Sparse Representation (SR), our scheme incorporates a locality constraint into the least squares inversion problem to maintain locality and sparsity simultaneously. Our scheme is capable of capturing the non-linear manifold structure of image patch samples while exploiting the sparse property of the redundant data representation. Moreover, when the locality constraint is satisfied, face hallucination is robust to noise, a property that is desirable for video surveillance applications. A statistical analysis of the properties of LcR is given together with experimental results on some public face databases and surveillance images to show the superiority of our proposed scheme over state-of-the-art face hallucination approaches.
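LcR's locality constraint amounts to a distance-weighted ridge penalty on the least-squares patch representation, which has a closed form. A sketch under that reading (normalization constraints used in some position-patch formulations are omitted, and all names are illustrative):

```python
import numpy as np

def locality_constrained_representation(Y, y, lam=1e-3):
    """Represent a test patch y over same-position training patches (columns of Y)
    with a locality penalty: weights for training patches far from y are shrunk.
        w = argmin ||y - Y w||^2 + lam * ||d * w||^2,  d_i = ||y - Y[:, i]||
    Closed form: w = (Y^T Y + lam * diag(d^2))^{-1} Y^T y."""
    d = np.linalg.norm(Y - y[:, None], axis=0)         # locality adaptor
    return np.linalg.solve(Y.T @ Y + lam * np.diag(d ** 2), Y.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y_low = rng.random((64, 30))                       # low-res training patches
    Y_high = rng.random((256, 30))                     # co-occurring high-res patches
    y_low = Y_low @ rng.dirichlet(np.ones(30))         # a test low-res patch
    w = locality_constrained_representation(Y_low, y_low)
    y_high = Y_high @ w                                # hallucinated high-res patch
    print(y_high.shape, np.round(w[:5], 3))
```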

Journal ArticleDOI
TL;DR: A novel self-learning-based image decomposition framework, which is shown to outperform state-of-the-art image denoising algorithms and to automatically determine the undesirable patterns in the derived image components directly from the input image, so that the task of single-image denoising can be addressed.
Abstract: Decomposition of an image into multiple semantic components has been an active research topic for various image processing applications such as image denoising, enhancement, and inpainting. In this paper, we present a novel self-learning-based image decomposition framework. Based on the recent success of sparse representation, the proposed framework first learns an over-complete dictionary from the high spatial frequency parts of the input image for reconstruction purposes. We perform unsupervised clustering on the observed dictionary atoms (and their corresponding reconstructed image versions) via affinity propagation, which allows us to identify image-dependent components with similar context information. When applying the proposed method to image denoising, we are able to automatically determine the undesirable patterns (e.g., rain streaks or Gaussian noise) in the derived image components directly from the input image, so that the task of single-image denoising can be addressed. Different from prior image processing works with sparse representation, our method does not need to collect training image data in advance, nor do we assume image priors such as the relationship between input and output image dictionaries. We conduct experiments on two denoising problems: single-image denoising with Gaussian noise and rain removal. Our empirical results confirm the effectiveness and robustness of our approach, which is shown to outperform state-of-the-art image denoising algorithms.


Journal ArticleDOI
TL;DR: This work suggests SELL-C-σ, a variant of Sliced ELLPACK, as a SIMD-friendly data format which combines long-standing ideas from general-purpose graphics processing units and vector computer programming, and shows its suitability on a variety of hardware platforms.
Abstract: Sparse matrix-vector multiplication (spMVM) is the most time-consuming kernel in many numerical algorithms and has been studied extensively on all modern processor and accelerator architectures. However, the optimal sparse matrix data storage format is highly hardware-specific, which could become an obstacle when using heterogeneous systems. Also, it is as yet unclear how the wide single instruction multiple data (SIMD) units in current multi- and many-core processors should be used most efficiently if there is no structure in the sparsity pattern of the matrix. We suggest SELL-C-σ, a variant of Sliced ELLPACK, as a SIMD-friendly data format which combines long-standing ideas from general-purpose graphics processing units and vector computer programming. We discuss the advantages of SELL-C-σ compared to established formats like Compressed Row Storage and ELLPACK and show its suitability on a variety of hardware platforms (Intel Sandy Bridge, Intel Xeon Phi, and Nvidia Tesla K20) for a wide range of test matrices.
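A simplified construction of the SELL-C-σ layout from a SciPy CSR matrix, plus a reference sparse matrix-vector product over it: rows are sorted by length within windows of σ rows, grouped into chunks of C rows, and zero-padded chunk-wise. This NumPy sketch only illustrates the data layout; the paper's point is SIMD/GPU efficiency, which a Python illustration does not capture. Names are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def to_sell_c_sigma(A, C=4, sigma=16):
    """Convert a CSR matrix to a simplified SELL-C-sigma layout: rows are sorted by
    length within windows of `sigma` rows, grouped into chunks of `C` rows, and each
    chunk is zero-padded to its own maximum row length (chunk-wise ELLPACK)."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    lengths = np.diff(A.indptr)
    perm = np.concatenate([i + np.argsort(-lengths[i:i + sigma])
                           for i in range(0, n, sigma)])     # sigma-window sorting
    chunks = []
    for start in range(0, n, C):
        rows = perm[start:start + C]
        width = int(lengths[rows].max())
        vals = np.zeros((len(rows), width))
        cols = np.zeros((len(rows), width), dtype=int)
        for r, row in enumerate(rows):
            lo, hi = A.indptr[row], A.indptr[row + 1]
            vals[r, :hi - lo] = A.data[lo:hi]
            cols[r, :hi - lo] = A.indices[lo:hi]
        chunks.append((rows, vals, cols))
    return chunks

def sell_spmv(chunks, x, n):
    """y = A @ x over the chunked layout (padded entries contribute 0.0 * x[0])."""
    y = np.zeros(n)
    for rows, vals, cols in chunks:
        y[rows] = np.sum(vals * x[cols], axis=1)
    return y

if __name__ == "__main__":
    A = sp.random(32, 32, density=0.1, format="csr", random_state=0)
    x = np.arange(32, dtype=float)
    chunks = to_sell_c_sigma(A, C=4, sigma=16)
    print(np.allclose(sell_spmv(chunks, x, 32), A @ x))
```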

Journal ArticleDOI
TL;DR: This work modifies the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refers to the resulting algorithm as weighted ℓ1-minimization.
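A weighted ℓ1 solve in lasso form can be reduced to an ordinary lasso by rescaling columns, since the substitution c_i = z_i / w_i turns Σ w_i|c_i| into ||z||₁. The sketch below uses scikit-learn and illustrative decay-based weights; the paper works with a constrained formulation and weights derived from a priori PC coefficient decay, which differ in detail. All names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_l1_solve(A, b, weights, alpha=1e-3):
    """Weighted l1 regression: min_c 1/(2n)*||A c - b||^2 + alpha * sum_i w_i |c_i|.
    Substituting c_i = z_i / w_i turns it into an ordinary Lasso on rescaled
    columns A[:, i] / w_i (lasso form, not the constrained basis-pursuit form)."""
    A_scaled = A / weights[None, :]
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000).fit(A_scaled, b)
    return model.coef_ / weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples, n_coefs = 60, 200
    A = rng.normal(size=(n_samples, n_coefs))
    c_true = np.zeros(n_coefs)
    c_true[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]             # low-order terms dominate
    b = A @ c_true
    # A priori decay model: expect |c_i| to shrink with index i, so penalize
    # high-index coefficients more heavily (illustrative weights, not the paper's).
    weights = 1.0 + 0.2 * np.arange(n_coefs)
    c_hat = weighted_l1_solve(A, b, weights)
    print("recovery error:", np.linalg.norm(c_hat - c_true))
```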

Journal ArticleDOI
TL;DR: This paper proposes a framework called adaptive sparse representation-based classification (ASRC) in which sparsity and correlation are jointly considered and the representation model is adaptive to the correlation structure, benefiting from both the ℓ1-norm and the ℓ2-norm.
Abstract: Sparse representation (or coding)-based classification (SRC) has gained great success in face recognition in recent years. However, SRC emphasizes sparsity too much and overlooks the correlation information that has been demonstrated to be critical in real-world face recognition problems. In addition, some papers consider the correlation but overlook the discriminative ability of sparsity. Different from these existing techniques, in this paper we propose a framework called adaptive sparse representation-based classification (ASRC), in which sparsity and correlation are jointly considered. Specifically, when the samples are of low correlation, ASRC selects the most discriminative samples for representation, like SRC; when the training samples are highly correlated, ASRC selects most of the correlated and discriminative samples for representation, rather than choosing some related samples randomly. In general, the representation model is adaptive to the correlation structure and benefits from both the ℓ1-norm and the ℓ2-norm. Extensive experiments conducted on publicly available data sets verify the effectiveness and robustness of the proposed algorithm by comparing it with state-of-the-art methods.
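The classify-by-minimum-class-residual step common to SRC-style methods is easy to sketch; here an elastic-net coder (a fixed ℓ1/ℓ2 blend) stands in for ASRC's adaptive, correlation-aware regularizer, so the code shows the classification scaffolding rather than the paper's model. Names and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def src_classify(D, labels, y, alpha=1e-3, l1_ratio=0.5):
    """Sparse-representation-based classification: code the test sample y over all
    training samples (columns of D), then assign the class whose atoms give the
    smallest reconstruction residual. An elastic net (fixed l1/l2 mix) stands in
    for the paper's adaptive regularizer."""
    coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False,
                       max_iter=50000).fit(D, y)
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two classes of highly correlated training samples (columns).
    D = np.hstack([rng.normal(0, 1, (50, 1)) + 0.1 * rng.normal(size=(50, 20)),
                   rng.normal(3, 1, (50, 1)) + 0.1 * rng.normal(size=(50, 20))])
    labels = np.array([0] * 20 + [1] * 20)
    y = D[:, 25] + 0.05 * rng.normal(size=50)            # a noisy class-1 sample
    print("predicted class:", src_classify(D, labels, y))
```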

Journal ArticleDOI
TL;DR: This paper uses extended multiattribute profiles (EMAPs) to integrate the spatial and spectral information contained in the data, and exploits the inherent low-dimensional structure of the EMAPs to provide state-of-the-art classification results for different multi-/hyperspectral data sets.
Abstract: In recent years, sparse representations have been widely studied in the context of remote sensing image analysis. In this paper, we propose to exploit sparse representations of morphological attribute profiles for remotely sensed image classification. Specifically, we use extended multiattribute profiles (EMAPs) to integrate the spatial and spectral information contained in the data. EMAPs provide a multilevel characterization of an image created by the sequential application of morphological attribute filters that can be used to model different kinds of structural information. Although the EMAPs' feature vectors may have high dimensionality, they lie in class-dependent low-dimensional subspaces or submanifolds. In this paper, we use the sparse representation classification framework to exploit this characteristic of the EMAPs. In short, by gathering representative samples of the low-dimensional class-dependent structures, any given sample may be sparsely represented, and thus classified, with respect to the gathered samples. Our experiments reveal that the proposed approach exploits the inherent low-dimensional structure of the EMAPs to provide state-of-the-art classification results for different multi-/hyperspectral data sets.

Journal ArticleDOI
TL;DR: The proposed novel structured dictionary learning method achieves better results than existing sparse representation-based face recognition methods, especially in dealing with large contiguous regions of occlusion and severe illumination variation, while its computational cost is much lower.

Journal ArticleDOI
TL;DR: This paper proposes a novel patch-driven level set method for the segmentation of neonatal brain MR images by taking advantage of sparse representation techniques and builds a subject-specific atlas from a library of aligned, manually segmented images by using sparse representation in a patch-based fashion.