
Showing papers on "Sparse approximation published in 2016"


Journal ArticleDOI
TL;DR: A recently emerged signal decomposition model known as convolutional sparse representation (CSR) is introduced into image fusion, motivated by the observation that the CSR model can effectively overcome the limited detail preservation and high misregistration sensitivity of patch-based sparse coding.
Abstract: As a popular signal modeling technique, sparse representation (SR) has achieved great success in image fusion over the last few years, with a number of effective algorithms being proposed. However, due to the patch-based manner applied in sparse coding, most existing SR-based fusion methods suffer from two drawbacks, namely, limited ability in detail preservation and high sensitivity to misregistration, while these two issues are of great concern in image fusion. In this letter, we introduce a recently emerged signal decomposition model known as convolutional sparse representation (CSR) into image fusion to address these problems, motivated by the observation that the CSR model can effectively overcome the above two drawbacks. We propose a CSR-based image fusion framework, in which each source image is decomposed into a base layer and a detail layer, for multifocus image fusion and multimodal image fusion. Experimental results demonstrate that the proposed fusion methods clearly outperform the SR-based methods in terms of both objective assessment and visual quality.

615 citations
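The CSR model above replaces independent per-patch codes with a single set of sparse coefficient maps convolved with dictionary filters. A minimal sketch of the synthesis (decoding) side of that model — the filters and maps here are illustrative toys, not the paper's learned dictionary:

```python
import numpy as np

def csr_reconstruct(filters, coef_maps):
    """Convolutional sparse representation synthesis: the signal is the sum
    over k of d_k * x_k, each dictionary filter convolved with its own
    sparse coefficient map -- one representation for the whole signal,
    rather than independent codes for overlapping patches."""
    rec = np.zeros(len(coef_maps[0]))
    for d, x in zip(filters, coef_maps):
        rec += np.convolve(x, d, mode="same")
    return rec

# toy example: two short filters, each placed once by a 1-sparse map
filters = [np.array([1.0, 2.0, 1.0]), np.array([1.0, -1.0])]
maps = [np.zeros(16), np.zeros(16)]
maps[0][4] = 3.0
maps[1][10] = 2.0
signal = csr_reconstruct(filters, maps)
```

Because the maps are defined over the whole signal, shifting a feature only shifts its coefficient, which is one intuition for the reduced misregistration sensitivity.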


Journal ArticleDOI
TL;DR: A new hyperspectral image super-resolution method that reconstructs a high-resolution (HR) image from a low-resolution (LR) image and an HR reference image of the same scene, using clustering-based structured sparse coding to improve the accuracy of non-negative sparse coding and to exploit the spatial correlation among the learned sparse codes.
Abstract: Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and an HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.

404 citations
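The paper's dictionary learning uses block-coordinate descent; as a hedged illustration of just the non-negative sparse coding subproblem it builds on, here is a projected-gradient sketch (the step size, penalty, and toy data are illustrative, not from the paper):

```python
import numpy as np

def nn_sparse_code(D, y, lam=0.01, iters=500):
    """Projected-gradient sketch of non-negative sparse coding:
    minimize 0.5*||y - D x||^2 + lam * sum(x)  subject to  x >= 0
    (for x >= 0 the l1 penalty is simply the sum of the entries)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y) + lam
        x = np.maximum(0.0, x - step * grad)  # gradient step, then project onto x >= 0
    return x

# toy problem: y is an exact non-negative combination of two atoms
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(20, 4)))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 1] + 1.0 * D[:, 3]
x = nn_sparse_code(D, y)
```

The non-negativity constraint matches the physics here: reflectance spectra and their mixing abundances cannot be negative.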


Journal ArticleDOI
TL;DR: A novel method for anomaly detection in hyperspectral images (HSIs) based on low-rank and sparse representation, which separates the background from the anomalies in the observed data.
Abstract: A novel method for anomaly detection in hyperspectral images (HSIs) is proposed based on low-rank and sparse representation. The proposed method is based on the separation of the background and the anomalies in the observed data. Since each pixel in the background can be approximately represented by a background dictionary and the representation coefficients of all pixels form a low-rank matrix, a low-rank representation is used to model the background part. To better characterize each pixel's local representation, a sparsity-inducing regularization term is added to the representation coefficients. Moreover, a dictionary construction strategy is adopted to make the dictionary more stable and discriminative. Then, the anomalies are determined by the response of the residual matrix. An important advantage of the proposed algorithm is that it combines the global and local structure in the HSI. Experiments have been conducted using both simulated and real data sets. These experiments indicate that our algorithm achieves very promising anomaly detection performance.

366 citations
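The separation above rests on a low-rank-plus-sparse decomposition. The paper's model is a dictionary-based low-rank representation; as a simplified stand-in for the same idea, a robust-PCA-style split computed with the inexact augmented Lagrange multiplier method can be sketched as follows (parameters and data are illustrative):

```python
import numpy as np

def rpca(M, lam=None, iters=100):
    """Sketch of robust PCA via inexact ALM: split M into a low-rank part L
    (background-like) and a sparse part S (anomaly-like) by solving
    min ||L||_* + lam*||S||_1  subject to  M = L + S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)     # common penalty initialization
    Y = np.zeros_like(M)                 # Lagrange multipliers
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ (np.maximum(s - 1.0 / mu, 0.0)[:, None] * Vt)
        # sparse update: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update and penalty growth
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, 1e7)
    return L, S

# synthetic test: rank-2 "background" plus a few large sparse "anomalies"
rng = np.random.default_rng(1)
L0 = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
S0 = np.zeros((40, 40))
S0.flat[rng.choice(1600, size=40, replace=False)] = 10.0
L, S = rpca(L0 + S0)
```

In the anomaly-detection setting, the residual/sparse component is then scored per pixel to flag anomalies.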


Journal ArticleDOI
TL;DR: New, efficient algorithms for convolutional sparse representation that substantially improve on the performance of other recent methods are presented, contributing to the development of this type of representation as a practical tool for a wider range of problems.
Abstract: When applying sparse representation techniques to images, the standard approach is to independently compute the representations for a set of overlapping image patches. This method performs very well in a variety of applications, but results in a representation that is multi-valued and not optimized with respect to the entire image. An alternative representation structure is provided by a convolutional sparse representation, in which a sparse representation of an entire image is computed by replacing the linear combination of a set of dictionary vectors by the sum of a set of convolutions with dictionary filters. The resulting representation is both single-valued and jointly optimized over the entire image. While this form of a sparse representation has been applied to a variety of problems in signal and image processing and computer vision, the computational expense of the corresponding optimization problems has restricted application to relatively small signals and images. This paper presents new, efficient algorithms that substantially improve on the performance of other recent methods, contributing to the development of this type of representation as a practical tool for a wider range of problems.

331 citations
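The efficiency of such algorithms hinges on the convolution theorem: circular convolution becomes an elementwise product after a DFT, so the coupled spatial-domain linear systems in convolutional sparse coding decouple per frequency. A quick numerical check of that identity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)
d = np.zeros(64)
d[:5] = rng.normal(size=5)   # a short filter, zero-padded to signal length

# elementwise product of spectra ...
fft_conv = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x)))
# ... equals direct circular convolution
direct = np.array([sum(d[j] * x[(k - j) % 64] for j in range(64))
                   for k in range(64)])
```

An FFT-domain solve costs O(n log n) per iteration instead of the much larger cost of the equivalent spatial-domain system, which is what makes whole-image representations tractable.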


Journal ArticleDOI
TL;DR: This paper proposes to consider the patch-level sparse representation when hiding the secret data; the resulting method significantly outperforms the state-of-the-art methods in terms of the embedding rate and the image quality.
Abstract: Reversible data hiding in encrypted images has attracted considerable attention from the communities of privacy security and protection. The success of the previous methods in this area has shown that a superior performance can be achieved by exploiting the redundancy within the image. Specifically, because the pixels in the local structures (like patches or regions) have a strong similarity, they can be heavily compressed, thus resulting in a large hiding room. In this paper, to better explore the correlation between neighbor pixels, we propose to consider the patch-level sparse representation when hiding the secret data. The widely used sparse coding technique has demonstrated that a patch can be linearly represented by some atoms in an over-complete dictionary. As the sparse coding is an approximation solution, the leading residual errors are encoded and self-embedded within the cover image. Furthermore, the learned dictionary is also embedded into the encrypted image. Thanks to the powerful representation of sparse coding, a large vacated room can be achieved, and thus the data hider can embed more secret messages in the encrypted image. Extensive experiments demonstrate that the proposed method significantly outperforms the state-of-the-art methods in terms of the embedding rate and the image quality.

323 citations


Journal ArticleDOI
TL;DR: This approach is robust under various types of distortions, such as deformation, noise, outliers, rotation, and occlusion, and greatly outperforms the state-of-the-art methods, especially when the data is badly degraded.
Abstract: In previous work on point registration, the input point sets are often represented using Gaussian mixture models and the registration is then addressed through a probabilistic approach, which aims to exploit global relationships on the point sets. For non-rigid shapes, however, the local structures among neighboring points are also strong and stable and thus helpful in recovering the point correspondence. In this paper, we formulate point registration as the estimation of a mixture of densities, where local features, such as shape context, are used to assign the membership probabilities of the mixture model. This enables us to preserve both global and local structures during matching. The transformation between the two point sets is specified in a reproducing kernel Hilbert space and a sparse approximation is adopted to achieve a fast implementation. Extensive experiments on both synthesized and real data show the robustness of our approach under various types of distortions, such as deformation, noise, outliers, rotation, and occlusion. It greatly outperforms the state-of-the-art methods, especially when the data is badly degraded.

311 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data and significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.
Abstract: Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.

265 citations
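The network described above unrolls a sparse coding iteration into layers. As a hedged sketch of the underlying iteration (classical ISTA, whose linear-map-plus-soft-threshold step is exactly what learned sparse coding networks such as LISTA replace with trained weights); the toy dictionary and signal are illustrative:

```python
import numpy as np

def ista(D, y, lam=0.05, iters=500):
    """ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1. Each iteration is a
    linear map followed by soft thresholding -- the same layer structure
    that learned sparse coding networks unroll and train end-to-end."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - D.T @ (D @ x - y) / L        # gradient step (linear in x and y)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# toy check: recover a 2-sparse code under an overcomplete dictionary
rng = np.random.default_rng(2)
D = rng.normal(size=(30, 60))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(60)
x_true[7], x_true[21] = 1.5, -2.0
x_hat = ista(D, D @ x_true)
```

Training the unrolled version lets a few learned "iterations" match what hand-set ISTA needs hundreds of steps to do, which is the efficiency argument behind such networks.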


Journal ArticleDOI
TL;DR: This paper addresses the problem of unsupervised domain transfer learning, in which no labels are available in the target domain, by solving a constrained low-rankness and sparsity minimization with the inexact augmented Lagrange multiplier method; modeling the noise with a sparse matrix avoids a potentially negative transfer and makes the method more robust to different types of noise.
Abstract: In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html .

259 citations


Journal ArticleDOI
TL;DR: An adaptive fusion scheme based on collaborative sparse representation in a Bayesian filtering framework is proposed, which jointly optimizes the sparse codes and the reliable weights of different modalities in an online way to perform robust object tracking in challenging scenarios.
Abstract: Integrating multiple different yet complementary feature representations has been proved to be an effective way for boosting tracking performance. This paper investigates how to perform robust object tracking in challenging scenarios by adaptively incorporating information from grayscale and thermal videos, and proposes a novel collaborative algorithm for online tracking. In particular, an adaptive fusion scheme is proposed based on collaborative sparse representation in a Bayesian filtering framework. We jointly optimize sparse codes and the reliable weights of different modalities in an online way. In addition, this paper contributes a comprehensive video benchmark, which includes 50 grayscale-thermal sequences and their ground truth annotations for tracking purposes. The videos have high diversity, and the annotations were completed by a single person to guarantee consistency. Extensive experiments against other state-of-the-art trackers with both grayscale and grayscale-thermal inputs demonstrate the effectiveness of the proposed tracking approach. Through analyzing quantitative results, we also provide basic insights and potential future research directions in grayscale-thermal tracking.

226 citations


Journal ArticleDOI
TL;DR: This paper uses convolutional neural networks to extract deep features from high levels of the image data and classifies them within a sparse representation framework; the results reveal that the proposed method exploits the inherent low-dimensional structure of the deep features to provide better classification results.
Abstract: In recent years, deep learning has been widely studied for remote sensing image analysis. In this paper, we propose a method for remotely-sensed image classification by using sparse representation of deep learning features. Specifically, we use convolutional neural networks (CNN) to extract deep features from high levels of the image data. Deep features provide high level spatial information created by hierarchical structures. Although the deep features may have high dimensionality, they lie in class-dependent sub-spaces or sub-manifolds. We investigate the characteristics of deep features by using a sparse representation classification framework. The experimental results reveal that the proposed method exploits the inherent low-dimensional structure of the deep features to provide better classification results as compared to the results obtained by widely-used feature exploration algorithms, such as the extended morphological attribute profiles (EMAPs) and sparse coding (SC).

208 citations
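The sparse representation classification framework used here assigns a query to the class whose training atoms reconstruct it with the smallest residual. A simplified sketch of that decision rule (using class-wise least squares in place of a true l1-sparse code over the concatenated dictionary; the data are synthetic):

```python
import numpy as np

def src_classify(class_dicts, y):
    """Assign y to the class whose atoms reconstruct it with the smallest
    residual. Each class code here is a least-squares fit onto that class's
    atoms -- a simplification of the class-wise residual rule that SRC
    applies to an l1-sparse code over the concatenated dictionary."""
    residuals = []
    for D_c in class_dicts:
        coef, *_ = np.linalg.lstsq(D_c, y, rcond=None)
        residuals.append(np.linalg.norm(y - D_c @ coef))
    return int(np.argmin(residuals))

# synthetic data: two classes living in different 2-D subspaces of R^10
rng = np.random.default_rng(3)
B0, B1 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
D0 = B0 @ rng.normal(size=(2, 5))    # 5 training samples per class
D1 = B1 @ rng.normal(size=(2, 5))
query = B1 @ np.array([1.0, -0.5])   # a clean sample from class 1
label = src_classify([D0, D1], query)
```

The rule works precisely when features of each class concentrate near a low-dimensional subspace, which is the property the paper observes in CNN features.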


Journal ArticleDOI
TL;DR: A generative model for robust tensor factorization in the presence of both missing data and outliers that can discover the ground truth of the CP rank and automatically adapt the sparsity-inducing priors to various types of outliers is proposed.
Abstract: We propose a generative model for robust tensor factorization in the presence of both missing data and outliers. The objective is to explicitly infer the underlying low-CANDECOMP/PARAFAC (CP)-rank tensor capturing the global information and a sparse tensor capturing the local information (also considered as outliers), thus providing the robust predictive distribution over missing entries. The low-CP-rank tensor is modeled by multilinear interactions between multiple latent factors on which the column sparsity is enforced by a hierarchical prior, while the sparse tensor is modeled by a hierarchical view of the Student-$t$ distribution that associates an individual hyperparameter with each element independently. For model learning, we develop an efficient variational inference under a fully Bayesian treatment, which can effectively prevent the overfitting problem and scales linearly with data size. In contrast to existing related works, our method can perform model selection automatically and implicitly without the need of tuning parameters. More specifically, it can discover the ground truth of the CP rank and automatically adapt the sparsity-inducing priors to various types of outliers. In addition, the tradeoff between the low-rank approximation and the sparse representation can be optimized in the sense of maximum model evidence. The extensive experiments and comparisons with many state-of-the-art algorithms on both synthetic and real-world data sets demonstrate the superiorities of our method from several perspectives.

Journal ArticleDOI
TL;DR: Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency.

Journal ArticleDOI
TL;DR: The proposed method mainly builds on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, and retains the necessary correlation in the spectral domain during classification, which significantly enhances the classification accuracy and robustness.
Abstract: Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. However, high data correlation makes reliable classification difficult, especially for HSIs with abundant spectral information. Furthermore, the traditional methods often fail to properly consider the spatial coherency of the HSI, which also limits the classification performance. To address these inherent obstacles, a novel spectral–spatial classification scheme is proposed in this paper. The proposed method mainly builds on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, which are claimed to be the two main contributions of this work. First, the MJSR not only reduces the spectral redundancy, but also retains the necessary correlation in the spectral domain during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances the classification accuracy and robustness. In terms of several universal quality evaluation indexes, the experimental results on the Indian Pines and Pavia University data sets demonstrate the superiority of our method compared with the state-of-the-art competitors.

Journal ArticleDOI
TL;DR: The proposed method can be exploited in undersampled magnetic resonance imaging to reduce data acquisition time and reconstruct images with better image quality and the computation of the proposed approach is much faster than the typical K-SVD dictionary learning method in magnetic resonance image reconstruction.
Abstract: Objective: Improve the reconstructed image with fast and multiclass dictionary learning when magnetic resonance imaging is accelerated by undersampling the k-space data. Methods: A fast orthogonal dictionary learning method is introduced into magnetic resonance image reconstruction to provide an adaptive sparse representation of images. To enhance the sparsity, the image is divided into classified patches according to their common geometrical direction, and a dictionary is trained within each class. A new sparse reconstruction model with the multiclass dictionaries is proposed and solved using a fast alternating direction method of multipliers. Results: Experiments on phantom and brain imaging data with acceleration factors up to 10 and various undersampling patterns are conducted. The proposed method is compared with state-of-the-art magnetic resonance image reconstruction methods. Conclusion: Artifacts are better suppressed and image edges are better preserved than with the compared methods. Besides, the computation of the proposed approach is much faster than that of the typical K-SVD dictionary learning method in magnetic resonance image reconstruction. Significance: The proposed method can be exploited in undersampled magnetic resonance imaging to reduce data acquisition time and reconstruct images with better image quality.
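The speed advantage over K-SVD comes from orthogonality: under an orthogonal dictionary, the sparse coding step is exact hard thresholding of the analysis coefficients (no greedy pursuit needed), and the dictionary update is an orthogonal Procrustes problem solved by a single SVD. A hedged sketch of that alternation (parameters and data are illustrative; the paper additionally classifies patches by geometric direction and trains one dictionary per class):

```python
import numpy as np

def hard_threshold_topk(A, k):
    """Keep the k largest-magnitude entries in each column of A."""
    kth = -np.sort(-np.abs(A), axis=0)[k - 1]
    out = A.copy()
    out[np.abs(out) < kth] = 0.0
    return out

def orthogonal_dict_learn(X, sparsity=4, iters=20, seed=0):
    n = X.shape[0]
    D = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(iters):
        A = hard_threshold_topk(D.T @ X, sparsity)  # exact sparse coding step
        U, _, Vt = np.linalg.svd(X @ A.T)           # Procrustes: min ||X - D A||_F, D orthogonal
        D = U @ Vt
    A = hard_threshold_topk(D.T @ X, sparsity)
    return D, A

# synthetic patches: 4-sparse codes under a hidden orthogonal dictionary Q
rng = np.random.default_rng(4)
Q = np.linalg.qr(rng.normal(size=(8, 8)))[0]
codes = np.zeros((8, 100))
for j in range(100):
    codes[rng.choice(8, size=4, replace=False), j] = rng.normal(size=4)
X = Q @ codes
D, A = orthogonal_dict_learn(X)
rel_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
```

Both steps are closed-form, which is why each outer iteration is much cheaper than a K-SVD sweep with OMP coding.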

Journal ArticleDOI
TL;DR: The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems.
Abstract: Wilkinson defined a sparse matrix as one with enough zeros that it pays to take advantage of them. This informal yet practical definition captures the essence of the goal of direct methods for solving sparse matrix problems. They exploit the sparsity of a matrix to solve problems economically: much faster and using far less memory than if all the entries of a matrix were stored and took part in explicit computations. These methods form the backbone of a wide range of problems in computational science. A glimpse of the breadth of applications relying on sparse solvers can be seen in the origins of matrices in published matrix benchmark collections (Duff and Reid 1979a, Duff, Grimes and Lewis 1989a, Davis and Hu 2011). The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.
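In practice, the workflow the survey covers looks like this: assemble a sparse matrix, factorize it once with a fill-reducing ordering, then reuse the factors for repeated solves. A small SciPy example on a tridiagonal (1-D Poisson) system:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Assemble a sparse tridiagonal matrix, factorize once, reuse the factors.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)            # sparse LU with a fill-reducing column ordering
b = np.ones(n)
x = lu.solve(b)         # solves A x = b using the stored factors
```

Storing only the ~3n nonzeros and their LU factors, rather than the dense n-by-n matrix, is exactly the economy Wilkinson's definition points at.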

Journal ArticleDOI
TL;DR: A fast and accurate video anomaly detection and localisation method is presented that outperforms state-of-the-art methods, especially in run time, on the UMN and UCSD benchmarks.
Abstract: A fast and accurate video anomaly detection and localisation method is presented. Speed and localisation accuracy are two ongoing challenges in real-world anomaly detection. We introduce two novel cubic-patch-based anomaly detectors: one based on the ability of an auto-encoder (AE) to reconstruct an input video patch, and the other based on the sparse representation of an input video patch. If an AE is efficiently trained on all normal patches, an anomalous patch in the testing phase yields a larger reconstruction error than a normal patch. Similarly, if a sparse AE is learned from normal training patches, a given patch is expected to be represented sparsely; if the representation is not sparse enough, the patch is a good candidate for an anomaly. For speed, the two detectors are combined into a cascade classifier: all small patches in a test video frame are first scanned, and those without a sufficiently sparse representation are resized and sent to the next detector for more careful evaluation. Experimental results show that the presented method outperforms state-of-the-art methods, especially in run time, on the UMN and UCSD benchmarks.

Journal ArticleDOI
TL;DR: A dictionary learning method based on joint patch clustering for multimodal image fusion that achieves better fusion quality with lower processing time; experimental results validate its effectiveness.

Journal ArticleDOI
TL;DR: A novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with both spectral variability and low target probability.
Abstract: Hyperspectral images provide great potential for target detection; however, they also introduce new challenges, so hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability in limited endmembers, and the latter type usually treats target detection as a simple classification problem and pays less attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which avoids an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is employed for the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse representation-based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than that of the reconstruction-residual-based detection methods. Extensive experimental results with three hyperspectral data sets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.

Journal ArticleDOI
TL;DR: A unified sparse learning framework is proposed by introducing sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases and can be viewed as a general model for sparse linear and nonlinear subspace learning.
Abstract: Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing the sparsity or $L_{1}$-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between the ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the cases of small sample sizes.

Journal ArticleDOI
TL;DR: This paper proposes a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information and presents an extension of the proposed formulation using a mixed joint and independent sparsity prior, which facilitates more flexible fusion of the modalities at feature level.
Abstract: Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior, which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications—multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared with the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.

Proceedings ArticleDOI
27 Jun 2016
TL;DR: Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.
Abstract: Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.
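The Fourier-domain efficiency CST exploits comes from a standard property of circulant matrices: the DFT diagonalizes them, so a circulant linear system reduces to elementwise division of spectra. A minimal illustration of that property (not the tracker's actual optimization):

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b where C is the circulant matrix with first column c:
    the DFT diagonalizes C, so the solve is an elementwise division of
    spectra -- O(n log n) instead of a dense O(n^3) factorization."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

rng = np.random.default_rng(5)
c = rng.normal(size=32)
c[0] += 32.0            # make the system diagonally dominant / well conditioned
b = rng.normal(size=32)
x = circulant_solve(c, b)
```

Circular shifts of a template generate exactly such circulant structure, which is why the refined particles and the optimization both live cheaply in the Fourier domain.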

Journal ArticleDOI
TL;DR: Based on sparse representation theory, a new approach for the fault diagnosis of rolling element bearings is proposed, in which the over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method.

Journal ArticleDOI
TL;DR: A novel fusion method based on a multi-task robust sparse representation (MRSR) model and spatial context information to address the fusion of multi-focus gray-level images with misregistration.
Abstract: We present a novel fusion method based on a multi-task robust sparse representation (MRSR) model and spatial context information to address the fusion of multi-focus gray-level images with misregistration. First, we present a robust sparse representation (RSR) model by replacing the conventional least-squared reconstruction error by a sparse reconstruction error. We then propose a multi-task version of the RSR model, viz., the MRSR model. The latter is then applied to multi-focus image fusion by employing the detailed information regarding each image patch and its spatial neighbors to collaboratively determine both the focused and defocused regions in the input images. To achieve this, we formulate the problem of extracting details from multiple image patches as a joint multi-task sparsity pursuit based on the MRSR model. Experimental results demonstrate that the suggested algorithm is competitive with the current state-of-the-art and superior to some approaches that use traditional sparse representation methods when input images are misregistered.

Journal ArticleDOI
TL;DR: Comparisons with competing methods on benchmark real-world databases consistently show the superiority of the proposed methods for both color FR and reconstruction.
Abstract: Collaborative representation-based classification (CRC) and sparse RC (SRC) have recently achieved great success in face recognition (FR). Previous CRC and SRC methods were originally designed in the real setting for grayscale image-based FR. They represent the color channels of a query color image separately and ignore the structural correlation information among the color channels. To remedy this limitation, in this paper, we propose two novel RC methods for color FR, namely, quaternion CRC (QCRC) and quaternion SRC (QSRC) using quaternion $\ell _{1}$ minimization. By modeling each color image as a quaternionic signal, they naturally preserve the color structures of both query and gallery color images while uniformly coding the query channel images in a holistic manner. Despite the empirical success of CRC and SRC in FR, few theoretical results have been developed to guarantee their effectiveness. Another purpose of this paper is to establish theoretical guarantees for QCRC and QSRC under mild conditions. Comparisons with competing methods on benchmark real-world databases consistently show the superiority of the proposed methods for both color FR and reconstruction.
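To make the quaternionic signal model concrete, the sketch below (illustrative, not the paper's implementation) encodes an RGB pixel as a pure quaternion Ri + Gj + Bk and implements the Hamilton product; it is this non-commutative algebra that couples the three channels so they are coded holistically rather than independently:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

# A color pixel (R, G, B) = (0.2, 0.5, 0.7) as a pure quaternion Ri + Gj + Bk:
pixel = 0.2 * i + 0.5 * j + 0.7 * k
```

Since ij = k but ji = -k, channel order matters in every product, which is why a quaternion code cannot silently decouple into three independent real codes.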

Proceedings ArticleDOI
13 Aug 2016
TL;DR: This paper explores and exploits an additional sparse component to handle tail labels behaving as outliers, in order to make the classical low-rank principle in multi-label learning valid and increase the scalability of the proposed algorithm.
Abstract: Tail labels in the multi-label learning problem undermine the low-rank assumption. Nevertheless, this problem has rarely been investigated. In addition to using the low-rank structure to depict label correlations, this paper explores and exploits an additional sparse component to handle tail labels behaving as outliers, in order to make the classical low-rank principle in multi-label learning valid. The divide-and-conquer optimization technique is employed to increase the scalability of the proposed algorithm while theoretically guaranteeing its performance. A theoretical analysis of the generalizability of the proposed algorithm suggests that it can be improved by the low-rank and sparse decomposition given tail labels. Experimental results on real-world data demonstrate the significance of investigating tail labels and the effectiveness of the proposed algorithm.
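The low-rank-plus-sparse decomposition underlying this idea can be sketched with a generic alternating proximal scheme (hedged: this is not the paper's divide-and-conquer solver, and the regularization weights are arbitrary), splitting a matrix M into L + S by alternating singular value thresholding and entrywise soft thresholding:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def svt(X, t):
    # Singular value thresholding: prox operator of t * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def lowrank_plus_sparse(M, lam_l=0.5, lam_s=0.1, iters=50):
    # Alternating exact proximal steps on
    #   0.5*||M - L - S||_F^2 + lam_l*||L||_* + lam_s*||S||_1
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, lam_l)      # exact minimization over L
        S = soft(M - L, lam_s)     # exact minimization over S
    return L, S

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(10), rng.standard_normal(8))  # rank-1 part
M[2, 3] += 5.0                     # "tail" entries behaving as outliers
M[7, 1] -= 5.0
L, S = lowrank_plus_sparse(M)
```

Here L captures the correlations shared by head labels while S absorbs the outlier-like tail entries, which is the separation the abstract argues restores the low-rank principle.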

Journal ArticleDOI
TL;DR: This paper presents a discriminative subspace learning method, called the supervised regularization-based robust subspace (SRRS) approach, which incorporates a low-rank constraint, learns a low-dimensional subspace from recovered data, and explicitly incorporates supervised information.
Abstract: In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
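The rank-minimization step inside inexact augmented Lagrange multiplier (ALM) schemes such as the one above reduces to singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch of that operator (the generic building block, not the SRRS-specific update):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: shrinks each singular value by tau.
    # Closed-form solution of min_Z 0.5*||Z - X||_F^2 + tau*||Z||_*.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 6))
Z = svt(X, 1.0)
```

Singular values below the threshold are zeroed outright, which is how the low-rank representation discards the noise components before the discriminative subspace is learned from the recovered data.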

Journal ArticleDOI
TL;DR: The proposed SSASR method aims at improving the noise-free estimation of a noisy HSI by making full use of highly correlated spectral information and highly similar spatial information via sparse representation, in three steps.
Abstract: In this paper, a novel spectral–spatial adaptive sparse representation (SSASR) method is proposed for hyperspectral image (HSI) denoising. The proposed SSASR method aims at improving the noise-free estimation of a noisy HSI by making full use of highly correlated spectral information and highly similar spatial information via sparse representation, and it consists of the following three steps. First, according to the spectral correlation across bands, the HSI is partitioned into several nonoverlapping band subsets. Each band subset contains multiple continuous bands with highly similar spectral characteristics. Then, within each band subset, shape-adaptive local regions consisting of spatially similar pixels are searched in the spatial domain. In this way, pixels that are similar in both the spectral and spatial domains can be grouped. Finally, the highly correlated and similar spectral–spatial information in each group is effectively exploited via joint sparse coding, in order to generate a better noise-free estimate. The proposed SSASR method is evaluated by different objective metrics in both real and simulated experiments. The numerical and visual comparison results demonstrate the effectiveness and superiority of the proposed method.
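The first step, partitioning bands by spectral correlation, can be sketched as follows (a hedged toy version with a hard correlation threshold; the paper's adaptive partitioning may differ). A new band subset is started whenever the correlation between consecutive bands drops:

```python
import numpy as np

def partition_bands(cube, corr_thresh=0.9):
    # cube: (n_bands, n_pixels). Start a new subset whenever the
    # correlation between consecutive bands falls below the threshold.
    subsets, current = [], [0]
    for b in range(1, cube.shape[0]):
        r = np.corrcoef(cube[b - 1], cube[b])[0, 1]
        if r >= corr_thresh:
            current.append(b)
        else:
            subsets.append(current)
            current = [b]
    subsets.append(current)
    return subsets

# Synthetic cube: two groups of five bands with distinct spectral behavior.
t = np.linspace(0.0, 2 * np.pi, 100)
cube = np.vstack([np.sin(t)] * 5 + [np.cos(t)] * 5)
```

Each resulting subset then feeds the spatial grouping and joint sparse coding stages described in the abstract.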

Journal ArticleDOI
TL;DR: A novel computational model combining a weighted sparse representation based classifier (WSRC) and global encoding (GE) of amino acid sequences is introduced to predict protein interaction classes; it is a very efficient method for predicting PPIs and may be a useful supplementary tool for future proteomics studies.
Abstract: Proteins are important molecules that participate, often in pairs, in virtually every aspect of cellular function within an organism. Although high-throughput technologies have generated considerable protein-protein interaction (PPI) data for various species, experimental methods are both time-consuming and expensive. In addition, they are usually associated with high rates of both false positive and false negative results. Accordingly, a number of computational approaches have been developed to predict protein interactions effectively and accurately. However, most of these methods typically perform worse when other biological data sources (e.g., protein structure information, protein domains, or gene neighborhood information) are not available. Therefore, there is an urgent need for effective computational methods that predict PPIs using protein sequence information alone. In this study, we present a novel computational model combining a weighted sparse representation based classifier (WSRC) and global encoding (GE) of amino acid sequences. Two kinds of protein descriptors, composition and transition, are extracted to represent each protein sequence. On the basis of such a feature representation, the weighted sparse representation based classifier is introduced to predict the protein interaction class. When the proposed method was evaluated with the PPI data of S. cerevisiae, Human and H. pylori, it achieved high prediction accuracies of 96.82, 97.66 and 92.83 %, respectively. Extensive experiments were performed for cross-species PPI prediction, and the prediction accuracies were also very promising. To further evaluate the performance of the proposed method, we then compared it with a method based on the support vector machine (SVM). The results show that the proposed method achieved a significant improvement. Thus, the proposed method is a very efficient way to predict PPIs and may be a useful supplementary tool for future proteomics studies.
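A minimal sketch of composition and transition descriptors follows. The three-class amino-acid grouping below is one common hydrophobicity-based classification, assumed here purely for illustration; the paper's exact encoding may differ:

```python
# One common 3-class hydrophobicity grouping (illustrative assumption):
CLASS = {}
for aa in "RKEDQN":
    CLASS[aa] = 0   # polar
for aa in "GASTPHY":
    CLASS[aa] = 1   # neutral
for aa in "CLVIMFW":
    CLASS[aa] = 2   # hydrophobic

def composition(seq):
    # Fraction of residues falling in each class.
    counts = [0, 0, 0]
    for aa in seq:
        counts[CLASS[aa]] += 1
    return [c / len(seq) for c in counts]

def transition(seq):
    # Frequency of adjacent residue pairs whose classes differ,
    # one value per unordered class pair (0,1), (0,2), (1,2).
    pairs = {(0, 1): 0, (0, 2): 0, (1, 2): 0}
    for a, b in zip(seq, seq[1:]):
        ca, cb = sorted((CLASS[a], CLASS[b]))
        if ca != cb:
            pairs[(ca, cb)] += 1
    return [pairs[k] / (len(seq) - 1) for k in sorted(pairs)]
```

For the toy sequence "AACR", composition counts one polar, two neutral, and one hydrophobic residue, and transition records the neutral-to-hydrophobic and hydrophobic-to-polar adjacencies; the two descriptor vectors are concatenated to represent the sequence.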

Journal ArticleDOI
TL;DR: A novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), which attempts to preserve the sparse representation structure of the data and maximize the between-class separability simultaneously, can be regarded as a combiner of manifold learning and sparse representation.
Abstract: Dimensionality reduction is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in dimensionality reduction. In this paper, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed. SPDP, which attempts to preserve the sparse representation structure of the data and maximize the between-class separability simultaneously, can be regarded as a combination of manifold learning and sparse representation. Specifically, SPDP first creates a concatenated dictionary via classwise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least-squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDP integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the feasibility and effectiveness of the proposed approach.
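The first two steps above can be sketched as follows (hedged: uncentered PCA via truncated SVD on toy data; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def classwise_pca_dictionary(class_samples, n_components=2):
    # Concatenate the leading (uncentered) principal directions of each
    # class's sample matrix (rows = samples) into a single dictionary.
    blocks = []
    for X in class_samples:
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        blocks.append(Vt[:n_components].T)   # columns = principal directions
    return np.hstack(blocks)

rng = np.random.default_rng(2)
dim = 6
# Two classes living on different 2-D subspaces of R^6:
basis_a, basis_b = np.eye(dim)[:2], np.eye(dim)[2:4]
X_a = rng.standard_normal((30, 2)) @ basis_a
X_b = rng.standard_normal((30, 2)) @ basis_b

D = classwise_pca_dictionary([X_a, X_b])      # 6 x 4 concatenated dictionary
x = X_a[0]                                    # one training sample
code, *_ = np.linalg.lstsq(D, x, rcond=None)  # least-squares representation
```

Because the sample lies in its class's principal subspace, the least-squares code reconstructs it exactly; SPDP then feeds such codes into the between-class separability function and the resulting generalized eigenvalue problem.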

Journal ArticleDOI
TL;DR: This work proposes a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part that encodes low- rank sparse principal features and a noise-fitting error part, and presents an inductive LSPFC (I-L SPFC), which incorporates embedded low- Rank and sparse Principal features by a projection into one problem for direct minimization.
Abstract: Recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed. Technically, we first propose a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part that encodes low-rank sparse principal features and a noise-fitting error part. To handle outside (out-of-sample) data well, we then present an inductive LSPFC (I-LSPFC). I-LSPFC incorporates embedded low-rank and sparse principal features by a projection into one problem for direct minimization, so that the projection can effectively map both inside and outside data into the underlying subspaces to learn more powerful and informative features for representation. To ensure that the features learned by I-LSPFC are optimal for classification, we further combine the classification error with the feature coding error to form a unified model, discriminative LSPFC (D-LSPFC), to boost performance. The D-LSPFC model seamlessly integrates feature coding and discriminative classification, so the representation and classification powers can be enhanced. The proposed approaches are more general, and several recent low-rank or sparse coding algorithms can be embedded into our problems as special cases. Visual and numerical results demonstrate the effectiveness of our methods for representation and classification.