Author

Le Sun

Bio: Le Sun is an academic researcher from Nanjing University of Information Science and Technology. The author has contributed to research on topics including hyperspectral imaging and computer science. The author has an h-index of 18 and has co-authored 67 publications receiving 1,047 citations. Previous affiliations of Le Sun include Nanjing University of Science and Technology and Sungkyunkwan University.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: The proposed model fully exploits the spatial and contextual information present in the hyperspectral image and outperforms many state-of-the-art methods in terms of overall accuracy, average accuracy, and the kappa statistic.
Abstract: This paper presents a new approach for hyperspectral image classification exploiting spectral-spatial information. Under the maximum a posteriori framework, we propose a supervised classification model which includes a spectral data fidelity term and a spatially adaptive Markov random field (MRF) prior in the hidden field. The data fidelity term adopted in this paper is learned from the sparse multinomial logistic regression (SMLR) classifier, while the spatially adaptive MRF prior is modeled by a spatially adaptive total variation (SpATV) regularization to enforce a spatially smooth classifier. To further improve the classification accuracy, the true labels of training samples are fixed as an additional constraint in the proposed model. Thus, our model takes full advantage of the spatial and contextual information present in the hyperspectral image. An efficient hyperspectral image classification algorithm, named SMLR-SpATV, is then developed to solve the final proposed model using the alternating direction method of multipliers. Experimental results on real hyperspectral data sets demonstrate that the proposed approach outperforms many state-of-the-art methods in terms of the overall accuracy, average accuracy, and kappa (κ) statistic.

210 citations
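As a rough illustration of the SMLR-SpATV pipeline described above, the sketch below (Python, assuming scikit-learn and scikit-image are available) learns per-class probabilities with an l1-penalized multinomial logistic regression, smooths each probability map with plain total-variation denoising as a stand-in for the spatially adaptive MRF/SpATV prior, and takes the per-pixel argmax while keeping the training labels fixed. The ADMM solver and adaptive weights of the paper are not reproduced; array shapes, variable names, and the TV weight are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from skimage.restoration import denoise_tv_chambolle

def smlr_spatv_sketch(hsi_cube, train_mask, labels, tv_weight=0.1):
    """hsi_cube: (H, W, B) reflectance cube; train_mask: (H, W) bool;
    labels: (H, W) int class ids (read only where train_mask is True)."""
    H, W, B = hsi_cube.shape
    X = hsi_cube.reshape(-1, B)

    # Sparse multinomial logistic regression acts as the spectral data-fidelity term.
    smlr = LogisticRegression(penalty="l1", solver="saga", max_iter=500)
    smlr.fit(X[train_mask.ravel()], labels[train_mask])

    # Per-class probability maps over the whole scene.
    proba = smlr.predict_proba(X).reshape(H, W, -1)

    # TV smoothing of each probability map plays the role of the spatial prior.
    for c in range(proba.shape[2]):
        proba[:, :, c] = denoise_tv_chambolle(proba[:, :, c], weight=tv_weight)

    # Keep training labels fixed, mirroring the hard constraint in the proposed model.
    pred = proba.argmax(axis=2)
    pred[train_mask] = labels[train_mask]
    return pred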

Journal ArticleDOI
TL;DR: The novelty of this work lies in presenting a framework for spatial-spectral KSRC and in measuring spatial similarity by means of neighborhood filtering in the kernel feature space, which opens a wide field for future developments in which filtering methods can easily be incorporated.
Abstract: Kernel sparse representation classification (KSRC), a nonlinear extension of sparse representation classification, shows good performance for hyperspectral image classification. However, KSRC only considers the spectra of unordered pixels, without incorporating information from spatially adjacent data. This paper proposes a neighboring filtering kernel for spatial-spectral kernel sparse representation to enhance the classification of hyperspectral images. The novelty of this work consists in: 1) presenting a framework of spatial-spectral KSRC; and 2) measuring the spatial similarity by means of neighborhood filtering in the kernel feature space. Experiments on several hyperspectral images demonstrate the effectiveness of the presented method, and the proposed neighboring filtering kernel outperforms existing spatial-spectral kernels. In addition, the proposed spatial-spectral KSRC opens a wide field for future developments in which filtering methods can be easily incorporated.

164 citations
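The core neighborhood-filtering idea of the spatial-spectral KSRC paper above can be sketched as follows: each pixel's spectrum is replaced by a similarity-weighted average of the spectra in its spatial window, and an RBF kernel is evaluated on the filtered spectra, optionally mixed with the purely spectral kernel. The full kernel sparse representation classifier is not reproduced, and the window radius, bandwidth h, gamma, and mixing weight mu below are illustrative assumptions.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def neighborhood_filter(hsi_cube, radius=2, h=0.5):
    """Similarity-weighted spatial filtering of an (H, W, B) cube."""
    H, W, B = hsi_cube.shape
    filtered = np.empty_like(hsi_cube)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            patch = hsi_cube[i0:i1, j0:j1].reshape(-1, B)
            center = hsi_cube[i, j]
            # Weight each neighbor by its spectral similarity to the centre pixel.
            w = np.exp(-np.sum((patch - center) ** 2, axis=1) / (h ** 2))
            filtered[i, j] = (w[:, None] * patch).sum(axis=0) / w.sum()
    return filtered

def spatial_spectral_kernel(X_spec, X_filt, gamma=1.0, mu=0.5):
    """Convex combination of the spectral kernel and the neighborhood-filtered kernel."""
    return mu * rbf_kernel(X_spec, gamma=gamma) + (1 - mu) * rbf_kernel(X_filt, gamma=gamma)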

Journal ArticleDOI
TL;DR: Experimental results validate that the proposed LRCISSK method can effectively explore the spatial-spectral information and deliver superior performance with at least 1.30% higher OA and 1.03% higher AA on average when compared to other state-of-the-art classifiers.
Abstract: Kernel methods, e.g., composite kernels (CKs) and spatial-spectral kernels (SSKs), have been demonstrated to be an effective way to nonlinearly exploit spatial-spectral information for improving the classification performance of hyperspectral images (HSIs). However, these methods are usually applied with square-shaped windows or superpixel techniques. Both techniques are likely to misclassify pixels that lie at class boundaries, and thus small targets are often smoothed away. To alleviate these problems, in this paper, we propose a novel patch-based low-rank component induced spatial-spectral kernel method, termed LRCISSK, for HSI classification. First, the latent low-rank features of the spectra in each cubic patch of the HSI are reconstructed by a low-rank matrix recovery (LRMR) technique; then, to further explore more accurate spatial information, they are used to adaptively identify a homogeneous neighborhood for the target pixel (i.e., the centroid pixel). Finally, the adaptively identified homogeneous neighborhood, which consists of the latent low-rank spectra, is embedded into the spatial-spectral kernel framework. It can easily map the spectra into nonlinearly complex manifolds and enable a classifier (e.g., a support vector machine, SVM) to distinguish them effectively. Experimental results on three real HSI datasets validate that the proposed LRCISSK method can effectively explore the spatial-spectral information and deliver superior performance, with at least 1.30% higher OA and 1.03% higher AA on average compared to other state-of-the-art classifiers.

83 citations
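A rough sketch of the patch-level steps behind LRCISSK as described above: the spectra in a cubic patch are replaced by a low-rank approximation via singular value soft-thresholding (a simple surrogate for the LRMR step), and the pixels most similar to the recovered centre spectrum are averaged to form an adaptive homogeneous-neighborhood feature that a kernel SVM could consume. The threshold tau and neighborhood size k are illustrative assumptions.

import numpy as np

def low_rank_patch(patch, tau=0.1):
    """patch: (n_pixels, n_bands). Soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def adaptive_neighborhood_feature(patch, center_idx, tau=0.1, k=9):
    """Average the k low-rank spectra closest to the centre pixel's low-rank spectrum."""
    L = low_rank_patch(patch, tau)
    d = np.linalg.norm(L - L[center_idx], axis=1)
    nearest = np.argsort(d)[:k]          # includes the centre pixel itself
    return L[nearest].mean(axis=0)       # feature fed to a spatial-spectral kernel SVM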

Journal ArticleDOI
TL;DR: A novel sparse unmixing method, which considers highly similar patches in nonlocal regions of a hyperspectral image, is proposed in this article, which exploits spectral correlation by using collaborative sparsity regularization and spatial information by employing total variation and weighted nonlocal low-rank tensor regularization.
Abstract: The low spatial resolution of hyperspectral images leads to the coexistence of multiple ground objects in a single pixel (called mixed pixels). A large number of mixed pixels in a hyperspectral image hinders the subsequent analysis and application of the image. In order to solve this problem, a novel sparse unmixing method, which considers highly similar patches in nonlocal regions of a hyperspectral image, is proposed in this article. This method exploits spectral correlation by using collaborative sparsity regularization and spatial information by employing total variation and weighted nonlocal low-rank tensor regularization. To effectively utilize the tensor decomposition, nonlocal similar patches are first grouped together. Then, these nonlocal patches are stacked to form a patch group tensor. Finally, weighted low-rank tensor regularization is enforced to constrain the patch group to obtain an estimated low-rank abundance image. Experiments on simulated and real hyperspectral datasets validated the superiority of the proposed method in better maintaining fine details and obtaining better unmixing results.

73 citations
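The nonlocal low-rank regularization step described above can be sketched as grouping the patches of an abundance map that are most similar to a reference patch, stacking them into a group matrix (a matricized patch-group tensor), and shrinking it with weighted singular value thresholding. The complete unmixing model, with collaborative sparsity and total variation, is not reproduced; patch size, search stride, and the weighting rule are illustrative assumptions.

import numpy as np

def group_similar_patches(img, ref_ij, patch=8, stride=4, n_similar=16):
    """Return the n_similar flattened patches most similar to the reference patch."""
    H, W = img.shape
    ri, rj = ref_ij
    ref = img[ri:ri + patch, rj:rj + patch].ravel()
    cands = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            p = img[i:i + patch, j:j + patch].ravel()
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda t: t[0])
    return np.stack([p for _, p in cands[:n_similar]])   # (n_similar, patch*patch)

def weighted_svt(group, tau=1.0, eps=1e-3):
    """Weighted singular value thresholding: weak components are shrunk more strongly."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    w = tau / (s + eps)                  # larger weight for smaller singular values
    s = np.maximum(s - w, 0.0)
    return (U * s) @ Vt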

Journal ArticleDOI
TL;DR: A computationally efficient parallel implementation of a spectral-spatial classification method based on spatially adaptive Markov random fields (MRFs) that exploits the massively parallel nature of GPUs to achieve significant acceleration factors relative to the serial and multicore versions of the same classifier on an NVIDIA Tesla K20C platform.
Abstract: Image classification is a very important tool for remotely sensed hyperspectral image processing. Techniques able to exploit the rich spectral information contained in the data, as well as its spatial-contextual information, have shown success in recent years. Due to the high dimensionality of hyperspectral data, spectral-spatial classification techniques are quite demanding from a computational viewpoint. In this paper, we present a computationally efficient parallel implementation for a spectral-spatial classification method based on spatially adaptive Markov random fields (MRFs). The method learns the spectral information from a sparse multinomial logistic regression classifier, and the spatial information is characterized by modeling the potential function associated with a weighted MRF as a spatially adaptive vector total variation function. The parallel implementation has been carried out using commodity graphics processing units (GPUs) and NVIDIA's Compute Unified Device Architecture. It optimizes the work allocation and input/output transfers between the central processing unit and the GPU, taking full advantage of the computational power of GPUs as well as the high bandwidth and low latency of shared memory. As a result, the algorithm exploits the massively parallel nature of GPUs to achieve significant acceleration factors (higher than 70x) with respect to the serial and multicore versions of the same classifier on an NVIDIA Tesla K20C platform.

63 citations
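The kind of per-pixel data parallelism the paper above exploits can be illustrated by writing the smoothing of the class-probability maps purely as elementwise and shifted-array operations, so every pixel is updated independently. Passing xp=numpy runs the sketch on the CPU, and passing xp=cupy (if CuPy is installed) runs the identical code on a GPU; this is only a stand-in for the actual weighted vector-TV/MRF update and the CUDA kernels described in the paper.

import numpy as np

def smooth_probability_maps(proba, xp=np, lam=0.2, n_iter=10):
    """proba: (H, W, C) class-probability maps. Jacobi-style neighbor averaging."""
    p = xp.asarray(proba)
    for _ in range(n_iter):
        # 4-neighbor average computed with array shifts: one fused data-parallel pass.
        nb = (xp.roll(p, 1, axis=0) + xp.roll(p, -1, axis=0) +
              xp.roll(p, 1, axis=1) + xp.roll(p, -1, axis=1)) / 4.0
        p = (1 - lam) * p + lam * nb
        p = p / p.sum(axis=2, keepdims=True)   # renormalize to a probability simplex
    return p

# Example (GPU): import cupy; proba_gpu = smooth_probability_maps(proba, xp=cupy)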


Cited by
Journal ArticleDOI
TL;DR: The concept of deep learning is introduced into hyperspectral data classification for the first time, and a new way of classifying with spatial-dominated information is proposed, using a hybrid of principal component analysis (PCA), a deep learning architecture, and logistic regression.
Abstract: Classification is one of the most popular topics in hyperspectral remote sensing. In the last two decades, a huge number of methods were proposed to deal with the hyperspectral data classification problem. However, most of them do not hierarchically extract deep features. In this paper, the concept of deep learning is introduced into hyperspectral data classification for the first time. First, we verify the eligibility of stacked autoencoders by following classical spectral information-based classification. Second, a new way of classifying with spatial-dominated information is proposed. We then propose a novel deep learning framework to merge the two features, from which we can get the highest classification accuracy. The framework is a hybrid of principal component analysis (PCA), deep learning architecture, and logistic regression. Specifically, as a deep learning architecture, stacked autoencoders are used to obtain useful high-level features. Experimental results with widely used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance. In addition, the proposed joint spectral-spatial deep neural network opens a new window for future research, showcasing the deep learning-based methods' huge potential for accurate hyperspectral data classification.

2,071 citations
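A condensed sketch of the PCA + stacked-autoencoder + logistic-regression pipeline described above, using scikit-learn and PyTorch: spatial information is summarized by PCA on a flattened neighborhood around each pixel, a single autoencoder stage (stacked and greedily pretrained in the paper) learns a compact representation, and logistic regression classifies the learned features. Layer sizes, the window size, and the training settings are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def train_sae_lr(neighborhood_vectors, y, n_pcs=30, hidden=60, epochs=200):
    """neighborhood_vectors: (N, window*window*bands) flattened spatial patches."""
    # 1) PCA compresses the spatial-spectral neighborhood.
    Z = PCA(n_components=n_pcs).fit_transform(neighborhood_vectors)
    X = torch.tensor(Z, dtype=torch.float32)

    # 2) One autoencoder stage (the paper stacks several, pretrained greedily).
    enc = nn.Sequential(nn.Linear(n_pcs, hidden), nn.Sigmoid())
    dec = nn.Linear(hidden, n_pcs)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(X)), X)   # reconstruction objective
        loss.backward()
        opt.step()

    # 3) Logistic regression on the learned high-level features.
    feats = enc(X).detach().numpy()
    clf = LogisticRegression(max_iter=1000).fit(feats, y)
    return enc, clf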

Journal ArticleDOI
TL;DR: A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on a deep belief network (DBN), and a novel deep architecture is proposed that combines spectral-spatial FE and classification together to achieve high classification accuracy.
Abstract: Hyperspectral data classification is a hot topic in the remote sensing community. In recent years, significant effort has been focused on this issue. However, most of the methods extract the features of original data in a shallow manner. In this paper, we introduce a deep learning approach into hyperspectral image classification. A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on a deep belief network (DBN). First, we verify the eligibility of the restricted Boltzmann machine (RBM) and DBN by following spectral information-based classification. Then, we propose a novel deep architecture, which combines the spectral-spatial FE and classification together to get high classification accuracy. The framework is a hybrid of principal component analysis (PCA), hierarchical learning-based FE, and logistic regression (LR). Experimental results with hyperspectral data indicate that the classifier provides competitive performance compared with state-of-the-art methods. In addition, this paper reveals that deep learning systems have huge potential for hyperspectral data classification.

1,028 citations
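A minimal stand-in for the RBM/DBN feature-learning idea described above can be built from scikit-learn's BernoulliRBM: two RBM stages are stacked and followed by logistic regression, mirroring the hierarchical feature extraction plus LR structure of the paper. The paper's actual DBN training and spectral-spatial inputs are not reproduced; the component counts and learning rates are illustrative assumptions, and BernoulliRBM expects inputs scaled to [0, 1].

from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Two stacked RBM feature extractors followed by a logistic-regression classifier.
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=30)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=30)),
    ("lr", LogisticRegression(max_iter=1000)),
])

# X_train: (n_samples, n_bands) spectra scaled to [0, 1]; y_train: class labels.
# dbn_like.fit(X_train, y_train)
# y_pred = dbn_like.predict(X_test)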

Journal ArticleDOI
TL;DR: This paper proposes a simple but effective method to learn discriminative CNNs (D-CNNs) to boost the performance of remote sensing image scene classification and comprehensively evaluates the proposed method on three publicly available benchmark data sets using three off-the-shelf CNN models.
Abstract: Remote sensing image scene classification is an active and challenging task driven by many applications. More recently, with the advances of deep learning models, especially convolutional neural networks (CNNs), the performance of remote sensing image scene classification has been significantly improved due to the powerful feature representations learnt through CNNs. Although great success has been obtained so far, the problems of within-class diversity and between-class similarity are still two big challenges. To address these problems, in this paper, we propose a simple but effective method to learn discriminative CNNs (D-CNNs) to boost the performance of remote sensing image scene classification. Different from the traditional CNN models that minimize only the cross-entropy loss, our proposed D-CNN models are trained by optimizing a new discriminative objective function. To this end, apart from minimizing the classification error, we also explicitly impose a metric learning regularization term on the CNN features. The metric learning regularization enforces the D-CNN models to be more discriminative so that, in the new D-CNN feature spaces, the images from the same scene class are mapped closely to each other and the images of different classes are mapped as far apart as possible. In the experiments, we comprehensively evaluate the proposed method on three publicly available benchmark data sets using three off-the-shelf CNN models. Experimental results demonstrate that our proposed D-CNN methods outperform the existing baseline methods and achieve state-of-the-art results on all three data sets.

1,001 citations
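The training objective described above can be sketched in PyTorch as the usual cross-entropy loss plus a metric-learning style regularizer on the CNN features that pulls same-class features together and pushes different-class features apart. The specific contrastive-style pairwise term with a margin, and the weight lam, are illustrative assumptions rather than the exact D-CNN objective.

import torch
import torch.nn.functional as F

def dcnn_loss(features, logits, labels, lam=0.05, margin=1.0):
    """features: (B, D) CNN embeddings, logits: (B, C), labels: (B,)."""
    ce = F.cross_entropy(logits, labels)

    # Pairwise Euclidean distances between all feature vectors in the batch.
    dists = torch.cdist(features, features, p=2)
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()

    # Same-class pairs: small distance; different-class pairs: at least `margin` apart.
    pull = (same * dists.pow(2)).sum() / same.sum().clamp(min=1.0)
    push = ((1 - same) * F.relu(margin - dists).pow(2)).sum() / (1 - same).sum().clamp(min=1.0)

    return ce + lam * (pull + push)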