SciSpace (formerly Typeset)
Author

Xudong Kang

Bio: Xudong Kang is an academic researcher from Hunan University. The author has contributed to research on topics including hyperspectral imaging and feature extraction. The author has an h-index of 30 and has co-authored 83 publications receiving 4,929 citations. Previous affiliations of Xudong Kang include Hunan Institute of Science and Technology.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

1,300 citations

Journal ArticleDOI
TL;DR: It is concluded that, although various image fusion methods have been proposed, several future directions remain open across different image fusion applications, and research in the image fusion field is expected to grow significantly in the coming years.

871 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed edge-preserving filtering based classification method can improve the classification accuracy significantly in a very short time and can be easily applied in real applications.
Abstract: The integration of spatial context in the classification of hyperspectral images is known to be an effective way in improving classification accuracy. In this paper, a novel spectral-spatial classification framework based on edge-preserving filtering is proposed. The proposed framework consists of the following three steps. First, the hyperspectral image is classified using a pixelwise classifier, e.g., the support vector machine classifier. Then, the resulting classification map is represented as multiple probability maps, and edge-preserving filtering is conducted on each probability map, with the first principal component or the first three principal components of the hyperspectral image serving as the gray or color guidance image. Finally, according to the filtered probability maps, the class of each pixel is selected based on the maximum probability. Experimental results demonstrate that the proposed edge-preserving filtering based classification method can improve the classification accuracy significantly in a very short time. Thus, it can be easily applied in real applications.

640 citations
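The three-step framework above can be sketched compactly. To keep the example self-contained, a plain mean filter stands in for the guided (edge-preserving) filter and its PCA-derived guidance image; `smooth` and `spatial_refine` are illustrative names, not the authors' code.

```python
import numpy as np

def smooth(p, r):
    """Simple mean filter, standing in for the edge-preserving filter."""
    pad = np.pad(p, r, mode="edge")
    out = np.zeros_like(p)
    h, w = p.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def spatial_refine(prob_maps, r=2):
    """Filter each class-probability map, then take the per-pixel argmax.

    prob_maps: (n_classes, H, W) soft outputs of a pixelwise classifier
    (e.g., an SVM with probability estimates)."""
    filtered = np.stack([smooth(p, r) for p in prob_maps])
    return filtered.argmax(axis=0)
```

Isolated misclassified pixels inside homogeneous regions are voted away by their spatially filtered neighborhoods, which is the mechanism behind the accuracy gain the abstract reports.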

Journal ArticleDOI
TL;DR: This article discusses the so-called curse of dimensionality of hyperspectral images (HSIs), which poses a challenge to conventional techniques for accurate analysis of HSIs.
Abstract: Hyperspectral images (HSIs) provide detailed spectral information through hundreds of (narrow) spectral channels (also known as dimensionality or bands), which can be used to accurately classify diverse materials of interest. The increased dimensionality of such data makes it possible to significantly improve data information content but provides a challenge to conventional techniques (the so-called curse of dimensionality) for accurate analysis of HSIs.

391 citations
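A common remedy for the curse of dimensionality mentioned above is to project the hundreds of spectral bands onto a few principal components before further analysis. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def pca_reduce(cube, k):
    """Project an H x W x B hyperspectral cube onto its top-k principal
    components, reducing the band dimensionality from B to k."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                      # center each band
    cov = X.T @ X / (X.shape[0] - 1)         # B x B band covariance
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return (X @ top).reshape(H, W, k)
```

The first one or three components obtained this way are exactly what the edge-preserving classification framework above uses as its gray or color guidance image.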

Journal ArticleDOI
TL;DR: Considering that regions of different scales carry complementary yet correlated information for classification, a multiscale adaptive sparse representation (MASR) model is proposed; experiments demonstrate its qualitative and quantitative superiority over several well-known classifiers.
Abstract: Sparse representation has been demonstrated to be a powerful tool in classification of hyperspectral images (HSIs). The spatial context of an HSI can be exploited by first defining a local region for each test pixel and then jointly representing pixels within each region by a set of common training atoms (samples). However, the selection of the optimal region scale (size) for different HSIs with different types of structures is a nontrivial task. In this paper, considering that regions of different scales incorporate the complementary yet correlated information for classification, a multiscale adaptive sparse representation (MASR) model is proposed. The MASR effectively exploits spatial information at multiple scales via an adaptive sparse strategy. The adaptive sparse strategy not only restricts pixels from different scales to be represented by training atoms from a particular class but also allows the selected atoms for these pixels to be varied, thus providing an improved representation. Experiments on several real HSI data sets demonstrate the qualitative and quantitative superiority of the proposed MASR algorithm when compared to several well-known classifiers.

304 citations
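The multiscale idea can be illustrated with a much-simplified sketch: each class contributes a small dictionary of training atoms, the test pixel's neighborhoods at several scales are reconstructed from each dictionary, and the class with the smallest total residual wins. Plain least squares stands in here for the paper's adaptive sparse coding, and all names and scales are illustrative.

```python
import numpy as np

def multiscale_label(cube, y, x, dicts, scales=(1, 3, 5)):
    """Label pixel (y, x) of an H x W x B cube by the class whose atoms
    best reconstruct its neighborhoods at several window scales.

    dicts: list of (B, n_atoms) per-class dictionaries of training spectra."""
    residual = np.zeros(len(dicts))
    for s in scales:
        r = s // 2
        patch = cube[max(0, y - r):y + r + 1,
                     max(0, x - r):x + r + 1].reshape(-1, cube.shape[2])
        for c, D in enumerate(dicts):
            coef, *_ = np.linalg.lstsq(D, patch.T, rcond=None)
            residual[c] += np.linalg.norm(patch.T - D @ coef)
    return int(residual.argmin())
```

The real MASR additionally constrains which atoms the different scales may select, which is what makes the sparse strategy "adaptive"; that coupling is omitted here for brevity.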


Cited by
Journal ArticleDOI
TL;DR: A general framework of DL for RS data is provided, and the state-of-the-art DL methods in RS are regarded as special cases of input-output data combined with various deep networks and tuning tricks.
Abstract: Deep-learning (DL) algorithms, which learn the representative and discriminative features in a hierarchical manner from the data, have recently become a hotspot in the machine-learning area and have been introduced into the geoscience and remote sensing (RS) community for RS big data analysis. Considering the low-level features (e.g., spectral and texture) as the bottom level, the output feature representation from the top level of the network can be directly fed into a subsequent classifier for pixel-based classification. As a matter of fact, by carefully addressing the practical demands in RS applications and designing the input–output levels of the whole network, we have found that DL is actually everywhere in RS data analysis: from the traditional topics of image preprocessing, pixel-based classification, and target recognition, to the recent challenging tasks of high-level semantic feature extraction and RS scene understanding.

1,625 citations


Journal ArticleDOI
TL;DR: An end-to-end spectral–spatial residual network (SSRN) that takes raw 3-D cubes as input data without feature engineering is proposed for hyperspectral image classification, achieving state-of-the-art accuracy on agricultural, rural–urban, and urban data sets.
Abstract: In this paper, we designed an end-to-end spectral–spatial residual network (SSRN) that takes raw 3-D cubes as input data without feature engineering for hyperspectral image classification. In this network, the spectral and spatial residual blocks consecutively learn discriminative features from abundant spectral signatures and spatial contexts in hyperspectral imagery (HSI). The proposed SSRN is a supervised deep learning framework that alleviates the declining-accuracy phenomenon of other deep learning models. Specifically, the residual blocks connect every other 3-D convolutional layer through identity mapping, which facilitates the backpropagation of gradients. Furthermore, we impose batch normalization on every convolutional layer to regularize the learning process and improve the classification performance of trained models. Quantitative and qualitative results demonstrate that the SSRN achieved the state-of-the-art HSI classification accuracy in agricultural, rural–urban, and urban data sets: Indian Pines, Kennedy Space Center, and University of Pavia.

1,105 citations
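The identity-mapping shortcut at the heart of the residual blocks can be shown in a few lines. Dense layers replace the 3-D convolutions and batch normalization here to keep the sketch self-contained; `residual_block` is an illustrative name, not the paper's code.

```python
import numpy as np

def residual_block(x, W1, W2):
    """Identity-mapping residual block: out = x + W2 @ relu(W1 @ x).

    In the SSRN the two transforms are 3-D convolutions with batch
    normalization; dense layers keep the sketch short."""
    h = np.maximum(W1 @ x, 0.0)   # first transform + ReLU
    return x + W2 @ h             # identity shortcut eases gradient flow
```

With `W2` set to zero the block is exactly the identity, so stacking many such blocks cannot hurt the representation at initialization; this is one intuition for why residual networks alleviate the declining-accuracy phenomenon of plain deep models.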

Journal ArticleDOI
TL;DR: The authors attempt to fill this gap by providing a critical description and extensive comparison of some of the main state-of-the-art pansharpening methods, including a detailed characterization of their performance with respect to the different instruments.
Abstract: Pansharpening aims at fusing a multispectral and a panchromatic image, featuring the result of the processing with the spectral resolution of the former and the spatial resolution of the latter. In the last decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking, and standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. In this paper, the authors attempt to fill this gap by providing a critical description and extensive comparisons of some of the main state-of-the-art pansharpening methods. In greater detail, several pansharpening algorithms belonging to the component substitution or multiresolution analysis families are considered. Such techniques are evaluated through the two main protocols for the assessment of pansharpening results, i.e., based on the full- and reduced-resolution validations. Five data sets acquired by different satellites allow for a detailed comparison of the algorithms, characterization of their performances with respect to the different instruments, and consistency of the two validation procedures. In addition, the implementation of all the pansharpening techniques considered in this paper and the framework used for running the simulations, comprising the two validation procedures and the main assessment indexes, are collected in a MATLAB toolbox that is made available to the community.

980 citations
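A minimal example of the component-substitution family the paper surveys: swap the intensity component of the upsampled multispectral image for the histogram-matched panchromatic band. This is a generic intensity-substitution sketch, not any specific algorithm from the toolbox, and the function name is illustrative.

```python
import numpy as np

def cs_pansharpen(ms_up, pan):
    """Simplified component-substitution pansharpening.

    ms_up: (H, W, B) multispectral image already upsampled to the PAN grid.
    pan:   (H, W) panchromatic band."""
    intensity = ms_up.mean(axis=2)            # crude intensity component
    # Match PAN's mean/std to the intensity component before substitution
    pan_m = ((pan - pan.mean())
             * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean())
    detail = pan_m - intensity                # spatial detail missing from MS
    return ms_up + detail[:, :, None]         # inject detail into every band
```

Real component-substitution methods differ mainly in how the intensity component is estimated (e.g., weighted band combinations fitted to the sensor's spectral response), which controls the trade-off between spatial sharpness and spectral distortion.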

Journal ArticleDOI
TL;DR: A general image fusion framework combining MST and SR is presented to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods, and experimental results demonstrate that the proposed framework achieves state-of-the-art performance.

952 citations