Author

Hongjun Su

Other affiliations: Nanjing Normal University
Bio: Hongjun Su is an academic researcher from Hohai University. The author has contributed to research in topics including hyperspectral imaging and support vector machines, has an h-index of 20, and has co-authored 50 publications receiving 1,872 citations. Previous affiliations of Hongjun Su include Nanjing Normal University.


Papers
Journal ArticleDOI
TL;DR: The proposed framework employs local binary patterns to extract local image features, such as edges, corners, and spots, and uses the efficient extreme learning machine, which has a very simple structure, as the classifier.
Abstract: Exploiting texture information for classification of hyperspectral imagery (HSI) at high spatial resolution is of great interest. In this paper, a classification paradigm that exploits the rich texture information of HSI is proposed. The framework employs local binary patterns (LBPs) to extract local image features, such as edges, corners, and spots. Two levels of fusion (i.e., feature-level fusion and decision-level fusion) are applied to the extracted LBP features along with global Gabor features and original spectral features: feature-level fusion concatenates multiple features before pattern classification, while decision-level fusion operates on the probability outputs of each individual classification pipeline, with a soft-decision fusion rule adopted to merge the results of the classifier ensemble. Moreover, the efficient extreme learning machine, which has a very simple structure, is employed as the classifier. Experimental results on several HSI data sets demonstrate that the proposed framework is superior to some traditional alternatives.
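The two fusion levels described above can be sketched in a few lines of NumPy. The probability arrays and feature dimensions below are toy stand-ins, not values from the paper:

```python
import numpy as np

# Toy probability outputs of three classification pipelines
# (e.g. LBP, Gabor, spectral features), each: n_samples x n_classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=6) for _ in range(3)]

def decision_level_fusion(prob_list):
    """Soft-decision fusion: average the per-pipeline class
    probabilities, then take the arg-max class per sample."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

def feature_level_fusion(feature_list):
    """Concatenate per-sample feature vectors before classification."""
    return np.concatenate(feature_list, axis=1)

labels = decision_level_fusion(probs)                      # one label per sample
fused = feature_level_fusion([rng.random((6, 10)),         # toy LBP features
                              rng.random((6, 59))])        # toy spectral features
```

Decision-level fusion keeps each pipeline's classifier independent and only merges their soft outputs, whereas feature-level fusion hands one long vector to a single classifier.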

574 citations

Journal ArticleDOI
TL;DR: A new supervised band-selection algorithm is proposed that uses only the known class signatures, without examining the original bands or requiring class training samples, and can therefore complete the task much faster than traditional methods that test bands or band combinations.
Abstract: Band selection is often applied to reduce the dimensionality of hyperspectral imagery. When the desired object information is known, band selection can be achieved by finding the bands that contain the most object information, with the expectation that these bands provide overall satisfactory detection and classification performance. In this letter, we propose a new supervised band-selection algorithm that uses only the known class signatures, without examining the original bands or requiring class training samples. Thus, it can complete the task much faster than traditional methods that test bands or band combinations. Experimental results show that our approach generally yields better results than other popular supervised band-selection methods in the literature.
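A greedy version of the signature-only selection idea can be sketched as follows. The criterion trace(pinv(SᵀS)) is a simplified stand-in for an estimated-abundance-covariance measure, and the signature matrix here is synthetic, not from the paper:

```python
import numpy as np

def band_select_sfs(signatures, n_select):
    """Greedy forward selection using class signatures only: at each
    step, add the band whose inclusion minimises trace(pinv(S^T S)),
    a proxy for the abundance-estimation covariance (sketch only).
    `signatures` is n_bands x n_classes; no training pixels needed."""
    selected, remaining = [], list(range(signatures.shape[0]))
    for _ in range(n_select):
        scores = []
        for b in remaining:
            S = signatures[selected + [b], :]       # candidate band subset
            scores.append(np.trace(np.linalg.pinv(S.T @ S)))
        best = remaining[int(np.argmin(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
sigs = rng.random((100, 3))       # toy: 100 bands, 3 class signatures
bands = band_select_sfs(sigs, 5)
```

Because the criterion touches only the small signature matrix, each step costs far less than retraining a classifier on candidate band subsets.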

249 citations

Journal ArticleDOI
TL;DR: This paper integrates spectral-spatial information for hyperspectral image classification and exploits the benefits of spatial features for the kernel-based ELM (KELM) classifier, demonstrating that the proposed methods outperform conventional pixel-wise classifiers, as well as Gabor-filtering-based and MH-prediction-based support vector machines (SVMs), under challenging small-training-sample conditions.
Abstract: Extreme learning machine (ELM) is a single-hidden-layer feedforward neural network-based classifier that has attracted significant attention in computer vision and pattern recognition due to its fast learning speed and strong generalization. In this paper, we propose to integrate spectral-spatial information for hyperspectral image classification and exploit the benefits of using spatial features for the kernel-based ELM (KELM) classifier. Specifically, Gabor filtering and multihypothesis (MH) prediction preprocessing are the two approaches employed for spatial feature extraction. Gabor features have been successfully applied to hyperspectral image analysis owing to their ability to represent useful spatial information. MH prediction preprocessing makes use of the spatially piecewise-continuous nature of hyperspectral imagery to integrate spectral and spatial information. The proposed Gabor-filtering-based and MH-prediction-based KELM classifiers have been validated on two real hyperspectral datasets. Classification results demonstrate that the proposed methods outperform conventional pixel-wise classifiers, as well as Gabor-filtering-based and MH-prediction-based SVMs, under challenging small-training-sample conditions.

212 citations

Journal ArticleDOI
TL;DR: The experimental results show that the 2PSO-based algorithm outperforms the popular sequential forward selection (SFS) method and PSO with one particle swarm in band selection.
Abstract: A particle swarm optimization (PSO)-based system is proposed to select bands and simultaneously determine the optimal number of bands to be selected; it is near-automatic, with only a few data-independent parameters. The proposed system includes two particle swarms, i.e., an outer swarm for estimating the optimal number of bands and an inner swarm for the corresponding band selection. To avoid employing an actual classifier within PSO, and thus greatly reduce computational cost, criterion functions that gauge class separability are preferred; specifically, minimum estimated abundance covariance (MEAC) and the Jeffries-Matusita (JM) distance are adopted in this research. The experimental results show that the 2PSO-based algorithm outperforms the popular sequential forward selection (SFS) method and single-swarm PSO in band selection.
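The inner band-selection swarm can be sketched with a generic continuous PSO over index vectors. The criterion below is a placeholder for MEAC or JM, and the constants (inertia, acceleration weights) are conventional PSO defaults, not the paper's:

```python
import numpy as np

def pso_band_select(criterion, n_bands, k, n_particles=10, iters=30, seed=0):
    """Continuous PSO sketch: each particle encodes k band indices as
    real numbers; positions are rounded/clipped to valid indices, and
    `criterion` maps an index set to a score to MINIMISE (e.g. a
    class-separability surrogate such as MEAC)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, n_bands, (n_particles, k))
    vel = np.zeros_like(pos)

    def decode(p):                                  # real vector -> band indices
        return np.unique(np.clip(np.round(p).astype(int), 0, n_bands - 1))

    pbest = pos.copy()
    pbest_val = np.array([criterion(decode(p)) for p in pos])
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, n_bands - 1)
        vals = np.array([criterion(decode(p)) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return decode(g)

# Toy criterion (prefers low-index bands) just to exercise the loop.
bands = pso_band_select(lambda idx: float(idx.sum()), n_bands=50, k=3)
```

In the paper's two-swarm design, an outer swarm would vary `k` itself and call an inner search like this one for each candidate band count.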

151 citations

Journal ArticleDOI
TL;DR: To further improve the representation power of CLBP, a multi-scale CLBP (MS-CLBP) descriptor is proposed to characterize the dominant texture features at multiple resolutions; two implementations are compared in terms of classification accuracy and computational complexity.
Abstract: In this paper, we introduce the completed local binary patterns (CLBP) operator, for the first time, to remote sensing land-use scene classification. To further improve the representation power of CLBP, we propose a multi-scale CLBP (MS-CLBP) descriptor that characterizes the dominant texture features at multiple resolutions. Two implementations of MS-CLBP, each equipped with a kernel-based extreme learning machine, are investigated and compared in terms of classification accuracy and computational complexity. The proposed approach is extensively tested on the 21-class land-use dataset and the 19-class satellite scene dataset, showing a consistent performance increase over state-of-the-art methods.
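The multi-scale idea can be illustrated with the sign component of CLBP computed on an average-pooled image pyramid. This is a simplified sketch (plain 8-neighbour sign codes, three dyadic scales), not the paper's full CLBP operator with magnitude and center components:

```python
import numpy as np

def lbp_hist(gray, bins=256):
    """Histogram of 8-neighbour sign codes (the 'S' part of CLBP)."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                               # interior (center) pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]       # clockwise neighbours
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit  # one bit per neighbour
    h, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return h / h.sum()                              # normalised descriptor

def downsample2(gray):
    """2x2 average pooling (crop to even size first)."""
    h, w = gray.shape[0] // 2 * 2, gray.shape[1] // 2 * 2
    g = gray[:h, :w].astype(float)
    return (g[0::2, 0::2] + g[0::2, 1::2] + g[1::2, 0::2] + g[1::2, 1::2]) / 4

def ms_lbp_descriptor(gray, n_scales=3):
    """Concatenate LBP histograms over an image pyramid (multi-scale)."""
    feats, img = [], gray
    for _ in range(n_scales):
        feats.append(lbp_hist(img))
        img = downsample2(img)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))           # toy grayscale scene
d = ms_lbp_descriptor(img)                          # 3 x 256 = 768 dims
```

Coarser pyramid levels let the same small neighbourhood capture texture at effectively larger spatial extents, which is what "multiple resolutions" buys over single-scale CLBP.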

140 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions and linear models for regression and classification, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: A large-scale data set, termed “NWPU-RESISC45,” is proposed, which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU).
Abstract: Remote sensing image scene classification plays an important role in a wide range of applications and hence has been receiving remarkable attention. During the past years, significant efforts have been made to develop various datasets and approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small number of scene classes and images, the lack of image variation and diversity, and the saturation of accuracy. These limitations severely restrict the development of new approaches, especially deep-learning-based methods. This paper first provides a comprehensive review of the recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale in the number of scene classes and the total image count, (ii) holds large variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated on the proposed dataset and the results are reported as a useful baseline for future research.

1,424 citations

Journal ArticleDOI
TL;DR: The Aerial Image Data Set (AID) is a large-scale data set for aerial scene classification, comprising more than 10,000 annotated aerial scene images collected from remote sensing imagery.
Abstract: Aerial scene classification, which aims to automatically label an aerial image with a specific semantic category, is a fundamental problem for understanding high-resolution remote sensing imagery. In recent years, it has become an active task in the remote sensing area, and numerous algorithms have been proposed for it, including many machine learning and data-driven approaches. However, the existing data sets for aerial scene classification, such as the UC-Merced data set and WHU-RS19, are relatively small, and the results on them are already saturated. This largely limits the development of scene classification algorithms. This paper describes the Aerial Image data set (AID): a large-scale data set for aerial scene classification. The goal of AID is to advance the state of the art in scene classification of remote sensing images. For creating AID, we collected and annotated more than 10,000 aerial scene images. In addition, a comprehensive review of existing aerial scene classification techniques, as well as recent widely used deep learning methods, is given. Finally, we provide a performance analysis of typical aerial scene classification and deep learning approaches on AID, which can serve as baseline results on this benchmark.

1,081 citations

Journal ArticleDOI
TL;DR: This paper proposes a simple but effective method to learn discriminative CNNs (D-CNNs) to boost the performance of remote sensing image scene classification and comprehensively evaluates the proposed method on three publicly available benchmark data sets using three off-the-shelf CNN models.
Abstract: Remote sensing image scene classification is an active and challenging task driven by many applications. More recently, with the advances of deep learning models, especially convolutional neural networks (CNNs), the performance of remote sensing image scene classification has been significantly improved due to the powerful feature representations learnt through CNNs. Although great success has been obtained so far, the problems of within-class diversity and between-class similarity remain two major challenges. To address them, in this paper we propose a simple but effective method to learn discriminative CNNs (D-CNNs) that boosts the performance of remote sensing image scene classification. Unlike traditional CNN models that minimize only the cross-entropy loss, our D-CNN models are trained by optimizing a new discriminative objective function: apart from minimizing the classification error, we explicitly impose a metric-learning regularization term on the CNN features. This regularization enforces the D-CNN models to be more discriminative so that, in the new D-CNN feature spaces, images from the same scene class are mapped close to each other while images from different classes are mapped as far apart as possible. In the experiments, we comprehensively evaluate the proposed method on three publicly available benchmark data sets using three off-the-shelf CNN models. Experimental results demonstrate that our D-CNN methods outperform the existing baseline methods and achieve state-of-the-art results on all three data sets.
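The discriminative objective described here — cross-entropy plus a metric-learning regularizer on the features — can be sketched as a plain loss computation. The specific contrastive form, margin, and weight λ below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def dcnn_style_loss(features, logits, labels, lam=0.1, margin=2.0):
    """Cross-entropy plus a contrastive metric-learning penalty:
    same-class feature pairs are pulled together, different-class
    pairs are pushed beyond a margin (simplified sketch)."""
    # numerically stable softmax cross-entropy over the logits
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(labels)), labels].mean()
    # pairwise distances in the feature space
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    pull = (d[same] ** 2).mean()                         # compact classes
    push = (np.maximum(0.0, margin - d[~same]) ** 2).mean()  # separated classes
    return ce + lam * (pull + push)

# Toy comparison: well-separated class features vs. collapsed ones.
labels = np.array([0, 0, 1, 1])
logits = np.array([[5., 0.], [5., 0.], [0., 5.], [0., 5.]])
feats_good = np.array([[0., 0.], [0., 0.], [10., 0.], [10., 0.]])
feats_bad = np.zeros((4, 2))
loss_good = dcnn_style_loss(feats_good, logits, labels)
loss_bad = dcnn_style_loss(feats_bad, logits, labels)
```

With identical logits, the loss is lower for the well-separated features, which is exactly the pressure the regularizer is meant to exert on the learned representation.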

1,001 citations
