Author

Wenzhi Zhao

Bio: Wenzhi Zhao is an academic researcher from Peking University. The author has contributed to research in the topics of deep learning and hyperspectral imaging. The author has an h-index of 1 and has co-authored 1 publication, which has received 296 citations.

Papers
Journal ArticleDOI
TL;DR: Comparative experiments conducted over widely used hyperspectral data indicate that the DCNNs-LR classifier built in this proposed deep learning framework provides better classification accuracy than previous hyperspectral classification methods.
Abstract: In this letter, a novel deep learning framework for hyperspectral image classification using both spectral and spatial features is presented. The framework is a hybrid of principal component analysis, deep convolutional neural networks (DCNNs), and logistic regression (LR). DCNNs, which hierarchically extract deep features, are introduced into hyperspectral image classification for the first time. The proposed technique consists of two steps. First, a feature map generation algorithm is presented to generate the spectral and spatial feature maps. Second, the DCNNs-LR classifier is trained to obtain useful high-level features and to fine-tune the whole model. Comparative experiments conducted over widely used hyperspectral data indicate that the DCNNs-LR classifier built within the proposed deep learning framework provides better classification accuracy than previous hyperspectral classification methods.

422 citations
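A minimal sketch of the kind of pipeline this abstract describes, assuming a hyperspectral cube of shape (H, W, B): PCA reduces the spectral dimension, per-pixel spatial patches form the feature maps, and a small CNN ending in a softmax (logistic regression) layer classifies each patch. Layer sizes, patch size, and the number of principal components are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a PCA + CNN + logistic-regression (softmax) pipeline
# for patch-based hyperspectral classification. Shapes and hyperparameters are
# illustrative, not the authors' exact configuration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def pca_reduce(cube, n_components=3):
    """Reduce the spectral dimension of an (H, W, B) cube with PCA."""
    h, w, b = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    return reduced.reshape(h, w, n_components)

def extract_patches(cube, coords, patch=5):
    """Cut a (patch x patch) spatial neighbourhood around each labelled pixel."""
    r = patch // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = [padded[y:y + patch, x:x + patch, :] for y, x in coords]
    return np.stack(out).transpose(0, 3, 1, 2)  # (N, C, patch, patch)

class DCNN_LR(nn.Module):
    """Small CNN feature extractor followed by a softmax (logistic regression) layer."""
    def __init__(self, in_channels, n_classes, patch=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * patch * patch, n_classes)  # LR layer

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage sketch (assumed data): cube is (H, W, B); coords/y index labelled pixels.
# reduced = pca_reduce(cube)
# X = torch.tensor(extract_patches(reduced, coords), dtype=torch.float32)
# model = DCNN_LR(in_channels=3, n_classes=9)
# loss = nn.CrossEntropyLoss()(model(X), torch.tensor(y))
```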


Cited by
Journal ArticleDOI
TL;DR: A general framework of DL for RS data is provided, and the state-of-the-art DL methods in RS are regarded as special cases of input-output data combined with various deep networks and tuning tricks.
Abstract: Deep-learning (DL) algorithms, which learn the representative and discriminative features in a hierarchical manner from the data, have recently become a hotspot in the machine-learning area and have been introduced into the geoscience and remote sensing (RS) community for RS big data analysis. Considering the low-level features (e.g., spectral and texture) as the bottom level, the output feature representation from the top level of the network can be directly fed into a subsequent classifier for pixel-based classification. As a matter of fact, by carefully addressing the practical demands in RS applications and designing the input–output levels of the whole network, we have found that DL is actually everywhere in RS data analysis: from the traditional topics of image preprocessing, pixel-based classification, and target recognition, to the recent challenging tasks of high-level semantic feature extraction and RS scene understanding.

1,625 citations
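The survey's central observation, that the top-level representation learned by a deep network can be handed directly to a conventional classifier for pixel-based classification, can be illustrated with a short sketch. The encoder and classifier below are placeholders chosen for illustration; the survey itself covers many network and classifier pairings.

```python
# Illustrative sketch only: deep features from a small stand-in encoder are fed
# to a conventional classifier, mirroring the "network output -> classifier"
# view of pixel-based RS classification described in the survey.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

encoder = nn.Sequential(              # stand-in deep feature extractor
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def deep_features(patches):
    """patches: (N, 4, 9, 9) float tensor of pixel neighbourhoods (assumed shape)."""
    with torch.no_grad():
        return encoder(patches).numpy()

# X_train/X_test: (N, 4, 9, 9) tensors, y_train: (N,) labels -- assumed to exist.
# clf = LogisticRegression(max_iter=1000).fit(deep_features(X_train), y_train)
# preds = clf.predict(deep_features(X_test))
```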

Journal ArticleDOI
TL;DR: This article provides an overview of machine learning from an applied perspective, focusing on the relatively mature methods of support vector machines, single decision trees (DTs), Random Forests, boosted DTs, artificial neural networks, and k-nearest neighbours (k-NN).
Abstract: Machine learning offers the potential for effective and efficient classification of remotely sensed imagery. The strengths of machine learning include the capacity to handle data of high dimensionality and to map classes with very complex characteristics. Nevertheless, implementing a machine-learning classification is not straightforward, and the literature provides conflicting advice regarding many key issues. This article therefore provides an overview of machine learning from an applied perspective. We focus on the relatively mature methods of support vector machines, single decision trees (DTs), Random Forests, boosted DTs, artificial neural networks, and k-nearest neighbours (k-NN). Issues considered include the choice of algorithm, training data requirements, user-defined parameter selection and optimization, feature space impacts and reduction, and computational costs. We illustrate these issues through applying machine-learning classification to two publicly available remotely sensed data sets...

919 citations
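The methods this review compares map directly onto standard scikit-learn estimators; the sketch below runs them side by side on a synthetic stand-in feature matrix. The dataset, feature count, and parameter values are assumptions for illustration, and the review stresses that training-data size and parameter tuning matter in practice.

```python
# Hypothetical side-by-side of the classifiers the review covers, using
# scikit-learn defaults on a synthetic stand-in for per-pixel spectral features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data (not a real remotely sensed dataset).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "Boosted DTs": GradientBoostingClassifier(),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:14s} mean accuracy = {scores.mean():.3f}")
```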

Journal ArticleDOI
TL;DR: A spectral-spatial feature based classification (SSFC) framework is proposed that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively.
Abstract: In this paper, we propose a spectral–spatial feature based classification (SSFC) framework that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively. In this framework, a balanced local discriminant embedding algorithm is proposed for spectral feature extraction from high-dimensional hyperspectral data sets. Meanwhile, a convolutional neural network is utilized to automatically find spatially related features at high levels. Then, the fusion feature is extracted by stacking the spectral and spatial features together. Finally, the multiple-feature-based classifier is trained for image classification. Experimental results on well-known hyperspectral data sets show that the proposed SSFC method outperforms other commonly used methods for hyperspectral image classification.

872 citations
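A rough sketch of the SSFC-style fusion under simplifying assumptions: ordinary PCA stands in here for the balanced local discriminant embedding step, a small CNN provides the spatial features, and the two are stacked before a final classifier. All shapes and layer widths are illustrative.

```python
# Sketch of spectral-spatial feature fusion: dimension-reduced spectral features
# (PCA used as a stand-in for balanced local discriminant embedding) are stacked
# with CNN spatial features before a final classifier. Illustrative only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

class SpatialCNN(nn.Module):
    """Small CNN mapping a spatial patch to a fixed-length feature vector."""
    def __init__(self, in_channels=3, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.net(x)

cnn = SpatialCNN()   # spatial branch (untrained here; trained in the real pipeline)

def fuse_features(spectra, patches, n_spectral=10):
    """Stack reduced spectral features with CNN spatial features per sample."""
    spectral = PCA(n_components=n_spectral).fit_transform(spectra)            # (N, n_spectral)
    with torch.no_grad():
        spatial = cnn(torch.tensor(patches, dtype=torch.float32)).numpy()     # (N, 32)
    return np.hstack([spectral, spatial])

# Assumed inputs: spectra (N, B) per-pixel spectra; patches (N, 3, 7, 7) neighbourhoods.
# clf = LogisticRegression(max_iter=1000).fit(fuse_features(spectra, patches), y)
```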

Journal ArticleDOI
TL;DR: An end-to-end framework for the dense, pixelwise classification of satellite imagery with convolutional neural networks (CNNs) is proposed, together with a multiscale neuron module that alleviates the common tradeoff between recognition and precise localization.
Abstract: We propose an end-to-end framework for the dense, pixelwise classification of satellite imagery with convolutional neural networks (CNNs). In our framework, CNNs are directly trained to produce classification maps out of the input images. We first devise a fully convolutional architecture and demonstrate its relevance to the dense classification problem. We then address the issue of imperfect training data through a two-step training approach: CNNs are first initialized by using a large amount of possibly inaccurate reference data, and then refined on a small amount of accurately labeled data. To complete our framework, we design a multiscale neuron module that alleviates the common tradeoff between recognition and precise localization. A series of experiments show that our networks consider a large amount of context to provide fine-grained classification maps.

859 citations
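A minimal fully convolutional sketch of the dense-prediction setup described above: the network maps an input image directly to a per-pixel class map, and the parallel dilated branches stand in for the multiscale neuron module. Channel counts and the two-stage training outline are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal fully convolutional sketch: input image in, per-pixel class map out.
# The parallel dilated convolutions stand in for a multiscale module that trades
# off recognition and localisation; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different dilation rates, concatenated."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class DenseFCN(nn.Module):
    def __init__(self, in_channels=3, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.multiscale = MultiScaleBlock(32)
        self.head = nn.Conv2d(32 * 3, n_classes, 1)   # 1x1 conv -> class scores

    def forward(self, x):                             # (N, C, H, W) -> (N, K, H, W)
        return self.head(torch.relu(self.multiscale(self.encoder(x))))

# Two-step training outline (both loaders assumed): pretrain on a large, possibly
# noisy reference set, then fine-tune on a small accurately labelled set.
# model = DenseFCN()
# for images, noisy_labels in large_noisy_loader:   # step 1: initialisation
#     ...
# for images, clean_labels in small_clean_loader:   # step 2: refinement
#     ...
```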

Journal ArticleDOI
TL;DR: A 3D convolutional neural network framework is proposed for accurate HSI classification; it is lighter, less likely to over-fit, easier to train, and requires fewer parameters than other deep learning-based methods.
Abstract: Recent research has shown that using spectral–spatial information can considerably improve the performance of hyperspectral image (HSI) classification. HSI data is typically presented in the format of 3D cubes. Thus, 3D spatial filtering naturally offers a simple and effective method for simultaneously extracting the spectral–spatial features within such images. In this paper, a 3D convolutional neural network (3D-CNN) framework is proposed for accurate HSI classification. The proposed method views the HSI cube data altogether without relying on any preprocessing or post-processing, extracting the deep spectral–spatial-combined features effectively. In addition, it requires fewer parameters than other deep learning-based methods. Thus, the model is lighter, less likely to over-fit, and easier to train. For comparison and validation, we test the proposed method along with three other deep learning-based HSI classification methods—namely, stacked autoencoder (SAE), deep belief network (DBN), and 2D-CNN-based methods—on three real-world HSI datasets captured by different sensors. Experimental results demonstrate that our 3D-CNN-based method outperforms these state-of-the-art methods and sets a new record.

835 citations
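A compact 3D-CNN sketch in the spirit of the abstract above: 3D convolutions filter small spectral-spatial cubes jointly, with no separate preprocessing step. Kernel sizes, layer widths, band count, and patch size are assumptions for illustration, not the paper's exact architecture.

```python
# Hypothetical 3D-CNN sketch for spectral-spatial HSI patch classification.
# Input patches are (N, 1, bands, patch, patch) cubes; kernel sizes and widths
# are illustrative and not the paper's exact configuration.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, n_classes, bands=30, patch=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
        )
        # Spectral and spatial extents after two valid (unpadded) 3D convolutions.
        d = bands - 7 + 1 - 5 + 1
        s = patch - 2 - 2
        self.classifier = nn.Linear(16 * d * s * s, n_classes)

    def forward(self, x):                      # x: (N, 1, bands, patch, patch)
        return self.classifier(self.features(x).flatten(1))

# Usage sketch with assumed shapes: 30 bands, 7x7 spatial patches, 9 classes.
# model = Simple3DCNN(n_classes=9)
# logits = model(torch.randn(4, 1, 30, 7, 7))   # -> (4, 9)
```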