Author

Xiuping Jia

Bio: Xiuping Jia is an academic researcher from the University of New South Wales. The author has contributed to research in the topics of Hyperspectral imaging and Feature extraction. The author has an h-index of 45 and has co-authored 300 publications receiving 8,158 citations. Previous affiliations of Xiuping Jia include Beijing Normal University and Information Technology University.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery and reveals that the proposed models with sparse constraints provide competitive results to state-of-the-art methods.
Abstract: Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.
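The core operation behind the paper's 3-D CNN is a joint spectral-spatial convolution over a hyperspectral patch. As a rough illustration only (the patch size, kernel size, and band count below are made up, and the paper's actual model stacks several such layers with L2 regularization and dropout), a single valid-mode 3-D convolution followed by a ReLU can be sketched in NumPy:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Valid-mode 3-D convolution (cross-correlation) of a hyperspectral
    patch of shape (height, width, bands) with a single 3-D kernel."""
    h, w, b = cube.shape
    kh, kw, kb = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1, b - kb + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(cube[i:i+kh, j:j+kw, k:k+kb] * kernel)
    return out

rng = np.random.default_rng(0)
patch = rng.standard_normal((9, 9, 20))   # hypothetical 9x9 spatial window, 20 bands
kernel = rng.standard_normal((3, 3, 7))   # one joint spectral-spatial filter
feat = np.maximum(conv3d_valid(patch, kernel), 0.0)  # ReLU nonlinearity
print(feat.shape)  # (7, 7, 14)
```

Because the kernel extends along the band axis as well as the two spatial axes, each output value mixes spectral and spatial context, which is the property the abstract attributes to the 3-D FE model.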

2,059 citations

Journal ArticleDOI
TL;DR: A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on a deep belief network (DBN), along with a novel deep architecture that combines spectral-spatial FE and classification to achieve high classification accuracy.
Abstract: Hyperspectral data classification is a hot topic in the remote sensing community. In recent years, significant effort has been focused on this issue. However, most of the methods extract the features of original data in a shallow manner. In this paper, we introduce a deep learning approach into hyperspectral image classification. A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on deep belief network (DBN). First, we verify the eligibility of restricted Boltzmann machine (RBM) and DBN by the following spectral information-based classification. Then, we propose a novel deep architecture, which combines the spectral–spatial FE and classification together to get high classification accuracy. The framework is a hybrid of principal component analysis (PCA), hierarchical learning-based FE, and logistic regression (LR). Experimental results with hyperspectral data indicate that the classifier provides a competitive solution compared with the state-of-the-art methods. In addition, this paper reveals that deep learning has huge potential for hyperspectral data classification.
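The building block of the DBN described above is the restricted Boltzmann machine, commonly trained with contrastive divergence. The following is a minimal CD-1 update sketch, not the paper's implementation; the layer sizes, learning rate, and toy batch are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.05, rng=None):
    """One CD-1 step for a Bernoulli RBM on a batch of visible vectors v0."""
    if rng is None:
        rng = np.random.default_rng(0)
    ph0 = sigmoid(v0 @ W + b_hid)                    # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b_vis)                  # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_hid)                   # negative phase
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n         # positive minus negative statistics
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, b_hid

rng = np.random.default_rng(1)
v = (rng.random((8, 6)) > 0.5).astype(float)  # toy batch: 8 samples, 6 visible units
W = 0.01 * rng.standard_normal((6, 4))        # 4 hidden units
b_vis, b_hid = np.zeros(6), np.zeros(4)
W, b_vis, b_hid = cd1_update(v, W, b_vis, b_hid, rng=rng)
```

A DBN would stack several such RBMs, training them greedily layer by layer, before the logistic-regression output stage mentioned in the abstract.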

1,028 citations

Journal ArticleDOI
TL;DR: A segmented, and possibly multistage, principal components transformation (PCT) is proposed for efficient hyperspectral remote-sensing image classification and display and results have been obtained in terms of classification accuracy, speed, and quality of color image display using two airborne visible/infrared imaging spectrometer (AVIRIS) data sets.
Abstract: A segmented, and possibly multistage, principal components transformation (PCT) is proposed for efficient hyperspectral remote-sensing image classification and display. The scheme requires, initially, partitioning the complete set of bands into several highly correlated subgroups. After separate transformation of each subgroup, the single-band separabilities are used as a guide to carry out feature selection. The selected features can then be transformed again to achieve a satisfactory data reduction ratio and generate the three most significant components for color display. The scheme reduces the computational load significantly for feature extraction, compared with the conventional PCT. A reduced number of features will also accelerate the maximum likelihood classification process significantly, and the process will not suffer the limitations encountered by trying to use the full set of hyperspectral data when training samples are limited. Encouraging results have been obtained in terms of classification accuracy, speed, and quality of color image display using two airborne visible/infrared imaging spectrometer (AVIRIS) data sets.
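The segmented PCT idea (partition the bands into correlated subgroups, then transform each subgroup separately) can be sketched as follows; the grouping here is hand-picked for illustration, whereas the paper forms subgroups from band correlations:

```python
import numpy as np

def segmented_pct(X, groups, n_keep=2):
    """Apply PCA separately to each band subgroup.

    X: (n_pixels, n_bands) data matrix; groups: list of band-index lists.
    Returns the leading n_keep components of each subgroup, concatenated.
    """
    feats = []
    for g in groups:
        Xg = X[:, g] - X[:, g].mean(axis=0)          # center the subgroup
        cov = np.cov(Xg, rowvar=False)               # subgroup covariance matrix
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1]               # sort by decreasing variance
        feats.append(Xg @ vecs[:, order[:n_keep]])   # project onto top components
    return np.concatenate(feats, axis=1)

X = np.random.default_rng(0).standard_normal((100, 10))  # toy: 100 pixels, 10 bands
Z = segmented_pct(X, groups=[[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]], n_keep=2)
print(Z.shape)  # (100, 6)
```

Diagonalizing several small subgroup covariance matrices is much cheaper than one full-band eigendecomposition, which is the computational saving the abstract claims over the conventional PCT.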

408 citations

Journal ArticleDOI
05 Feb 2013
TL;DR: This paper provides an overview of both conventional and advanced feature reduction methods, with details on a few techniques that are commonly used for analysis of hyperspectral data.
Abstract: Hyperspectral sensors record the reflectance from the Earth's surface over the full range of solar wavelengths with high spectral resolution. The resulting high-dimensional data contain rich information for a wide range of applications. However, for a specific application, not all the measurements are important and useful. The original feature space may not be the most effective space for representing the data. Feature mining, which includes feature generation, feature selection (FS), and feature extraction (FE), is a critical task for hyperspectral data classification. Significant research effort has focused on this issue since hyperspectral data became available in the late 1980s. The feature mining techniques which have been developed include supervised and unsupervised, parametric and nonparametric, linear and nonlinear methods, which all seek to identify the informative subspace. This paper provides an overview of both conventional and advanced feature reduction methods, with details on a few techniques that are commonly used for analysis of hyperspectral data. A general form that represents several linear and nonlinear FE methods is also presented. Experiments using two widely available hyperspectral data sets are included to illustrate selected FS and FE methods.

359 citations

Journal ArticleDOI
TL;DR: A novel multiple kernel learning (MKL) framework to incorporate both spectral and spatial features for hyperspectral image classification, which is called multiple-structure-element nonlinear MKL (MultiSE-NMKL).
Abstract: In this paper, we propose a novel multiple kernel learning (MKL) framework to incorporate both spectral and spatial features for hyperspectral image classification, which is called multiple-structure-element nonlinear MKL (MultiSE-NMKL). In the proposed framework, multiple structure elements (MultiSEs) are employed to generate extended morphological profiles (EMPs) to present spatial–spectral information. In order to better mine interscale and interstructure similarity among EMPs, a nonlinear MKL (NMKL) is introduced to learn an optimal combined kernel from the predefined linear base kernels. We integrate this NMKL with support vector machines (SVMs) and reduce the min–max problem to a simple minimization problem. The optimal weight for each kernel matrix is then solved by a projection-based gradient descent algorithm. The advantages of using nonlinear combination of base kernels and multiSE-based EMP are that similarity information generated from the nonlinear interaction of different kernels is fully exploited, and the discriminability of the classes of interest is deeply enhanced. Experiments are conducted on three real hyperspectral data sets. The experimental results show that the proposed method achieves better performance for hyperspectral image classification, compared with several state-of-the-art algorithms. The MultiSE EMPs can provide much higher classification accuracy than using a single-SE EMP.
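For illustration, the idea of learning a combined kernel from predefined base kernels can be sketched with a simple convex (linear) combination of RBF kernels; note that the paper's MultiSE-NMKL combines base kernels nonlinearly and optimizes the weights with a projection-based gradient descent algorithm, neither of which is reproduced in this toy sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, gammas, weights):
    """Convex combination of RBF base kernels at different bandwidths
    (a linear MKL stand-in for the paper's nonlinear combination)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # project onto the simplex
    return sum(wi * rbf_kernel(X, X, g) for wi, g in zip(w, gammas))

X = np.random.default_rng(0).standard_normal((5, 3))   # toy samples
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[1.0, 1.0, 1.0])
```

Because the weights lie on the simplex and each base kernel is positive semidefinite, the combined matrix remains a valid kernel that an SVM can consume directly.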

215 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This paper addresses the classification of hyperspectral remote sensing images by support vector machines, assessing the potential of SVM classifiers in hyperdimensional feature spaces, and concludes that SVMs are a valid and effective alternative to conventional pattern recognition approaches.
Abstract: This paper addresses the problem of the classification of hyperspectral remote sensing images by support vector machines (SVMs). First, we propose a theoretical discussion and experimental analysis aimed at understanding and assessing the potentialities of SVM classifiers in hyperdimensional feature spaces. Then, we assess the effectiveness of SVMs with respect to conventional feature-reduction-based approaches and their performances in hypersubspaces of various dimensionalities. To sustain such an analysis, the performances of SVMs are compared with those of two other nonparametric classifiers (i.e., radial basis function neural networks and the K-nearest neighbor classifier). Finally, we study the potentially critical issue of applying binary SVMs to multiclass problems in hyperspectral data. In particular, four different multiclass strategies are analyzed and compared: the one-against-all, the one-against-one, and two hierarchical tree-based strategies. Different performance indicators have been used to support our experimental studies in a detailed and accurate way, i.e., the classification accuracy, the computational time, the stability to parameter setting, and the complexity of the multiclass architecture. The results obtained on a real Airborne Visible/Infrared Imaging Spectroradiometer hyperspectral dataset allow us to conclude that, whatever the multiclass strategy adopted, SVMs are a valid and effective alternative to conventional pattern recognition approaches (feature-reduction procedures combined with a classification method) for the classification of hyperspectral remote sensing data.
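The one-against-all strategy analyzed in the abstract trains one binary scorer per class and picks the class with the highest score. The sketch below substitutes a ridge-regularized least-squares scorer for the binary SVMs, purely to keep the example self-contained, and runs it on made-up toy clusters:

```python
import numpy as np

def one_vs_all_fit(X, y, n_classes):
    """Fit one ridge-regularized linear scorer per class
    (a stand-in for the binary SVMs discussed in the paper)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])        # append bias term
    A = Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1])           # ridge-regularized normal matrix
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)                  # class c vs the rest
        W[c] = np.linalg.solve(A, Xb.T @ t)
    return W

def one_vs_all_predict(X, W):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.argmax(Xb @ W.T, axis=1)                   # highest binary score wins

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # toy 3-class data
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat(np.arange(3), 30)
acc = (one_vs_all_predict(X, one_vs_all_fit(X, y, 3)) == y).mean()
```

One-against-one would instead train a scorer per class pair and vote, trading more (smaller) binary problems for the single large ones trained here.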

3,607 citations

Journal ArticleDOI
TL;DR: This paper reviews remote sensing implementations of support vector machines (SVMs), a promising machine learning methodology that is particularly appealing in the remote sensing field due to their ability to generalize well even with limited training samples.
Abstract: A wide range of methods for analysis of airborne- and satellite-derived imagery continues to be proposed and assessed. In this paper, we review remote sensing implementations of support vector machines (SVMs), a promising machine learning methodology. This review is timely due to the exponentially increasing number of works published in recent years. SVMs are particularly appealing in the remote sensing field due to their ability to generalize well even with limited training samples, a common limitation for remote sensing applications. However, they also suffer from parameter assignment issues that can significantly affect obtained results. A summary of empirical results is provided for various applications of over one hundred published works (as of April, 2010). It is our hope that this survey will provide guidelines for future applications of SVMs and possible areas of algorithm enhancement.

2,546 citations

Book
01 Jan 1997
TL;DR: The Nature of Remote Sensing: Introduction, Sensor Characteristics and Spectral Statistics, and Spatial Transforms: Introduction.
Abstract: The Nature of Remote Sensing: Introduction. Remote Sensing. Information Extraction from Remote-Sensing Images. Spectral Factors in Remote Sensing. Spectral Signatures. Remote-Sensing Systems. Optical Sensors. Temporal Characteristics. Image Display Systems. Data Systems. Summary. Exercises. References. Optical Radiation Models: Introduction. Visible to Short Wave Infrared Region. Solar Radiation. Radiation Components. Surface-Reflected, Unscattered Component. Surface-Reflected, Atmosphere-Scattered Component. Path-Scattered Component. Total At-Sensor Solar Radiance. Image Examples in the Solar Region. Terrain Shading. Shadowing. Atmospheric Correction. Midwave to Thermal Infrared Region. Thermal Radiation. Radiation Components. Surface-Emitted Component. Surface-Reflected, Atmosphere-Emitted Component. Path-Emitted Component. Total At-Sensor Emitted Radiance. Total Solar and Thermal Upwelling Radiance. Image Examples in the Thermal Region. Summary. Exercises. References. Sensor Models: Introduction. Overall Sensor Model. Resolution. The Instrument Response. Spatial Resolution. Spectral Resolution. Spectral Response. Spatial Response. Optical PSFopt. Image Motion PSFIM. Detector PSFdet. Electronics PSFel. Net PSFnet. Comparison of Sensor PSFs. PSF Summary for TM. Imaging System Simulation. Amplification. Sampling and Quantization. Simplified Sensor Model. Geometric Distortion. Orbit Models. Platform Attitude Models. Scanner Models. Earth Model. Line and Whiskbroom Scan Geometry. Pushbroom Scan Geometry. Topographic Distortion. Summary. Exercises. References. Data Models: Introduction. A Word on Notation. Univariate Image Statistics. Histogram. Normal Distribution. Cumulative Histogram. Statistical Parameters. Multivariate Image Statistics. Reduction to Univariate Statistics. Noise Models. Statistical Measures of Image Quality. Contrast. Modulation. Signal-to-Noise Ratio (SNR). Noise Equivalent Signal. Spatial Statistics. 
Visualization of Spatial Covariance. Covariance with Semivariogram. Separability and Anisotropy. Power Spectral Density. Co-occurrence Matrix. Fractal Geometry. Topographic and Sensor Effects. Topography and Spectral Statistics. Sensor Characteristics and Spectral Statistics. Sensor Characteristics and Spectral Scattergrams. Summary. Exercises. References. Spectral Transforms: Introduction. Feature Space. Multispectral Ratios. Vegetation Indexes. Image Examples. Principal Components. Standardized Principal Components (SPC) Transform. Maximum Noise Fraction (MNF) Transform. Tasseled Cap Transformation. Contrast Enhancement. Transformations Based on Global Statistics. Linear Transformations. Nonlinear Transformations. Normalization Stretch. Reference Stretch. Thresholding. Adaptive Transformation. Color Image Contrast Enhancement. Min-max Stretch. Normalization Stretch. Decorrelation Stretch. Color Space Transformations. Summary. Exercises. References. Spatial Transforms: Introduction. An Image Model for Spatial Filtering. Convolution Filters. Low Pass and High Pass Filters. High Boost Filters. Directional Filters. The Border Region. Characterization of Filtered Images. The Box Filter Algorithm. Cascaded Linear Filters. Statistical Filters. Gradient Filters. Fourier Synthesis. Discrete Fourier Transforms in 2-D. The Fourier Components. Filtering with the Fourier Transform. Transfer Functions. The Power Spectrum. Scale Space Transforms. Image Resolution Pyramids. Zero-Crossing Filters. Laplacian-of-Gaussian (LoG) Filters. Difference-of-Gaussians (DoG) Filters. Wavelet Transforms. Summary. Exercises. References. Correction and Calibration: Introduction. Noise Correction. Global Noise. Sigma Filter. Nagao-Matsuyama Filter. Local Noise. Periodic Noise. Destriping. Global, Linear Detector Matching. Nonlinear Detector Matching. Statistical Modification to Linear and Nonlinear Detector Matching. Spatial Filtering Approaches. Radiometric Calibration. Sensor Calibration. 
Atmospheric Correction. Solar and Topographic Correction. Image Examples. Calibration and Normalization of Hyperspectral Imagery. AVIRIS Examples. Distortion Correction. Polynomial Distortion Models. Ground Control Points (GCPs). Coordinate Transformation. Map Projections. Resampling. Summary. Exercises. References. Registration and Image Fusion: Introduction. What is Registration? Automated GCP Location. Area Correlation. Other Spatial Features. Orthorectification. Low-Resolution DEM. High-Resolution DEM. Hierarchical Warp Stereo. Multi-Image Fusion. Spatial Domain Fusion. High Frequency Modulation. Spectral Domain Fusion. Fusion Image Examples. Summary. Exercises. References. Thematic Classification: Introduction. The Importance of Image Scale. The Notion of Similarity. Hard Versus Soft Classification. Training the Classifier. Supervised Training. Unsupervised Training. K-Means Clustering Algorithm. Clustering Examples. Hybrid Supervised/Unsupervised Training. Non-Parametric Classification Algorithms. Level-Slice. Nearest-Mean. Artificial Neural Networks (ANNs). Back-Propagation Algorithm. Nonparametric Classification Examples. Parametric Classification Algorithms. Estimation of Model-Parameters. Discriminant Functions. The Normal Distribution Model. Relation to the Nearest-Mean Classifier. Supervised Classification Examples and Comparison to Nonparametric Classifiers. Segmentation. Region Growing. Region Labeling. Sub-Pixel Classification. The Linear Mixing Model. Unmixing Model. Hyperspectral Image Analysis. Visualization of the Image Cube. Feature Extraction. Image Residuals. Pre-Classification Processing and Feature Extraction. Classification Algorithms. Exercises. Error Analysis. Multitemporal Images. Summary. References. Index.

2,290 citations

Dissertation
01 Jan 1975

2,119 citations