Journal ArticleDOI

Hyperspectral target detection based on transform domain adaptive constrained energy minimization

TL;DR: Wang et al., as discussed by the authors, proposed a fractional-domain revised constrained energy minimization detector in which a sliding double-window strategy is used to make full use of the local spatial statistics of the pixel under test.

About: This article was published in International Journal of Applied Earth Observation and Geoinformation on 2021-12-01 and is currently open access. It has received 3 citations to date. The article focuses on the topics: Fractional Fourier transform & Hyperspectral imaging.
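The article's detector builds on constrained energy minimization (CEM), which designs a linear filter with unit response to a known target signature while minimizing the average output energy over the scene. As a rough illustration of the classical CEM baseline only (not the article's fractional-domain, double-window variant), here is a minimal numpy sketch on synthetic data; all names and values are hypothetical:

```python
import numpy as np

def cem_detector(X, d, eps=1e-6):
    """Constrained Energy Minimization filter.

    X : (N, B) array of N pixel spectra over B bands.
    d : (B,) target signature.
    The filter w = R^{-1} d / (d^T R^{-1} d) minimizes average output
    energy subject to a unit response to d, so the background is
    suppressed while target-bearing pixels score near 1.
    """
    N, B = X.shape
    R = X.T @ X / N + eps * np.eye(B)   # sample correlation matrix, regularized
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)           # enforce w @ d == 1
    return X @ w

rng = np.random.default_rng(0)
B = 20
d = rng.random(B) + 0.5                 # synthetic target signature
X = 0.1 * rng.random((100, B))          # dim synthetic background
X[:5] += d                              # plant the target in 5 pixels
scores = cem_detector(X, d)
```

Target-bearing pixels score close to 1 while background pixels are suppressed toward 0, which is why CEM needs only the target signature and the scene correlation matrix, not a background model.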
Citations
Journal ArticleDOI
Wei Yao, Lu Li, Hongyu Ni, Wei Liu, Ran Tao 
TL;DR: This work proposes a non-convex regularized approximation model based on low-rank and sparse matrix decomposition (LRSNCR), which is closer to the original problem than RPCA and achieves better detection performance.
Abstract: The low-rank and sparse decomposition model, especially the robust principal component analysis (RPCA) model, has been widely adopted for hyperspectral image anomaly detection in recent years. In the RPCA model, however, minimizing the ℓ0 operator, which applies to both the low-rank and the sparse terms, is an NP-hard problem. A common approach is to relax the ℓ0 operator to the ℓ1-norm in the traditional RPCA model, so as to approximately transform the problem into the convex optimization field. However, the solution obtained by convex approximation often suffers from over-penalization and inaccuracy. On this basis, we propose a non-convex regularized approximation model based on low-rank and sparse matrix decomposition (LRSNCR), which is closer to the original problem than RPCA. The weighted nuclear norm (WNNM) and the capped ℓ2,1-norm replace the low-rank term and the sparse term of the matrix, respectively. Based on the proposed model, an effective optimization algorithm is then given. Finally, experimental results on four real hyperspectral image datasets show that the proposed LRSNCR achieves better detection performance.

9 citations
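The convex baseline that the abstract above relaxes away from (nuclear norm + ℓ1, not LRSNCR's WNNM/capped-ℓ2,1 surrogates) can be sketched with inexact augmented-Lagrangian iterations. This is a toy illustration on synthetic data, not the paper's algorithm:

```python
import numpy as np

def soft(X, t):
    """Elementwise soft-thresholding (proximal map of the l1-norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca(M, lam=None, iters=60):
    """Convex RPCA: min ||L||_* + lam*||S||_1  s.t.  M = L + S,
    via inexact augmented Lagrangian (ALM) iterations."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable init
    mu, rho = 1.25 / norm2, 1.5
    L = S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt        # singular-value thresholding
        S = soft(M - L + Y / mu, lam / mu)      # sparse update
        Y = Y + mu * (M - L - S)                # dual ascent
        mu *= rho
    return L, S

rng = np.random.default_rng(0)
u, v = rng.random(30), rng.random(30)
L0 = np.outer(u, v)                             # rank-1 "background"
S0 = np.zeros((30, 30))
idx = rng.choice(900, size=12, replace=False)
S0.flat[idx] = 4.0                              # sparse "anomalies"
M = L0 + S0
L, S = rpca(M)
```

On this easy toy the low-rank background and sparse anomalies separate almost exactly; the abstract's point is that non-convex surrogates reduce the over-penalization the ℓ1/nuclear relaxation introduces on harder data.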

Journal ArticleDOI
TL;DR: Wang et al., as discussed by the authors, proposed a novel collaborative-guided spectral abundance learning model (CGSAL) for subpixel target detection in hyperspectral images, based on the bilinear mixing model.

1 citation

Journal ArticleDOI
TL;DR: In this article, a methodology is developed using features extracted from hyperspectral reflectance (HR), chlorophyll fluorescence imaging (CFI), and high-throughput phenotyping (HTP) for asymptomatic-to-symptomatic disease detection across two consecutive years of experiments.
Abstract: The growth of the fusarium head blight (FHB) pathogen at the grain formation stage is a deadly threat to wheat production through disruption of the photosynthetic processes of wheat spikes. Real-time, nondestructive, and frequent proxy detection approaches are necessary for controlling pathogen propagation and targeting fungicide application. Therefore, this study examined chlorophyll-related phenotypes, or features, from spectral and chlorophyll fluorescence data for FHB monitoring. A methodology is developed using features extracted from hyperspectral reflectance (HR), chlorophyll fluorescence imaging (CFI), and high-throughput phenotyping (HTP) for asymptomatic-to-symptomatic disease detection across two consecutive years of experiments. The disease-sensitive features were selected using the Boruta feature-selection algorithm and subjected to machine learning-sequential floating forward selection (ML-SFFS) for the optimum feature combination. The results demonstrated that the biochemical parameters, HR, CFI, and HTP showed consistent alterations during the spike–pathogen interaction. Among the selected disease-sensitive features, reciprocal reflectance (RR = 1/700) demonstrated the highest coefficient of determination (R²) of 0.81, with a root mean square error (RMSE) of 11.1. The multivariate k-nearest neighbor model outperformed the competing multivariate and univariate models with an overall accuracy of R² = 0.92 and RMSE = 10.21. A combination of two to three kinds of features was found optimum for asymptomatic disease detection using ML-SFFS, with an average classification accuracy of 87.04% that gradually improved to 95% at a disease severity level of 20%. The study demonstrated that the fusion of chlorophyll-related phenotypes with ML-SFFS might be a good choice for crop disease detection.

1 citation
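The ML-SFFS pipeline above wraps a classifier inside a feature-search loop. As a stripped-down sketch of that idea — plain greedy forward selection scored by leave-one-out 1-nearest-neighbor accuracy, omitting SFFS's "floating" backward step — on synthetic data (all features and labels here are made up):

```python
import numpy as np

def loo_knn_acc(X, y):
    """Leave-one-out 1-nearest-neighbor accuracy."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # a sample may not match itself
    return np.mean(y[np.argmin(D, axis=1)] == y)

def forward_select(X, y, k):
    """Greedily add the feature that most improves LOO 1-NN accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_f, best_acc = None, -1.0
        for f in remaining:
            acc = loo_knn_acc(X[:, selected + [f]], y)
            if acc > best_acc:
                best_f, best_acc = f, acc
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

rng = np.random.default_rng(1)
n = 80
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 6))              # six candidate "features"
X[:, 2] += 3 * y                         # features 2 and 4 carry the class signal
X[:, 4] -= 3 * y
sel = forward_select(X, y, 2)
```

The search picks an informative feature first because it alone separates the classes under the LOO criterion; the real SFFS additionally revisits and drops earlier picks when a later combination makes them redundant.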

References
Journal ArticleDOI
TL;DR: A technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest is described.
Abstract: Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volume/dimensionality, without loss of critical information, so that it can be processed efficiently and assimilated by a human analyst. The authors describe a technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel vector onto a subspace which is orthogonal to the undesired signatures. This operation is an optimal interference suppression process in the least squares sense. Once the interfering signatures have been nulled, projecting the residual onto the signature of interest maximizes the signal-to-noise ratio and results in a single component image that represents a classification for the signature of interest. The orthogonal subspace projection (OSP) operator can be extended to k signatures of interest, thus reducing the dimensionality to k and classifying the hyperspectral image simultaneously. The approach is applicable to both spectrally pure as well as mixed pixels.

1,570 citations
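The OSP operator described above is just the orthogonal-complement projector P = I − U U⁺ followed by correlation with the desired signature. A minimal numpy sketch with synthetic signatures (all values hypothetical):

```python
import numpy as np

def osp_scores(X, d, U):
    """Orthogonal Subspace Projection detector.

    X : (N, B) pixel spectra, d : (B,) desired signature,
    U : (B, k) matrix whose columns are the undesired signatures.
    Each pixel is projected onto the orthogonal complement of span(U)
    (nulling the interference), then correlated with d.
    """
    B = d.shape[0]
    P = np.eye(B) - U @ np.linalg.pinv(U)   # projector onto span(U)^perp
    return X @ (P @ d)                      # P is symmetric, so this is d^T P x

rng = np.random.default_rng(1)
B = 12
d = rng.random(B)                           # desired signature
U = rng.random((B, 2))                      # two interfering signatures
x_bg = U @ np.array([2.0, 1.0])             # pure-interference pixel
x_t = d + U @ np.array([0.5, 0.3])          # target plus interference
scores = osp_scores(np.stack([x_bg, x_t]), d, U)
```

The pure-interference pixel scores (numerically) zero because it lies entirely in the nulled subspace, while the target-bearing pixel keeps a positive response of dᵀPd regardless of how much interference it contains.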

Journal ArticleDOI
TL;DR: This work focuses on detection algorithms that assume multivariate normal distribution models for HSI data and presents some results which illustrate the performance of some detection algorithms using real hyperspectral imaging (HSI) data.
Abstract: We introduce key concepts and issues, including the effects of atmospheric propagation upon the data, spectral variability, mixed pixels, and the distinction between classification and detection algorithms. We focus on detection algorithms that assume multivariate normal distribution models for hyperspectral imaging (HSI) data. Detection algorithms for full-pixel targets are developed using the likelihood ratio approach. Subpixel target detection, which is more challenging due to background interference, is pursued using both statistical and subspace models for the description of spectral variability. Finally, we provide some results which illustrate the performance of some detection algorithms on real HSI data, illustrate the potential deviation of HSI data from normality, and point to some distributions that may serve in the development of algorithms with better or more robust performance.

1,170 citations
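Under the multivariate normal model, the full-pixel likelihood-ratio detector reduces to a linear matched filter in the whitened space. A sketch with a synthetic background distribution (the normalization, data, and names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def matched_filter(X, s, mu, Sigma):
    """Linear matched filter from the Gaussian likelihood-ratio test.

    Scores are affinely normalized: 0 at the background mean mu,
    1 at the target signature s.
    """
    w = np.linalg.solve(Sigma, s - mu)   # whitened target direction
    w = w / ((s - mu) @ w)               # unit response to s - mu
    return (X - mu) @ w

rng = np.random.default_rng(2)
B = 8
mu = rng.random(B)
A = rng.normal(size=(B, B))
Sigma = A @ A.T / B + 0.1 * np.eye(B)    # synthetic background covariance
Xb = rng.multivariate_normal(mu, Sigma, size=500)
s = mu + 2.0 * rng.random(B) + 0.5       # synthetic target signature
bg_scores = matched_filter(Xb, s, mu, Sigma)
t_score = matched_filter(s[None, :], s, mu, Sigma)[0]
```

Background pixels score around zero while a pixel equal to the target signature scores exactly one; the abstract's caveat is that real HSI backgrounds deviate from this Gaussian assumption, which degrades the filter's theoretical optimality.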

Journal ArticleDOI
TL;DR: A new minibatch GCN is developed that can infer out-of-sample data without retraining networks and improves classification performance; three fusion strategies (additive, elementwise multiplicative, and concatenation) are explored to measure the obtained performance gain.
Abstract: Convolutional neural networks (CNNs) have been attracting increasing attention in hyperspectral (HS) image classification due to their ability to capture spatial–spectral feature representations. Nevertheless, their ability to model relations between samples remains limited. Beyond the limitations of grid sampling, graph convolutional networks (GCNs) have been recently proposed and successfully applied in irregular (or nongrid) data representation and analysis. In this article, we thoroughly investigate CNNs and GCNs (qualitatively and quantitatively) in terms of HS image classification. Because the adjacency matrix is constructed on all the data, traditional GCNs usually suffer from a huge computational cost, particularly in large-scale remote sensing (RS) problems. To this end, we develop a new minibatch GCN (called miniGCN hereinafter), which allows large-scale GCNs to be trained in a minibatch fashion. More significantly, our miniGCN is capable of inferring out-of-sample data without retraining networks while improving classification performance. Furthermore, as CNNs and GCNs can extract different types of HS features, an intuitive solution to break the performance bottleneck of a single model is to fuse them. Since miniGCNs can perform batchwise network training (enabling the combination of CNNs and GCNs), we explore three fusion strategies (additive fusion, elementwise multiplicative fusion, and concatenation fusion) to measure the obtained performance gain. Extensive experiments, conducted on three HS data sets, demonstrate the advantages of miniGCNs over GCNs and the superiority of the tested fusion strategies with regard to single CNN or GCN models. The codes of this work will be available at https://github.com/danfenghong/IEEE_TGRS_GCN for the sake of reproducibility.

560 citations
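The per-layer cost the abstract refers to comes from the normalized-adjacency propagation rule of a GCN. Below is a one-layer numpy sketch, plus a naive "minibatch" application to an induced subgraph; note the subgraph version drops edges that leave the batch, which is exactly the approximation a minibatch sampler has to manage — the paper's actual sampling scheme may differ, so treat this as an assumption-laden toy:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)               # ReLU activation

# 4-node path graph, one-hot node features, toy weights
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)
W = np.ones((4, 2))
out_full = gcn_layer(A, H, W)

# minibatch flavour: propagate only over the subgraph induced by a batch
batch = np.array([0, 1])
out_batch = gcn_layer(A[np.ix_(batch, batch)], H[batch], W)
```

Full-graph propagation touches the whole N×N adjacency every step, which is the scaling bottleneck; batching the induced subgraph keeps each step small at the cost of approximating cross-batch neighborhoods.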

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the new hyperspectral measure, the spectral information measure (SIM), can characterize spectral variability more effectively than the commonly used spectral angle mapper (SAM).
Abstract: A hyperspectral image can be considered as an image cube where the third dimension is the spectral domain represented by hundreds of spectral wavelengths. As a result, a hyperspectral image pixel is actually a column vector with dimension equal to the number of spectral bands and contains valuable spectral information that can be used to account for pixel variability, similarity and discrimination. We present a new hyperspectral measure, the spectral information measure (SIM), to describe spectral variability and two criteria, spectral information divergence and spectral discriminatory probability for spectral similarity and discrimination, respectively. The spectral information measure is an information-theoretic measure which treats each pixel as a random variable using its spectral signature histogram as the desired probability distribution. Spectral information divergence (SID) compares the similarity between two pixels by measuring the probabilistic discrepancy between two corresponding spectral signatures. The spectral discriminatory probability calculates spectral probabilities of a spectral database (library) relative to a pixel to be identified so as to achieve material identification. In order to compare the discriminatory power of one spectral measure relative to another, a criterion is also introduced for performance evaluation, which is based on the power of discriminating one pixel from another relative to a reference pixel. The experimental results demonstrate that the new hyperspectral measure can characterize spectral variability more effectively than the commonly used spectral angle mapper (SAM).

505 citations
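The SID measure described above normalizes each spectrum into a probability distribution and sums the two Kullback-Leibler divergences. A minimal numpy sketch with made-up spectra:

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence between two spectra.

    Each spectrum is normalized to a probability distribution; SID is the
    symmetric sum of the two KL divergences between those distributions.
    """
    p = x / x.sum()
    q = y / y.sum()
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

a = np.array([0.2, 0.4, 0.6, 0.8])   # toy spectrum
b = 2.0 * a                          # same shape, different brightness
c = a[::-1].copy()                   # reversed spectral shape
```

Because of the normalization, SID is invariant to overall brightness (sid(a, b) is zero) yet sensitive to spectral shape (sid(a, c) is not), which is the sense in which it characterizes variability that an angle-only measure like SAM can miss on some pairs.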

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed detector may outperform the traditional detection methods such as the classic Reed-Xiaoli (RX) algorithm, the kernel RX algorithm, and the state-of-the-art robust principal component analysis based and sparse-representation-based anomaly detectors, with low computational cost.
Abstract: In this paper, collaborative representation is proposed for anomaly detection in hyperspectral imagery. The algorithm is directly based on the concept that each pixel in the background can be approximately represented by its spatial neighborhoods, while anomalies cannot. The representation is assumed to be a linear combination of neighboring pixels, and the collaboration of representation is reinforced by ℓ2-norm minimization of the representation weight vector. To adjust the contribution of each neighboring pixel, a distance-weighted regularization matrix is included in the optimization problem, which has a simple closed-form solution. By imposing a sum-to-one constraint on the weight vector, the stability of the solution can be enhanced. The major advantage of the proposed algorithm is its capability of adaptively modeling the background even when anomalous pixels are involved. A kernel extension of the proposed approach is also studied. Experimental results indicate that the proposed detector may outperform traditional detection methods such as the classic Reed-Xiaoli (RX) algorithm and the kernel RX algorithm, as well as state-of-the-art robust-principal-component-analysis-based and sparse-representation-based anomaly detectors, at low computational cost.

480 citations
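The closed-form ℓ2 solution mentioned in the abstract is an ordinary ridge regression per pixel. The sketch below is deliberately simplified — every other pixel serves as the background dictionary, omitting the sliding dual window, distance weighting, and sum-to-one constraint of the published CRD — and runs on synthetic data:

```python
import numpy as np

def crd_scores(X, lam=1e-2):
    """Collaborative-representation anomaly scores (simplified).

    For each pixel y, the remaining pixels form a dictionary D, y is
    represented by the ridge solution w = (D^T D + lam*I)^{-1} D^T y,
    and the score is the residual ||y - D w||.  Background pixels are
    well represented by their peers; anomalies are not.
    """
    N, B = X.shape
    scores = np.empty(N)
    for i in range(N):
        D = np.delete(X, i, axis=0).T                    # (B, N-1) dictionary
        y = X[i]
        w = np.linalg.solve(D.T @ D + lam * np.eye(N - 1), D.T @ y)
        scores[i] = np.linalg.norm(y - D @ w)
    return scores

rng = np.random.default_rng(3)
X = 1.0 + 0.1 * rng.normal(size=(20, 30))                # homogeneous background
X[7, :5] += 5.0                                          # one anomalous pixel
scores = crd_scores(X)
```

The anomalous pixel's residual dominates because its spectral shape lies largely outside the span of the background pixels; the regularization keeps the per-pixel solve stable and closed-form, which is where the method's low computational cost comes from.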