
Showing papers in "Journal of Applied Remote Sensing in 2018"


Journal ArticleDOI
TL;DR: In this article, the two most common classification algorithms, random forests (RF) and support vector machine (SVM), were applied to conduct cropland classification from multispectral sensor (MSI) data.
Abstract: The identification and mapping of crops are important for estimating potential harvest as well as for agricultural field management. Optical remote sensing is one of the most attractive options because it offers vegetation indices and some data have been distributed free of charge. In particular, Sentinel-2A, which is equipped with a multispectral sensor (MSI) with blue, green, red, and near-infrared-1 bands at 10 m; red edge 1 to 3, near-infrared-2, and shortwave infrared 1 and 2 at 20 m; and three atmospheric bands (bands 1, 9, and 10) at 60 m, offers a number of vegetation indices for assessing vegetation status. However, sufficient consideration has not been given to the potential of vegetation indices calculated from MSI data. Thus, 82 published indices were calculated and their importance was evaluated for classifying crop types. The two most common classification algorithms, random forests (RF) and support vector machine (SVM), were applied to conduct cropland classification from MSI data. Of the two algorithms, SVM was superior, with overall accuracies of 89.3% to 92.0%. Additionally, stacking (super learning) was applied for further improvement and contributed higher overall accuracies (90.2% to 92.2%), with significant differences confirmed against the SVM and RF results. Our results showed that vegetation indices made the greatest contributions to identifying specific crop types.
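The stacked-generalization step the abstract describes can be sketched with scikit-learn's `StackingClassifier`. Everything below is illustrative: the features are random stand-ins for the paper's 82 vegetation indices, and the class count and resulting accuracy bear no relation to the study's numbers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel vegetation-index features (the paper uses 82 indices).
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stack RF and SVM base learners under a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print(f"stacked overall accuracy: {stack.score(X_te, y_te):.3f}")
```

By default the meta-learner is trained on cross-validated predictions of the base learners, which is what lets stacking beat either base model.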

102 citations


Journal ArticleDOI
TL;DR: In this article, the authors test Sentinel-1 SAR multitemporal data, supported by multispectral and SAR data at other wavelengths, for fine-scale mapping of above-ground biomass (AGB) at the provincial level in a Mediterranean forested landscape.
Abstract: The objective of this research is to test Sentinel-1 SAR multitemporal data, supported by multispectral and SAR data at other wavelengths, for fine-scale mapping of above-ground biomass (AGB) at the provincial level in a Mediterranean forested landscape. The regression results indicate good prediction accuracy (R2=0.7) using integrated sensors when an upper bound of 400 Mg ha−1 is applied in modeling. Multitemporal SAR information was relevant, allowing the selection of optimal Sentinel-1 data, as broadleaf forests showed a different backscatter response throughout the year. Similar prediction accuracy was obtained using SAR multifrequency data or joint SAR and optical data. Predictions based on SAR data alone were more conservative than those based on joint data types, and more in line with estimates from an independent National Forest Inventory sample. The potential of Sentinel-1 data for predicting AGB could be improved further if models were developed per group (deciduous or evergreen species) or per forest type and fitted with a larger range of ground data. Overall, this research shows the usefulness of Sentinel-1 data for mapping biomass at very high resolution in local studies, even at considerable carbon densities.
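The capped-biomass regression can be sketched as follows. The backscatter values, the saturation behavior, and the choice of a random-forest regressor are all illustrative assumptions, not the authors' model; the point is only the 400 Mg/ha upper bound applied before fitting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic plot data: four multitemporal SAR backscatter values (dB) per plot,
# saturating around 400 Mg/ha, as predictors of AGB (Mg/ha).
agb = rng.uniform(0, 600, 300)
backscatter = -15 + 0.01 * np.minimum(agb, 400) + rng.normal(0, 0.3, (4, 300))
X = backscatter.T

# Apply the 400 Mg/ha upper bound used in the paper's modeling before fitting.
mask = agb <= 400
model = RandomForestRegressor(random_state=0).fit(X[mask], agb[mask])
pred = model.predict(X[mask])
r2 = 1 - np.sum((agb[mask] - pred) ** 2) / np.sum((agb[mask] - agb[mask].mean()) ** 2)
print(f"in-sample R^2: {r2:.2f}")
```

Capping the training range avoids asking the model to resolve biomass levels where the SAR signal has already saturated.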

95 citations


Journal ArticleDOI
TL;DR: The Multi-Angle Imager for Aerosols (MAIA) as mentioned in this paper improved on MISR's sensitivity to airborne particle composition by incorporating polarimetry and expanded spectral range, and used the resulting exposure estimates over globally distributed target areas to investigate the association of particle species with population health effects.
Abstract: Inhalation of airborne particulate matter (PM) is associated with a variety of adverse health outcomes. However, the relative toxicity of specific PM types (mixtures of particles of varying sizes, shapes, and chemical compositions) is not well understood. A major impediment has been the sparse distribution of surface sensors, especially those measuring speciated PM. Aerosol remote sensing from Earth orbit offers the opportunity to improve our understanding of the health risks associated with different particle types and sources. The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite has demonstrated the value of near-simultaneous observations of backscattered sunlight from multiple view angles for remote sensing of aerosol abundances and particle properties over land. The Multi-Angle Imager for Aerosols (MAIA) instrument, currently in development, improves on MISR's sensitivity to airborne particle composition by incorporating polarimetry and an expanded spectral range. Spatiotemporal regression relationships generated using collocated surface monitor and chemical transport model data will be used to convert fractional aerosol optical depths retrieved from MAIA observations to near-surface PM10, PM2.5, and speciated PM2.5. Health scientists on the MAIA team will use the resulting exposure estimates over globally distributed target areas to investigate the association of particle species with population health effects.

63 citations


Journal ArticleDOI
TL;DR: A blocks-based object-based image classification (BOBIC) method that takes advantage of CNN to automatically extract complex features from the original image data, thereby avoiding the uncertainty caused by the manual extraction of features in OBIC.
Abstract: Convolutional neural networks (CNNs) have shown great success in computer vision tasks, but their application to land-use classification within the context of object-based image analysis has rarely been explored, especially for the identification of irregular segmentation objects. Thus, a blocks-based object-based image classification (BOBIC) method was proposed to carry out end-to-end classification of segmentation objects using a CNN. Specifically, BOBIC takes advantage of the CNN to automatically extract complex features from the original image data, thereby avoiding the uncertainty caused by manual feature extraction in object-based image classification (OBIC). Additionally, OBIC compensates for the shortcoming of CNNs that makes it difficult to delineate a clear boundary for ground objects at the pixel level. Using three high-resolution test images, the proposed BOBIC was compared with support vector machine (SVM) and random forest (RF) classifiers, and the effects of image blocks and mixed objects on classification accuracy were evaluated. Compared with conventional SVM and RF classifiers, the inclusion of the CNN improved OBIC classification performance substantially (5% to 10% increases in overall accuracy), and it also alleviated the effect of mixed objects.
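One simple way to fuse per-block CNN outputs into a single label per irregular segmentation object, which is a plausible reading of the blocks-based idea rather than the paper's exact rule, is to average the blocks' class probabilities and take the arg-max class:

```python
import numpy as np

def object_label(block_probs):
    """Fuse per-block CNN class probabilities into one label per segmentation object."""
    # Average the softmax outputs of all blocks sampled from the object,
    # then take the arg-max class (one simple fusion rule; the paper's details differ).
    return int(np.argmax(np.mean(block_probs, axis=0)))

# Hypothetical example: 5 blocks drawn from one irregular object, 3 land-use classes.
probs = np.array([[0.2, 0.7, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.6, 0.2]])
print(object_label(probs))  # → 1
```

Averaging probabilities before the arg-max is more robust to a single mislabeled block than majority voting on hard labels.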

51 citations


Journal ArticleDOI
TL;DR: The proposed UFCN (U-shaped FCN) is an FCN architecture, which is comprised of a stack of convolutions followed by corresponding stack of mirrored deconvolutions with the usage of skip connections in between for preserving the local information.
Abstract: Road extraction from imagery acquired by low-altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed-wing UAV with a high-spatial-resolution visible-spectrum (RGB) camera as the payload. Deep learning techniques, particularly the fully convolutional network (FCN), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture comprising a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between to preserve local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to increase its effective size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all performance measures. Further, the prediction time of the proposed UFCN model is comparable with those of the SVM, 1D-CNN, and 2D-CNN models. (C) 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)

49 citations


Journal ArticleDOI
TL;DR: In this paper, a histogram-matching approach was used to transfer AVIRIS-derived oil volume to MODIS pixel-scale dimensions, after masking clouds under both sun glint and nonglint conditions.
Abstract: The Deepwater Horizon (DWH) oil blowout in the Gulf of Mexico (GoM) led to the largest offshore oil spill in U.S. history. The accident resulted in oil slicks that covered between 10,000 and upward of 40,000 km2 of the Gulf between April and July 2010. Quantifying the actual spatial extent of oil over such synoptic scales on an operational basis and, in particular, estimating the oil volume (or slick thickness) of large oil slicks on the ocean surface has proven to be a challenge to researchers and responders alike. This challenge must be addressed to assess and understand impacts on marine and coastal resources and to prepare a response to future spills. We estimated surface oil volume and probability of occurrence of different oil thicknesses during the DWH blowout in the GoM by combining synoptic measurements (2330-km swath) from the satellite-borne NASA Moderate Resolution Imaging Spectroradiometer (MODIS) and near-concurrent, much narrower swath (∼5 km) hyperspectral observations from the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). A histogram-matching approach was used to transfer AVIRIS-derived oil volume to MODIS pixel-scale dimensions, after masking clouds under both sun glint and nonglint conditions. Probability functions were used to apply the transformation to 19 MODIS images collected during the DWH event. This generated three types of MODIS oil maps: maps of surface oil volume, maps of relative oil thickness with four different classes (i.e., 0 μm, 8 μm), and maps of probability distributions of different thicknesses. The results were compared with satellite-based synthetic aperture radar measurements and evaluated with concurrent aerial photographs. Although the methods may not be ideal and the results may contain large uncertainties, this attempt suggests that coarse-resolution optical remote sensing observations can provide estimates of relative oil thickness/volume for large oil slicks captured by satellites.
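Histogram matching itself is a standard operation: remap one dataset so its empirical distribution matches a reference distribution. A minimal sketch, with random arrays standing in for the coarse MODIS-scale values and the fine AVIRIS-derived oil volumes:

```python
import numpy as np

def histogram_match(source, reference):
    """Map source values so their empirical CDF matches the reference's."""
    s_idx = np.argsort(source)
    s_quantiles = np.linspace(0, 1, source.size)
    r_sorted = np.sort(reference)
    r_quantiles = np.linspace(0, 1, reference.size)
    matched = np.empty_like(source, dtype=float)
    # Each source rank is assigned the reference value at the same quantile.
    matched[s_idx] = np.interp(s_quantiles, r_quantiles, r_sorted)
    return matched

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the two sensors' value distributions.
modis = rng.normal(0.5, 0.1, 10_000)
aviris = rng.gamma(2.0, 1.5, 2_000)
out = histogram_match(modis, aviris)
print(np.mean(aviris), np.mean(out))  # means nearly coincide after matching
```

The mapping preserves the rank order of the source pixels while forcing their value distribution onto the reference's, which is what lets narrow-swath AVIRIS statistics be carried over to MODIS scales.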

43 citations


Journal ArticleDOI
TL;DR: Experiments on three widely used hyperspectral datasets demonstrate that the proposed transfer learning method can improve the classification performance and competitive classification results can be achieved when compared with state-of-the-art methods.
Abstract: Deep learning methods have recently been successfully explored for hyperspectral image classification. However, they may not perform well when training samples are scarce. A deep transfer learning method is proposed to improve hyperspectral image classification performance in situations with limited training samples. First, a Siamese network composed of two convolutional neural networks is designed for local image descriptor extraction. Subsequently, the pretrained Siamese network model is reused to transfer knowledge to the hyperspectral image classification task by feeding deep features extracted from each band into a recurrent neural network. In this way, a deep convolutional recurrent neural network is constructed for hyperspectral image classification. Finally, the entire network is fine-tuned with a small number of labeled samples. An important characteristic of the designed model is that the deep convolutional recurrent neural network provides a way of utilizing spatial–spectral features without dimension reduction. Furthermore, the transfer learning method provides an opportunity to train such a deep model with limited labeled samples. Experiments on three widely used hyperspectral datasets demonstrate that the proposed transfer learning method can improve classification performance, and competitive classification results can be achieved when compared with state-of-the-art methods.

39 citations


Journal ArticleDOI
TL;DR: In this article, the utility of vegetation health (VH) indices derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Visible Infrared Imaging Radiometer Suite (VIIRS) as a proxy for modeling Australian wheat from the National Oceanic and Atmospheric Administration (NOAA) operational afternoon polar-orbiting satellites was discussed.
Abstract: An early warning of crop losses in response to weather fluctuations helps farmers, governments, traders, and policy makers better monitor global food supply and demand and identifies nations in need of aid. This paper discusses the utility of vegetation health (VH) indices, derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the National Oceanic and Atmospheric Administration (NOAA) operational afternoon polar-orbiting satellites, as a proxy for modeling Australian wheat. These models are used to assess wheat production and to provide an early warning of drought-related losses. The NOAA AVHRR- and VIIRS-based VH indices were used to model wheat yield in Australia. A strong correlation (≥0.7) between wheat yield and VH indices was found during the critical reproductive stage of development (when crop sensitivity to weather is enhanced), which starts 2 to 3 weeks before and ends 2 to 3 weeks after wheat heading. The results of modeling and independent testing proved that the VH indices (especially those estimating thermal and health conditions) are a good proxy, providing yield predictions 1 to 2 months before harvest (with 3% to 6% error). With the new generation of NOAA-20 operational polar-orbiting satellites, launched in November 2017, the VH method will be improved considerably in crop/pasture prediction, spatial resolution, and accuracy.
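The correlation screening between a VH index near heading and end-of-season yield, followed by a simple linear yield model, can be sketched in a few lines. The VH values, the yield response, and the noise level below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical series: a VH index observed around the heading stage each year,
# and the corresponding end-of-season wheat yield (t/ha), over 30 seasons.
years = 30
vh_heading = rng.uniform(20, 80, years)              # VH index on a 0-100 scale
yield_t_ha = 0.03 * vh_heading + rng.normal(0, 0.3, years) + 0.5

# Correlation screening: the paper reports r >= 0.7 near the heading stage.
r = np.corrcoef(vh_heading, yield_t_ha)[0, 1]

# A simple linear yield model fitted on the screened index.
slope, intercept = np.polyfit(vh_heading, yield_t_ha, 1)
pred = slope * vh_heading + intercept
print(f"r = {r:.2f}")
```

In operational use the fitted line would be applied to the current season's VH observation 1 to 2 months before harvest.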

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a brief review of MODIS L1B calibration algorithms, including a number of improvements made in recent years, and present an update of sensor calibration uncertainty assessments with a focus on several new contributors resulting from on-orbit changes in sensor characteristics.
Abstract: The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instruments have successfully operated for more than 18 and 16 years, respectively, on board NASA's Earth Observing System spacecraft. Both Terra and Aqua MODIS have significantly contributed to the advance of global Earth remote sensing applications with a broad range of science products that have been continuously produced since the beginning of each mission and freely distributed to users worldwide. MODIS collects data in 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB), covering wavelengths from 0.41 to 14.4 μm. Its level 1B (L1B) data products, which provide the input for the MODIS high-level science products, include the top-of-atmosphere reflectance factors for the RSB, radiances for both the RSB and TEB, and associated uncertainty indices (UI) at the pixel level. This paper provides a brief review of MODIS L1B calibration algorithms, including a number of improvements made in recent years. It presents an update of sensor calibration uncertainty assessments with a focus on several new contributors resulting from on-orbit changes in sensor characteristics, the approaches developed to address these changes, and the impact of on-orbit changes on L1B data quality. Also discussed are remaining challenges and potential improvements to be made to continuously maintain sensor calibration and data quality, particularly those related to the quality of the MODIS L1B uncertainty.

36 citations


Journal ArticleDOI
TL;DR: An effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank, which builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes.
Abstract: The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most work directly adopts the deep features for the classification of remote sensing images and does not encode the deep features to improve their discriminative power, which can limit the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than those directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which allows it to incorporate multiscale information and regions of arbitrary resolution and size. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves performance when compared with other state-of-the-art methods.

35 citations


Journal ArticleDOI
TL;DR: In this article, a multitiered unsupervised classification was performed, using the Southwest Regional Gap Analysis Project (SWReGAP) map as a guide, to map vegetation types for the San Carlos Apache Tribe.
Abstract: Mapping of vegetation types is of great importance to the San Carlos Apache Tribe and their management of forestry and fire fuels. Various remote sensing techniques were applied to classify multitemporal Landsat 8 satellite data, vegetation index, and digital elevation model data. A multitiered unsupervised classification generated over 900 classes that were then recoded to one of 16 generalized vegetation/land cover classes using the Southwest Regional Gap Analysis Project (SWReGAP) map as a guide. A supervised classification was also run using field data collected in the SWReGAP project and our field campaign. Field data were gathered and accuracy assessments were generated to compare outputs. Our hypothesis was that the resulting map would update and potentially improve upon the vegetation/land cover class distributions of the older SWReGAP map over the 24,000 km2 study area. The estimated overall accuracies ranged between 43% and 75%, depending on which method and field dataset were used. The findings demonstrate the complexity of vegetation mapping, the importance of recent, high-quality field data, and the potential for misleading results when insufficient field data are collected.

Journal ArticleDOI
TL;DR: A new loss function and convolutional neural network model are proposed that can maximize the intraclass compactness and interclass separation simultaneously and is suitable for the task of SAR ship classification.
Abstract: Ship classification in synthetic aperture radar (SAR) images is essential in remote sensing but still full of challenges in the deep learning era. The unbalanced dataset and the lack of suitable models are two limitations. Upsampling with data augmentation and ratio batching are proposed to solve the first problem. Upsampling with data augmentation is upsampling by cropping and flipping; it improves the diversity of the dataset. Ratio batching is realized by choosing the same number of ships per class in each minibatch; it makes the model converge faster and better. To solve the second problem, a new loss function and convolutional neural network model are proposed. The new loss function maximizes the intraclass compactness and interclass separation simultaneously. The dense residual network has two submodules: one is the identity mapping through elementwise summation, to reuse old features; the other is the dense connection through concatenation, to exploit new features. The designed architecture is suitable for the task of SAR ship classification. We use the confusion matrix and accuracy averaged over classes to measure performance. The experiments show that the proposed ideas perform excellently, especially on the rare classes.
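Ratio batching, as described, draws the same number of ships from every class into each minibatch so rare classes are never swamped. A minimal sketch on a toy unbalanced label set (the class sizes and batch size are invented):

```python
import numpy as np

def ratio_batches(labels, per_class, rng):
    """Yield minibatch index arrays containing `per_class` samples of every class."""
    classes = np.unique(labels)
    pools = {c: rng.permutation(np.flatnonzero(labels == c)) for c in classes}
    # The rarest class limits how many balanced batches one pass can produce.
    n_batches = min(len(p) for p in pools.values()) // per_class
    for b in range(n_batches):
        batch = np.concatenate([pools[c][b * per_class:(b + 1) * per_class]
                                for c in classes])
        yield rng.permutation(batch)

rng = np.random.default_rng(0)
# Unbalanced toy label set: 100 samples of class 0, only 12 of the "rare" class 1.
labels = np.array([0] * 100 + [1] * 12)
for batch in ratio_batches(labels, per_class=4, rng=rng):
    counts = np.bincount(labels[batch], minlength=2)
    assert counts[0] == counts[1] == 4  # every minibatch is class-balanced
```

In practice the majority-class pool would be reshuffled across epochs so its unused samples are eventually seen; pairing this with upsampling of the rare class extends each pass.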

Journal ArticleDOI
TL;DR: The Geostationary Environment Monitoring Spectrometer (GEMS) as mentioned in this paper is a hyperspectral spectrometer that covers the ultraviolet-visible range (300 to 500 nm) with full-width at half-maximum of 0.6 nm.
Abstract: To consistently observe deteriorating air quality over East Asia, the National Institute of Environmental Research, Republic of Korea, is planning to launch an environmental observation sensor, the Geostationary Environment Monitoring Spectrometer (GEMS), onboard the GK-2B satellite (a successor to the GeoKOMPSAT-1) in late 2019. GEMS is a hyperspectral spectrometer that covers the ultraviolet–visible range (300 to 500 nm) with full-width at half-maximum of 0.6 nm. It has been designed for the observation of air pollutants and short-lived climate pollutants. GEMS captures images at hourly intervals in the daytime, alternating with the Geostationary Ocean Color Imager-II every 30 min. Over the Seoul Special Metropolitan area, South Korea, the spatial sampling resolution of GEMS is 3.5 × 8 km (north–south and east–west, respectively). There are 16 baseline products, including aerosol optical depth and the vertical column density of trace gases such as nitrogen dioxide, sulfur dioxide, formaldehyde, and ozone. Research continues into additional applications (e.g., ground-level concentrations and emissions).

Journal ArticleDOI
TL;DR: A hybrid method for subpixel target detection in an HSI is developed that improves the average DR by more than a factor of 10 compared with the traditional MF and ACE algorithms, which use N-FINDR target extraction and the Reed–Xiaoli detector for background estimation.
Abstract: A hyperspectral image (HSI) has high spectral and low spatial resolution. As a result, most targets exist as subpixels, which poses challenges during target detection. Moreover, limitations of target and background samples always hinder detection performance. In this study, a hybrid method for subpixel target detection in an HSI is developed. The scores of the matched filter (MF) and adaptive cosine estimator (ACE) are used to construct a hybrid detection space. The reference target spectrum and background covariance matrix are improved iteratively, based on the distribution property of targets, using the hybrid detection space. As the iterative process proceeds, the reference target spectra get closer to the central line connecting the centers of the target and the background, resulting in a noticeable improvement in target detection. One synthetic dataset and two real datasets are used in the experiments. The results are evaluated based on the mean detection rate (DR), receiver operating characteristic curve, and observations of the detection results. For the synthetic experiment, the hybrid method improves the average DR by more than a factor of 10 compared with the traditional MF and ACE algorithms, which use N-FINDR target extraction and the Reed–Xiaoli detector for background estimation.
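The MF and ACE scores that span the hybrid detection space have standard textbook forms, sketched here on synthetic data. The background statistics, target spectrum, and 30% subpixel fill fraction are assumptions for illustration, not values from the paper.

```python
import numpy as np

def mf_ace_scores(pixels, target, mu, cov):
    """Matched-filter and adaptive-cosine-estimator scores per pixel (textbook forms)."""
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    d = pixels - mu                      # (n_pixels, n_bands)
    sCs = s @ cov_inv @ s
    num = d @ cov_inv @ s                # projection of each pixel onto the target
    mf = num / sCs
    ace = num ** 2 / (sCs * np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return mf, ace

rng = np.random.default_rng(3)
bands = 20
mu = np.zeros(bands)
cov = np.eye(bands) * 0.05
target = np.ones(bands)
background = rng.multivariate_normal(mu, cov, 200)
# Subpixel targets: 30% target fill fraction mixed into background clutter.
subpixel = 0.3 * target + rng.multivariate_normal(mu, cov, 5)

mf_b, ace_b = mf_ace_scores(background, target, mu, cov)
mf_t, ace_t = mf_ace_scores(subpixel, target, mu, cov)
print(mf_t.mean(), mf_b.mean())  # target pixels score markedly higher
```

The MF score estimates the fill fraction directly (near 0.3 here), while ACE measures the angle to the target direction; plotting one against the other gives the two-dimensional detection space the method iterates in.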

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the global and longitudinal trends of MODIS land surface temperature (LST) data applications and found that the publications of papers related to MODIS LST data have been steadily rising annually.
Abstract: The Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites, which provides a very high temporal (four times per day) and spatial (1 km) resolution, has become one of the most important and widely used sensors for a broad range of applications. We analyze 529 articles from 159 journals in the Scopus database from 2009 to 2018 to understand the global and longitudinal trends of MODIS land surface temperature (LST) data applications. The results show that the publications of papers related to MODIS LST data have been steadily rising annually. They spanned 19 subject areas from environmental, agricultural, and biological science to social science and medicine, indicating a wide range of MODIS LST data applications. Among the 159 journals, Remote Sensing of Environment, Remote Sensing, and the International Journal of Remote Sensing published the most articles. The study also showed that urban heat island (UHI), air temperature estimation/mapping (Ta estimation), soil moisture, evapotranspiration estimation, and drought monitoring/estimation were the most popular applications of MODIS LST data. Furthermore, we discuss the strengths, limitations, and future direction of research using MODIS LST data.

Journal ArticleDOI
TL;DR: In this paper, the authors explored an additional parameter, soil moisture (SM), examined its influence on desert locust wingless juveniles, and used two machine learning algorithms (generalized linear model and random forest) to evaluate the link between hopper presences and soil moisture conditions under different time scenarios.
Abstract: Desert locusts have attacked crops since antiquity. To prevent or mitigate their effects on local communities, it is necessary to precisely locate their breeding areas. Previous works have relied on precipitation and vegetation index datasets obtained by satellite remote sensing. However, these products present some limitations in arid or semiarid environments. We explored an additional parameter, soil moisture (SM), and examined its influence on desert locust wingless juveniles (hoppers). We used two machine learning algorithms, the generalized linear model (GLM) and random forest (RF), to evaluate the link between hopper presences and SM conditions under different time scenarios. RF obtained the best model performance, with very good validation results according to the true skill statistic and receiver operating characteristic curve statistics. It was found that an area becomes suitable for breeding when the minimum SM values are over 0.07 m3/m3 for 6 days or more. These results demonstrate the possibility of identifying breeding areas in Mauritania by means of SM, and the suitability of the ESA CCI SM product to complement or substitute current monitoring techniques based on precipitation datasets.
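The reported breeding rule (minimum SM above 0.07 m3/m3 for 6 or more consecutive days) is easy to express directly as a sliding-window test. The two SM time series below are invented examples.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def breeding_suitable(sm_series, threshold=0.07, days=6):
    """True if SM stays above `threshold` m^3/m^3 for `days` consecutive days."""
    windows = sliding_window_view(sm_series, days)   # all runs of `days` samples
    return np.any(windows.min(axis=1) > threshold)

# Invented daily SM series (m^3/m^3): one persistently dry, one with a wet spell.
dry = np.full(30, 0.03)
wet = np.concatenate([np.full(10, 0.03), np.full(8, 0.09), np.full(12, 0.04)])
print(breeding_suitable(dry), breeding_suitable(wet))  # → False True
```

Applied per pixel of a gridded SM product such as ESA CCI SM, the same test yields a map of candidate breeding areas.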

Journal ArticleDOI
TL;DR: The design of an improved 3D convolutional neural network model for HSI classification that extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands.
Abstract: Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
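The core operation, a 3-D convolution sliding jointly over the two spatial dimensions and the spectral dimension of the HSI cube, can be illustrated with SciPy. The cube size, the random kernel, and the border handling are arbitrary choices here; a trained 3D-CNN would use learned kernels, multiple channels, and nonlinearities.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(7)
# Toy HSI cube: 16 x 16 pixels x 8 spectral bands.
cube = rng.normal(size=(16, 16, 8))

# One 3-D kernel spanning 3x3 spatially and 3 adjacent bands, as in a 3D-CNN layer.
kernel = rng.normal(size=(3, 3, 3)) / 27.0
feature_map = convolve(cube, kernel, mode="constant")
print(feature_map.shape)  # → (16, 16, 8): a joint spatial-spectral response per voxel
```

Because the kernel extends across adjacent bands, each output value mixes spatial context with local spectral structure, which is exactly the discrimination information the abstract attributes to 3-D convolutions.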

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors explored three techniques for estimating soil salt content from Landsat data: partial least squares regression (PLSR), support vector machine (SVM), and deep learning (DL).
Abstract: This study explored three techniques for estimating soil salt content from Landsat data. First, 127 in situ measured hyperspectral reflectance spectra were collected and resampled to the spectral resolution of the reflectance bands of Landsat 5 and Landsat 8, respectively. Second, 12 soil salt indices (SSI) summarized from previous literature were computed from the simulated Landsat bands. Third, the 127 measurement groups with Landsat bands and SSI were randomly divided into training (102) and testing (25) subgroups. Three techniques, partial least squares regression (PLSR), support vector machine (SVM), and deep learning (DL), were used to establish soil salinity models using the SSI and the simulated Landsat bands as independent variables (IV). Results indicated that PLSR with SSI performed best for both simulated Landsat 5 and Landsat 8 data. Compared with PLSR, SVM underestimated soil salt content, whereas DL produced overly centralized estimates and failed to capture the lower and upper observations. We recommend the PLSR model with SSI as IV to estimate soil salt content because it can identify >66% of moderate-to-high-saline soils, which indicates its great potential for soil salt monitoring in arid or semiarid regions.

Journal ArticleDOI
TL;DR: In this article, high-power laser pulses irradiated a concrete surface to generate vibration that can be detected by an optical interferometer, which was constructed using photorefractive crystal.
Abstract: High-speed laser remote sensing of defects inside a concrete specimen was demonstrated. In the proposed measurement setup, high-power laser pulses irradiated a concrete surface to generate vibration that can be detected by an optical interferometer, which was constructed using photorefractive crystal. The laser-based remote sensing system achieved inspection speeds of 25 Hz. The predominant frequency of a mock-up defect that was embedded in a concrete specimen was measured. The inspection result was identical to that obtained using a conventional hammering method.

Journal ArticleDOI
TL;DR: The experimental results prove that the proposed feature extraction technique attains significant classification performance in terms of the OA, average accuracy, and Cohen’s kappa coefficient (k) when compared to the other competing methods.
Abstract: The presence of a significant amount of information in a hyperspectral image makes it suitable for numerous applications. However, extracting suitable and informative features from the high-dimensional data is a tedious task. A feature extraction technique using expectation–maximization (EM) clustering and a weighted average fusion technique is proposed. The Bhattacharyya distance measure is used for computing the distance between all the spectral bands. With this distance information, the spectral bands are grouped into clusters by the EM clustering method. The EM algorithm automatically converges to an optimum number of clusters, thereby removing the need to prespecify the number of clusters. The bands in each cluster are fused together using the weighted average fusion method. The weight of each band is calculated on the basis of minimizing the distance within its cluster and maximizing the distance among the different clusters. The fused bands from each cluster are then taken as the extracted features. These features are used to train a support vector machine for classification of the hyperspectral image. The performance of the proposed technique has been validated on three small standard benchmark datasets, Indian Pines, Pavia University, and Salinas, and one large dataset, Botswana. The proposed method achieves an overall accuracy (OA) of 92.19%, 94.10%, 93.96%, and 84.92% for the Indian Pines, Pavia University, Salinas, and Botswana datasets, respectively. The experimental results show that the proposed technique attains significant classification performance in terms of the OA, average accuracy, and Cohen's kappa coefficient (k) when compared with the other competing methods.
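Under a univariate Gaussian model per band, the Bhattacharyya distance used to compare bands has a closed form. The three synthetic "bands" below are invented to show that redundant bands score as close and would therefore land in the same cluster.

```python
import numpy as np

def bhattacharyya(b1, b2):
    """Bhattacharyya distance between two bands under a univariate Gaussian model."""
    m1, m2, v1, v2 = b1.mean(), b2.mean(), b1.var(), b2.var()
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

rng = np.random.default_rng(5)
# Three hypothetical band reflectance samples: two near-duplicates, one distinct.
band_a = rng.normal(0.40, 0.05, 1000)
band_b = band_a + rng.normal(0, 0.005, 1000)   # highly redundant with band_a
band_c = rng.normal(0.70, 0.10, 1000)

d_ab = bhattacharyya(band_a, band_b)
d_ac = bhattacharyya(band_a, band_c)
print(d_ab < d_ac)  # redundant bands are far closer, so they cluster together
```

Feeding the full pairwise distance matrix to the EM clustering step groups redundant bands, after which each cluster is collapsed by the weighted average fusion described above.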

Journal ArticleDOI
TL;DR: A fast IAA (FIAA)-based super-resolution DBS imaging method, taking advantage of the rich matrix structures of the classical narrow-band filtering, using the Hermitian feature of the echo autocorrelation matrix R to achieve its fast solution and uses the Toeplitz structure of R to realize its fast inversion.
Abstract: Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) suffer from low spectral resolution and high side-lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), has been applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower side-lobe levels. However, the IAA is iterative and requires matrix multiplications and inversions to form the covariance matrix and its inverse, as well as a traversal of the WLS estimates for each sampling point, resulting in a notably high, cubic computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we form the covariance matrix via the FFT instead of the conventional matrix multiplication, exploiting the typical Fourier structure of the steering matrix. Then, using the Gohberg–Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson–Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve a fast solution and the Toeplitz structure of R to realize a fast inversion. The proposed method enjoys a lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and operational efficiency.
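The two structural shortcuts behind the speed-up can be illustrated with SciPy, which exposes a Levinson-type Toeplitz solver. This toy example (synthetic real symmetric Toeplitz data, not DBS echoes) checks an O(n²) Toeplitz solve and an O(n log n) FFT-based Toeplitz matrix-vector product against dense O(n³) references.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

rng = np.random.default_rng(1)
n = 64
# symmetric Toeplitz "covariance" defined by its first column; a large
# diagonal keeps the toy matrix well conditioned
c = np.r_[float(n) + 2.0, rng.standard_normal(n - 1)]
R = toeplitz(c)
y = rng.standard_normal(n)

# O(n^2) Levinson-Durbin-type solve vs. O(n^3) generic dense solve
x_fast = solve_toeplitz(c, y)
x_ref = np.linalg.solve(R, y)

def toeplitz_matvec(col, row, v):
    # O(n log n) Toeplitz matrix-vector product by embedding the
    # Toeplitz matrix into a 2n x 2n circulant and using the FFT
    n = len(v)
    circ = np.r_[col, 0.0, row[1:][::-1]]
    prod = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(np.r_[v, np.zeros(n)]))
    return prod[:n].real

z = toeplitz_matvec(c, c, y)   # should match the dense product R @ y
```

The same FFT trick underlies forming the covariance matrix from the Fourier-structured steering matrix, and the Levinson-Durbin recursion is the workhorse behind the Gohberg-Semencul inverse representation.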

Journal ArticleDOI
TL;DR: Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
Abstract: A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of a randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from its columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from its rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (the background component), a noisy matrix (the noise component), and a sparse anomaly matrix (the anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
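The sampling-and-projection skeleton can be sketched in a few lines of NumPy on a synthetic scene. Plain SVD stands in here for the inexact-ALM CWRPCA purification step, and the anomalies are deliberately planted outside the sampled columns so that the toy background subspace stays clean; everything (scene size, rank, sample size) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands, r = 5000, 50, 3

# toy low-rank background: every pixel spectrum lies in an r-dim subspace
B = rng.standard_normal((n_pixels, r)) @ rng.standard_normal((r, n_bands))

# 1) randomized column sketch: sample a subset of pixels as the background model
sample = rng.choice(n_pixels, size=200, replace=False)

# plant sparse anomalies outside the sample so the demo subspace stays pure
# (in RSRPCA, the CWRPCA step removes sampled anomaly columns instead)
anom_idx = np.setdiff1d(np.arange(n_pixels), sample)[:3]
X = B.copy()
X[anom_idx] += 20.0 * rng.standard_normal((len(anom_idx), n_bands))

# 2) background column subspace from the sketch (plain SVD stands in for
#    the CWRPCA purification used in the paper)
U, _, _ = np.linalg.svd(X[sample].T, full_matrices=False)
Ub = U[:, :r]

# 3) project every pixel onto the complement of the background subspace;
#    large residual energy flags an anomaly
resid = X.T - Ub @ (Ub.T @ X.T)
score = np.linalg.norm(resid, axis=0)
detected = np.argsort(score)[-len(anom_idx):]
```

Background pixels project almost entirely into the subspace, so their scores collapse to near zero while the planted anomalies dominate the ranking.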

Journal ArticleDOI
TL;DR: In this paper, the accuracy of remotely sensed algal biomass determined with machine-learning algorithms and Landsat TM/ETM+ imagery was evaluated using the National Lake Assessment (NLA) data.
Abstract: Accumulating remotely sensed and ground-measured data, together with improvements in data mining such as machine-learning techniques, open new opportunities for monitoring and managing algal blooms over large spatial scales. The goal of this study was to test the accuracy of remotely sensed algal biomass determined with machine-learning algorithms and Landsat TM/ETM+ imagery. We used chlorophyll-a concentration data from the US Environmental Protection Agency's 2007 National Lake Assessment (NLA) (N = 1157 lakes) to train and test Landsat TM/ETM+ algorithms. Results showed significant improvements in chlorophyll-a retrieval accuracy using machine-learning algorithms compared with traditional empirical models based on linear regression. Specifically, boosted regression trees and random forest explained 45.8% and 44.5% of chlorophyll-a variation, respectively, whereas multiple linear regression could explain only 39.8%. The chlorophyll-a concentrations derived from Landsat TM/ETM+ and a simple-to-use Google Earth Engine application accurately characterized a 2009 algal bloom in western Lake Erie, showing that the model works well for analyzing temporal changes in algal conditions. Compared with chlorophyll-a data from the NLA, chlorophyll-a measurements from our Landsat TM/ETM+ model had almost the same correlation with lakes' total phosphorus concentrations, especially when multiple Landsat images were used. Therefore, Landsat measurements of chlorophyll-a have value for ecological assessments and for managing algal problems in lakes.
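The retrieval step reduces to regressing chlorophyll-a on band reflectances. This sketch uses synthetic TM/ETM+-like bands and a made-up nonlinear chlorophyll relation (not the NLA data) to contrast a random forest with multiple linear regression in scikit-learn; the paper's boosted regression trees would slot in the same way.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# synthetic reflectances for four TM/ETM+-like bands (blue, green, red, NIR)
bands = rng.uniform(0.05, 0.30, size=(n, 4))

# hypothetical nonlinear chlorophyll-a response to the green/blue ratio,
# standing in for the 2007 NLA ground truth used in the paper
ratio = bands[:, 1] / bands[:, 0]
chl = 30.0 / (1.0 + np.exp(-3.0 * (ratio - 1.0))) + rng.normal(0.0, 0.5, n)

Xtr, Xte, ytr, yte = train_test_split(bands, chl, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
lin = LinearRegression().fit(Xtr, ytr)

# out-of-sample R^2: the tree ensemble captures the band-ratio nonlinearity
r2_rf, r2_lin = rf.score(Xte, yte), lin.score(Xte, yte)
```

On this synthetic relation the forest comfortably outperforms the linear model, mirroring the gap the study reports on real NLA data.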

Journal ArticleDOI
TL;DR: A semisupervised classification method that produces an overall accuracy of 88.3% and obtains an improvement of up to 9% against a CNN classifier trained from scratch is proposed.
Abstract: Aerial images can greatly facilitate rescue and recovery efforts in the aftermath of hurricane disasters. Although supervised classification methods have been successfully applied to aerial imaging for building damage evaluation, their use remains challenging because supervised classifiers have to be trained with a large number of labeled samples, which are not available soon after a disaster. However, rapid response is crucial for rescue tasks, which places greater demands on classification methods. To accelerate their deployment, a semisupervised classification method is proposed in this paper that uses a large number of unlabeled samples and only a few labeled samples that can be obtained rapidly. The proposed approach consists of three steps: segmentation, unsupervised pretraining using convolutional autoencoders (CAE), and supervised fine-tuning using convolutional neural networks (CNN). Leveraging the representation capability of the CAE, the knowledge it learns can be transferred to the counterpart layers of the CNN. After pretraining, the CNN classifier is further refined with a few labeled samples to improve feature discrimination. To demonstrate the methodology, a recognition strategy for damaged buildings based on context information, using only vertical postevent two-dimensional aerial images, is presented. As a case study, a coastal area affected by the 2012 Sandy hurricane is investigated. Experimental results show that the proposed semisupervised method produces an overall accuracy of 88.3% and obtains an improvement of up to 9% over a CNN classifier trained from scratch.

Journal ArticleDOI
TL;DR: In this paper, the physical basis of the interaction of the radar signal with various water surfaces of different geographical conditions, the response of inundated regions under different frequency, polarization, and incidence angle is discussed.
Abstract: The use and importance of synthetic aperture radar (SAR) data for flood area mapping have been proved beyond doubt, as SAR signals penetrate thick cloud cover and return surface reflections even during extreme weather conditions. At the same time, the accuracy of a SAR image-based flood area mapping model depends directly on the frequency of the source SAR signal, the polarization mode used, and the incidence angle at which the disaster region is sensed. In addition, when developing SAR image-based flood area mapping models, it is essential to understand the response of inundated regions across different geographical settings, as well as the response of the same geographical region during different climatic and seasonal periods, since the same object produces different signatures in varying situations. To date, no single article has synthesized such information, which is widely dispersed across research publications. This article gathers and reviews this vital information, covering the physical basis of the interaction of the radar signal with various water surfaces under different geographical conditions and the response of inundated regions under different frequencies, polarizations, and incidence angles. Such information is mainly used to understand the difficulties that arise when mapping inundated regions from SAR images. Finally, the significant observations of the literature review are highlighted, which will be useful for young researchers interested in building flood area application models using different sets of SAR data.

Journal ArticleDOI
TL;DR: Higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has significant impact on the classification quality.
Abstract: The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil-adjusted vegetation index) were combined and subjected to a segmentation process, with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79% for the decision tree, compared with 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
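The final step, relating per-object attributes to classes and benchmarking a decision tree against SVM and RF, can be mimicked with scikit-learn. The synthetic attribute table below is a placeholder for the real per-object features (spectral means, NDVI, NDWI, SAVI, radar backscatter) exported after segmentation; the numbers are not the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in for the segmented-object attribute table: 1500 objects,
# 12 attributes, 5 land-cover classes
X, y = make_classification(n_samples=1500, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# overall accuracy of the three classifiers compared in the paper
scores = {
    "decision tree": DecisionTreeClassifier(random_state=0)
        .fit(Xtr, ytr).score(Xte, yte),
    "SVM": SVC(gamma="scale").fit(Xtr, ytr).score(Xte, yte),
    "random forest": RandomForestClassifier(random_state=0)
        .fit(Xtr, ytr).score(Xte, yte),
}
```

On real object tables the ranking depends on the segmentation quality, which is exactly what the paper's optimization step targets.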

Journal ArticleDOI
TL;DR: In this article, a method for detecting flooding in urban areas using near real-time SAR data is developed and extensively tested under a variety of scenarios involving different flood events and different images.
Abstract: Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. High resolution Synthetic Aperture Radar (SAR) sensors are able to detect flood extents in urban areas during both day- and night-time. If obtained in near real-time, these flood extents can be used for emergency flood relief management or as observations for assimilation into flood forecasting models. In this paper a method for detecting flooding in urban areas using near real-time SAR data is developed and extensively tested under a variety of scenarios involving different flood events and different images. The method uses a SAR simulator in conjunction with LiDAR data of the urban area to predict areas of radar shadow and layover in the image caused by buildings and taller vegetation. Of the urban water pixels visible to the SAR, the flood detection accuracy averaged over the test examples was 83%, with a false alarm rate of 9%. The results indicate that flooding can be detected in the urban area to reasonable accuracy, but that this accuracy is limited partly by the SAR’s poor visibility of the urban ground surface due to shadow and layover.
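The shadow half of that visibility prediction can be illustrated with a simple range-direction sweep over a LiDAR height profile. This is a far-field, shadow-only sketch (layover and the full SAR simulator are beyond a few lines), with a hypothetical 10-m building on a 5-m grid; a cell is shadowed when an earlier cell blocks the line of sight descending at the incidence angle.

```python
import numpy as np

def radar_shadow(profile, dx, incidence_deg):
    """Flag shadowed cells on a terrain profile swept in ground-range order.

    The sensor looks from the near-range end; with incidence angle theta
    (from vertical), cell i is shadowed when some earlier cell rises above
    the line of sight, i.e. when max over j < i of h[j] + x[j]/tan(theta)
    exceeds h[i] + x[i]/tan(theta).
    """
    x = np.arange(len(profile)) * dx
    f = np.asarray(profile, float) + x / np.tan(np.radians(incidence_deg))
    running_max = np.maximum.accumulate(f)
    return np.r_[False, running_max[:-1] > f[1:]]

# flat street with one 10-m building occupying cells 20-22 of a 5-m grid
dsm = np.zeros(60)
dsm[20:23] = 10.0
shadow = radar_shadow(dsm, dx=5.0, incidence_deg=45.0)
```

At 45 degrees the 10-m wall casts a 10-m shadow, so only the cell immediately behind the building (whose center falls strictly inside that zone) is flagged; pixels flagged this way would be excluded from flood detection, which is the source of the visibility limit the abstract mentions.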

Journal ArticleDOI
TL;DR: A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification by utilizing joint spectral–spatial information, and a context-aware object-based postprocessing is used to enhance the classification results.
Abstract: Multisensor data fusion is one of the most common and popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral–spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, a decision-level fusion step classifies objects of interest through the joint use of the sensors. Finally, a context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments is conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
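Soft decision-level fusion itself is simple: combine the per-sensor class posteriors (here by an unweighted average, one possible choice) and take the argmax. The probabilities below are illustrative values, not outputs of the paper's networks.

```python
import numpy as np

# per-sensor class posteriors for 4 pixels over 3 classes, e.g. from a
# hyperspectral branch and a lidar branch (illustrative values)
p_hsi = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.5, 0.4],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.2, 0.6]])
p_lidar = np.array([[0.6, 0.3, 0.1],
                    [0.3, 0.2, 0.5],
                    [0.1, 0.8, 0.1],
                    [0.5, 0.3, 0.2]])

# soft decision-level fusion: average the posteriors, then take the argmax
fused = (p_hsi + p_lidar) / 2.0
labels = fused.argmax(axis=1)
```

Sensor-specific confidence weights (rather than the uniform average used here) are a common refinement when one modality is known to be more reliable for certain classes.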

Journal ArticleDOI
TL;DR: Experimental results show that the proposed transform-domain-based feature extraction technique, the three-dimensional discrete cosine transform (3-D DCT), produces a good classification in terms of overall accuracy, average accuracy, and Cohen’s kappa coefficient when compared with some traditional as well as transform-based feature extraction algorithms.
Abstract: A hyperspectral remote sensor acquires hundreds of contiguous spectral images, resulting in large data volumes that contain a significant amount of redundant information. Such high-dimensional, redundant data reduce the efficiency of data processing. Therefore, feature extraction becomes one of the critical tasks in hyperspectral image classification. A transform-domain-based feature extraction technique, the three-dimensional discrete cosine transform (3-D DCT), is proposed. The rationale for working in a transform domain is that an invertible linear transform generally re-expresses the image data as more independent spectral information or more separable transform coefficients. Moreover, the DCT has excellent energy-compaction properties for highly correlated images, such as hyperspectral images, which significantly reduces the complexity of the separation. Unlike the discrete wavelet transform, which requires sequential transforms to obtain the approximation and detail coefficients, the DCT extracts all coefficients simultaneously. As a result, computation time in the feature extraction can be reduced. The experimental results on three benchmark datasets (Indian Pines, Pavia University, and Salinas) show that the proposed approach produces a good classification in terms of overall accuracy, average accuracy, and Cohen’s kappa coefficient (κ) when compared with some traditional as well as transform-based feature extraction algorithms. The experimental results also show that the proposed method requires less computation time than other transform-based feature extraction methods.
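The core operation can be sketched with SciPy's `dctn`, which applies the separable 3-D DCT-II over the whole cube in one call. Keeping the low-frequency corner of the coefficient cube illustrates the energy-compaction-based feature selection; the patch size and cut-off `k` are arbitrary here, and the random cube stands in for real data.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
cube = rng.random((32, 32, 16))   # toy hyperspectral patch: rows x cols x bands

# one invertible 3-D DCT over the whole cube; all coefficients at once,
# unlike the level-by-level passes of a discrete wavelet transform
coeffs = dctn(cube, norm="ortho")

# energy compaction: keep only the low-frequency corner as the feature set
k = 8
features = coeffs[:k, :k, :k].ravel()

# with the orthonormal ("ortho") scaling the transform preserves energy
# and is exactly invertible
recon = idctn(coeffs, norm="ortho")
```

The truncated coefficient vector `features` is what would feed the downstream classifier; the inverse transform is shown only to confirm the transform loses nothing before truncation.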

Journal ArticleDOI
Min Yang1, Guangli Ren, Ling Han1, Huan Yi, Ting Gao 
TL;DR: In this paper, the integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead-zinc mineralization in the Huoshaoyun Pb-Zn mining region located in the west Kunlun mountains at high altitude and very rugged terrain.
Abstract: The integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead–zinc mineralization in the Huoshaoyun Pb–Zn mining region located in the west Kunlun mountains at high altitude and very rugged terrain, where traditional geological work becomes limited and time-consuming. This task was accomplished by using band ratios (BRs), principal component analysis, and spectral matched filtering methods. It is concluded that some BR color composites and principal components of each imagery contain useful information for lithological mapping. SMF technique is useful for detecting lead–zinc mineralization zones, and the results could be verified by handheld portable X-ray fluorescence analysis. Therefore, the proposed methodology shows strong potential of Landsat 8 OLI and ASTER data in lithological mapping and lead–zinc mineralization zone extraction in carbonate stratum.