
Showing papers in "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing in 2015"


Journal ArticleDOI
TL;DR: A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on deep belief networks (DBN), together with a novel deep architecture that combines spectral–spatial FE and classification to achieve high classification accuracy.
Abstract: Hyperspectral data classification is a hot topic in the remote sensing community. In recent years, significant effort has been focused on this issue. However, most methods extract the features of the original data in a shallow manner. In this paper, we introduce a deep learning approach into hyperspectral image classification. A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on the deep belief network (DBN). First, we verify the eligibility of the restricted Boltzmann machine (RBM) and DBN through spectral information-based classification. Then, we propose a novel deep architecture, which combines spectral–spatial FE and classification together to achieve high classification accuracy. The framework is a hybrid of principal component analysis (PCA), hierarchical learning-based FE, and logistic regression (LR). Experimental results with hyperspectral data indicate that the classifier provides a competitive solution compared with state-of-the-art methods. In addition, this paper reveals that deep learning systems have great potential for hyperspectral data classification.
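The spectral–spatial front end described above (PCA on the spectral axis, then a per-pixel spatial neighborhood fed to the deep classifier) can be sketched as below. This is a minimal illustration, not the paper's implementation: the cube, component count, and window size are arbitrary assumptions, and the DBN/LR stage is omitted.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its first
    n_components principal components along the spectral axis."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)
    # SVD of the centered data gives the principal axes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(H, W, n_components)

def spatial_spectral_features(cube, n_components=3, window=5):
    """Per-pixel feature: flattened w x w neighborhood of the PCA bands."""
    pcs = pca_reduce(cube, n_components)
    r = window // 2
    padded = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")
    H, W, _ = cube.shape
    feats = np.empty((H * W, window * window * n_components))
    for i in range(H):
        for j in range(W):
            feats[i * W + j] = padded[i:i + window, j:j + window, :].ravel()
    return feats

rng = np.random.default_rng(0)
cube = rng.random((10, 12, 50))          # toy 10x12 scene with 50 bands
feats = spatial_spectral_features(cube)  # input rows for the DBN/LR stage
print(feats.shape)                       # (120, 75)
```

Each row would then be passed to the hierarchical FE and logistic-regression stages of the framework.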

1,028 citations


Journal ArticleDOI
TL;DR: A noise-adjusted iterative low-rank matrix approximation (NAILRMA) method is proposed for HSI denoising that can effectively preserve the high-SNR bands and denoise the low-SNR bands.
Abstract: Due to the low-dimensional property of clean hyperspectral images (HSIs), many low-rank-based methods have been proposed to denoise HSIs. However, in an HSI, the noise intensity in different bands is often different, and most of the existing methods do not take this fact into consideration. In this paper, a noise-adjusted iterative low-rank matrix approximation (NAILRMA) method is proposed for HSI denoising. Based on the low-rank property of HSIs, the patchwise low-rank matrix approximation (LRMA) is established. To further separate the noise from the signal subspaces, an iterative regularization framework is proposed. Considering that the noise intensity in different bands is different, an adaptive iteration factor selection based on the noise variance of each HSI band is adopted. This noise-adjusted iteration strategy can effectively preserve the high-SNR bands and denoise the low-SNR bands. The randomized singular value decomposition (RSVD) method is then utilized to solve the NAILRMA optimization problem. A number of experiments were conducted in both simulated and real data conditions to illustrate the performance of the proposed NAILRMA method for HSI denoising.
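The core patchwise step can be sketched as a noise-weighted truncated SVD: whiten each band by its noise level, take a low-rank approximation, and unwhiten. This is a rough illustration only; the paper selects the iteration factor adaptively per band and uses a randomized SVD (RSVD) inside an iterative regularization loop, neither of which is shown here.

```python
import numpy as np

def low_rank_approx(patch, rank):
    """Best rank-r approximation of a (pixels x bands) patch matrix."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def noise_adjusted_denoise(patch, band_noise_std, rank):
    """Whiten each band by its noise level before the low-rank step, so
    low-SNR bands are denoised more aggressively than high-SNR bands."""
    w = 1.0 / np.maximum(band_noise_std, 1e-8)
    return low_rank_approx(patch * w, rank) / w

rng = np.random.default_rng(1)
clean = rng.random((64, 3)) @ rng.random((3, 20))   # rank-3 "signal"
noise_std = np.full(20, 0.05)
noisy = clean + rng.normal(0, noise_std, clean.shape)
denoised = noise_adjusted_denoise(noisy, noise_std, rank=3)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # the low-rank step reduces the error
```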

229 citations


Journal ArticleDOI
TL;DR: This study presents a novel STRS methodology which uses Bayesian theory to impute missing spectral information in the multispectral imagery and introduces observation uncertainties into the interpolations.
Abstract: Precision agriculture requires detailed crop status information at high spatial and temporal resolutions. Remote sensing can provide such information, but single-sensor observations are often incapable of meeting all data requirements. Spectral–temporal response surfaces (STRSs) provide continuous reflectance spectra at high temporal intervals. This is the first study to combine multispectral satellite imagery (from Formosat-2) with hyperspectral imagery acquired with an unmanned aerial vehicle (UAV) to construct STRSs. This study presents a novel STRS methodology which uses Bayesian theory to impute missing spectral information in the multispectral imagery and introduces observation uncertainties into the interpolations. This new method is compared to two earlier published methods for constructing STRSs: a direct interpolation of the original data and a direct interpolation along the temporal dimension after imputation along the spectral dimension. The STRSs derived through all three methods are compared to field-measured reflectance spectra, leaf area index (LAI), and canopy chlorophyll of potato plants. The results indicate that the proposed Bayesian approach has the highest correlation (r = 0.953) and lowest RMSE (0.032) with respect to field spectral reflectance measurements. Although the optimized soil-adjusted vegetation index (OSAVI) obtained from all methods has similar correlations to field data, the modified chlorophyll absorption in reflectance index (MCARI) obtained from the Bayesian STRS outperforms the other two methods. Correlations of 0.83 with LAI and 0.77 with canopy chlorophyll measurements are obtained, compared to correlations of 0.27 and 0.09, respectively, for the directly interpolated STRS.

217 citations


Journal ArticleDOI
TL;DR: An overview of the GMI instrument and a report of early on-orbit commissioning activities are provided, discussing the on-orbit radiometric sensitivity, absolute calibration accuracy, and stability of each radiometric channel.
Abstract: The Global Precipitation Measurement (GPM) mission is an international satellite mission that uses measurements from an advanced radar/radiometer system on a core observatory as reference standards to unify and advance precipitation estimates made by a constellation of research and operational microwave sensors. The GPM core observatory was launched on February 27, 2014 at 18:37 UT in a 65° inclination non-sun-synchronous orbit. GPM focuses on precipitation as a key component of the Earth’s water and energy cycle, and has the capability to provide near-real-time observations for tracking severe weather events, monitoring freshwater resources, and other societal applications. The GPM microwave imager (GMI) on the core observatory provides the direct link to the constellation radiometer sensors, which fly mainly in polar orbits. The GMI sensitivity, accuracy, and stability play a crucial role in unifying the measurements from the GPM constellation of satellites. The instrument has exhibited highly stable operations through the duration of the calibration/validation period. This paper provides an overview of the GMI instrument and a report of early on-orbit commissioning activities. It discusses the on-orbit radiometric sensitivity, absolute calibration accuracy, and stability for each radiometric channel.

214 citations


Journal ArticleDOI
TL;DR: The Contest was proposed as a double-track competition: one track aiming at accurate land-cover classification and the other seeking innovation in the fusion of thermal hyperspectral and color data; the results obtained by the winners of both tracks are presented.
Abstract: This paper reports the outcomes of the 2014 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (IEEE GRSS). As for previous years, the IADF TC organized a data fusion contest aiming at fostering new ideas and solutions for multisource remote sensing studies. In the 2014 edition, participants considered multiresolution and multisensor fusion between optical data acquired at 20-cm resolution and long-wave (thermal) infrared hyperspectral data at 1-m resolution. The Contest was proposed as a double-track competition: one aiming at accurate landcover classification and the other seeking innovation in the fusion of thermal hyperspectral and color data. In this paper, the results obtained by the winners of both tracks are presented and discussed.

209 citations


Journal ArticleDOI
TL;DR: The experimental results confirm that the proposed algorithm obtains a desirable detection performance and outperforms the classical RX-based anomaly detectors and the orthogonal subspace projection-based detectors.
Abstract: In this paper, we propose a hyperspectral image anomaly detection model by the use of background joint sparse representation (BJSR). With a practical binary hypothesis test model, the proposed approach consists of the following steps. The adaptive orthogonal background complementary subspace is first estimated by the BJSR, which adaptively selects the most representative background bases for the local region. An unsupervised adaptive subspace detection method is then proposed to suppress the background and simultaneously highlight the anomaly component. The experimental results confirm that the proposed algorithm obtains a desirable detection performance and outperforms the classical RX-based anomaly detectors and the orthogonal subspace projection-based detectors.
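The background-suppression idea behind the detector can be sketched as follows: build an orthonormal background basis from local background pixels and score each pixel by the energy left after projecting the background out. Note that selecting the most representative background bases via joint sparse representation (BJSR) is the paper's actual contribution; a plain SVD stands in for it in this illustration, and all data are synthetic.

```python
import numpy as np

def background_basis(background_pixels, rank):
    """Orthonormal basis (bands x rank) of the background subspace."""
    U, _, _ = np.linalg.svd(background_pixels.T, full_matrices=False)
    return U[:, :rank]

def anomaly_scores(pixels, basis):
    """Norm of each pixel's component orthogonal to the background."""
    residual = pixels.T - basis @ (basis.T @ pixels.T)
    return np.linalg.norm(residual, axis=0)

rng = np.random.default_rng(2)
bands, rank = 30, 2
mix = rng.random((rank, bands))
background = rng.random((200, rank)) @ mix   # lies in a 2-D subspace
anomaly = rng.random((1, bands)) * 2.0       # off-subspace target pixel
basis = background_basis(background, rank)
scores = anomaly_scores(np.vstack([background[:5], anomaly]), basis)
print(scores[-1] > scores[:5].max())         # the anomaly stands out
```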

192 citations


Journal ArticleDOI
TL;DR: It is revealed that the NPP-VIIRS data can be a useful tool for evaluating poverty at the county level in China.
Abstract: Poverty remains one of the long-term predicaments facing the development of human society in the 21st century. Estimation of regional poverty levels is a key issue in making strategies to eliminate poverty. This paper aims to evaluate the ability of the nighttime light composite data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day–Night Band (DNB) carried by the Suomi National Polar-orbiting Partnership (NPP) Satellite to estimate poverty at the county level in China. Two major experiments are involved in this study: 1) 38 counties of Chongqing city and 2) 2856 counties of China. The first experiment takes Chongqing as an example and combines 10 socioeconomic variables into an integrated poverty index (IPI). IPI is then used as a reference to validate the accuracy of poverty evaluation using the average light index (ALI) derived from NPP-VIIRS data. Linear regression and comparison of the class ranks have been employed to verify the correlation between ALI and IPI. The results show a good correlation between IPI and ALI, with a coefficient of determination ( $R^2$ ) of 0.8554, and the class ranks of IPI and ALI show relative closeness at the county level. The second experiment examines all counties in China and makes a comparison between ALI values and national poor counties (NPC). The comparison result shows a general agreement between the NPC and the counties with low ALI values. This study reveals that the NPP-VIIRS data can be a useful tool for evaluating poverty at the county level in China.
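The evaluation step can be sketched as: compute an average light index (ALI) per county as the mean radiance of lit pixels, then regress a poverty index on it and report $R^2$. The data below are synthetic and the lit-pixel threshold and variable definitions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def average_light_index(radiance, lit_threshold=0.0):
    """Mean radiance over lit pixels of one county's NTL image."""
    lit = radiance[radiance > lit_threshold]
    return lit.mean() if lit.size else 0.0

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(3)
ali = np.array([average_light_index(rng.random(100) * s)
                for s in np.linspace(0.5, 5.0, 38)])   # 38 toy counties
ipi = 2.0 * ali + rng.normal(0, 0.1, ali.size)         # near-linear "IPI"
print(r_squared(ali, ipi) > 0.9)
```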

189 citations


Journal ArticleDOI
TL;DR: This paper defines a new paradigm (hypersharpening) in remote sensing image fusion and draws the readers' attention to its peculiar characteristics by proposing and evaluating two hypersharpening methods.
Abstract: This paper aims at defining a new paradigm (hypersharpening) in remote sensing image fusion. In fact, due to the development of new instruments, thinking only in terms of pansharpening is reductive. Even though some expressions such as hyperspectral (HS) pansharpening already exist, there is no suitable definition for the case in which multispectral/hyperspectral data are used as the source from which to extract spatial details. After defining the hypersharpening framework, we draw the readers’ attention to its peculiar characteristics by proposing and evaluating two hypersharpening methods. Experiments are carried out on data produced by the updated version of the SIM-GA imager, designed by Selex ES, which is composed of a panchromatic camera and two spectrometers in the VNIR and SWIR spectral ranges, respectively. Owing to the different resolution factors among the panchromatic, VNIR, and SWIR data sets, we can apply hypersharpening to fuse SWIR data to VNIR resolution. Comparisons of hypersharpening with “traditional” pansharpening show that hypersharpening is more effective.

188 citations


Journal ArticleDOI
TL;DR: Two spatial-spectral composite kernel ELM classification methods are proposed that outperform the general ELM, SVM, and SVM with CK methods on the hyperspectral images.
Abstract: Due to its simplicity, speed, and good generalization ability, the extreme learning machine (ELM) has recently drawn increasing attention in the pattern recognition and machine learning fields. To investigate the performance of ELM on hyperspectral images (HSIs), this paper proposes two spatial–spectral composite kernel (CK) ELM classification methods. In the proposed CK framework, the single spatial or spectral kernel consists of an activation-function-based kernel and a general Gaussian kernel, respectively. The proposed methods inherit the advantages of ELM and have an analytic solution to directly implement multiclass classification. Experimental results on three benchmark hyperspectral datasets demonstrate that the proposed ELM with CK methods outperforms the general ELM, SVM, and SVM with CK methods.
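A composite kernel and the analytic kernel-ELM solution can be sketched as below: a convex combination of a Gaussian kernel on spectra and one on spatial features, plugged into the closed-form output-weight formula. The weight mu, the kernel parameters, the "spatial feature" (a perturbed copy of the spectrum), and the toy labels are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(spec_a, spat_a, spec_b, spat_b, mu=0.5, gamma=1.0):
    """K = mu * K_spectral + (1 - mu) * K_spatial."""
    return (mu * gaussian_kernel(spec_a, spec_b, gamma)
            + (1 - mu) * gaussian_kernel(spat_a, spat_b, gamma))

def kernel_elm_train(K, T, C=100.0):
    """Closed-form output weights: (I/C + K)^-1 T, with one-hot T."""
    return np.linalg.solve(np.eye(K.shape[0]) / C + K, T)

rng = np.random.default_rng(4)
n, b = 40, 8
Xspec = rng.random((n, b))
Xspat = Xspec + rng.normal(0, 0.05, (n, b))  # stand-in spatial features
y = (Xspec[:, 0] > 0.5).astype(int)
T = np.eye(2)[y]                             # one-hot targets
K = composite_kernel(Xspec, Xspat, Xspec, Xspat)
beta = kernel_elm_train(K, T)
pred = (K @ beta).argmax(1)
print((pred == y).mean())                    # training accuracy
```

The analytic solution is what avoids the iterative training that SVM-style solvers require.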

177 citations


Journal ArticleDOI
Fan Hu1, Gui-Song Xia1, Zifeng Wang1, Xin Huang1, Liangpei Zhang1, Hong Sun1 
TL;DR: An improved UFL algorithm based on spectral clustering, named UFL-SC, which can not only adaptively learn good local feature representations but also discover the intrinsic structures of local image patches; its performance is demonstrated to be comparable to the state-of-the-art approach on an open scene classification benchmark.
Abstract: Scene classification plays an important role in the interpretation of remotely sensed high-resolution imagery. However, the performance of scene classification strongly relies on the discriminative power of the feature representation, which is generally hand-engineered and requires a huge amount of domain-expert knowledge as well as time-consuming hand tuning. Recently, unsupervised feature learning (UFL) has provided an alternative way to automatically learn discriminative feature representations from images. However, the performance achieved by conventional UFL methods is not comparable to the state-of-the-art, mainly due to the neglect of locally substantial image structures. This paper presents an improved UFL algorithm based on spectral clustering, named UFL-SC, which can not only adaptively learn good local feature representations but also discover the intrinsic structures of local image patches. In contrast to the standard UFL pipeline, UFL-SC first maps the original image patches into a low-dimensional and intrinsic feature space by linear manifold analysis techniques, and then learns a dictionary (e.g., using K-means clustering) on the patch manifold for feature encoding. To generate a feature representation for each local patch, an explicit parameterized feature encoding method, i.e., triangle encoding, is applied with the learned dictionary on the same patch manifold. The holistic feature representation of image scenes is finally obtained by building a bag-of-visual-words (BOW) model of the encoded local features. Experiments demonstrate that the proposed UFL-SC algorithm can extract efficient local features for image scenes and shows comparable performance to the state-of-the-art approach on an open scene classification benchmark.
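The triangle encoding step named above can be sketched directly: given a dictionary of K centroids, a patch's k-th activation is max(0, mean(d) - d_k), where d_k is the distance to centroid k. The manifold mapping that UFL-SC performs before this step, and the BOW pooling after it, are omitted here; the toy patches and dictionary are illustrative.

```python
import numpy as np

def triangle_encode(patches, centroids):
    """Encode (n, d) patches against (K, d) centroids; returns (n, K).
    Distances below the per-patch mean produce positive activations;
    the rest are clipped to zero (a soft, sparse assignment)."""
    d = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

rng = np.random.default_rng(5)
patches = rng.random((100, 16))    # toy local patches
centroids = rng.random((10, 16))   # e.g., a K-means dictionary
codes = triangle_encode(patches, centroids)
print(codes.shape, bool((codes >= 0).all()))  # (100, 10) True
```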

163 citations


Journal ArticleDOI
TL;DR: The proposed architecture efficiently analyzes real-time remote sensing Big Data from an Earth observatory system using Hadoop, and can also store incoming raw data to perform offline analysis on large stored dumps when required.
Abstract: The assets of the remote sensing digital world generate massive volumes of real-time data daily (commonly referred to as “Big Data”), whose insight information has potential significance if collected and aggregated effectively. There is a great deal more to real-time remote sensing Big Data than first appears, and extracting the useful information efficiently leads a system toward major computational challenges, such as analyzing, aggregating, and storing remotely collected data. Keeping the above factors in view, there is a need to design a system architecture that supports both real-time and offline data processing. Therefore, in this paper, we propose a real-time Big Data analytical architecture for remote sensing satellite applications. The proposed architecture comprises three main units: 1) a remote sensing Big Data acquisition unit (RSDU); 2) a data processing unit (DPU); and 3) a data analysis decision unit (DADU). First, RSDU acquires data from the satellite and sends them to the Base Station, where initial processing takes place. Second, DPU plays a vital role in the architecture by providing filtration, load balancing, and parallel processing for efficient handling of real-time Big Data. Third, DADU is the upper-layer unit of the proposed architecture, responsible for compilation, storage of the results, and generation of decisions based on the results received from DPU. The proposed architecture has the capability of dividing, load balancing, and parallel processing of only useful data. Thus, it efficiently analyzes real-time remote sensing Big Data using an Earth observatory system. Furthermore, the proposed architecture can store incoming raw data to perform offline analysis on large stored dumps, when required. Finally, a detailed analysis of remotely sensed Earth observatory Big Data for land and sea areas is provided using Hadoop. In addition, various algorithms are proposed for each level of RSDU, DPU, and DADU to detect land as well as sea areas and to elaborate the working of the architecture.

Journal ArticleDOI
TL;DR: Experimental results show that the improved sparse subspace clustering method has the second shortest computational time and also outperforms the other six methods in classification accuracy when using an appropriate band number obtained by the DC plot algorithm.
Abstract: An improved sparse subspace clustering (ISSC) method is proposed to select an appropriate band subset for hyperspectral imagery (HSI) classification. The ISSC assumes that band vectors are sampled from a union of low-dimensional orthogonal subspaces and each band can be sparsely represented as a linear or affine combination of other bands within its subspace. First, the ISSC represents band vectors with sparse coefficient vectors by solving the L2-norm optimization problem using the least square regression (LSR) algorithm. The sparse and block diagonal structure of the coefficient matrix from LSR leads to correct segmentation of band vectors. Second, the angular similarity measurement is presented and utilized to construct the similarity matrix. Third, the distribution compactness (DC) plot algorithm is used to estimate an appropriate size of the band subset. Finally, spectral clustering is implemented to segment the similarity matrix and the desired ISSC band subset is found. Four groups of experiments on three widely used HSI datasets are performed to test the performance of ISSC for selecting bands in classification. In addition, the following six state-of-the-art band selection methods are used to make comparisons: linear constrained minimum variance-based band correlation constraint (LCMV-BCC), affinity propagation (AP), spectral information divergence (SID), maximum-variance principal component analysis (MVPCA), sparse representation-based band selection (SpaBS), and sparse nonnegative matrix factorization (SNMF). Experimental results show that the ISSC has the second shortest computational time and also outperforms the other six methods in classification accuracy when using an appropriate band number obtained by the DC plot algorithm.
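The first two ISSC steps can be sketched as below: a closed-form least square regression (LSR) representing each band by the others, followed by an angular similarity matrix over the coefficient vectors. The lambda value and the diagonal handling are illustrative assumptions, and the subsequent DC-plot sizing and spectral clustering steps are omitted.

```python
import numpy as np

def lsr_coefficients(X, lam=0.1):
    """Closed-form solution of min ||X - XW||_F^2 + lam ||W||_F^2,
    W = (X^T X + lam I)^-1 X^T X, for X with bands as columns."""
    G = X.T @ X
    W = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    np.fill_diagonal(W, 0.0)   # a band should not represent itself
    return W

def angular_similarity(W):
    """Cosine-based similarity between (symmetrized) coefficient vectors."""
    C = W + W.T
    C = C / np.maximum(np.linalg.norm(C, axis=0, keepdims=True), 1e-12)
    return np.abs(C.T @ C)

rng = np.random.default_rng(6)
X = rng.random((50, 20))       # 50 pixels x 20 bands
S = angular_similarity(lsr_coefficients(X))
print(S.shape, bool(np.allclose(S, S.T)))  # (20, 20) True
```

The similarity matrix S would then be fed to spectral clustering to segment the bands into subsets.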

Journal ArticleDOI
TL;DR: This paper presents a novel method for the automated extraction of road markings directly from three-dimensional point clouds acquired by a mobile light detection and ranging (LiDAR) system that achieves better performance and accuracy than two existing methods.
Abstract: This paper presents a novel method for the automated extraction of road markings directly from three-dimensional (3-D) point clouds acquired by a mobile light detection and ranging (LiDAR) system. First, road surface points are segmented from a raw point cloud using a curb-based approach. Then, road markings are directly extracted from road surface points through multisegment thresholding and spatial density filtering. Finally, seven specific types of road markings are further accurately delineated through a combination of Euclidean distance clustering, voxel-based normalized cut segmentation, large-size marking classification based on trajectory and curb-lines, and small-size marking classification based on deep learning and principal component analysis (PCA). Quantitative evaluations indicate that the proposed method achieves an average completeness, correctness, and F-measure of 0.93, 0.92, and 0.93, respectively. Comparative studies also demonstrate that the proposed method achieves better performance and accuracy than two existing methods.

Journal ArticleDOI
TL;DR: A component-specific feature descriptor for each monogenic component is produced first, and the resulting features are fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks.
Abstract: In this paper, classification via sparse representation and multitask learning is presented for target recognition in SAR images. To capture the characteristics of SAR images, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can then be orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since the components represent different kinds of information, it is beneficial to consider them jointly in a unifying framework. However, these components are infeasible to utilize directly due to their high dimension and redundancy. To solve this problem, an intuitive idea is to define an augmented feature vector by concatenating the components. This strategy, however, usually produces some information loss. To address this shortcoming, this paper treats the three components as different learning tasks, in which some common information can be shared. Specifically, a component-specific feature descriptor for each monogenic component is produced first. Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into the sparse representation model; 3) the numerical implementation of the proposed method; and 4) extensive comparative experimental studies on the MSTAR SAR dataset, including target recognition under standard operating conditions as well as extended operating conditions, and the capability of outlier rejection.

Journal ArticleDOI
TL;DR: A new efficient strategy for the fusion and classification of hyperspectral and LiDAR data is presented, designed to integrate multiple types of features extracted from these data without requiring any regularization parameters.
Abstract: Hyperspectral image classification has been an active topic of research. In recent years, it has been found that light detection and ranging (LiDAR) data provide a source of complementary information that can greatly assist in the classification of hyperspectral data, in particular when it is difficult to separate complex classes. This is because, in addition to the spatial and the spectral information provided by hyperspectral data, LiDAR can provide very valuable information about the height of the surveyed area that can help with the discrimination of classes and their separability. In the past, several efforts have been investigated for fusion of hyperspectral and LiDAR data, with some efforts driven by the morphological information that can be derived from both data sources. However, a main challenge for the learning approaches is how to exploit the information coming from multiple features. Specifically, it has been found that simple concatenation or stacking of features such as morphological attribute profiles (APs) may contain redundant information. In addition, a significant increase in the number of features may lead to very high-dimensional input features. This is in contrast with the limited number of training samples often available in remote-sensing applications, which may lead to the Hughes effect. In this work, we develop a new efficient strategy for fusion and classification of hyperspectral and LiDAR data. Our approach has been designed to integrate multiple types of features extracted from these data. An important characteristic of the presented approach is that it does not require any regularization parameters, so that different types of features can be efficiently exploited and integrated in a collaborative and flexible way. 
Our experimental results, conducted using a hyperspectral image and a LiDAR-derived digital surface model (DSM) collected over the University of Houston campus and the neighboring urban area, indicate that the proposed framework for multiple feature learning provides state-of-the-art classification results.

Journal ArticleDOI
TL;DR: The methodology developed in this study can be employed for other sUAS-based remote sensing applications; the difference between the measured and the predicted reflectance values of 13 tallgrass sampling quadrats is not statistically significant.
Abstract: The use of small unmanned aircraft systems (sUAS) to acquire very high-resolution multispectral imagery has attracted growing attention recently; however, no systematic, feasible, and convenient radiometric calibration method has been specifically developed for sUAS remote sensing. In this research, we used a modified color infrared (CIR) digital single-lens reflex (DSLR) camera as the sensor and the DJI S800 hexacopter sUAS as the platform to collect imagery. Results show that the relationship between the natural logarithm of measured surface reflectance and raw, unprocessed image digital numbers (DNs) is linear, and that the y-intercept of the linear equation can be theoretically interpreted as the minimal possible surface reflectance that can be detected by each sensor waveband. The empirical line calibration equation for every single-band image can be built using the y-intercept as one data point, and the natural-log-transformed measured reflectance and image DNs of a gray calibration target as another point in the coordinate system. Image raw DNs are therefore converted to reflectance using the calibration equation. The Mann–Whitney U test results suggest that the difference between the measured and the predicted reflectance values of 13 tallgrass sampling quadrats is not statistically significant. The methodology developed in this study can be employed for other sUAS-based remote sensing applications.
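The two-point empirical line described above can be worked through concretely: model ln(reflectance) as linear in DN, fix one point at the band's y-intercept (the minimal detectable reflectance, at DN = 0) and the other at the gray calibration target. All numbers below are synthetic illustrations, not values from the study.

```python
import numpy as np

def calibrate_band(intercept_ln_refl, gray_dn, gray_refl):
    """Return slope a and intercept b of ln(reflectance) = a * DN + b,
    from the y-intercept point and one gray-target observation."""
    b = intercept_ln_refl
    a = (np.log(gray_refl) - b) / gray_dn
    return a, b

def dn_to_reflectance(dn, a, b):
    """Invert the log-linear model to convert raw DNs to reflectance."""
    return np.exp(a * dn + b)

# Hypothetical band: minimal detectable reflectance of 1%, and a gray
# target of 30% reflectance imaged at DN = 2000.
a, b = calibrate_band(np.log(0.01), gray_dn=2000, gray_refl=0.30)
refl = dn_to_reflectance(np.array([0, 1000, 2000]), a, b)
print(refl[0], refl[2])  # 0.01 at DN = 0, 0.30 at the gray target's DN
```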

Journal ArticleDOI
TL;DR: A crop model-data assimilation framework to assimilate the 1-km moderate resolution imaging spectroradiometer (MODIS) LAI and ET products into the soil water atmosphere plant (SWAP) model to assess the potential for estimating winter wheat yield at field and regional scales is presented.
Abstract: Leaf area index (LAI) and evapotranspiration (ET) are two crucial biophysical variables related to crop growth and grain yield. This study presents a crop model–data assimilation framework to assimilate the 1-km moderate resolution imaging spectroradiometer (MODIS) LAI and ET products (MCD15A3 and MOD16A2, respectively) into the soil water atmosphere plant (SWAP) model to assess the potential for estimating winter wheat yield at field and regional scales. Since the 1-km MODIS products generally underestimate LAI or ET values in fragmented agricultural landscapes due to scale effects and intrapixel heterogeneity, we constructed a new cost function by comparing the generalized vector angle between the observed and modeled LAI and ET time series during the growing season. We selected three parameters (irrigation date, irrigation depth, and emergence date) as the reinitialized parameters to be optimized by minimizing the cost function using the shuffled complex evolution method—University of Arizona (SCE-UA) optimization algorithm, and then used the optimized parameters as inputs into the SWAP model for winter wheat yield estimation. We used four data-assimilation schemes to estimate winter wheat yield at field and regional scales. We found that jointly assimilating MODIS LAI and ET data yielded higher accuracy (R² = 0.43, RMSE = 619 kg·ha⁻¹) than assimilating MODIS LAI data (R² = 0.28, RMSE = 889 kg·ha⁻¹) or ET data (R² = 0.36, RMSE = 1561 kg·ha⁻¹) alone at the county level, which indicates that the proposed estimation method is reliable and applicable at a county scale.
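The vector-angle cost function described above can be sketched directly: the angle between observed and modeled time-series vectors is insensitive to a uniform magnitude bias, which is why it suits MODIS products that systematically underestimate LAI/ET. The additive combination of the LAI and ET terms and the toy series below are assumptions for illustration.

```python
import numpy as np

def vector_angle(a, b):
    """Angle (radians) between two time-series vectors."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def assimilation_cost(lai_obs, lai_mod, et_obs, et_mod):
    return vector_angle(lai_obs, lai_mod) + vector_angle(et_obs, et_mod)

lai_obs = np.array([0.5, 1.2, 3.0, 4.5, 3.8, 2.0])
et_obs = np.array([1.0, 1.5, 3.2, 4.0, 3.5, 2.2])
# A model run that underestimates magnitude but matches the seasonal
# shape scores near zero, unlike an RMSE-based cost would.
cost_shape = assimilation_cost(lai_obs, 0.7 * lai_obs, et_obs, 0.7 * et_obs)
cost_flat = assimilation_cost(lai_obs, np.ones(6), et_obs, np.ones(6))
print(cost_shape < cost_flat)  # True
```

SCE-UA would then search the reinitialized parameters (irrigation date, depth, emergence date) to minimize this cost.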

Journal ArticleDOI
TL;DR: The aim of this paper is to present the methodology and results of the DETER based on AWIFS data, called DETER-B, which is effective in detecting deforestation smaller than 25 ha and presents higher detection capability in identifying areas between 25 and 100 ha.
Abstract: The Brazilian Legal Amazon (BLA) contains nearly 30% of the rainforest on Earth. Given the regional complexity and dynamics, there are large government investments focused on controlling and preventing deforestation. The National Institute for Space Research (INPE) is currently developing five complementary BLA monitoring systems, among which the near real-time deforestation detection system (DETER) excels. DETER employs MODIS 250-m imagery with almost daily revisits, enabling an early warning system to support surveillance and control of deforestation. The aim of this paper is to present the methodology and results of the DETER based on AWIFS data, called DETER-B. Supported by 56-m images, the new system is effective in detecting deforestation smaller than 25 ha, which concentrates 80% of its total detections and 45% of the total mapped area in this range. It also presents a higher detection capability for areas between 25 and 100 ha. The area estimation per municipality is statistically equal to that of the official deforestation data (PRODES) and allows the identification of degradation and logging patterns not observed with the traditional DETER system.

Journal ArticleDOI
TL;DR: The relation between LST and Tair was found to be strongest during late summer and fall, and weakest during winter and early spring, and the relationship did not differ significantly across the two distinct mountainous ecoregions of Nevada.
Abstract: Land surface temperature (LST) is a fundamental physical property relevant to many ecological, hydrological, and atmospheric processes. There is a strong relationship between LST and near-surface air temperature (T_air), although the two temperatures have different physical meanings and responses to atmospheric conditions. In complex terrain, these differences are amplified; yet it is in these environments that remotely sensed LST may be most valuable for predicting and characterizing spatial–temporal patterns of T_air, due to the typical paucity of meteorological stations in mountainous regions. This study presents an analysis of the suitability and limitations of using LST as a proxy or an input variable for predicting T_air in complex mountainous topography. Explicitly, we investigated the influence of key environmental, topographic, and instrumental factors on the relation between LST and measured T_air in two mountainous ecoregions of Nevada. The relation between LST and T_air was found to be strongest during late summer and fall, and weakest during winter and early spring. Increasing terrain roughness was found to diminish the relation between LST and T_air. There was a strong agreement between nighttime T_air lapse rates and LST lapse rates. Given the inadequacy of several gridded T_air products in capturing minimum-temperature cold-air pooling and inversions, using LST as an input variable in the interpolation process would enhance the capture of temperature inversions in gridded data over complex terrain. Crucially, the relationship between LST and T_air did not differ significantly across the two distinct mountainous ecoregions.

Journal ArticleDOI
TL;DR: This paper proposes an efficient metric learning detector based on random forests, named the random forest metric learning (RFML) algorithm, which combines semimultiple metrics with random forests to better separate the desired targets and background.
Abstract: Target detection is aimed at detecting and identifying target pixels based on specific spectral signatures, and is of great interest in hyperspectral image (HSI) processing. Target detection can be considered essentially a binary classification problem. Random forests have been effectively applied to the classification of HSI data. However, random forests need a huge amount of labeled data to achieve good performance, and such data can be difficult to obtain in target detection. In this paper, we propose an efficient metric learning detector based on random forests, named the random forest metric learning (RFML) algorithm, which combines semimultiple metrics with random forests to better separate the desired targets and background. The experimental results demonstrate that the proposed method outperforms both the state-of-the-art target detection algorithms and the other classical metric learning methods.
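The binary target/background formulation described above can be illustrated with a plain random-forest baseline (this is not the RFML metric-learning step, only the underlying classifier). A sketch with scikit-learn on synthetic "spectra"; the band count, sample sizes, and class means are arbitrary assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic 6-band spectra: background centered at 0, targets shifted by +1.5.
background = rng.normal(0.0, 1.0, size=(200, 6))
targets = rng.normal(1.5, 1.0, size=(200, 6))
X = np.vstack([background, targets])
y = np.r_[np.zeros(200), np.ones(200)]  # 0 = background, 1 = target

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
train_acc = clf.score(X, y)  # accuracy of the plain RF baseline on its own training set
```

In real target detection, labeled target pixels are far scarcer than this balanced toy setup, which is exactly the limitation that motivates the paper's metric-learning extension.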

Journal ArticleDOI
TL;DR: A method for estimating the HVR in metropolitan areas using NPP-VIIRS NTL composite data is proposed, and it is found that the spatial distribution of HVR is influenced by natural conditions as well as the degree of urban development.
Abstract: The house vacancy rate (HVR) is an important index for assessing the health of the residential real estate market. Investigating HVR by field survey requires substantial human and economic resources. Nighttime light (NTL) data, derived from the Suomi National Polar-orbiting Partnership (NPP) satellite, can detect artificial light from the Earth's surface and have been used to study socioeconomic activities. This paper proposes a method for estimating the HVR in metropolitan areas using NPP-VIIRS NTL composite data. The method combines NTL composite data with land cover information to extract the light intensity in urbanized areas. We then estimate the light intensity values for nonvacancy areas, and use these values to calculate the HVR in the corresponding regions. Fifteen metropolitan areas in the United States were selected for this study, and the estimated HVR values are validated against corresponding statistical data. The experimental results show a strong correlation between the derived HVR values and the statistical data. We also visualize the estimated HVR on maps, and find that the spatial distribution of HVR is influenced by natural conditions as well as the degree of urban development.
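The abstract's final step, comparing observed light intensity with the intensity expected if no houses were vacant, reduces to a simple ratio. The following is a simplified proxy, not the paper's exact formula, and all intensity values are invented:

```python
# Hypothetical mean NTL intensity per urbanized region (observed) versus the
# intensity estimated for a fully occupied ("nonvacancy") region.
observed = [52.0, 40.0, 61.0]
expected_nonvacancy = [60.0, 55.0, 64.0]

# The fraction of expected light that is "missing" serves as an HVR proxy.
hvr = [1.0 - o / e for o, e in zip(observed, expected_nonvacancy)]
```

A region whose observed light matches its nonvacancy estimate gets an HVR proxy of zero; dimmer-than-expected regions get proportionally higher values.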

Journal ArticleDOI
Jibin Zheng1, Tao Su1, Wentao Zhu1, Xuehui He1, Qing Huo Liu2 
TL;DR: This coherent detection algorithm can detect high-speed targets without the brute-force searching of unknown motion parameters and achieve a good balance between the computational cost and the antinoise performance.
Abstract: In this paper, by employing the symmetric autocorrelation function and the scaled inverse Fourier transform (SCIFT), a coherent detection algorithm is proposed for high-speed targets. This coherent detection algorithm is simple and can be easily implemented using complex multiplications, the fast Fourier transform (FFT), and the inverse FFT (IFFT). Compared to the Hough transform and the keystone transform, the proposed algorithm can detect high-speed targets without a brute-force search over unknown motion parameters, and it achieves a good balance between computational cost and antinoise performance. Through simulations with synthetic models and analyses of real data, we verify the effectiveness of the proposed coherent detection algorithm.
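The gain sought from coherent processing can be illustrated with generic FFT-based coherent integration. This is not the SCIFT/symmetric-autocorrelation algorithm itself, only the underlying idea that the energy of a sampled complex exponential concentrates in a single frequency bin; all parameters are illustrative:

```python
import numpy as np

fs = 1000.0            # sampling rate (Hz), illustrative
n = 1024               # number of coherently integrated samples
t = np.arange(n) / fs
f0 = 125.0             # unknown Doppler-like frequency to recover

sig = np.exp(2j * np.pi * f0 * t)          # noise-free complex exponential
spectrum = np.abs(np.fft.fft(sig))
peak_hz = np.fft.fftfreq(n, 1.0 / fs)[np.argmax(spectrum)]
```

With noise added, the peak grows linearly with n while the noise floor grows only as sqrt(n), which is the integration gain that detection schemes like this one exploit.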

Journal ArticleDOI
TL;DR: An automated sea ice classification algorithm based on TerraSAR-X ScanSAR data is examined, which obtains a reasonable classification accuracy of at least 70% depending on the choice of the ice-type regime.
Abstract: We examine the performance of an automated sea ice classification algorithm based on TerraSAR-X ScanSAR data. In the first step of our process chain, gray-level co-occurrence matrix (GLCM)-based texture features are extracted from the image. In the second step, these data are fed into an artificial neural network to classify each pixel. The performance of our implementation is examined by utilizing a time series of ScanSAR images of the Western Barents Sea, acquired in spring 2013. The network is trained on the initial image of the time series and then applied to subsequent images. We obtain a reasonable classification accuracy of at least 70%, depending on the choice of ice-type regime, when the incidence angle range of the training data matches that of the classified image. The computational cost of our approach is sufficiently moderate to consider this classification procedure a promising step toward operational, near-real-time ice charting.
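The first step of the chain, GLCM texture extraction, can be sketched without any imaging library: count co-occurring gray-level pairs at a fixed pixel offset, normalize to joint probabilities, and derive a texture feature such as "energy" (angular second moment). The 4x4 test image below is an arbitrary toy example:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for pixel offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # joint probabilities of gray-level pairs

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)
energy = float((p ** 2).sum())  # GLCM "energy" texture feature
```

In the paper's pipeline, features like this (computed per local window) form the per-pixel input vector to the neural network.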

Journal ArticleDOI
TL;DR: A methodology combining hyperspectral and LiDAR data for individual tree classification, which remains applicable in shadowed areas of sunlit tree crowns, is proposed; both shadow correction and tree-crown information are found to improve the classification performance.
Abstract: The classification of tree species in forests is an important task for forest maintenance and management. With the increase in the spatial resolution of remote sensing imagery, individual tree classification is the next research target for forest inventories. In this work, we propose a methodology that combines hyperspectral and LiDAR data for individual tree classification and remains applicable in areas of shadow cast between sunlit tree crowns. To remove the influence of shadows in the hyperspectral data, an unmixing-based correction is applied as preprocessing. Spectral features of trees are obtained by principal component analysis of the hyperspectral data. The sizes and shapes of individual trees are derived from the LiDAR data after individual tree-crown delineation. Both spectral and tree-crown features are combined and input into a support vector machine classifier pixel by pixel. This procedure is applied to data acquired over the Tama Forest Science Garden in Tokyo, Japan, to classify 16 tree species classes. We find that both shadow correction and tree-crown information improve the classification performance, which is further improved by postprocessing based on tree-crown information derived from the LiDAR data. With 10% training data, random sampling of pixels for training-sample selection yielded a classification accuracy of 82%, while the use of reference polygons, a more practical means of sample selection, reduced the accuracy to 71%. These values are, respectively, 21.5% and 9% higher than those obtained using hyperspectral data only.
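The spectral-feature step described above, principal component analysis of the hyperspectral bands, can be sketched with an SVD of the centered pixel-by-band matrix. Random data stands in for real pixel spectra, and the pixel/band/component counts are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for hyperspectral data: 100 pixels x 50 spectral bands.
X = rng.normal(size=(100, 50))

# PCA via SVD: center the bands, then project onto the top right-singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:3].T  # first 3 principal-component scores per pixel
```

These per-pixel scores, concatenated with LiDAR-derived crown size and shape features, would form the input to the SVM classifier in the paper's pipeline.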

Journal ArticleDOI
TL;DR: A novel two-level machine-learning framework is proposed for identifying water types from urban high-resolution remote-sensing images, which achieved satisfactory accuracies for both water extraction and water-type classification in complex urban areas.
Abstract: Water is one of the vital components of the ecological environment and plays an important role in human survival and socioeconomic development. Water resources in urban areas are gradually decreasing due to rapid urbanization, especially in developing countries. Therefore, the precise extraction and automatic identification of water bodies are of great significance and urgently required for urban planning. Although some studies have addressed water-area extraction, to our knowledge, few papers concern the identification of urban water types (e.g., rivers, lakes, canals, and ponds). In this paper, a novel two-level machine-learning framework is proposed for identifying water types from urban high-resolution remote-sensing images. The framework consists of two interpretation levels: 1) water bodies are extracted at the pixel level, where water/shadow/vegetation indexes are considered; and 2) water types are further identified at the object level, where a set of geometrical and textural features is used. Both levels employ machine learning for image interpretation. The proposed framework is validated using GeoEye-1 and WorldView-2 images over two megacities in China, i.e., Wuhan and Shenzhen, respectively. The experimental results show that the proposed method achieved satisfactory accuracies for both water extraction [95.4% (Shenzhen), 96.2% (Wuhan)] and water-type classification [94.1% (Shenzhen), 95.9% (Wuhan)] in complex urban areas.
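At the pixel level, a water index such as McFeeters' NDWI = (green - NIR) / (green + NIR) is a typical choice. The abstract does not name its exact indexes, so treat this as an illustrative stand-in, with made-up reflectance values:

```python
import numpy as np

# Hypothetical 2x2 reflectance patches for the green and near-infrared bands.
green = np.array([[0.30, 0.10],
                  [0.25, 0.08]])
nir   = np.array([[0.05, 0.30],
                  [0.06, 0.35]])

ndwi = (green - nir) / (green + nir)
water_mask = ndwi > 0  # positive NDWI commonly indicates water
```

The resulting binary mask is what the framework's second, object-level stage would segment into connected water bodies before computing geometrical and textural features per object.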

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed method is robust under different complex backgrounds and has a high detection rate with a low false-alarm rate.
Abstract: Automatic oil tank detection plays a very important role in remote sensing image processing. To accomplish this task, a hierarchical oil tank detector with deep surrounding features is proposed in this paper. The surrounding features extracted by the deep learning model aim to make oil tanks easier to recognize, since oil tanks appear as circles and this shape cue alone is not enough to separate targets from a complex background. The proposed method is divided into three modules: 1) candidate selection; 2) feature extraction; and 3) classification. First, a modified ellipse and line segment detector (ELSD) based on gradient orientation is used to select candidates in the image. Afterward, a feature combining local and surrounding information is extracted to represent the target. The histogram of oriented gradients (HOG), which can reliably capture shape information, is extracted to characterize the local patch. For the surrounding area, a convolutional neural network (CNN) trained for the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) contest is applied as a black-box feature extractor to extract rich surrounding features. Then, a linear support vector machine (SVM) is utilized as the classifier to give the final output. Experimental results indicate that the proposed method is robust under different complex backgrounds and has a high detection rate with a low false-alarm rate.
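The HOG descriptor used for the local patch is built from gradient-orientation histograms. Below is a minimal single-cell version in numpy, without the block normalization over neighboring cells that a full HOG would include; the test patch is an arbitrary horizontal ramp:

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Magnitude-weighted, unsigned gradient-orientation histogram (one HOG cell)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2 normalization

patch = np.tile(np.arange(8.0), (8, 1))  # pure horizontal intensity ramp
h = orientation_histogram(patch)
```

For the ramp, all gradient energy points in one direction, so the histogram collapses into a single bin; a circular tank rim would instead spread energy across all orientation bins, one of the cues the detector's classifier can exploit.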

Journal ArticleDOI
TL;DR: This paper studies the performance of ship detectors built on adaptive threshold algorithms with various clutter distributions, assessed automatically and systematically using large datasets of medium-resolution SAR images with AIS data as ground truth.
Abstract: This paper studies the performance of different ship detectors based on adaptive threshold algorithms. The detection algorithms are based on various clutter distributions and are assessed automatically with a systematic methodology. Evaluation using large datasets of medium-resolution SAR images, with AIS (automatic identification system) data as ground truth, allows us to evaluate the efficiency of each detector. Depending on the datasets used for testing, the detection algorithms offer different advantages and disadvantages. The systematic method used to discriminate real detected targets from false alarms in order to determine the detection rate allows us to perform an appropriate and consistent comparison of the detectors. The impact of SAR sensor characteristics (incidence angle, polarization, frequency, and spatial resolution) is fully assessed, with vessel length also considered. Experiments are conducted on Radarsat-2 and COSMO-SkyMed ScanSAR datasets and AIS data acquired by coastal stations.
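An adaptive threshold detector of the kind compared here tests each pixel against an estimate of the local clutter. A minimal 1-D cell-averaging CFAR sketch; the guard/training sizes and scale factor are arbitrary, whereas a real detector derives the scale from an assumed clutter distribution and a target false-alarm rate:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """1-D cell-averaging CFAR: flag cells exceeding scale * local clutter mean."""
    n = len(power)
    det = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training cells on both sides, excluding the guard band and the cell itself.
        window = np.r_[power[lo:max(lo, i - guard)],
                       power[min(hi, i + guard + 1):hi]]
        if window.size and power[i] > scale * window.mean():
            det[i] = True
    return det

clutter = np.ones(100)
clutter[50] = 40.0  # bright ship-like return in uniform clutter
hits = ca_cfar(clutter)
```

Because the threshold tracks the local mean, the same detector adapts automatically as clutter power varies across the image, which is what distinguishes it from a single global threshold.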

Journal ArticleDOI
TL;DR: The sea ice concentration parameter could play a role in SVM classification, and the whole process provided an effective way to classify sea ice using dual polarization ScanSAR data.
Abstract: An approach to sea ice classification using dual-polarization RADARSAT-2 ScanSAR data is presented in this paper. It is based on the support vector machine (SVM). In addition to backscatter coefficients and gray-level co-occurrence matrix (GLCM) texture features, sea ice concentration was introduced as a classification basis. To better analyze the backscatter information of sea ice types, we applied two preprocessing steps that improve ScanSAR image quality: noise-floor stripe reduction and incidence-angle normalization. Then, effective GLCM texture characteristics from both polarizations were selected using proper parameters. The third type of information, sea ice concentration, was extracted from the initial SVM classification result after the optimal SVM model was obtained from training. The final result was generated by implementing the SVM twice and the decision tree once. Using this method, the classification was improved in two aspects, both related to sea ice concentration. The results showed that the sea ice concentration parameter was effective in dealing with open water and in discriminating pancake ice from old ice. Finally, the maximum likelihood (ML) classifier was run as a comparative test. In conclusion, the sea ice concentration parameter can play a role in SVM classification, and the whole process provides an effective way to classify sea ice using dual-polarization ScanSAR data.
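Of the two preprocessing steps, incidence-angle normalization is commonly a linear correction in dB toward a reference angle. A sketch with an assumed slope; in practice the slope depends on ice type and must be estimated from the data:

```python
import numpy as np

# Hypothetical backscatter (dB) falling off with incidence angle across range.
sigma_db = np.array([-12.0, -13.0, -14.0, -15.0])
inc_deg = np.array([20.0, 25.0, 30.0, 35.0])

ref_angle = 30.0  # normalize everything to this reference incidence angle
slope = 0.2       # assumed dB-per-degree dependence for this toy ice regime
sigma_norm = sigma_db + slope * (inc_deg - ref_angle)
```

After the correction, the synthetic profile is flat: the angle-driven trend has been removed, so remaining backscatter differences can be attributed to ice type rather than imaging geometry.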

Journal ArticleDOI
TL;DR: The objective of this study is to investigate the multidecadal change in mangrove forests in the Ca Mau peninsula, South Vietnam, based on Landsat data from 1979 to 2013; the classification maps showed satisfactory agreement with ground reference data, with an overall accuracy higher than 82%.
Abstract: Mangrove forests provide important ecosystem goods and services for human society. Extensive coastal development in many developing countries has converted mangrove forests to other land uses without regard to their ecosystem service values; thus, the ecosystem state of mangrove forests is critical for officials to evaluate sustainable coastal management strategies. The objective of this study is to investigate the multidecadal change in mangrove forests in the Ca Mau peninsula, South Vietnam, based on Landsat data from 1979 to 2013. The data were processed through four main steps: 1) data preprocessing; 2) image processing using object-based image analysis (OBIA); 3) accuracy assessment; and 4) multitemporal change detection and spatial analysis of mangrove forests. The classification maps, compared with the ground reference data, showed satisfactory agreement, with an overall accuracy higher than 82%. From 1979 to 2013, the area of mangrove forests in the study region decreased by 74%, mainly due to the boom of the local aquaculture industry. Given that mangrove reforestation and afforestation only contributed about 13.2% during the last three decades, advanced mangrove management strategies are acutely needed for promoting environmental sustainability in the future.
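The 74% loss reported above is a standard relative-change computation between the areas mapped at the two dates. A one-line sketch with made-up area figures (not the study's actual numbers):

```python
# Hypothetical mapped mangrove areas (km^2) at the start and end dates.
area_start = 1000.0
area_end = 260.0

# Relative change in percent; negative values indicate loss.
change_pct = 100.0 * (area_end - area_start) / area_start
```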

Journal ArticleDOI
TL;DR: This paper compares the accuracy of seven discrete-return LiDAR filtering methods, implemented in nonproprietary tools and software, in the classification of point clouds provided by the Spanish National Plan for Aerial Orthophotography (PNOA).
Abstract: Light detection and ranging (LiDAR) is an emerging remote-sensing technology with the potential to assist in the mapping, monitoring, and assessment of forest resources. Despite a growing body of peer-reviewed literature documenting filtering methods for LiDAR data, there is little information on the qualitative and quantitative assessment of filtering methods for selecting the most appropriate one to create digital elevation models, with the final objective of normalizing the point cloud in forestry applications. Furthermore, most algorithms are proprietary and have high purchase costs, while only a few are openly available and supported by published results. This paper compares the accuracy of seven discrete-return LiDAR filtering methods, implemented in nonproprietary tools and software, in the classification of the point clouds provided by the Spanish National Plan for Aerial Orthophotography (PNOA). Two test sites on moderate to steep slopes and with various land cover types were selected. The classification accuracy of each algorithm was assessed using 424 points classified by hand and located across different terrain slopes, cover types, point cloud densities, and scan angles. The MCC filter presented the best overall performance, with a success rate of 83.3% and a Kappa index of 0.67. Compared to the other filters, MCC and LAStools balanced the error rates quite well. Sprouted scrub with abandoned logs, stumps, and woody debris, as well as terrain slopes over 15°, were the most problematic cover types for filtering. However, the influence of the point density and scan-angle variables on filtering is lower, as morphological methods are less sensitive to them.
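The success rate and Kappa index used to rank the filters come from a ground/nonground confusion matrix: observed agreement versus the agreement expected by chance. A sketch with an invented confusion matrix over 424 check points (the counts are not the paper's):

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference class, cols: predicted class)
# for ground vs. nonground points.
cm = np.array([[180.0, 20.0],
               [22.0, 202.0]])
n = cm.sum()

po = np.trace(cm) / n                          # observed agreement (success rate)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (po - pe) / (1.0 - pe)                 # Cohen's Kappa index
```

Kappa discounts the agreement a random labeler would achieve from the class proportions, which is why it is reported alongside the raw success rate when the two classes are unbalanced.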