
Showing papers on "Spatial analysis published in 2017"


Proceedings ArticleDOI
Zhen Zhou1, Yan Huang1, Wei Wang1, Liang Wang1, Tieniu Tan1 
01 Jul 2017
TL;DR: This paper focuses on video-based person re-identification and builds an end-to-end deep neural network architecture to jointly learn features and metrics and integrates the surrounding information at each location by a spatial recurrent model when measuring the similarity with another pedestrian video.
Abstract: Surveillance cameras have been widely used in different scenes. Accordingly, there is a demanding need to recognize a person under different cameras, which is called person re-identification. This topic has gained increasing interest in computer vision recently. However, less attention has been paid to video-based approaches than to image-based ones. Two steps are usually involved in previous approaches, namely feature learning and metric learning. But most of the existing approaches focus on only one of the two. Meanwhile, many of them do not make full use of the temporal and spatial information. In this paper, we concentrate on video-based person re-identification and build an end-to-end deep neural network architecture to jointly learn features and metrics. The proposed method can automatically pick out the most discriminative frames in a given video by a temporal attention model. Moreover, it integrates the surrounding information at each location by a spatial recurrent model when measuring the similarity with another pedestrian video. That is, our method handles spatial and temporal information simultaneously in a unified manner. The carefully designed experiments on three public datasets show the effectiveness of each component of the proposed deep network, which performs better than state-of-the-art methods.

350 citations


Posted Content
TL;DR: This article presents the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology; each group wrote its own implementation of its method to produce predictions at the given locations, which were subsequently run on a common computing environment.
Abstract: The Gaussian process is an indispensable tool for spatial data analysts. The onset of the "big data" era, however, has led to the traditional Gaussian process being computationally infeasible for modern spatial data. As such, various alternatives to the full Gaussian process that are more amenable to handling big spatial data have been proposed. These modern methods often exploit low-rank structures and/or multi-core and multi-threaded computing environments to facilitate computation. This study provides, first, an introductory overview of several methods for analyzing large spatial data. Second, this study describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology. Specifically, each research group was provided with two training datasets (one simulated and one observed) along with a set of prediction locations. Each group then wrote its own implementation of its method to produce predictions at the given locations, each of which was subsequently run on a common computing environment. The methods were then compared in terms of various predictive diagnostics. Supplementary materials regarding implementation details of the methods and code are available for this article online.

243 citations


Journal ArticleDOI
TL;DR: The results indicate that the data fusion method provides a robust way of extracting useful information from uncertain sensor data using only a time-invariant model dataset and the knowledge contained within an entire sensor network.

229 citations


Journal ArticleDOI
TL;DR: The group-structured prior information of hyperspectral images is incorporated into the nonnegative matrix factorization optimization, where the data are organized into spatial groups to exploit the shared sparse pattern and to avoid the loss of spatial details within a spatial group.
Abstract: In recent years, blind source separation (BSS) has received much attention in the hyperspectral unmixing field due to the fact that it allows the simultaneous estimation of both endmembers and fractional abundances. Although great performances can be obtained by the BSS-based unmixing methods, the decomposition results are still unstable and sensitive to noise. Motivated by the first law of geography, some recent studies have revealed that spatial information can lead to an improvement in the decomposition stability. In this paper, the group-structured prior information of hyperspectral images is incorporated into the nonnegative matrix factorization optimization, where the data are organized into spatial groups. Pixels within a local spatial group are expected to share the same sparse structure in the low-rank matrix (abundance). To fully exploit the group structure, image segmentation is introduced to generate the spatial groups. Instead of a predefined group with a regular shape (e.g., a cross or a square window), the spatial groups are adaptively represented by superpixels. Moreover, the spatial group structure and sparsity of the abundance are integrated as a modified mixed-norm regularization to exploit the shared sparse pattern, and to avoid the loss of spatial details within a spatial group. The experimental results obtained with both simulated and real hyperspectral data confirm the high efficiency and precision of the proposed algorithm.
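As a baseline for the matrix factorization underlying such unmixing methods, plain NMF with Lee-Seung multiplicative updates fits in a few lines of NumPy; the paper's group-structured prior and superpixel segmentation are not reproduced here, and the toy data are synthetic.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~= W @ H with
    W, H >= 0 (in unmixing terms, endmembers times abundances)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update endmembers
    return W, H

# Recover an exactly rank-2 nonnegative matrix
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 40))
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The instability the abstract mentions is visible in practice as sensitivity to the random initialisation, which is exactly what spatial priors are meant to tame.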

178 citations


Journal ArticleDOI
01 Mar 2017-Ecology
TL;DR: Three new models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality, are developed for combining data sources and used in a case study of the Brown-headed Nuthatch in the Southeastern U.S.
Abstract: The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches that used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the second data source is of lesser quality, the Correlation and Covariates models perform better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.

161 citations


Journal ArticleDOI
TL;DR: Wang et al. proposed a change detection method based on an improved MRF, in which linear weights are designed for dividing the difference image into unchanged, uncertain, and changed pixels, and a spatial attraction model is introduced to refine the spatial neighborhood relations, aiming to enhance the accuracy of spatial information in the MRF.
Abstract: Fixed weights between the center pixel and neighboring pixels are used in the traditional Markov random field (MRF) for change detection, which easily causes overuse of spatial neighborhood information. Besides, the traditional label field cannot accurately identify the spatial relations between neighborhood pixels. To solve these problems, this study proposes a change detection method based on an improved MRF. Linear weights are designed for dividing the difference image into unchanged, uncertain, and changed pixels, and a spatial attraction model is introduced to refine the spatial neighborhood relations, which aims to enhance the accuracy of spatial information in the MRF. The experimental results indicate that the proposed method can effectively enhance the accuracy of change detection.

146 citations


Journal ArticleDOI
TL;DR: The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates.
Abstract: Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are the most relevant for the AET patterns simulated by the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics.
The robustness of the calibrations is tested using an ensemble of nine calibrations with different random seeds for the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and a novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
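The three-component idea behind such a bias-insensitive pattern metric can be sketched as follows. The specific component definitions here (Pearson correlation for co-location, a standard-deviation ratio for variation, histogram overlap of standardised values for distribution) are illustrative assumptions, not the metric defined in the paper.

```python
import numpy as np

def pattern_score(sim, obs):
    """Toy bias-insensitive spatial pattern score averaging three parts:
    co-location (Pearson r), variation (ratio of spatial std devs) and
    distribution (overlap of histograms of standardised values)."""
    colocation = np.corrcoef(sim.ravel(), obs.ravel())[0, 1]
    variation = min(sim.std(), obs.std()) / max(sim.std(), obs.std())
    zs = (sim - sim.mean()) / sim.std()      # standardising removes bias
    zo = (obs - obs.mean()) / obs.std()
    hs, _ = np.histogram(zs, bins=10, range=(-3, 3))
    ho, _ = np.histogram(zo, bins=10, range=(-3, 3))
    distribution = np.minimum(hs, ho).sum() / hs.sum()
    return (colocation + variation + distribution) / 3.0

rng = np.random.default_rng(2)
obs = rng.random((40, 40))                   # synthetic AET pattern
perfect = pattern_score(obs, obs)            # identical fields
shifted = pattern_score(obs + 5.0, obs)      # same pattern, constant bias
```

A constant bias leaves all three components untouched, which is what lets discharge observations carry the water-balance information instead.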

118 citations


Journal ArticleDOI
TL;DR: A scalable data processing framework with a novel change detection algorithm, which is compared with various existing approaches such as the pruned exact linear time method, the binary segmentation method, and the segment neighborhood method, to monitor changes in the seasonal climate.

116 citations


Journal ArticleDOI
TL;DR: A modified version of the CV method called spatial k-fold cross validation (SKCV) is proposed, which provides a useful estimate for model prediction performance without optimistic bias due to SAC, and can be applied as a criterion for selecting data sampling density for new research area.
Abstract: In machine learning, one often assumes the data are independent when evaluating model performance. However, this rarely holds in practice. Geographic information datasets are an example where the data points have stronger dependencies among each other the closer they are geographically. This phenomenon known as spatial autocorrelation (SAC) causes the standard cross validation (CV) methods to produce optimistically biased prediction performance estimates for spatial models, which can result in increased costs and accidents in practical applications. To overcome this problem, we propose a modified version of the CV method called spatial k-fold cross validation (SKCV), which provides a useful estimate for model prediction performance without optimistic bias due to SAC. We test SKCV with three real-world cases involving open natural data showing that the estimates produced by the ordinary CV are up to 40% more optimistic than those of SKCV. Both regression and classification cases are considered in o...
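The core of SKCV — dropping training points that fall within a dead-zone radius of the test fold, so that spatial autocorrelation cannot leak test information into training — can be sketched in pure Python. This is a simplified reading of the method; the fold construction and parameter names are illustrative.

```python
import math
import random

def spatial_kfold(coords, k=5, buffer_radius=1.0, seed=0):
    """Yield (train, test) index pairs. Training points closer than
    buffer_radius to any test point are removed, limiting the optimistic
    bias that spatial autocorrelation causes in ordinary k-fold CV."""
    idx = list(range(len(coords)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for test in folds:
        test_set = set(test)
        train = [i for i in idx if i not in test_set
                 and min(math.dist(coords[i], coords[j]) for j in test)
                 >= buffer_radius]
        yield sorted(train), sorted(test)

# 10 x 10 grid of locations
coords = [(x, y) for x in range(10) for y in range(10)]
splits = list(spatial_kfold(coords, k=5, buffer_radius=2.0))
```

Each test fold still partitions the data; only the training sets shrink, which is the price paid for an honest performance estimate.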

97 citations


Journal ArticleDOI
TL;DR: This paper develops two techniques to construct the ensemble model, namely, hierarchical guidance filtering (HGF) and matrix of spectral angle distance (mSAD), and proposes an ensemble framework, which combines spectral and spatial information in different scales.
Abstract: Joint spectral and spatial information should be fully exploited in order to achieve accurate classification results for hyperspectral images. In this paper, we propose an ensemble framework, which combines spectral and spatial information at different scales. The motivation of the proposed method derives from the basic idea that, by integrating many individual learners, ensemble learning can achieve better generalization ability than a single learner. In the proposed work, the individual learners are obtained from joint spectral-spatial features generated at different scales. Specifically, we develop two techniques to construct the ensemble model, namely, hierarchical guidance filtering (HGF) and the matrix of spectral angle distance (mSAD). HGF and mSAD are combined via a weighted ensemble strategy. HGF is a hierarchical edge-preserving filtering operation, which can produce diverse sample sets. Meanwhile, in each hierarchy, different spatial contextual information is extracted. As the hierarchy deepens, the pixel spectra become smoother, while the spatial features are enhanced. Based on the outputs of HGF, a series of classifiers can be obtained. Subsequently, we define a low-rank matrix, mSAD, to measure the diversity among training samples in each hierarchy. Finally, an ensemble strategy is proposed using the obtained individual classifiers and mSAD. We term the proposed method HiFi-We. Experiments are conducted on two popular data sets, Indian Pines and Pavia University, as well as a challenging hyperspectral data set used in the 2014 Data Fusion Contest (GRSS_DFC_2014). An effectiveness analysis of the ensemble strategy is also presented.

96 citations


Journal ArticleDOI
01 Mar 2017-Ecology
TL;DR: Important concepts and properties related to basis functions are presented and several tools and techniques ecologists can use when modeling autocorrelation in ecological data are illustrated.
Abstract: Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
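A minimal illustration of the basis-function idea: spatial autocorrelation along a transect is absorbed by adding basis-function columns to an ordinary regression design matrix. The data are synthetic and Gaussian radial basis functions are just one of the many choices the authors discuss.

```python
import numpy as np

def gaussian_basis(s, centers, bandwidth):
    # One Gaussian bump per center, evaluated at the locations s
    return np.exp(-((s[:, None] - centers[None, :]) ** 2)
                  / (2 * bandwidth ** 2))

rng = np.random.default_rng(3)
s = np.linspace(0.0, 10.0, 200)                  # locations along a transect
y = np.sin(s) + 0.1 * rng.standard_normal(s.size)  # autocorrelated response

# Design matrix: intercept plus 12 basis functions for the spatial trend
X = np.column_stack([np.ones_like(s),
                     gaussian_basis(s, np.linspace(0, 10, 12), 1.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
residual_sd = (y - fitted).std()
```

Once the basis functions soak up the spatial structure, the residuals behave much more like the independent errors standard regression assumes.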

Journal ArticleDOI
Xiangyong Cao1, Lin Xu1, Deyu Meng1, Qian Zhao1, Zongben Xu1 
TL;DR: This paper proposes a novel spectral-spatial HSI classification method, which fully utilizes the spatial information in both steps and achieves a significant performance gain beyond state-of-the-art methods.

Journal ArticleDOI
TL;DR: A new and effective ℓ2-norm regularized SSC algorithm is developed which adds a four-neighborhood ℓ2-norm regularizer into the classical SSC model, thus taking full advantage of the spatial-spectral information contained in HSIs.
Abstract: Robust techniques such as sparse subspace clustering (SSC) have been recently developed for hyperspectral images (HSIs) based on the assumption that pixels belonging to the same land-cover class approximately lie in the same subspace. In order to account for the spatial information contained in HSIs, SSC models incorporating spatial information have become very popular. However, such models are often based on a local averaging constraint, which does not allow for a detailed exploration of the spatial information, thus limiting their discriminative capability and preventing the spatial homogeneity of the clustering results. To address these relevant issues, in this letter, we develop a new and effective ℓ2-norm regularized SSC algorithm which adds a four-neighborhood ℓ2-norm regularizer into the classical SSC model, thus taking full advantage of the spatial-spectral information contained in HSIs. The experimental results confirm the potential of including the spatial information (through the newly added ℓ2-norm regularization term) in the SSC framework, which leads to a significant improvement in the clustering accuracy of SSC when applied to HSIs.

Journal ArticleDOI
TL;DR: The model comparison results indicated that the models based on TADs consistently offer a better performance compared to the others, and the models considering spatial autocorrelation outperform the ones that do not consider it.

Journal ArticleDOI
TL;DR: This letter proposes a closed-form solution, in which the spatial and spectral features are both utilized to induce the distance-weighted regularization terms, and outperforms the state-of-the-art classifiers.
Abstract: Representation-residual-based classifiers have attracted much attention in recent years in hyperspectral image (HSI) classification. How to obtain the optimal representation coefficients for the classification task is the key problem of these methods. In this letter, spatial-aware collaborative representation (CR) is proposed for HSI classification. In order to make full use of the spatial–spectral information, we propose a closed-form solution, in which the spatial and spectral features are both utilized to induce the distance-weighted regularization terms. Different from traditional CR-based HSI classification algorithms, which model the spatial feature in a preprocessing or postprocessing stage, we directly incorporate the spatial information by adding a spatial regularization term to the representation objective function. The experimental results on three HSI data sets verify that our proposed approach outperforms the state-of-the-art classifiers.
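The closed form such CR classifiers rely on is a ridge-style solve followed by per-class residual comparison. A minimal sketch, assuming a diagonal distance-weighted regulariser and a toy two-band dictionary (the weights in the paper come from spatial and spectral features, which are not modelled here):

```python
import numpy as np

def cr_classify(D, labels, pixel, gamma):
    """Collaborative representation with a diagonal (e.g. distance-weighted)
    Tikhonov regulariser: alpha = (D^T D + Gamma)^-1 D^T y, then assign the
    class whose atoms give the smallest reconstruction residual."""
    alpha = np.linalg.solve(D.T @ D + np.diag(gamma), D.T @ pixel)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(pixel - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

# Two-band toy dictionary: two training atoms per class (columns)
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = [0, 0, 1, 1]
gamma = [0.01] * 4          # illustrative per-atom regularisation weights
```

In the spatial-aware variant, the entries of `gamma` would shrink for atoms that are spectrally and spatially close to the pixel, steering the representation toward its neighbourhood.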

Journal ArticleDOI
TL;DR: While crashes exhibited a clustered pattern, comparison of the spatio-temporal separations showed a random spread across distinct categories, so local governmental agencies can use the outcomes to adopt more effective strategies for traffic safety planning and management.
Abstract: As a developing country, Iran has one of the highest rates of crash-related deaths, with a typical rate of 15.6 cases per 100 thousand people. This paper aims to find potential temporal and spatial patterns of road crashes aggregated at the traffic analysis zone (TAZ) level in urban environments. Localization patterns and hotspot distributions were examined using a geo-information approach to find out the impact of spatial/temporal dimensions on the emergence of such patterns. The spatial clustering of crashes and hotspots was assessed using spatial autocorrelation methods such as Moran's I and the Getis-Ord Gi* index. Comap was used for comparing clusters in three attributes: time of occurrence, severity, and location. The analysis of annual crash frequencies aggregated in 156 TAZs in Shiraz, Iran, from 2010 to 2014 showed that both Moran's I and the Getis-Ord Gi* statistic produced significant clustering of crash patterns. While crashes exhibited a clustered pattern, comparison of the spatio-temporal separations showed a random spread across distinct categories. Local governmental agencies can use the outcomes to adopt more effective strategies for traffic safety planning and management.
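For reference, global Moran's I, the clustering statistic used here, reduces to a few lines of pure Python; the rook-adjacency weights below are a toy one-dimensional example, not the Shiraz TAZ weight matrix.

```python
def morans_i(values, weights):
    """Global Moran's I. weights[i][j] is the spatial weight between
    units i and j (zero on the diagonal)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Four zones on a line, rook adjacency
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = morans_i([1, 1, 0, 0], W)   # high values beside high values
dispersed = morans_i([1, 0, 1, 0], W)   # alternating pattern
```

Positive values indicate spatial clustering, negative values dispersion, which is what a significance test against the null of spatial randomness formalises.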

Journal ArticleDOI
TL;DR: In this article, the authors quantified wildfire effects by correlating changes in forest structure derived from multi-temporal Light Detection and Ranging (LiDAR) acquisitions to multi-temporal spectral changes captured by the Landsat Thematic Mapper and Operational Land Imager for the 2012 Pole Creek Fire in central Oregon.

Journal ArticleDOI
TL;DR: The graph-theory-based contour tree method was used to delineate hierarchical wetland catchments and characterize their geometric and topological properties and demonstrated that the proposed framework is promising for improving overland flow simulation and hydrologic connectivity analysis.
Abstract: In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning characterized by nested hierarchical structures and dynamic filling–spilling–merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that our proposed framework is promising for improving overland flow simulation and hydrologic connectivity analysis.
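A least-cost path of the kind used to connect wetland depressions to the stream network can be sketched with Dijkstra's algorithm on a raster cost surface. The 4-connected moves, the convention of charging cost on entering a cell, and the toy grid are illustrative assumptions.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2-D cost grid (cost charged on entering a cell),
    4-connected moves. Returns (path, total cost)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# A cheap detour around a high-cost ridge (e.g. an upland barrier)
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
path, total = least_cost_path(grid, (0, 0), (2, 0))
```

On a DEM-derived cost surface the same routine traces the likely spill route between a depression and the nearest stream cell.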

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new deep architecture which incorporates temporal and spatial information to boost the tracking performance is presented, and competitive performance of the proposed tracker over a number of state-of-the-art algorithms is demonstrated.
Abstract: Recently, deep neural networks have been widely employed to deal with the visual tracking problem. In this work, we present a new deep architecture which incorporates temporal and spatial information to boost the tracking performance. Our deep architecture contains three networks, a Feature Net, a Temporal Net, and a Spatial Net. The Feature Net extracts general feature representations of the target. With these feature representations, the Temporal Net encodes the trajectory of the target and directly learns temporal correspondences to estimate the object state from a global perspective. Based on the learning results of the Temporal Net, the Spatial Net further refines the object tracking state using local spatial object information. Extensive experiments on four of the largest tracking benchmarks, including VOT2014, VOT2016, OTB50, and OTB100, demonstrate the competitive performance of the proposed tracker against a number of state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A structured protocol for adapting a spatial ecosystem service model to local contexts is proposed and the type and level of stakeholders’ involvement is a determinant of spatial model usefulness.
Abstract: Ecosystem service (ES) spatial modelling is a key component of the integrated assessments designed to support policies and management practices aiming at environmental sustainability. ESTIMAP ("Ecosystem Service Mapping Tool") is a collection of spatially explicit models, originally developed to support policies at a European scale. We based our analysis on 10 case studies and 3 ES models. Each case study applied at least one model at a local scale. We analyzed the applications with respect to: the adaptation process; the "precision differential", which we define as the variation generated in the model between the degree of spatial variation within the spatial distribution of ES and what the model captures; and the stakeholders' opinions on the usefulness of models. We propose a protocol for adapting ESTIMAP to local conditions. We present the precision differential as a means of assessing how the type of model and level of model adaptation generate variation among model outputs. We then present the opinions of stakeholders, who in general considered the approach useful for stimulating discussion and supporting communication. The major constraints identified were the lack of spatial data with a sufficient level of detail, and the level of expertise needed to set up and compute the models.

Journal ArticleDOI
TL;DR: This article introduces a new class of data generating processes (DGP), called MGWR-SAR, in which the regression parameters and the spatial autocorrelation coefficient can vary over space, and proposes a specification procedure to identify the correct spatial weight matrix for DGPs with spatial heterogeneity and spatial autocorrelation of the endogenous variable.

Journal ArticleDOI
TL;DR: This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
Abstract: In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS) based on object-based Normalized Difference Vegetation Index (NDVI) time series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA) is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture domain land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and a significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE). This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
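The first step of the approach — PCA on an annual NDVI time series — can be sketched as an SVD on a (pixels × timesteps) matrix. The monthly profiles below are synthetic stand-ins for crop versus pasture phenologies, not MODIS data.

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """PCA via SVD on a (pixels x timesteps) matrix: returns per-pixel
    component scores (the 'principal component images')."""
    Xc = X - X.mean(axis=0)                      # center each timestep
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_comp] * S[:n_comp]

# Synthetic monthly NDVI: 10 'crop' pixels (seasonal) + 10 'pasture' (flat)
t = np.arange(12)
crop = 0.3 + 0.4 * np.sin(2 * np.pi * t / 12)
pasture = np.full(12, 0.45)
rng = np.random.default_rng(4)
X = np.vstack([np.tile(crop, (10, 1)), np.tile(pasture, (10, 1))])
X += 0.01 * rng.standard_normal(X.shape)         # sensor-like noise
scores = pca_scores(X)
```

The leading component images concentrate the phenological contrast, which is what makes the subsequent automatic segmentation into land units feasible.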

Journal ArticleDOI
TL;DR: In this article, a stochastic modeling framework is proposed to extract the subsurface heterogeneity from multiple and complementary types of data, which is considered as the hidden link between multiple spatial data sets.
Abstract: Stochastic modeling methods and uncertainty quantification are important tools for gaining insight into the geological variability of subsurface structures. Previous attempts at geologic inversion and interpretation can be broadly categorized into geostatistics and process-based modeling. The choice of a suitable modeling technique directly depends on the modeling applications and the available input data. Modern geophysical techniques provide us with regional data sets in two- or three-dimensional spaces with high resolution either directly from sensors or indirectly from geophysical inversion. Existing methods suffer certain drawbacks in producing accurate and precise (with quantified uncertainty) geological models using these data sets. In this work, a stochastic modeling framework is proposed to extract the subsurface heterogeneity from multiple and complementary types of data. Subsurface heterogeneity is considered as the “hidden link” between multiple spatial data sets. Hidden Markov random field models are employed to perform three-dimensional segmentation, which is the representation of the “hidden link”. Finite Gaussian mixture models are adopted to characterize the statistical parameters of multiple data sets. The uncertainties are simulated via a Gibbs sampling process within a Bayesian inference framework. The proposed modeling method is validated and is demonstrated using numerical examples. It is shown that the proposed stochastic modeling framework is a promising tool for three-dimensional segmentation in the field of geological modeling and geophysics.

Journal ArticleDOI
TL;DR: This paper uses a comprehensive dataset of three million street-level geocoded firm observations to explore the location pattern of software firms in an Exploratory Spatial Data Analysis (ESDA), and develops a software firm location prediction model using Poisson regression and OSM data.
Abstract: While the effects of non-geographic aggregation on inference are well studied in economics, research on geographic aggregation is rather scarce. This knowledge gap together with the use of aggregated spatial units in previous firm location studies result in a lack of understanding of firm location determinants at the microgeographic level. Suitable data for microgeographic location analysis has become available only recently through the emergence of Volunteered Geographic Information (VGI), especially the OpenStreetMap (OSM) project, and the increasing availability of official (open) geodata. In this paper, we use a comprehensive dataset of three million street-level geocoded firm observations to explore the location pattern of software firms in an Exploratory Spatial Data Analysis (ESDA). Based on the ESDA results, we develop a software firm location prediction model using Poisson regression and OSM data. Our findings demonstrate that the model yields plausible predictions and OSM data is suitable for microgeographic location analysis. Our results also show that non-aggregated data can be used to detect information on location determinants, which are superimposed when aggregated spatial units are analysed, and that some findings of previous firm location studies are not robust at the microgeographic level. However, we also conclude that the lack of high-resolution geodata on socio-economic population characteristics causes systematic prediction errors, especially in cities with diverse and segregated populations.
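The count-model core of such a location prediction model is a Poisson GLM with a log link. A self-contained Newton-Raphson (IRLS) fit on synthetic data follows; the covariate and its coefficients are invented for illustration, not taken from the paper.

```python
import numpy as np

def poisson_glm(X, y, n_iter=25):
    """Poisson regression with log link fitted by Newton-Raphson / IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # Poisson: variance equals mean
        beta = beta + np.linalg.solve(X.T @ (X * mu[:, None]),
                                      X.T @ (y - mu))
    return beta

# Synthetic firm counts driven by one covariate (e.g. an amenity density)
rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 2000)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x))        # true intercept 0.5, slope 0.3
beta = poisson_glm(X, y)
```

With street-level counts the design matrix would hold OSM-derived covariates per location, but the fitting step is exactly this iteration.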

Journal ArticleDOI
TL;DR: Results showed that GWPR and GWNBR achieved better performance than GLM in terms of average residuals and likelihood, as well as in reducing the spatial autocorrelation of the residuals, and the GWNBR model was better able to capture the spatial heterogeneity of the crash frequency.

Journal ArticleDOI
Chunhui Zhao1, Xiaoqing Wan1, Genping Zhao1, Bing Cui1, Wu Liu1, Bin Qi1 
TL;DR: A new spectral-spatial deep learning-based classification paradigm is proposed, in which a random forest (RF) classifier is first introduced into a stacked sparse autoencoder for HSI classification, based on the fact that it provides a better tradeoff among generalization performance, prediction accuracy and operation speed than other traditional procedures.
Abstract: It is of great interest to exploit spectral-spatial information for hyperspectral image (HSI) classification at different spatial resolutions. This paper proposes a new spectral-spatial deep learning-based classification paradigm. First, pixel-based scale transformation and class separability criteria are employed to select an appropriate spatial resolution for the HSI, and then we integrate the spectral and spatial information (i.e., both implicit and explicit features) together to construct a joint spectral-spatial feature set. Second, as a deep learning architecture, the stacked sparse autoencoder provides strong learning performance and is expected to exploit even more abstract and high-level feature representations from both spectral and spatial domains. Specifically, a random forest (RF) classifier is first introduced into the stacked sparse autoencoder for HSI classification, based on the fact that it provides a better tradeoff among generalization performance, prediction accuracy and operation speed compared...
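The "joint spectral-spatial feature set" mentioned above can be approximated in a few lines. This sketch is not the paper's code: it uses a toy cube with invented dimensions and the simplest possible spatial feature, the mean spectrum of a pixel's neighbourhood, concatenated onto the pixel's own spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy hyperspectral cube: 20x20 pixels, 8 spectral bands (invented sizes).
H, W, B = 20, 20, 8
cube = rng.normal(size=(H, W, B))

def joint_features(cube, r=1):
    """Concatenate each pixel's spectrum (explicit spectral feature) with the
    mean spectrum of its (2r+1)x(2r+1) neighbourhood (a simple spatial feature)."""
    h, w, b = cube.shape
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    spatial = np.zeros_like(cube)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            spatial += padded[dy:dy + h, dx:dx + w]
    spatial /= (2 * r + 1) ** 2
    return np.concatenate([cube, spatial], axis=-1)  # shape (h, w, 2b)

feats = joint_features(cube)
```

A feature set like this would then be fed into the autoencoder for higher-level representation learning; the sketch only covers the feature-construction step.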

Journal ArticleDOI
TL;DR: Both the computer simulation and empirical analysis support the proposed approach, namely Conditional geographically weighted regression (CGWR), which significantly reduces the bias and variance of data fitting.
Abstract: Geographically weighted regression (GWR) is a modelling technique designed to deal with spatial non-stationarity, e.g., mean values that vary by location. It has been widely used as a visualization tool to explore the patterns of spatial data. However, GWR tends to produce unsmooth surfaces when the mean parameters have considerable variations, partly because all parameter estimates are derived from a fixed range (bandwidth) of observations. In order to deal with the varying bandwidth problem, this paper proposes an alternative approach, namely conditional geographically weighted regression (CGWR). The estimation of CGWR is based on an iterative procedure, analogous to numerical optimization. Computer simulation, under realistic settings, is used to compare the performance of the traditional GWR, CGWR, and a local linear modification of GWR. Furthermore, this study also applies CGWR to two empirical datasets to evaluate model performance. The first dataset consists of the disability status of Taiwan’s elderly, along with some socio-economic variables, and the other is Ohio’s crime dataset. Under the positively correlated scenario, we found that CGWR produces a better fit for the response surface. Both the computer simulation and the empirical analysis support the proposed approach, since it significantly reduces the bias and variance of data fitting. In addition, the response surface from CGWR reveals local spatial characteristics according to the corresponding variables. As an exploratory tool for spatial data, producing an accurate surface is essential in order to provide a first look at the data; any distorted outcomes would likely mislead the subsequent analysis. Since CGWR can generate a more accurate surface, it is more appropriate for exploring data that contain suspicious variables with varying characteristics.
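The local fitting at the heart of GWR can be shown in a minimal sketch. This is plain GWR with a fixed Gaussian kernel bandwidth, not the proposed CGWR, and all data are synthetic; it demonstrates the non-stationarity that motivates the method, with the estimated slope varying across space:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic non-stationary data: the regression slope drifts from west to east.
n = 300
coords = rng.uniform(0, 10, (n, 2))
x = rng.normal(size=n)
slope = 1.0 + 0.2 * coords[:, 0]
y = slope * x + rng.normal(0, 0.1, n)

def gwr_beta(u, bandwidth=1.5):
    """Local weighted least squares at location u with a Gaussian kernel."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

b_west = gwr_beta(np.array([1.0, 5.0]))[1]   # local slope in the west
b_east = gwr_beta(np.array([9.0, 5.0]))[1]   # local slope in the east
```

The fixed `bandwidth` here is exactly the limitation the abstract criticizes: every local estimate uses the same kernel range regardless of how fast the parameters vary, which is what CGWR's iterative procedure is designed to relax.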

Journal ArticleDOI
TL;DR: The results provide some evidence that a smaller number of neighbours used in defining the spatial weights matrix yields a better model fit, and may provide a more accurate representation of the underlying spatial random field.
Abstract: When analysing spatial data, it is important to account for spatial autocorrelation. In Bayesian statistics, spatial autocorrelation is commonly modelled by the intrinsic conditional autoregressive prior distribution. At the heart of this model is a spatial weights matrix which controls the behaviour and degree of spatial smoothing. The purpose of this study is to review the main specifications of the spatial weights matrix found in the literature, and together with some new and less common specifications, compare the effect that they have on smoothing and model performance. The popular BYM model is described, and a simple solution for addressing the identifiability issue among the spatial random effects is provided. Seventeen different definitions of the spatial weights matrix are defined, which are classified into four classes: adjacency-based weights, and weights based on geographic distance, distance between covariate values, and a hybrid of geographic and covariate distances. These last two definitions embody the main novelty of this research. Three synthetic data sets are generated, each representing a different underlying spatial structure. These data sets together with a real spatial data set from the literature are analysed using the models. The models are evaluated using the deviance information criterion and Moran’s I statistic. The deviance information criterion indicated that the model which uses binary, first-order adjacency weights to perform spatial smoothing is generally an optimal choice for achieving a good model fit. Distance-based weights also generally perform quite well and offer similar parameter interpretations. The less commonly explored options for performing spatial smoothing generally provided a worse model fit than models with more traditional approaches to smoothing, but usually outperformed the benchmark model which did not conduct spatial smoothing. 
The specification of the spatial weights matrix can have a considerable impact on model fit and parameter estimation. The results provide some evidence that a smaller number of neighbours used in defining the spatial weights matrix yields a better model fit, and may provide a more accurate representation of the underlying spatial random field. The commonly used binary, first-order adjacency weights still appear to be a good choice for implementing spatial smoothing.
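The binary, first-order adjacency weights favoured above, and the Moran's I statistic used to evaluate the models, can both be computed directly. A small sketch (a synthetic 10x10 grid with rook adjacency, not the study's data) shows that a smooth surface scores high on Moran's I while a spatially random permutation of the same values scores near zero:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for a symmetric spatial weights matrix W."""
    z = values - values.mean()
    return len(z) * (z @ W @ z) / (W.sum() * (z @ z))

# Binary, first-order (rook) adjacency weights on a 10x10 grid.
side = 10
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[i * side + j, ii * side + jj] = 1.0

rng = np.random.default_rng(5)
gradient = np.add.outer(np.arange(side), np.arange(side)).ravel().astype(float)
shuffled = rng.permutation(gradient)   # same values, no spatial structure
```

In a Bayesian disease-mapping workflow this `W` would be plugged into the intrinsic CAR prior, and Moran's I would be applied to the residuals to check whether the smoothing has absorbed the spatial autocorrelation.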

Journal ArticleDOI
01 Jul 2017-Catena
TL;DR: In this paper, a generalized additive model (GAM) was compared to random forest (RF) and support vector regression (SVR) for the predictor selection, and a land potential assessment for soil nutrients was conducted using trimmed k-mean cluster analysis.
Abstract: Mountain soils play an essential role in ecosystem management. Assessment of land potentials can provide detailed spatial information, particularly concerning nutrient availability. Spatial distributions of topsoil carbon, nitrogen and available phosphorus in mountain regions were identified using supervised learning methods, and a functional landscape analysis was performed in order to determine the spatial soil fertility pattern for the Soyang Lake watershed in South Korea. Specific research aims were (1) to identify important predictors; (2) to develop digital soil maps; (3) to assess land potentials using digital soil maps. Soil profiles and samples were collected by conditioned Latin Hypercube Sampling, considering operational field constraints such as accessibility and no-go areas contaminated by landmines as well as budget limitations. Terrain parameters and different vegetation indices were derived for the covariates. We compared a generalized additive model (GAM) to random forest (RF) and support vector regression (SVR). For the predictor selection, we used recursive feature elimination (RFE). A land potential assessment for soil nutrients was conducted using trimmed k-means cluster analysis. Results suggested that vegetation indices have powerful abilities to predict soil nutrients. Using the predictors selected via RFE improved prediction results. RF showed the best performance. Cluster analysis identified four land potential classes: fertile, medium and low fertility, plus an additional class dominated by high phosphorus and low carbon and nitrogen contents due to human impact. This study provides an effective approach to map land potentials for mountain ecosystem management.
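Recursive feature elimination can be sketched with a simple stand-in. In the snippet below (illustrative only) ordinary least squares replaces the paper's RF-based importance ranking, and the covariates are synthetic; the backward-elimination loop is the part that corresponds to RFE:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic covariates: only the first two actually drive the soil property.
n, p = 400, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, n)

def rfe_ols(X, y, n_keep):
    """Backward elimination: repeatedly drop the predictor with the smallest
    standardized |OLS coefficient| until n_keep predictors remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        Xs = X[:, keep]
        Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)
        A = np.column_stack([np.ones(len(y)), Xs])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        keep.pop(int(np.argmin(np.abs(beta[1:]))))
    return keep

selected = rfe_ols(X, y, 2)
```

With an RF importance measure in place of the OLS coefficients, the same loop recovers the paper's procedure: rank, drop the weakest covariate, refit, repeat.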

Journal ArticleDOI
TL;DR: A prognostic use of the method promises to increase the availability of information about the number of events at the regional scale, and to facilitate the production of inventory maps, yielding useful results to study the phenomenon for model tuning, landslide forecast model validation, and the relationship between triggering factors and number of occurred events.
Abstract: Landslides cause damage and affect victims worldwide, but landslide information is lacking. Even large events may not leave records when they happen in remote areas or simply do not impact vulnerable elements. This paper proposes a procedure to measure the spatial autocorrelation changes induced by event landslides in a multi-temporal series of synthetic aperture radar (SAR) intensity Sentinel-1 images. The procedure first measures pixel-based changes between consecutive couples of SAR intensity images using the Log-Ratio index, then it follows the temporal evolution of the spatial autocorrelation inside the Log-Ratio layers using Moran’s I index and the semivariance. When an event occurs, Moran’s I index and the semivariance increase compared to the values measured before and after the event. The growth in spatial autocorrelation is due to the local homogenization of the soil response caused by the event landslide. The emerging clusters of autocorrelated pixels generated by the event are localized by a process of optimal segmentation of the Log-Ratio layers. The procedure was used to intercept an event that occurred in August 2015 in the Tozang area of Myanmar, when strong rainfall triggered a number of landslides. A prognostic use of the method promises to increase the availability of information about the number of events at the regional scale, and to facilitate the production of inventory maps, yielding useful results for studying the phenomenon for model tuning, landslide forecast model validation, and the relationship between triggering factors and the number of occurred events.
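The Log-Ratio change index that opens this procedure is simple to compute. The toy sketch below uses synthetic gamma-distributed intensities rather than Sentinel-1 data, and the global thresholding at the end is an illustrative addition, not the paper's optimal-segmentation step:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two toy SAR intensity layers: gamma-distributed speckle, with a coherent
# patch that brightens the "after" acquisition (a stand-in for a landslide).
shape = (40, 40)
before = rng.gamma(4.0, 1.0, shape)
after = rng.gamma(4.0, 1.0, shape)
after[10:20, 10:20] *= 3.0

# Log-Ratio index between the consecutive acquisitions.
log_ratio = np.log(after / before)

# Flag pixels whose log-ratio exceeds a simple global threshold.
thr = log_ratio.mean() + 2.0 * log_ratio.std()
changed = log_ratio > thr
```

The flagged pixels concentrate inside the altered patch, which is the spatial clustering that the paper then quantifies over time with Moran's I and the semivariance before segmenting the autocorrelated clusters.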