
Showing papers on "Spatial analysis" published in 2019


Journal ArticleDOI
TL;DR: The usage and advantages of landscapemetrics are demonstrated by analysing the influence of different sampling schemes on the estimation of landscape metrics, highlighting in particular the package's easy integration into large workflows.
Abstract: Quantifying landscape characteristics and linking them to ecological processes is one of the central goals of landscape ecology. Landscape metrics are a widely used tool for the analysis of patch‐based, discrete land‐cover classes. Existing software to calculate landscape metrics has several constraints, such as being limited to a single platform, not being open‐source or involving a complicated integration into large workflows. We present landscapemetrics, an open‐source R package that overcomes many constraints of existing landscape metric software. The package includes an extensive collection of commonly used landscape metrics in a tidy workflow. To facilitate the integration into large workflows, landscapemetrics is based on a well‐established spatial framework in R. This allows pre‐processing of land‐cover maps or further statistical analysis without importing and exporting the data from and to different software environments. Additionally, the package provides many utility functions to visualize, extract, and sample landscape metrics. Lastly, we provide building‐blocks to motivate the development and integration of new metrics in the future. We demonstrate the usage and advantages of landscapemetrics by analysing the influence of different sampling schemes on the estimation of landscape metrics. In so doing, we demonstrate the many advantages of the package, especially its easy integration into large workflows. These new developments should help with the integration of landscape analysis in ecological research, given that ecologists are increasingly using R for the statistical analysis, modelling and visualization of spatial data.
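To make the patch-based metric idea concrete, here is a minimal Python sketch (not the landscapemetrics R interface) that computes two basic metrics, patch count and mean patch area, for one class of a toy categorical raster; the class value and cell resolution are assumptions for illustration.

```python
# Illustrative sketch (not the landscapemetrics R API): two basic
# patch-based landscape metrics computed with numpy/scipy on a
# categorical 2-D land-cover array.
import numpy as np
from scipy import ndimage

landscape = np.random.randint(0, 3, size=(100, 100))  # toy land-cover map
target_class = 1

# Patches = connected components of the target class (4-neighbour rule).
mask = landscape == target_class
labels, n_patches = ndimage.label(mask)

cell_area = 10 * 10  # m^2 per cell, an assumed resolution
patch_sizes = np.bincount(labels.ravel())[1:] * cell_area

print("number of patches:", n_patches)
print("mean patch area (m^2):", patch_sizes.mean())
```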

427 citations


Book ChapterDOI
13 Mar 2019
TL;DR: The Moran scatterplot, presented in this chapter, is a simple tool for visualising and examining the degree of spatial instability in spatial association by means of Moran's I, for use in exploratory spatial data analysis in the sense of spatial dependence and spatial heterogeneity.
Abstract: This chapter suggests a simple tool to visualise and examine the degree of spatial instability in spatial association by means of Moran's I. It discusses the salient characteristics of techniques for exploratory spatial data analysis. Exploratory spatial data analysis should focus explicitly on the spatial aspects of the data, in the sense of spatial dependence and spatial heterogeneity. The indicators of spatial association from either view that are most relevant for an exploratory approach to spatial data analysis are those that show local patterns and allow for local instabilities. The chapter reviews some methods that have been suggested to deal with local instability in spatial association. It outlines the ideas behind the Moran scatterplot and discusses its properties and potential use. The implementation of a Moran scatterplot is straightforward, since most statistical and many geographic information systems software packages include a scatterplot function and an associated linear regression smoother and indication of fit.
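The quantities behind a Moran scatterplot are easy to compute directly. Below is a minimal Python sketch, assuming a row-standardised weights matrix W: the scatterplot pairs are (standardised variable, spatial lag), and Moran's I is the OLS slope of the lag on the standardised variable.

```python
# Minimal sketch of the quantities behind a Moran scatterplot, assuming
# a row-standardised spatial weights matrix W (n x n) and a variable x.
import numpy as np

def moran_scatter(x, W):
    z = (x - x.mean()) / x.std()      # standardised variable
    lag = W @ z                       # spatial lag of each observation
    # With row-standardised W, the OLS slope of lag on z is Moran's I.
    I = (z @ lag) / (z @ z)
    return z, lag, I

rng = np.random.default_rng(0)
n = 50
W = rng.random((n, n)); np.fill_diagonal(W, 0)
W /= W.sum(axis=1, keepdims=True)     # row-standardise
z, lag, I = moran_scatter(rng.normal(size=n), W)
print("Moran's I:", I)
```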

319 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce mgwr, a Python-based implementation of MGWR that explicitly focuses on the multiscale analysis of spatial heterogeneity, and provide novel functionality for inference and exploratory analysis of local spatial processes, new diagnostics unique to multi-scale local models, and drastic improvements in estimation routines.
Abstract: Geographically weighted regression (GWR) is a spatial statistical technique that recognizes that traditional ‘global’ regression models may be limited when spatial processes vary with spatial context. GWR captures process spatial heterogeneity by allowing effects to vary over space. To do this, GWR calibrates an ensemble of local linear models at any number of locations using ‘borrowed’ nearby data. This provides a surface of location-specific parameter estimates for each relationship in the model that is allowed to vary spatially, as well as a single bandwidth parameter that provides intuition about the geographic scale of the processes. A recent extension to this framework allows each relationship to vary according to a distinct spatial scale parameter, and is therefore known as multiscale (M)GWR. This paper introduces mgwr, a Python-based implementation of MGWR that explicitly focuses on the multiscale analysis of spatial heterogeneity. It provides novel functionality for inference and exploratory analysis of local spatial processes, new diagnostics unique to multi-scale local models, and drastic improvements to efficiency in estimation routines. We provide two case studies using mgwr, in addition to reviewing core concepts of local models. We present this in a literate programming style, providing an overview of the primary software functionality and demonstrations of suggested usage alongside the discussion of primary concepts and demonstration of the improvements made in mgwr.
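As a rough usage sketch of the package, GWR calibrates a single shared bandwidth while MGWR selects one spatial scale per covariate. Class and method names below follow the mgwr documentation as we recall it, so verify against the installed version; the data are toy stand-ins.

```python
# Hedged sketch of basic mgwr usage. coords: (n, 2), y: (n, 1), X: (n, k).
import numpy as np
from mgwr.gwr import GWR, MGWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
coords = rng.uniform(size=(100, 2))
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.0, -0.5]) + rng.normal(size=100)).reshape(-1, 1)

# The docs recommend standardising variables, especially for MGWR.
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = (y - y.mean()) / y.std()

# Classic GWR: one bandwidth shared by every relationship.
bw = Sel_BW(coords, y, X).search()
gwr_results = GWR(coords, y, X, bw).fit()

# MGWR: a separate bandwidth (spatial scale) per covariate.
selector = Sel_BW(coords, y, X, multi=True)
selector.search()
mgwr_results = MGWR(coords, y, X, selector).fit()
print(gwr_results.aicc, mgwr_results.aicc)
```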

308 citations


Journal ArticleDOI
TL;DR: This study provides an introductory overview of several methods for analyzing large spatial data and describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology.
Abstract: The Gaussian process is an indispensable tool for spatial data analysts. The onset of the “big data” era, however, has led to the traditional Gaussian process being computationally infeasible for modern spatial data. As such, various alternatives to the full Gaussian process that are more amenable to handling big spatial data have been proposed. These modern methods often exploit low-rank structures and/or multi-core and multi-threaded computing environments to facilitate computation. This study provides, first, an introductory overview of several methods for analyzing large spatial data. Second, this study describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology. Specifically, each research group was provided with two training datasets (one simulated and one observed) along with a set of prediction locations. Each group then wrote their own implementation of their method to produce predictions at the given locations, and each was subsequently run on a common computing environment. The methods were then compared in terms of various predictive diagnostics. Supplementary materials regarding implementation details of the methods and code are available for this article online.
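One representative low-rank trick can be sketched in a few lines. The following Python example, with assumed toy data and an exponential covariance, conditions on m << n knot locations (a predictive-process / Nyström-style approximation) and uses the Woodbury identity so only an m x m system is ever solved.

```python
# Conceptual sketch of a low-rank Gaussian-process approximation for
# large n: condition on m << n knots so prediction costs O(n m^2).
import numpy as np

def expcov(A, B, range_=0.2, sill=1.0):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sill * np.exp(-d / range_)

rng = np.random.default_rng(1)
n, m = 5000, 50
s = rng.uniform(size=(n, 2))                    # observation locations
y = np.sin(4 * s[:, 0]) + 0.1 * rng.normal(size=n)
knots = rng.uniform(size=(m, 2))                # knot locations

C_nm = expcov(s, knots)                         # n x m cross-covariance
C_mm = expcov(knots, knots) + 1e-8 * np.eye(m)
tau2 = 0.1**2                                   # nugget / noise variance

# Woodbury identity: solve an m x m system instead of an n x n one.
A = C_mm + C_nm.T @ C_nm / tau2
w = np.linalg.solve(A, C_nm.T @ y / tau2)
s0 = np.array([[0.5, 0.5]])                     # prediction location
print(expcov(s0, knots) @ w)                    # low-rank predictive mean
```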

273 citations


Journal ArticleDOI
TL;DR: The r package blockCV, presented in this paper, is a toolbox for cross-validation of species distribution modelling that can be used for any spatial modelling; it addresses the problem that conventional random cross-validation of structured data can underestimate prediction error and result in inappropriate model selection.
Abstract: When applied to structured data, conventional random cross-validation techniques can lead to underestimation of prediction error, and may result in inappropriate model selection. We present the r package blockCV, a new toolbox for cross-validation of species distribution modelling. Although it has been developed with species distribution modelling in mind, it can be used for any spatial modelling. The package can generate spatially or environmentally separated folds. It includes tools to measure spatial autocorrelation ranges in candidate covariates, providing the user with insights into the spatial structure in these data. It also offers interactive graphical capabilities for creating spatial blocks and exploring data folds. Package blockCV enables modellers to more easily implement a range of evaluation approaches. It will help the modelling community learn more about the impacts of evaluation approaches on our understanding of predictive performance of species distribution models.
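The blocking idea itself (independent of the package's R interface) can be illustrated with a short Python sketch: points are assigned to square blocks, and whole blocks, rather than points, are dealt out to folds, so training and test sets are spatially separated. Block size and fold count are assumptions for illustration.

```python
# Illustrative sketch of spatial blocking (not the blockCV R API).
import numpy as np

def spatial_block_folds(coords, block_size, k=5, seed=0):
    # Index of the square block each point falls in.
    ij = np.floor(coords / block_size).astype(int)
    _, block_id = np.unique(ij, axis=0, return_inverse=True)
    rng = np.random.default_rng(seed)
    # Randomly deal blocks (not points) out to the k folds.
    block_fold = rng.permutation(block_id.max() + 1) % k
    return block_fold[block_id]

coords = np.random.uniform(0, 100, size=(500, 2))
folds = spatial_block_folds(coords, block_size=25.0, k=5)
print(np.bincount(folds))  # points per fold
```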

251 citations


Journal ArticleDOI
05 Dec 2019-Nature
TL;DR: A new computational framework, novoSpaRc, leverages single-cell data to reconstruct spatial context for cells and spatial expression across tissues and organisms, on the basis of an organization principle for gene expression.
Abstract: Multiplexed RNA sequencing in individual cells is transforming basic and clinical life sciences1–4. Often, however, tissues must first be dissociated, and crucial information about spatial relationships and communication between cells is thus lost. Existing approaches to reconstruct tissues assign spatial positions to each cell, independently of other cells, by using spatial patterns of expression of marker genes5,6—which often do not exist. Here we reconstruct spatial positions with little or no prior knowledge, by searching for spatial arrangements of sequenced cells in which nearby cells have transcriptional profiles that are often (but not always) more similar than cells that are farther apart. We formulate this task as a generalized optimal-transport problem for probabilistic embedding and derive an efficient iterative algorithm to solve it. We reconstruct the spatial expression of genes in mammalian liver and intestinal epithelium, fly and zebrafish embryos, sections from the mammalian cerebellum and whole kidney, and use the reconstructed tissues to identify genes that are spatially informative. Thus, we identify an organization principle for the spatial expression of genes in animal tissues, which can be exploited to infer meaningful probabilities of spatial position for individual cells. Our framework (‘novoSpaRc’) can incorporate prior spatial information and is compatible with any single-cell technology. Additional principles that underlie the cartography of gene expression can be tested using our approach.
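The optimal-transport machinery at the core of this kind of probabilistic embedding can be illustrated with the standard entropic (Sinkhorn) iteration. This is a conceptual Python sketch with a random stand-in cost matrix, not the novoSpaRc implementation.

```python
# Conceptual sketch of entropic optimal transport: Sinkhorn iterations
# produce a soft (probabilistic) assignment of cells to locations.
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, iters=200):
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):                # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # transport plan (cells x locations)

n_cells, n_locs = 40, 30
cost = np.random.random((n_cells, n_locs))  # stand-in for expression/space mismatch
P = sinkhorn(cost, np.full(n_cells, 1 / n_cells), np.full(n_locs, 1 / n_locs))
print(P.sum(axis=1)[:3])  # each cell's location probabilities sum to 1/n_cells
```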

198 citations


Journal ArticleDOI
TL;DR: A new hand-crafted feature extraction method based on multiscale covariance maps (MCMs) is proposed, specifically aimed at improving the classification of HSIs using CNNs; experiments demonstrate that the proposed method can indeed increase the robustness of the CNN model.
Abstract: The classification of hyperspectral images (HSIs) using convolutional neural networks (CNNs) has recently drawn significant attention. However, it is important to address the potential overfitting problems that CNN-based methods suffer when dealing with HSIs. Unlike common natural images, HSIs are essentially third-order tensors which contain two spatial dimensions and one spectral dimension. As a result, exploiting both spatial and spectral information is very important for HSI classification. This paper proposes a new hand-crafted feature extraction method, based on multiscale covariance maps (MCMs), that is specifically aimed at improving the classification of HSIs using CNNs. The proposed method has the following distinctive advantages. First, with the use of covariance maps, the spatial and spectral information of the HSI can be jointly exploited. Each entry in the covariance map stands for the covariance between two different spectral bands within a local spatial window, which can absorb and integrate the two kinds of information (spatial and spectral) in a natural way. Second, by means of our multiscale strategy, each sample can be enhanced with spatial information from different scales, increasing the information conveyed by training samples significantly. To verify the effectiveness of our proposed method, we conduct comprehensive experiments on three widely used hyperspectral data sets, using a classical 2-D CNN (2DCNN) model. Our experimental results demonstrate that the proposed method can indeed increase the robustness of the CNN model. Moreover, the proposed MCMs+2DCNN method exhibits better classification performance than other CNN-based classification strategies and several standard techniques for spectral-spatial classification of HSIs.
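The basic covariance-map construction is simple to state in code: for a pixel's w x w spatial window, the map is the B x B covariance between spectral bands over the window's pixels (the multiscale version varies w). A minimal Python sketch with an assumed toy cube:

```python
# Minimal sketch of a covariance map for one pixel of an HSI cube.
# Assumes the pixel is at least half a window away from the border.
import numpy as np

def covariance_map(hsi, row, col, w=11):
    half = w // 2
    patch = hsi[row - half:row + half + 1, col - half:col + half + 1, :]
    pixels = patch.reshape(-1, patch.shape[-1])   # (w*w, B)
    return np.cov(pixels, rowvar=False)           # (B, B) covariance map

hsi = np.random.random((64, 64, 20))              # toy cube: 20 bands
cmap = covariance_map(hsi, 32, 32, w=11)
print(cmap.shape)                                  # (20, 20)
```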

186 citations


Journal ArticleDOI
TL;DR: It is confirmed that spatial cross-validation is essential to prevent overoptimistic model performance, and that in addition to spatial validation, spatial variable selection must be considered in spatial predictions of ecological data to produce reliable predictions.

175 citations


Journal ArticleDOI
TL;DR: In this article, the effects of spatial autocorrelation on hyperparameter tuning and performance estimation are examined by comparing several widely used machine-learning algorithms, such as boosted regression trees (BRT), k-nearest neighbor (KNN), random forest (RF) and support vector machines (SVM), with traditional parametric algorithms such as logistic regression (GLM) and semi-parametric ones like generalized additive models (GAM), in terms of predictive performance.

173 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed spectral-spatial attention network for hyperspectral image classification can fully utilize the spectral and spatial information to obtain competitive performance.
Abstract: Many deep learning models, such as convolutional neural network (CNN) and recurrent neural network (RNN), have been successfully applied to extracting deep features for hyperspectral tasks. Hyperspectral image classification allows distinguishing the characterization of land covers by utilizing their abundant information. Motivated by the attention mechanism of the human visual system, in this study, we propose a spectral-spatial attention network for hyperspectral image classification. In our method, RNN with attention can learn inner spectral correlations within a continuous spectrum, while CNN with attention is designed to focus on saliency features and spatial relevance between neighboring pixels in the spatial dimension. Experimental results demonstrate that our method can fully utilize the spectral and spatial information to obtain competitive performance.
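The reweighting mechanism common to both branches can be reduced to a toy Python sketch: learned scores become a softmax distribution that scales the input features (here one pixel's spectrum, with random stand-ins for the scoring subnetwork's output):

```python
# Toy sketch of attention reweighting, the general mechanism the
# paper's spectral and spatial branches build on.
import numpy as np

def attend(features, scores):
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights * features                 # attention-weighted features

spectrum = np.random.random(20)    # one pixel across 20 bands
scores = np.random.random(20)      # stand-in for a learned scoring network
print(attend(spectrum, scores).shape)
```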

163 citations


Journal ArticleDOI
TL;DR: This paper presents a hierarchical robust CNN, where multiscale convolutional features are extracted to represent the hierarchical spatial semantic information and multiple fully connected layer features are stacked together so as to improve the rotation and scaling robustness.
Abstract: Object detection, for automatically labeling objects, is a basic issue in very high-resolution remote sensing images (RSIs). At present, deep learning has gradually gained the competitive advantage for remote sensing object detection, especially based on convolutional neural networks (CNNs). Most of the existing methods use the global information in the fully connected feature vector and ignore the local information in the convolutional feature cubes. However, the local information can provide spatial information, which is helpful for accurate localization. In addition, there are variable factors, such as rotation and scaling, which affect the object detection accuracy in RSIs. In order to solve these problems, this paper presents a hierarchical robust CNN. First, multiscale convolutional features are extracted to represent the hierarchical spatial semantic information. Second, multiple fully connected layer features are stacked together so as to improve the rotation and scaling robustness. Experiments on two data sets have shown the effectiveness of our method. In addition, a large-scale high-resolution remote sensing object detection data set is established to address the fact that existing data sets are insufficient or too small. The data set is available at https://github.com/CrazyStoneonRoad/TGRS-HRRSD-Dataset.

Journal ArticleDOI
TL;DR: This paper proposes an efficient and geometric range query scheme (EGRQ) supporting searching and data access control over encrypted spatial data, and employs secure KNN computation, polynomial fitting technique, and order-preserving encryption to achieve secure, efficient, and accurate geometricrange query over cloud data.
Abstract: As a basic query function, range query has been exploited in many scenarios such as SQL retrieval, location-based services, and computational geometry. Meanwhile, with the explosive growth of data volume, users are increasingly inclined to store data on the cloud to save local storage and computational cost. However, a long-standing problem is that the user’s data may be completely revealed to the cloud server because it has full data access rights. To cope with this problem, a frequently-used method is to encrypt raw data before outsourcing them, but the availability and operability of data will be reduced significantly. In this paper, we propose an efficient and geometric range query scheme (EGRQ) supporting searching and data access control over encrypted spatial data. We employ secure KNN computation, polynomial fitting technique, and order-preserving encryption to achieve secure, efficient, and accurate geometric range query over cloud data. Then, we propose a novel spatial data access control strategy to refine user’s rights in our EGRQ. To improve the efficiency, R-tree is adopted to reduce the searching space and matching times in the whole search process. Finally, we theoretically prove the security of our proposed scheme in terms of confidentiality of spatial data, privacy protection of index and trapdoor, and the unlinkability of trapdoors. In addition, extensive experiments demonstrate the high efficiency of our proposed model compared with existing schemes.

Journal ArticleDOI
TL;DR: A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past, typically alongside unusually high t statistics and severe spatial autocorrelation in residuals; this paper examines whether these two properties might be connected and finds that, even for modest ranges of spatial correlation between points, t statistics become severely inflated, leading to significance levels that are in error by several orders of magnitude.
Abstract: A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t statistics become severely inflated leading to significance levels that are in error by several orders of magnitude. We analyse 27 persistence studies in leading journals and find that in most cases if we replace the main explanatory variable with spatial noise the fit of the regression commonly improves; and if we replace the dependent variable with spatial noise, the persistence variable can still explain it at high significance levels. We can predict in advance which persistence results might be the outcome of fitting spatial noise from the degree of spatial autocorrelation in their residuals measured by a standard Moran statistic. Our findings suggest that the results of persistence studies, and of spatial regressions more generally, might be treated with some caution in the absence of reported Moran statistics and noise simulations.
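The paper's artificial-regression exercise is easy to reproduce in outline. The following Python sketch, with an assumed correlation range and sample size, regresses one simulated spatially autocorrelated noise field on another, independent one, and reports how often the naive OLS t statistic exceeds 1.96; the share comes out far above the nominal 5%.

```python
# Sketch of the artificial-regression exercise: two independent spatial
# noise fields, naive OLS, and the resulting inflated t statistics.
import numpy as np

rng = np.random.default_rng(42)
n = 400
pts = rng.uniform(size=(n, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Sigma = np.exp(-d / 0.2)                     # exponential spatial correlation
L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(n))

tstats = []
for _ in range(200):
    y = L @ rng.normal(size=n)               # spatial noise "outcome"
    x = L @ rng.normal(size=n)               # independent spatial noise "regressor"
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
    tstats.append(beta[1] / se)

# Under valid inference about 5% would exceed 1.96.
print("share |t| > 1.96:", np.mean(np.abs(tstats) > 1.96))
```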

Journal ArticleDOI
TL;DR: In this study, eight sample points were used to analyse water quality in the Mamasın Dam, as part of the project “Assessment and Modeling with GIS and RS Data of the Land Use Effects on Water Quality of Mamasın Dam”, supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under its 2209/A program for supporting students.
Abstract: The use of open source software, which has been constantly evolving since the mid-2000s, has affected every research discipline. Disciplines using geographic information systems (GIS) and remote sensing...

Journal ArticleDOI
TL;DR: The space-for-time substitution assumption is often used implicitly for studying ecological processes in static spatial data sets, but since ecological processes occur in time, this practice is problematic, especially in nonstationary environments.
Abstract: The space-for-time substitution assumption is often used implicitly for studying ecological processes in static spatial data sets. Since ecological processes occur in time, this practice is problematic, especially in nonstationary environments. More processes might lead to the same spatial pattern, and instead of testing hypotheses on ecological processes by analyzing spatial variation in static data, it is more judicious to report the observed spatial patterns and only discuss which ecological processes are in concordance with the observed spatial pattern. Alternatively, it might be feasible to combine relatively sparse time-series data or experimental data with spatial variation data and analyze such data types in a common statistical framework.

Proceedings ArticleDOI
Kai Su, Dongdong Yu, Zhenqi Xu, Xin Geng, Changhu Wang
09 May 2019
TL;DR: Two novel modules are proposed to enhance the information for multi-person pose estimation, adopting the channel shuffle operation on feature maps with different levels to promote cross-channel information communication among the pyramid feature maps.
Abstract: Multi-person pose estimation is an important but challenging problem in computer vision. Although current approaches have achieved significant progress by fusing the multi-scale feature maps, they pay little attention to enhancing the channel-wise and spatial information of the feature maps. In this paper, we propose two novel modules to perform the enhancement of the information for the multi-person pose estimation. First, a Channel Shuffle Module (CSM) is proposed to adopt the channel shuffle operation on the feature maps with different levels, promoting cross-channel information communication among the pyramid feature maps. Second, a Spatial, Channel-wise Attention Residual Bottleneck (SCARB) is designed to boost the original residual unit with attention mechanism, adaptively highlighting the information of the feature maps both in the spatial and channel-wise context. The effectiveness of our proposed modules is evaluated on the COCO keypoint benchmark, and experimental results show that our approach achieves the state-of-the-art results.
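The channel shuffle operation itself is a pure reshaping trick, shown here as a minimal numpy sketch (the paper applies it inside a CNN; this only illustrates the index permutation):

```python
# Minimal sketch of channel shuffle (the core step of the CSM): split
# C channels into g groups and interleave them so information mixes
# across groups in the next convolution.
import numpy as np

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    assert c % groups == 0
    # (N, g, C//g, H, W) -> swap the two channel axes -> flatten back.
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(8, dtype=float).reshape(1, 8, 1, 1)
print(channel_shuffle(x, 2).ravel())  # [0. 4. 1. 5. 2. 6. 3. 7.]
```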

Journal ArticleDOI
TL;DR: The Random Forest model outperforms the other models, relying on hydrogeological units, the percentage of arable land and the nitrogen balance as the three most influential predictors based on a 1000 m circular contributing area, and is a big step forward in the prediction of nitrate in groundwater at the regional scale.

Journal ArticleDOI
TL;DR: GeoSpark is presented, which extends the core engine of Apache Spark and SparkSQL to support spatial data types, indexes, and geometrical operations at scale and achieves up to two orders of magnitude faster run time performance than existing Hadoop-based systems.
Abstract: The paper presents the details of designing and developing GeoSpark, which extends the core engine of Apache Spark and SparkSQL to support spatial data types, indexes, and geometrical operations at scale. The paper also gives a detailed analysis of the technical challenges and opportunities of extending Apache Spark to support state-of-the-art spatial data partitioning techniques: uniform grid, R-tree, Quad-Tree, and KDB-Tree. The paper also shows how building local spatial indexes, e.g., R-Tree or Quad-Tree, on each Spark data partition can speed up the local computation and hence decrease the overall runtime of the spatial analytics program. Furthermore, the paper introduces a comprehensive experiment analysis that surveys and experimentally evaluates the performance of running de-facto spatial operations like spatial range, spatial K-Nearest Neighbors (KNN), and spatial join queries in the Apache Spark ecosystem. Extensive experiments on real spatial datasets show that GeoSpark achieves up to two orders of magnitude faster run time performance than existing Hadoop-based systems and up to an order of magnitude faster performance than Spark-based systems.
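A hedged sketch of what a GeoSpark(SQL) query looks like from PySpark follows; the registration helper and ST_ function names are written from memory of the project's documentation (the project has since become Apache Sedona), and the required jars must already be on the Spark classpath, so verify the details against your installed version.

```python
# Hedged sketch of a GeoSparkSQL-style spatial range query from PySpark.
from pyspark.sql import SparkSession
from geospark.register import GeoSparkRegistrator  # assumed import path

spark = SparkSession.builder.appName("geospark-demo").getOrCreate()
GeoSparkRegistrator.registerAll(spark)  # registers the ST_ SQL functions

spark.read.csv("points.csv", header=True).createOrReplaceTempView("raw")

# Build point geometries, then run a spatial range (containment) query.
spark.sql("""
    SELECT ST_Point(CAST(lon AS Decimal(24, 20)),
                    CAST(lat AS Decimal(24, 20))) AS geom, name
    FROM raw
""").createOrReplaceTempView("points")

result = spark.sql("""
    SELECT name FROM points
    WHERE ST_Contains(ST_PolygonFromEnvelope(-74.3, 40.5, -73.7, 41.0), geom)
""")
result.show()
```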

Journal ArticleDOI
TL;DR: This work presents spatial variance component analysis (SVCA), a computational framework for the analysis of spatial molecular data that enables quantifying different dimensions of spatial variation and in particular quantifies the effect of cell-cell interactions on gene expression.

Journal ArticleDOI
TL;DR: This survey gives a comprehensive review of state-of-the-art indoor localization methods and localization improvement methods using maps, spatial models, and landmarks.
Abstract: Indoor localization is essential for healthcare, security, augmented reality gaming, and many other location-based services. There is currently a wealth of relevant literature on indoor localization. This article focuses on recent advances in indoor localization methods that use spatial context to improve the location estimation. Spatial context in the form of maps and spatial models have been used to improve the localization by constraining location estimates in the navigable parts of indoor environments. Landmarks such as doors and corners, which are also one form of spatial context, have proved useful in assisting indoor localization by correcting the localization error. This survey gives a comprehensive review of state-of-the-art indoor localization methods and localization improvement methods using maps, spatial models, and landmarks.

Journal ArticleDOI
TL;DR: Four sample selection methods—simple random, proportional stratified random, disproportional stratified Random, and deliberative sampling—as well as three cross-validation tuning approaches—k-fold, leave-one-out, and Monte Carlo methods are investigated.
Abstract: High spatial resolution (1–5 m) remotely sensed datasets are increasingly being used to map land covers over large geographic areas using supervised machine learning algorithms. Although many studies have compared machine learning classification methods, sample selection methods for acquiring training and validation data for machine learning, and cross-validation techniques for tuning classifier parameters are rarely investigated, particularly on large, high spatial resolution datasets. This work, therefore, examines four sample selection methods—simple random, proportional stratified random, disproportional stratified random, and deliberative sampling—as well as three cross-validation tuning approaches—k-fold, leave-one-out, and Monte Carlo methods. In addition, the effect on the accuracy of localizing sample selections to a small geographic subset of the entire area, an approach that is sometimes used to reduce costs associated with training data collection, is investigated. These methods are investigated in the context of support vector machines (SVM) classification and geographic object-based image analysis (GEOBIA), using high spatial resolution National Agricultural Imagery Program (NAIP) orthoimagery and LIDAR-derived rasters, covering a 2,609 km2 regional-scale area in northeastern West Virginia, USA. Stratified-statistical-based sampling methods were found to generate the highest classification accuracy. Using a small number of training samples collected from only a subset of the study area provided a similar level of overall accuracy to a sample of equivalent size collected in a dispersed manner across the entire regional-scale dataset. There were minimal differences in accuracy for the different cross-validation tuning methods. The processing time for Monte Carlo and leave-one-out cross-validation were high, especially with large training sets. For this reason, k-fold cross-validation appears to be a good choice. Classifications trained with samples collected deliberately (i.e., not randomly) were less accurate than classifiers trained from statistical-based samples. This may be due to the high positive spatial autocorrelation in the deliberative training set. Thus, if possible, samples for training should be selected randomly; deliberative samples should be avoided.
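The three tuning schemes compared here map directly onto scikit-learn splitters. A small illustrative sketch with synthetic data and an assumed SVM parameter grid:

```python
# Sketch of the three tuning schemes: k-fold, leave-one-out, and
# Monte Carlo (repeated random subsampling) cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, KFold, LeaveOneOut,
                                     ShuffleSplit)
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}

for cv in (KFold(n_splits=5, shuffle=True, random_state=0),
           LeaveOneOut(),                        # expensive on large sets
           ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)):
    search = GridSearchCV(SVC(), grid, cv=cv).fit(X, y)
    print(type(cv).__name__, search.best_params_, round(search.best_score_, 3))
```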

Journal ArticleDOI
TL;DR: In this paper, a method called invariant attribute profiles (IAPs) is proposed to extract the spatial invariant features by exploiting isotropic filter banks or convolutional kernels on HSI and spatial aggregation techniques in the Cartesian coordinate system.
Abstract: Up to the present, an enormous number of advanced techniques have been developed to enhance and extract the spatially semantic information in hyperspectral image processing and analysis. However, locally semantic change, such as scene composition, relative position between objects, spectral variability caused by illumination, atmospheric effects, and material mixture, has been less frequently investigated in modeling spatial information. As a consequence, identifying the same materials from spatially different scenes or positions can be difficult. In this paper, we propose a solution to address this issue by locally extracting invariant features from hyperspectral imagery (HSI) in both spatial and frequency domains, using a method called invariant attribute profiles (IAPs). IAPs extract the spatial invariant features by exploiting isotropic filter banks or convolutional kernels on HSI and spatial aggregation techniques (e.g., superpixel segmentation) in the Cartesian coordinate system. Furthermore, they model invariant behaviors (e.g., shift, rotation) by means of a continuous histogram of oriented gradients constructed in a Fourier polar coordinate. This yields a combinatorial representation of spatial-frequency invariant features with application to HSI classification. Extensive experiments conducted on three promising hyperspectral datasets (including Houston2013 and Houston2018) demonstrate the superiority and effectiveness of the proposed IAP method in comparison with several state-of-the-art profile-related techniques. The codes will be available from the website: this https URL.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This paper proposes a novel semantic stereo network named SSPCV-Net, which includes newly designed pyramid cost volumes for describing semantic and spatial information on multiple levels and designs a 3D multi-cost aggregation module to integrate the extracted multilevel features and perform regression for accurate disparity maps.
Abstract: The accuracy of stereo matching has been greatly improved by using deep learning with convolutional neural networks. To further capture the details of disparity maps, in this paper, we propose a novel semantic stereo network named SSPCV-Net, which includes newly designed pyramid cost volumes for describing semantic and spatial information on multiple levels. The semantic features are inferred by a semantic segmentation subnetwork while the spatial features are derived by hierarchical spatial pooling. In the end, we design a 3D multi-cost aggregation module to integrate the extracted multilevel features and perform regression for accurate disparity maps. We conduct comprehensive experiments and comparisons with some recent stereo matching networks on Scene Flow, KITTI 2015 and 2012, and Cityscapes benchmark datasets, and the results show that the proposed SSPCV-Net significantly promotes the state-of-the-art stereo-matching performance.

Journal ArticleDOI
TL;DR: The Improved Flexible Spatiotemporal DAta Fusion (IFSDAF) method was developed in this article to provide reliable NDVI datasets with high spatial and temporal resolution to support research on land surface processes.

Journal ArticleDOI
TL;DR: A global estimate for the Land Use Efficiency (LUE) indicator (SDG 11.3.1) is presented for circa 10,000 urban centers, calculated as the ratio of the land consumption rate to the population growth rate between 1990 and 2015.
Abstract: The Global Human Settlement Layer (GHSL) produces new global spatial information, evidence-based analytics describing the human presence on the planet that is based mainly on two quantitative factors: (i) the spatial distribution (density) of built-up structures and (ii) the spatial distribution (density) of resident people. Both of the factors are observed in the long-term temporal domain and per unit area, in order to support the analysis of the trends and indicators for monitoring the implementation of the 2030 Development Agenda and the related thematic agreements. The GHSL uses various input data, including global, multi-temporal archives of high-resolution satellite imagery, census data, and volunteered geographic information. In this paper, we present a global estimate for the Land Use Efficiency (LUE) indicator—SDG 11.3.1, for circa 10,000 urban centers, calculating the ratio of land consumption rate to population growth rate between 1990 and 2015. In addition, we analyze the characteristics of the GHSL information to demonstrate how the original frameworks of data (gridded GHSL data) and tools (GHSL tools suite), developed from Earth Observation and integrated with census information, could support Sustainable Development Goals monitoring. In particular, we demonstrate the potential of gridded, open and free, local yet globally consistent, multi-temporal data in filling the data gap for Sustainable Development Goal 11. The results of our research demonstrate that there is potential to raise SDG 11.3.1 from a Tier II classification (manifesting unavailability of data) to a Tier I, as GHSL provides a global baseline for the essential variables called by the SDG 11.3.1 metadata.
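For reference, SDG 11.3.1 is conventionally computed as the ratio of two annualised log-rates. The small Python sketch below encodes our reading of the UN metadata definition, with toy numbers:

```python
# SDG 11.3.1 as a ratio of annualised log-rates (our reading of the UN
# metadata definition; verify against the official indicator metadata).
import math

def land_use_efficiency(urb_t1, urb_t2, pop_t1, pop_t2, years):
    lcr = math.log(urb_t2 / urb_t1) / years   # land consumption rate
    pgr = math.log(pop_t2 / pop_t1) / years   # population growth rate
    return lcr / pgr

# Toy example: built-up area grows 20%, population 10%, over 25 years.
print(round(land_use_efficiency(100.0, 120.0, 1e6, 1.1e6, 25), 2))  # ~1.91
```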

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the first deep learning architecture for the analysis of Satellite Image Time Series (SITS) data, which combines CNNs and RNNs to exploit complementary information: spatial autocorrelation and temporal dependencies.
Abstract: Nowadays, modern Earth Observation systems continuously generate huge amounts of data. A notable example is represented by the Sentinel-2 mission, which provides images at high spatial resolution (up to 10 m) with a high revisit frequency (every 5 days), which can be organized in Satellite Image Time Series (SITS). While the use of SITS has been proved beneficial in the context of Land Use/Land Cover (LULC) map generation, machine learning approaches commonly leveraged in the remote sensing field unfortunately fail to take advantage of the spatio-temporal dependencies present in such data. Recently, new-generation deep learning methods have allowed research in this field to advance significantly. These approaches have generally focused on a single type of neural network, i.e., Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), which model different but complementary information: spatial autocorrelation (CNNs) and temporal dependencies (RNNs). In this work, we propose the first deep learning architecture for the analysis of SITS data, namely DuPLO (DUal view Point deep Learning architecture for time series classificatiOn), that combines Convolutional and Recurrent neural networks to exploit their complementarity. Our hypothesis is that, since CNNs and RNNs capture different aspects of the data, a combination of both models would produce a more diverse and complete representation of the information for the underlying land cover classification task. Experiments carried out on two study sites characterized by different land cover characteristics (i.e., the Gard site in France and the Reunion Island in the Indian Ocean) demonstrate the significance of our proposal.

Journal ArticleDOI
TL;DR: A multi attending path neural network (MAP-Net) is proposed for accurately extracting multiscale building footprints and precise boundaries, achieving state-of-the-art performance.
Abstract: Accurately and efficiently extracting building footprints from a wide range of remotely sensed imagery remains a challenge due to their complex structure, variety of scales and diverse appearances. Existing convolutional neural network (CNN)-based building extraction methods have been criticized for missing tiny buildings, because the spatial information of CNN feature maps is lost during the repeated pooling operations of the CNN, and large buildings still have inaccurate segmentation edges. Moreover, features extracted by a CNN are always partial, restricted by the size of the receptive field, so large-scale buildings with low texture are often discontinuous and holey when extracted. This paper proposes a novel multi attending path neural network (MAP-Net) for accurately extracting multiscale building footprints and precise boundaries. MAP-Net learns spatial localization-preserved multiscale features through a multi-parallel path in which each stage is gradually generated to extract high-level semantic features with fixed resolution. Then, an attention module adaptively squeezes channel-wise features from each path for optimization, and a pyramid spatial pooling module captures global dependency for refining discontinuous building footprints. Experimental results show that MAP-Net outperforms state-of-the-art (SOTA) algorithms in boundary localization accuracy as well as continuity of large buildings. Specifically, our method achieved 0.68%, 1.74%, and 1.46% precision and 1.50%, 1.53%, and 0.82% IoU score improvements, without increasing computational complexity, compared with the latest HRNetv2 on the Urban 3D, DeepGlobe and WHU datasets, respectively. The TensorFlow implementation is available at this https URL.

Journal ArticleDOI
TL;DR: A new method is developed that integrates object-based post-classification refinement (OBPR) and CNNs for LULC mapping using Sentinel optical and SAR data, considerably improving the classification accuracy of urban ground targets.
Abstract: Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping using optical and synthetic aperture radar (SAR) images because it can utilize spatial information, reduce the effect of salt-and-pepper noise, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art algorithms. However, CNNs cannot be easily integrated with OBIA because the processing unit of CNNs is a rectangular image, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) and CNNs for LULC mapping using Sentinel optical and SAR data. After producing the classification map by CNN, each image object was labeled with the most frequent land cover category of its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset with 10 m spatial resolution, the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset with 100 m spatial resolution, and a hyperspectral benchmark, the University of Pavia dataset, with 1.3 m spatial resolution. It outperformed OBIA support vector machine (SVM) and random forest (RF). SVM and RF could benefit more from the combined use of optical and SAR data compared with CNN, whereas spatial information learned by CNN was very effective for classification. With the ability to extract spatial features and maintain object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets. It achieved overall accuracy (OA) of 95.33% for the Sentinel Guangzhou dataset, OA of 77.64% for the Zhuhai-Macau LCZ dataset, and OA of 95.70% for the University of Pavia dataset with only 10 labeled samples per class.
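The OBPR step described above, relabelling each image object with the most frequent class among its pixels, can be sketched in a few lines of Python (toy labels and a toy segmentation grid are assumed):

```python
# Sketch of object-based post-classification refinement: relabel every
# segment (image object) with the majority CNN class of its pixels.
import numpy as np

def obpr(cnn_labels, segments):
    refined = np.empty_like(cnn_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Majority class among this object's pixels.
        refined[mask] = np.bincount(cnn_labels[mask]).argmax()
    return refined

cnn_labels = np.random.randint(0, 4, size=(50, 50))          # toy CNN map
segments = (np.arange(50)[:, None] // 10) * 5 + np.arange(50)[None, :] // 10
print(obpr(cnn_labels, segments).shape)
```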

Journal ArticleDOI
TL;DR: TerraBrasilis, a spatial data analytics infrastructure designed in Brazil, provides interfaces that are found not only within traditional geographic information systems but also in data analytics environments with complex algorithms.
Abstract: The physical phenomena derived from an analysis of remotely sensed imagery provide a clearer understanding of the spectral variations of a large number of land use and cover (LUC) classes. The creation of LUC maps has corroborated this view by enabling the scientific community to estimate the parameter heterogeneity of the Earth’s surface. Along with descriptions of features and statistics for aggregating spatio-temporal information, government programs have disseminated thematic maps to further the implementation of effective public policies and foster sustainable development. In Brazil, PRODES and DETER have shown that they are committed to monitoring the mapping areas of large-scale deforestation systematically and by means of data quality assurance. However, these programs are so complex that they require the design, implementation and deployment of a spatial data infrastructure based on extensive data analytics features, so that users who lack an understanding of standard spatial interfaces can still carry out research on them. With this in mind, the Brazilian National Institute for Space Research (INPE) has designed TerraBrasilis, a spatial data analytics infrastructure that provides interfaces found not only within traditional geographic information systems but also in data analytics environments with complex algorithms. To achieve its best performance, we leveraged a micro-service architecture with virtualized computer resources to enable high availability, small service size, simple incremental releases, reliability under change, and fault tolerance in unstable computer network scenarios. In addition, we tuned and optimized our databases both to adjust to the input format of complex algorithms and to speed up the loading of the web application so that it was faster than other systems.

Journal ArticleDOI
TL;DR: In this article, a new method based on GeoDetector and a spatial logistic regression (SLR) model is proposed to select condition factors based on the spatial distribution of landslides.