
Showing papers in "ISPRS international journal of geo-information in 2019"


Journal ArticleDOI
TL;DR: In this paper, the authors introduce mgwr, a Python-based implementation of MGWR that explicitly focuses on the multiscale analysis of spatial heterogeneity, and provide novel functionality for inference and exploratory analysis of local spatial processes, new diagnostics unique to multi-scale local models, and drastic improvements in estimation routines.
Abstract: Geographically weighted regression (GWR) is a spatial statistical technique that recognizes that traditional ‘global’ regression models may be limited when spatial processes vary with spatial context. GWR captures process spatial heterogeneity by allowing effects to vary over space. To do this, GWR calibrates an ensemble of local linear models at any number of locations using ‘borrowed’ nearby data. This provides a surface of location-specific parameter estimates for each relationship in the model that is allowed to vary spatially, as well as a single bandwidth parameter that provides intuition about the geographic scale of the processes. A recent extension to this framework allows each relationship to vary according to a distinct spatial scale parameter, and is therefore known as multiscale (M)GWR. This paper introduces mgwr, a Python-based implementation of MGWR that explicitly focuses on the multiscale analysis of spatial heterogeneity. It provides novel functionality for inference and exploratory analysis of local spatial processes, new diagnostics unique to multi-scale local models, and drastic improvements to efficiency in estimation routines. We provide two case studies using mgwr, in addition to reviewing core concepts of local models. We present this in a literate programming style, providing an overview of the primary software functionality and demonstrations of suggested usage alongside the discussion of primary concepts and demonstration of the improvements made in mgwr.
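The local calibration step the abstract describes can be illustrated with a minimal NumPy sketch: each location fits a weighted least-squares model using nearby data, weighted by a bisquare kernel with a single bandwidth. This is an illustration of the GWR idea only, not the mgwr API; the toy data and bandwidth are invented.

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bisquare kernel: points near the focal location get weight close to 1,
    points beyond the bandwidth get weight 0."""
    return np.where(d < bandwidth, (1 - (d / bandwidth) ** 2) ** 2, 0.0)

def gwr_local_fit(coords, X, y, bandwidth):
    """Calibrate one weighted least-squares model per location (the GWR idea)."""
    n = len(y)
    params = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        W = np.diag(bisquare_weights(d, bandwidth))
        XtW = X.T @ W
        params[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return params

# Toy data: y depends on x with a coefficient that drifts across space
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
x = rng.normal(size=50)
beta = 1.0 + 0.2 * coords[:, 0]            # spatially varying slope
y = beta * x + rng.normal(scale=0.1, size=50)
X = np.column_stack([np.ones(50), x])
local_params = gwr_local_fit(coords, X, y, bandwidth=5.0)
print(local_params.shape)                  # (50, 2): intercept and slope per location
```

The surface of local slope estimates should recover the eastward drift built into the toy data; in (M)GWR each covariate would additionally get its own bandwidth.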

308 citations


Journal ArticleDOI
TL;DR: A combination of terrestrial laser scanning and unmanned aerial vehicle (UAV) photogrammetry is used to establish a three-dimensional model and the associated digital documentation of the Magoksa Temple, Republic of Korea to acquire the perpendicular geometry of buildings and sites.
Abstract: Three-dimensional digital technology is important in the maintenance and monitoring of cultural heritage sites. This study focuses on using a combination of terrestrial laser scanning and unmanned aerial vehicle (UAV) photogrammetry to establish a three-dimensional model and the associated digital documentation of the Magoksa Temple, Republic of Korea. Herein, terrestrial laser scanning and UAV photogrammetry were used to acquire the perpendicular geometry of the buildings and sites, where UAV photogrammetry yielded a higher planar data acquisition rate in upper zones, such as the roof of a building, than terrestrial laser scanning. On comparing the two technologies’ accuracy based on their ground control points, laser scanning was observed to provide higher positional accuracy than photogrammetry. The overall discrepancy between the two technologies was found to be sufficient for the generation of convergent data. Thus, the terrestrial laser scanning and UAV photogrammetry data were aligned and merged after conversion into compatible formats. A three-dimensional (3D) model, with planar and perpendicular geometries, based on the hybrid point cloud was developed. This study demonstrates the potential of integrating terrestrial laser scanning and UAV photogrammetry for the 3D digital documentation and spatial analysis of cultural heritage sites.

109 citations


Journal ArticleDOI
TL;DR: A state-of-the-art review of image processing methods used for shoreline detection in remote sensing is presented, starting with a review of the key concepts that can be used for shoreline detection.
Abstract: With coastal erosion and the increased interest in beach monitoring, there is a greater need for evaluation of the shoreline detection methods. Some studies have been conducted to produce state of the art reviews on shoreline definition and detection. It should be noted that with the development of remote sensing, shoreline detection is mainly achieved by image processing. Thus, it is important to evaluate the different image processing approaches used for shoreline detection. This paper presents a state of the art review on image processing methods used for shoreline detection in remote sensing. It starts with a review of different key concepts that can be used for shoreline detection. Then, the applied fundamental image processing methods are shown before a comparative analysis of these methods. A significant outcome of this study will provide practical insights into shoreline detection.
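One of the key concepts such reviews typically cover is water-index thresholding: a water index separates water from land pixels, and the shoreline is the boundary of the resulting mask. A minimal sketch using McFeeters' NDWI follows; the toy band values are invented, and real imagery needs calibrated thresholds.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): water pixels tend to be > 0."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-12)   # epsilon avoids division by zero

def shoreline_mask(green, nir, threshold=0.0):
    """Binary water mask; the shoreline is the water/land boundary of this mask."""
    return ndwi(green, nir) > threshold

# Toy 4x4 scene: left half water (high green, low NIR), right half land
green = np.array([[80, 80, 20, 20]] * 4)
nir = np.array([[10, 10, 60, 60]] * 4)
mask = shoreline_mask(green, nir)
print(mask[0])  # [ True  True False False]
```

Edge detection or contour tracing on `mask` would then yield the shoreline as a vector feature.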

105 citations


Journal ArticleDOI
TL;DR: A global estimate for the Land Use Efficiency (LUE) indicator—SDG 11.3.1, for circa 10,000 urban centers, calculating the ratio of land consumption rate to population growth rate between 1990 and 2015 is presented.
Abstract: The Global Human Settlement Layer (GHSL) produces new global spatial information, evidence-based analytics describing the human presence on the planet that is based mainly on two quantitative factors: (i) the spatial distribution (density) of built-up structures and (ii) the spatial distribution (density) of resident people. Both of the factors are observed in the long-term temporal domain and per unit area, in order to support the analysis of the trends and indicators for monitoring the implementation of the 2030 Development Agenda and the related thematic agreements. The GHSL uses various input data, including global, multi-temporal archives of high-resolution satellite imagery, census data, and volunteered geographic information. In this paper, we present a global estimate for the Land Use Efficiency (LUE) indicator—SDG 11.3.1, for circa 10,000 urban centers, calculating the ratio of land consumption rate to population growth rate between 1990 and 2015. In addition, we analyze the characteristics of the GHSL information to demonstrate how the original frameworks of data (gridded GHSL data) and tools (GHSL tools suite), developed from Earth Observation and integrated with census information, could support Sustainable Development Goals monitoring. In particular, we demonstrate the potential of gridded, open and free, local yet globally consistent, multi-temporal data in filling the data gap for Sustainable Development Goal 11. The results of our research demonstrate that there is potential to raise SDG 11.3.1 from a Tier II classification (manifesting unavailability of data) to a Tier I, as GHSL provides a global baseline for the essential variables called by the SDG 11.3.1 metadata.
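The SDG 11.3.1 ratio described above can be sketched as a short computation. The log-rate form follows the indicator's standard definition (land consumption rate over population growth rate); the city figures below are hypothetical.

```python
import math

def land_use_efficiency(urb_t0, urb_t1, pop_t0, pop_t1, years):
    """SDG 11.3.1: ratio of land consumption rate to population growth rate."""
    lcr = math.log(urb_t1 / urb_t0) / years   # land consumption rate
    pgr = math.log(pop_t1 / pop_t0) / years   # population growth rate
    return lcr / pgr

# Hypothetical city: built-up area grows 40%, population grows 20%, over 25 years
lue = land_use_efficiency(urb_t0=100.0, urb_t1=140.0,
                          pop_t0=500_000, pop_t1=600_000, years=25)
print(round(lue, 2))  # 1.85: land is consumed ~1.85x faster than population grows
```

A value above 1 indicates land being consumed faster than the population grows, which is the sprawl signal the GHSL baseline makes computable globally.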

98 citations


Journal ArticleDOI
TL;DR: This study provides a comprehensive review of how UAV-based damage mapping has evolved from simple descriptive overviews of a disaster scene, to more sophisticated texture- and segmentation-based approaches, and finally to studies using advanced deep learning approaches to provide comprehensive damage descriptions.
Abstract: Structural disaster damage detection and characterization is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of unmanned aerial vehicles (UAVs) in recent years has opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. This study provides a comprehensive review of how UAV-based damage mapping has evolved from providing simple descriptive overviews of a disaster scene, to more sophisticated texture- and segmentation-based approaches, and finally to studies using advanced deep learning approaches, as well as multi-temporal and multi-perspective imagery, to provide comprehensive damage descriptions. The paper further reviews studies on the utility of the developed mapping strategies and image processing pipelines for first responders, focusing especially on outcomes of two recent European research projects, RECONASS (Reconstruction and Recovery Planning: Rapid and Continuously Updated Construction Damage, and Related Needs Assessment) and INACHUS (Technological and Methodological Solutions for Integrated Wide Area Situation Awareness and Survivor Localization to Support Search and Rescue Teams). Finally, recent and emerging developments are reviewed, such as recent improvements in machine learning, increasing mapping autonomy, damage mapping in interior, GPS-denied environments, the utility of UAVs for infrastructure mapping and maintenance, as well as the emergence of UAVs with robotic abilities.

96 citations


Journal ArticleDOI
TL;DR: Testing results showed that the FACNN greatly exceeded several recent convolutional neural networks in land cover classification and the object-based change detection could achieve much better results than a pixel-based method, and provide accurate change maps to facilitate manual urban land cover updating.
Abstract: This study investigates land use/cover classification and change detection of urban areas from very high resolution (VHR) remote sensing images using deep learning-based methods. First, we introduce a fully atrous convolutional neural network (FACNN) to learn the land cover classification. In the FACNN, an encoder consisting of fully atrous convolution layers is proposed for extracting scale-robust features from VHR images. Then, a pixel-based change map is produced based on the classification map of the current images and an outdated land cover geographical information system (GIS) map. Both polygon-based and object-based change detection accuracy is investigated, where a polygon is the unit of the GIS map and an object consists of adjacent changed pixels on the pixel-based change map. The test data cover the rapidly developing city of Wuhan (8000 km2), China, and consist of 0.5 m ground resolution aerial images acquired in 2014, 1 m ground resolution Beijing-2 satellite images acquired in 2017, and their land cover GIS maps. Testing results showed that our FACNN greatly outperformed several recent convolutional neural networks in land cover classification, and that the object-based change detection achieved much better results than a pixel-based method, providing accurate change maps to facilitate manual urban land cover updating.
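The atrous (dilated) convolution at the heart of the FACNN spaces kernel taps apart to enlarge the receptive field without adding parameters. A one-dimensional sketch makes the mechanism visible; the kernel and signal here are toy values.

```python
import numpy as np

def atrous_conv1d(signal, kernel, dilation=1):
    """1-D atrous (dilated) convolution: kernel taps are spaced `dilation`
    samples apart, enlarging the receptive field at no extra parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * dilation] for j in range(k))
    return out

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
k = np.array([1.0, 1.0, 1.0])
print(atrous_conv1d(x, k, dilation=1))     # taps 1 apart: [ 3.  6.  9. 12. 15. 18.]
print(atrous_conv1d(x, k, dilation=2))     # taps 2 apart: [ 6.  9. 12. 15.]
```

With dilation 2 the same 3-tap kernel covers a span of 5 samples, which is why stacked atrous layers capture multi-scale context in VHR imagery.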

94 citations


Journal ArticleDOI
TL;DR: TerraBrasilis, a spatial data analytics infrastructure designed in Brazil, provides interfaces found not only in traditional geographic information systems but also in data analytics environments with complex algorithms.
Abstract: The physical phenomena derived from an analysis of remotely sensed imagery provide a clearer understanding of the spectral variations of a large number of land use and cover (LUC) classes. The creation of LUC maps has corroborated this view by enabling the scientific community to estimate the parameter heterogeneity of the Earth’s surface. Along with descriptions of features and statistics for aggregating spatio-temporal information, government programs have disseminated thematic maps to further the implementation of effective public policies and foster sustainable development. In Brazil, the PRODES and DETER programs have shown a commitment to systematically monitoring large-scale deforestation, with data quality assurance. However, these programs are so complex that they require the design, implementation, and deployment of a spatial data infrastructure with extensive data analytics features, so that users who lack an understanding of standard spatial interfaces can still carry out research with them. With this in mind, the Brazilian National Institute for Space Research (INPE) has designed TerraBrasilis, a spatial data analytics infrastructure that provides interfaces found not only in traditional geographic information systems but also in data analytics environments with complex algorithms. To achieve the best performance, we leveraged a micro-service architecture with virtualized computing resources to provide high availability, a small footprint, simple incremental releases, reliability under change, and fault tolerance in unstable network scenarios. In addition, we tuned and optimized our databases both to match the input format of the complex algorithms and to speed up the loading of the web application, making it faster than other systems.

80 citations


Journal ArticleDOI
TL;DR: The optimized deep neural network model was 21–33% (corn) and 17–22% (soybean) more accurate than the other five AI models, indicating that corn and soybean yields for a given year can be forecasted in advance, at the beginning of September, approximately a month or more ahead of harvesting time.
Abstract: This paper compares different artificial intelligence (AI) models in order to develop the best crop yield prediction model for the Midwestern United States (US). Through experiments to examine the effects of phenology using three different periods, we selected the July–August (JA) database as the best months to predict corn and soybean yields. Six different AI models for crop yield prediction are tested in this research. Then, a comprehensive and objective comparison is conducted between the AI models. Particularly for the deep neural network (DNN) model, we performed an optimization process to ensure the best configurations for the layer structure, cost function, optimizer, activation function, and drop-out ratio. In terms of mean absolute error (MAE), our DNN model with the JA database was approximately 21–33% and 17–22% more accurate for corn and soybean yields, respectively, than the other five AI models. This indicates that corn and soybean yields for a given year can be forecasted in advance, at the beginning of September, approximately a month or more ahead of harvesting time. A combination of the optimized DNN model and spatial statistical methods should be investigated in future work, to mitigate partly clustered errors in some regions.
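The MAE comparison the paper relies on reduces to a simple computation; the yield values below are hypothetical.

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: average magnitude of prediction errors, in the units of the target."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def relative_improvement(mae_baseline, mae_model):
    """How much lower the model's MAE is, as a fraction of the baseline's."""
    return (mae_baseline - mae_model) / mae_baseline

# Hypothetical yields (t/ha) and predictions
obs = [10.2, 9.8, 11.0, 10.5]
pred = [10.0, 10.1, 10.6, 10.4]
mae = mean_absolute_error(obs, pred)
print(round(mae, 3))  # 0.25
```

The paper's "21–33% more accurate" claim is this `relative_improvement` quantity computed between the DNN's MAE and each competing model's MAE.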

75 citations


Journal ArticleDOI
TL;DR: The most reliable algorithm for the prediction of air pollution was the autoregressive nonlinear neural network with an external input using the proposed prediction model, whose one-day prediction error reached 1.79 µg/m3.
Abstract: Environmental pollution has mainly been attributed to urbanization and industrial development across the globe. Air pollution has been marked as one of the major problems of metropolitan areas around the world, especially in Tehran, the capital of Iran, whose administrators and residents have long been struggling with air pollution damage such as health issues among its citizens. A considerable proportion of Tehran's air pollution is attributed to PM10 and PM2.5 pollutants. Therefore, the present study was conducted to develop models for predicting air pollution based on PM10 and PM2.5 concentrations in Tehran. The input parameters were the day of the week, month of the year, topography, meteorology, and the pollutant rate of the two nearest neighbors, and the machine learning methods applied were support vector regression, geographically weighted regression, an artificial neural network, and an autoregressive nonlinear neural network with an external input. A prediction model was then proposed to improve these methods, reducing their error percentages by 57%, 47%, 47%, and 94%, respectively. The most reliable algorithm for the prediction of air pollution was the autoregressive nonlinear neural network with an external input using the proposed prediction model, whose one-day prediction error reached 1.79 µg/m3. Finally, using a genetic algorithm, the day of the week, month of the year, topography, wind direction, maximum temperature, and the pollutant rate of the two nearest neighbors were identified as the most effective parameters in the prediction of air pollution.

74 citations


Journal ArticleDOI
TL;DR: This study developed landslide susceptibility maps using GIS-based statistical models at the regional level in central Nepal to evaluate the differences in landslide susceptibility using analytical hierarchy process (AHP), frequency ratio (FR) and hybrid spatial multi-criteria evaluation (SMCE) models.
Abstract: As a result of the Gorkha earthquake in 2015, about 9000 people lost their lives and many more were injured. Most of these losses were caused by earthquake-induced landslides. Sustainable planning and decision-making are required to reduce the losses caused by earthquakes and related hazards. The use of remote sensing and geographic information systems (GIS) for landslide susceptibility mapping can help planning authorities to prepare for and mitigate the consequences of future hazards. In this study, we developed landslide susceptibility maps using GIS-based statistical models at the regional level in central Nepal. Our study area included the districts affected by landslides after the Gorkha earthquake and its aftershocks. We used the 23,439 landslide locations obtained from high-resolution satellite imagery to evaluate the differences in landslide susceptibility using analytical hierarchy process (AHP), frequency ratio (FR) and hybrid spatial multi-criteria evaluation (SMCE) models. The nine landslide conditioning factors of lithology, land cover, precipitation, slope, aspect, elevation, distance to roads, distance to drainage and distance to faults were used as the input data for the applied landslide susceptibility mapping (LSM) models. The spatial correlation of landslides and these factors were identified using GIS-based statistical models. We divided the inventory into data used for training the statistical models (70%) and data used for validation (30%). Receiver operating characteristics (ROC) and the relative landslide density index (R-index) were used to validate the results. The area under the curve (AUC) values obtained from the ROC approach for AHP, FR and hybrid SMCE were 0.902, 0.905 and 0.91, respectively. The index of relative landslide density, R-index, values in sample datasets of AHP, FR and hybrid SMCE maps were 53%, 58% and 59% for the very high hazard classes. 
The final susceptibility results will be beneficial for regional planning and sustainable hazard mitigation.
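The AUC validation reported above can be sketched with the rank-based (Mann-Whitney) formulation of the area under the ROC curve; the labels and susceptibility scores below are toy values.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank formulation: the probability that a randomly chosen
    positive (landslide) cell scores higher than a negative (stable) cell."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count concordant pairs; ties count half
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))

# Toy susceptibility scores: label 1 = observed landslide cell
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # 11/12 ≈ 0.917
```

An AUC near 0.9, as reported for the AHP, FR, and hybrid SMCE maps, means a landslide cell outranks a stable cell roughly nine times out of ten.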

72 citations


Journal ArticleDOI
TL;DR: A voxel-based feature engineering approach is proposed that better characterizes point clusters, provides strong support for supervised or unsupervised classification, and offers different feature generalization levels to permit interoperable frameworks.
Abstract: Automation in point cloud data processing is central in knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, with automated workflows presenting the main challenges. In this paper, we propose a voxel-based feature engineering that better characterize point clusters and provide strong support to supervised or unsupervised classification. We provide different feature generalization levels to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that only leverages the raw X, Y, Z attributes of any point cloud. Afterwards, we derive relationship and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). Finally, we provide a knowledge-based decision tree to permit infrastructure-related classification. We study SF1/SF2 synergy on a new semantic segmentation framework for the constitution of a higher semantic representation of point clouds in relevant clusters. Finally, we benchmark the approach against novel and best-performing deep-learning methods while using the full S3DIS dataset. We highlight good performances, easy-integration, and high F1-score (> 85%) for planar-dominant classes that are comparable to state-of-the-art deep learning.
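The voxelization step underlying such feature engineering can be sketched as a simple grid assignment over the raw X, Y, Z attributes. This is a simplified grouping for illustration, not the paper's SF1/SF2 feature sets; the points are toy values.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Assign each XYZ point to a voxel index; return {voxel_index: point indices}.
    Per-voxel features (shape, connectivity) would then be computed per group."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for i, key in enumerate(map(tuple, keys)):
        voxels.setdefault(key, []).append(i)
    return voxels

# Toy cloud: two tight clusters about 5 m apart, with 1 m voxels
pts = np.array([[0.1, 0.2, 0.0], [0.4, 0.1, 0.3],   # cluster A
                [5.2, 0.1, 0.0], [5.6, 0.4, 0.2]])  # cluster B
voxels = voxelize(pts, voxel_size=1.0)
print(len(voxels))          # 2 occupied voxels
print(voxels[(0, 0, 0)])    # [0, 1]
```

Shape-based features (SF1 in the paper) are then statistics of each voxel's points, and topology between occupied voxels gives the structural connectivity features (SF2).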

Journal ArticleDOI
TL;DR: This study demonstrates the efficiency of the proposed neuro model of PSO-ANN in estimating the factor of safety compared to other conventional techniques.
Abstract: In this paper, a neuro-particle swarm optimization of the artificial neural network (ANN) is investigated for slope stability calculation. The results are compared with two other artificial intelligence techniques: a conventional ANN and an adaptive neuro-fuzzy inference system (ANFIS). The database consists of 504 training samples (80% of the whole dataset) and 126 testing samples (the remaining 20%). Moreover, the variables of the ANN method (for example, the number of nodes in each hidden layer) and the PSO parameters, such as swarm size and inertia weight, were tuned through a total of 28 trial-and-error runs for the PSO-ANN. The key properties fed as input, obtained from OptumG2 finite element modelling (FEM), were the undrained cohesion of the baseline soil (Cu), the angle of the original slope (β), and the setback distance ratio (b/B), with the factor of safety as the target. The estimates of the ANN, ANFIS, and PSO-ANN models were examined using statistical indexes, namely the root mean square error (RMSE) and the coefficient of determination (R2). After a sensitivity analysis covering 72 trial-and-error runs over the number of neurons, an optimized architecture of 4 × 6 × 1 was selected for the ANN model. All the employed methods performed well, but based on the ranking method, the PSO-ANN approach was slightly more efficient than the ANN and ANFIS algorithms. For the best PSO-ANN structure, R2 and RMSE values of 0.9996 and 0.0123 (training) and 0.9994 and 0.0157 (testing) were calculated. Nevertheless, an ANN model with six neurons in each hidden layer was formulated for further practical use. This study demonstrates the efficiency of the proposed PSO-ANN neuro model in estimating the factor of safety compared with other conventional techniques.
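The RMSE and R2 indexes used to assess the models are standard and easy to reproduce; the factor-of-safety values below are hypothetical.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SSE/SST."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    return 1 - sse / sst

# Hypothetical factors of safety vs. model predictions
fos = [1.2, 1.5, 1.1, 1.8]
pred = [1.25, 1.45, 1.15, 1.75]
print(round(rmse(fos, pred), 3))       # 0.05
print(round(r_squared(fos, pred), 3))  # 0.967
```

An R2 near 1 with RMSE near 0, as reported for the PSO-ANN, means the predictions track the computed factors of safety almost exactly.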

Journal ArticleDOI
TL;DR: Some representative ST clustering methods are reviewed, most of which are extended from spatial clustering; they are broadly divided into hypothesis testing-based methods and partitional clustering methods that have been applied differently in previous research.
Abstract: Large quantities of spatiotemporal (ST) data can be easily collected from various domains such as transportation, social media analysis, crime analysis, and human mobility analysis. The development of ST data analysis methods can uncover potentially interesting and useful information. Due to the complexity of ST data and the diversity of objectives, a number of ST analysis methods exist, including but not limited to clustering, prediction, and change detection. As one of the most important methods, clustering has been widely used in many applications. It is a process of grouping data with similar spatial attributes, temporal attributes, or both, from which many significant events and regular phenomena can be discovered. In this paper, some representative ST clustering methods are reviewed, most of which are extended from spatial clustering. These methods are broadly divided into hypothesis testing-based methods and partitional clustering methods that have been applied differently in previous research. Research trends and the challenges of ST clustering are also discussed.
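The core idea of grouping data that are similar in both spatial and temporal attributes can be sketched minimally: link events that are close in space AND close in time, then take connected components. This is a toy density-style grouping for illustration, not any specific method from the review; the events and thresholds are invented.

```python
from math import hypot

def st_clusters(events, eps_space, eps_time):
    """Greedy spatiotemporal clustering: two events are linked when they are
    close in BOTH space and time; clusters are the connected components."""
    n = len(events)
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], cluster
        while stack:                         # flood-fill the component
            j = stack.pop()
            xj, yj, tj = events[j]
            for k in range(n):
                xk, yk, tk = events[k]
                if labels[k] == -1 and hypot(xj - xk, yj - yk) <= eps_space \
                        and abs(tj - tk) <= eps_time:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# (x, y, t): two events co-located in space but far apart in time separate
events = [(0, 0, 0), (1, 0, 1), (0, 0, 100), (50, 50, 2)]
print(st_clusters(events, eps_space=2.0, eps_time=5))  # [0, 0, 1, 2]
```

Note how the third event shares a location with the first but lands in its own cluster because of the temporal threshold; that is the distinction between purely spatial and spatiotemporal clustering.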

Journal ArticleDOI
TL;DR: Cadastral information stored according to the land administration domain model (LADM) is integrated with BIM at the building level for accurate representation of legal boundaries, and with GIS at the city level for visualization of the 3D cadastre in urban environments.
Abstract: Three-dimensionally (3D) delimited property units are, in most countries, registered using two-dimensional (2D) documentation and textual descriptions. This approach has limitations when representing the actual extent of complicated 3D property units, particularly in city centers. 3D digital models such as building information models (BIM) and 3D geographic information systems (GIS) could be utilized for accurate identification of property units, better representation of cadastral boundaries, and detailed visualization of complex buildings. To facilitate this, several requirements need to be identified, considering organizational, legal, and technical aspects. In this study, we formulate these requirements and then develop a framework for the integration of 3D cadastre and 3D digital models. The aim is to integrate cadastral information, stored according to the land administration domain model (LADM), with BIM at the building level for accurate representation of legal boundaries, and with GIS at the city level for visualization of the 3D cadastre in urban environments. The framework is implemented and evaluated against the requirements in a practical case study in Sweden. The conclusion is that integration of cadastral information with BIM/GIS is possible at both the conceptual and data levels, enabling organizations dealing with cadastral information (cadastral units), BIM models (architecture, engineering, and construction companies), and GIS (e.g., municipal surveying units) to exchange information, and facilitating better representation and visualization of 3D cadastral boundaries.

Journal ArticleDOI
TL;DR: A novel deep-learning-based approach to collectively predict two types of passenger flow volumes—inflow and outflow—in each metro station of a city by transforming the city metro network to a graph and making predictions using graph convolutional neural networks (GCNNs).
Abstract: Predicting the passenger flow of metro networks is of great importance for traffic management and public safety. However, such predictions are very challenging, as passenger flow is affected by complex spatial dependencies (nearby and distant) and temporal dependencies (recent and periodic). In this paper, we propose a novel deep-learning-based approach, named STGCNNmetro (spatiotemporal graph convolutional neural networks for metro), to collectively predict two types of passenger flow volumes—inflow and outflow—in each metro station of a city. Specifically, instead of representing metro stations by grids and employing conventional convolutional neural networks (CNNs) to capture spatiotemporal dependencies, STGCNNmetro transforms the city metro network to a graph and makes predictions using graph convolutional neural networks (GCNNs). First, we apply stereogram graph convolution operations to seamlessly capture the irregular spatiotemporal dependencies along the metro network. Second, a deep structure composed of GCNNs is constructed to capture the distant spatiotemporal dependencies at the citywide level. Finally, we integrate three temporal patterns (recent, daily, and weekly) and fuse the spatiotemporal dependencies captured from these patterns to form the final prediction values. The STGCNNmetro model is an end-to-end framework which can accept raw passenger flow-volume data, automatically capture the effective features of the citywide metro network, and output predictions. We test this model by predicting the short-term passenger flow volume in the citywide metro network of Shanghai, China. Experiments show that the STGCNNmetro model outperforms seven well-known baseline models (LSVR, PCA-kNN, NMF-kNN, Bayesian, MLR, M-CNN, and LSTM). We additionally explore the sensitivity of the model to its parameters and discuss the distribution of prediction errors.
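The graph-convolution operation such models build on can be sketched in a few lines: each station's features are mixed with its neighbors' via a degree-normalized adjacency matrix. This is a generic normalized GCN layer on a toy line network, not the STGCNNmetro architecture itself; the flows and weight are invented.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Each station aggregates features from itself and its adjacent stations."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-station line network: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[100.], [50.], [50.], [10.]])      # e.g. current inflow per station
W = np.array([[1.0]])                            # single learned weight
H1 = gcn_layer(A, H, W)
print(H1.shape)  # (4, 1): flows smoothed over each station's neighborhood
```

Stacking such layers lets information propagate several hops along the metro graph, which is how the model captures distant spatial dependencies that grid-based CNNs would miss.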

Journal ArticleDOI
TL;DR: A modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data is proposed, sharing the same network structure to reduce the time cost of network design.
Abstract: Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, synthetic use and integration of multisource data provides an opportunity for improving urban land-use classification accuracy. Neural networks for Deep Learning have achieved very promising results in computer-vision tasks, such as image classification and object detection. However, the problem of designing an effective deep-learning model for the fusion of multisource remote-sensing data still remains. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of a HSI branch and a LiDAR branch, sharing the same network structure to reduce the time cost of network design. A residual block is utilized in each branch to extract hierarchical, parallel, and multiscale features. An adaptive-feature fusion module is proposed to integrate HSI and LiDAR features in a more reasonable and natural way (based on “Squeeze-and-Excitation Networks”). Experiments indicate that the proposed two-branch network shows good performance, with an overall accuracy of almost 92%. Compared with single-source data, the introduction of multisource data improves accuracy by at least 8%. The adaptive fusion model can also increase classification accuracy by more than 3% when compared with the feature-stacking method (simple concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for a better urban land-use mapping accuracy.

Journal ArticleDOI
TL;DR: Conditioning factors related to topography are analyzed and the impact of the resolution and accuracy of DEMs on these factors is discussed; two factors are proposed for inclusion in the landslide inventory list, as a conditioning factor and a risk assessment parameter, for future studies.
Abstract: Digital elevation models (DEMs) are considered an imperative tool for many 3D visualization applications; however, for applications related to topography, they are exploited mostly as a basic source of information. In the study of landslide susceptibility mapping, parameters or landslide conditioning factors are deduced from the information related to DEMs, especially elevation. In this paper, conditioning factors related to topography are analyzed and the impact of the resolution and accuracy of DEMs on these factors is discussed. Previously conducted research on landslide susceptibility mapping using these factors or parameters through different methods or models over the last two decades is reviewed, and modern trends in this field are presented in tabulated form. Two factors or parameters are proposed for inclusion in the landslide inventory list, as a conditioning factor and a risk assessment parameter, for future studies.
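Slope, one of the DEM-derived conditioning factors discussed, can be computed directly from the elevation grid. The sketch below uses a simple central-difference approximation (GIS packages typically use Horn's 3×3 method); the DEM is a toy inclined plane.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope (degrees) from a DEM via finite differences: the arctangent of
    the magnitude of the elevation gradient."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Toy DEM: a plane rising 1 m per 10 m cell eastward -> ~5.71 degrees everywhere
dem = np.tile(np.arange(5, dtype=float), (5, 1))   # elevation grows by column
slope = slope_degrees(dem, cell_size=10.0)
print(round(float(slope[2, 2]), 2))  # 5.71
```

This dependence on `cell_size` is exactly why DEM resolution changes the derived conditioning factors: coarser cells smooth the gradient and systematically understate slope.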

Journal ArticleDOI
TL;DR: Rice cropping patterns in the An Giang province of the VMD were monitored from March 2017 to March 2018; dual-polarized Sentinel-1A data show a strong correlation with the spatial patterns of the rice growth stages and their association with the water infrastructure, and Sentinel-1A can be used to understand rice phenological changes as well as rice cropping systems using radar backscattering.
Abstract: Cropping intensity is one of the most important decisions made independently by farmers in Vietnam. It is a crucial variable of various economic and process-based models. Rice is grown under irrigated triple- and double-rice cropping systems and a rainfed single-rice cropping system in the Vietnamese Mekong Delta (VMD). These rice cropping systems are adopted according to the geographical location and water infrastructure. However, little work has been done to map triple-cropping of rice using Sentinel-1 along with the effects of water infrastructure on the rice cropping intensity decision. This study is focused on monitoring rice cropping patterns in the An Giang province of the VMD from March 2017 to March 2018. The fieldwork was carried out on the dates close to the Sentinel-1A acquisition. The results of dual-polarized (VV and VH) Sentinel-1A data show a strong correlation with the spatial patterns of various rice growth stages and their association with the water infrastructure. The VH backscatter (σ°) is strongly correlated with the three rice growth stages, especially the reproductive stage when the backscatter is less affected by soil moisture and water in the rice fields. In all three cropping patterns, σ°VV and σ°VH show the highest value in the maturity stage, often appearing 10 to 12 days before the harvesting of the rice. A rice cropping pattern map was generated using the Support Vector Machine (SVM) classification of Sentinel-1A data. The overall accuracy of the classification was 80.7% with a 0.78 Kappa coefficient. Therefore, Sentinel-1A can be used to understand rice phenological changes as well as rice cropping systems using radar backscattering.
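The Kappa coefficient reported alongside the 80.7% overall accuracy measures agreement beyond chance and is computed from the confusion matrix; the matrix below is hypothetical.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix: (observed - chance) / (1 - chance)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 2-class confusion matrix (e.g. rice vs. non-rice pixels)
cm = [[45, 5],
      [10, 40]]
print(round(cohens_kappa(cm), 2))  # 0.7
```

A kappa of 0.78, as reported for the SVM map, indicates substantial agreement even after discounting the agreement expected by chance alone.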

Journal ArticleDOI
TL;DR: Airbnb is active in holiday destinations of Spain, where it often serves as an intermediary for the rental of second or investment homes and apartments and the location of Airbnb listings is mostly determined by the supply of empty or secondary dwellings, distribution of traditional tourism accommodation, coastal location, and the level of internationalized tourism demand.
Abstract: The rising number of homes and apartments rented out through Airbnb and similar peer-to-peer accommodation platforms causes concerns about the impact of such activity on the tourism sector and prope ...

Journal ArticleDOI
TL;DR: It is revealed that the MLP outperformed other machine learning-based models in predicting the factor of safety against slope failure, while SVR and MLR presented almost equal accuracy in estimation for both training and testing phases.
Abstract: In this study, we employed various machine learning-based techniques to predict the factor of safety against slope failures. Different regression methods, namely multi-layer perceptron (MLP), Gaussian process regression (GPR), multiple linear regression (MLR), simple linear regression (SLR), and support vector regression (SVR), were used. Traditional methods of slope analysis, first established in the first half of the twentieth century, are still widely used as engineering design tools; more progressive design tools, such as machine learning-based predictive algorithms, have therefore drawn the attention of many researchers. The main objective of the current study is to evaluate and optimize various machine learning-based and multilinear regression models for predicting the safety factor. To prepare training and testing datasets for the predictive models, 630 finite limit equilibrium analyses (a database of 504 training and 126 testing samples) were performed on a single-layered cohesive soil slope. The estimated results for the presented database from GPR, MLR, MLP, SLR, and SVR were assessed by various methods. First, the efficiency of the applied models was calculated using various statistical indices. The resulting total scores of 20, 35, 50, 10, and 35 for GPR, MLR, MLP, SLR, and SVR, respectively, revealed that the MLP outperformed the other machine learning-based models. In addition, SVR and MLR presented almost equal accuracy in estimation for both training and testing phases, and an acceptable degree of efficiency was obtained for the GPR and SLR models, with GPR showing more precision. Following this, the equations of the applied MLP and MLR models in their optimal condition were derived, given the reliability of their results, for use in similar slope stability problems.
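
To make the comparison concrete, here is a minimal scikit-learn sketch of benchmarking MLR, SVR, and MLP regressors on a synthetic factor-of-safety dataset; the feature set (slope angle, cohesion, unit weight), the linear target, and all hyper-parameters are illustrative assumptions, not the paper's finite limit equilibrium database.

```python
# Sketch: benchmarking regressors for factor-of-safety prediction, in the
# spirit of the study's MLP/SVR/MLR comparison. Features and target are
# synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
n = 630                                    # mirrors the 504/126 split size
slope = rng.uniform(10, 45, n)             # slope angle (degrees)
cohesion = rng.uniform(5, 40, n)           # cohesion (kPa)
unit_weight = rng.uniform(16, 22, n)       # unit weight (kN/m^3)
fs = 0.8 + 0.05 * cohesion - 0.02 * slope + rng.normal(0, 0.05, n)

X = np.column_stack([slope, cohesion, unit_weight])
X_tr, X_te, y_tr, y_te = train_test_split(X, fs, test_size=0.2, random_state=1)

models = {
    "MLR": LinearRegression(),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=1)),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (np.sqrt(mean_squared_error(y_te, pred)),
                    r2_score(y_te, pred))
    print(name, "RMSE=%.3f  R2=%.3f" % scores[name])
```

Scaling inside a pipeline matters for the SVR and MLP models; the statistical indices (RMSE, R²) mirror the kind of scoring the study aggregates into its total scores.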

Journal ArticleDOI
TL;DR: An index system was established from the exposure and disaster reduction capability categories based on analytic hierarchy process (AHP) methods; the resulting assessment can be used as a proposal for population and building distribution readjustments and for the management of flash floods in China.
Abstract: Flash floods are one of the natural disasters that threaten the lives of many people all over the world every year. Flash floods are significantly affected by the intensification of extreme climate events, and their interactions with exposed and vulnerable socio-economic systems impede regional development processes. Hence, it is important to estimate the loss due to flash floods before the disaster occurs. However, there are no comprehensive vulnerability assessment results for flash floods in China. Fortunately, the National Mountain Flood Disaster Investigation Project provided a foundation to develop the proposed assessment. In this study, an index system was established from the exposure and disaster reduction capability categories based on analytic hierarchy process (AHP) methods. We evaluated flash flood vulnerability by adopting the support vector machine (SVM) model. Our results showed 439 counties with high and extremely high vulnerability (accounting for 10.5% of the land area and corresponding to approximately 100 million hectares (ha)), 571 counties with moderate vulnerability (accounting for 19.18% of the land area and corresponding to approximately 180 million ha), and 1128 counties with low and extremely low vulnerability (accounting for 39.43% of the land area and corresponding to approximately 370 million ha). The highly-vulnerable counties were mainly concentrated in the south and southeast regions of China, moderately-vulnerable counties were primarily concentrated in the central, northern, and southwestern regions of China, and low-vulnerability counties chiefly occurred in the northwest regions of China. Additionally, the results of the spatial autocorrelation analysis suggested that the “High-High” spatial agglomeration areas mainly occurred in the Zhejiang, Fujian, Jiangxi, Hunan, Guangxi, Chongqing, and Beijing areas. On the basis of these results, our study can be used as a proposal for population and building distribution readjustments and for the management of flash floods in China.
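
The AHP step of such an index system can be sketched in a few lines of NumPy: criterion weights come from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio guards against incoherent judgments. The 3×3 matrix below is an illustrative assumption, not the study's actual index hierarchy.

```python
# Sketch of the AHP weighting step: principal-eigenvector weights from a
# pairwise comparison matrix, with the standard consistency-ratio check.
import numpy as np

# Hypothetical pairwise judgments for three criteria (e.g. population
# exposure, building exposure, disaster-reduction capability).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
ri = 0.58                                 # Saaty's random index for n = 3
cr = ci / ri                              # consistency ratio; accept if < 0.1
print("weights:", np.round(w, 3), "CR=%.3f" % cr)
```

In practice each county's index value would be the weighted sum of its normalized indicator scores, which then feeds the SVM-based vulnerability classification.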

Journal ArticleDOI
TL;DR: Road images from UAV oblique photogrammetry are used to reconstruct road three-dimensional (3D) models, from which road pavement distress is automatically detected and the corresponding dimensions are extracted using the developed algorithm.
Abstract: The timely and proper rehabilitation of damaged roads is essential for road maintenance, and an effective method to detect road surface distress with high efficiency and low cost is urgently needed. Meanwhile, unmanned aerial vehicles (UAVs), with the advantages of high flexibility, low cost, and easy maneuverability, are a fascinating new choice for road condition monitoring. In this paper, road images from UAV oblique photogrammetry are used to reconstruct road three-dimensional (3D) models, from which road pavement distress is automatically detected and the corresponding dimensions are extracted using the developed algorithm. Compared with a field survey, the detection results present high precision with an error of around 1 cm in the height dimension for most cases, demonstrating the potential of the proposed method for future engineering practice.
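
As a toy illustration of the detection idea, depressions in a reconstructed road-surface height grid can be found by thresholding depth below a reference surface. The median-plane reference, the 2 cm threshold, and the synthetic DSM are assumptions, not the paper's algorithm.

```python
# Sketch: detecting pavement distress (e.g. potholes) as local depressions
# in a photogrammetric height grid, and extracting the maximum depth.
import numpy as np

def detect_depressions(dsm, depth_thresh=0.02):
    """Return a boolean mask of cells deeper than depth_thresh (m) below
    the median road surface, plus the maximum depth found (m)."""
    reference = np.median(dsm)            # flat-road reference elevation
    depth = reference - dsm               # positive where the surface dips
    mask = depth > depth_thresh
    max_depth = depth[mask].max() if mask.any() else 0.0
    return mask, max_depth

# Synthetic 1 cm grid: a flat road at 100.00 m with a 3 cm deep pothole.
dsm = np.full((50, 50), 100.00)
dsm[20:25, 20:25] = 100.00 - 0.03
mask, max_depth = detect_depressions(dsm)
print(f"distressed cells: {mask.sum()}, max depth: {max_depth * 100:.1f} cm")
```

With a real UAV-derived 3D model, the reference surface would instead be fitted locally (the road is not flat), and connected distressed cells would be grouped to report per-defect dimensions.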

Journal ArticleDOI
TL;DR: This paper describes a supervised classification method based on SVM for lithological mapping in the region of Souk Arbaa Sahel, belonging to the Sidi Ifni inlier located in southern Morocco (Western Anti-Atlas), and confirms the ability of SVM as a supervised learning algorithm for lithological mapping purposes.
Abstract: Remote sensing data proved to be a valuable resource in a variety of earth science applications. Using high-dimensional data with advanced methods such as machine learning algorithms (MLAs), a sub-domain of artificial intelligence, enhances lithological mapping by spectral classification. Support vector machines (SVM) are one of the most popular MLAs, with the ability to define non-linear decision boundaries in high-dimensional feature space by solving a quadratic optimization problem. This paper describes a supervised classification method based on SVM for lithological mapping in the region of Souk Arbaa Sahel, belonging to the Sidi Ifni inlier located in southern Morocco (Western Anti-Atlas). The aims of this study were (1) to refine the existing lithological map of this region, and (2) to evaluate and study the performance of the SVM approach by using combined spectral features of Landsat 8 OLI with digital elevation model (DEM) geomorphometric attributes of ALOS/PALSAR data. We performed SVM classification to allow the joint use of geomorphometric features and the multispectral data of Landsat 8 OLI. The results indicated an overall classification accuracy of 85%. From the results obtained, we can conclude that the classification approach produced an image containing lithological units in which formations such as silt, alluvium, limestone, dolomite, conglomerate, sandstone, rhyolite, andesite, granodiorite, quartzite, lutite, and ignimbrite are easily identified, coinciding with those already existing on the published geological map. This result confirms the ability of SVM as a supervised learning algorithm for lithological mapping purposes.
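
A minimal sketch of the feature-preparation step, assuming a tiny DEM grid and random stand-ins for the OLI bands: geomorphometric attributes are derived by finite differences and stacked with the spectral bands into per-pixel feature vectors for the classifier.

```python
# Sketch: deriving simple geomorphometric attributes (slope, aspect) from
# a DEM and stacking them with spectral bands as per-pixel SVM features.
# Grid spacing, array shapes, and band values are assumptions.
import numpy as np

def slope_aspect(dem, cell=30.0):
    """Slope and aspect (degrees) from a DEM via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360
    return slope, aspect

dem = np.outer(np.arange(4), np.ones(4)) * 30.0   # plane rising 30 m per cell
slope, aspect = slope_aspect(dem)

# Stack spectral bands and terrain attributes into per-pixel feature vectors.
bands = np.random.default_rng(2).uniform(0, 1, (6, 4, 4))   # e.g. OLI bands
features = np.vstack([bands, slope[None], aspect[None]])
X = features.reshape(features.shape[0], -1).T               # (pixels, features)
print(X.shape)
```

The resulting `X` matrix (one row per pixel, one column per spectral or terrain feature) is exactly the shape a scikit-learn-style SVM classifier expects.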

Journal ArticleDOI
TL;DR: This work proposes an end-to-end U-shaped neural network, which efficiently merges depth and spectral information within two parallel networks combined at the late stage for binary building mask generation, and demonstrates that the method generalizes for use in cities which are not included in the training data.
Abstract: Recent technical developments made it possible to supply large-scale satellite image coverage. This poses the challenge of efficient discovery of imagery. One very important task in applications like urban planning and reconstruction is to automatically extract building footprints. The integration of different information, which is presently achievable due to the availability of high-resolution remote sensing data sources, makes it possible to improve the quality of the extracted building outlines. Recently, deep neural networks were extended from image-level to pixel-level labelling, allowing dense prediction of semantic labels. Based on these advances, we propose an end-to-end U-shaped neural network, which efficiently merges depth and spectral information within two parallel networks combined at the late stage for binary building mask generation. Moreover, as satellites usually provide high-resolution panchromatic images, but only low-resolution multi-spectral images, we tackle this issue by using a residual neural network block. It fuses these images of different spatial resolutions at the early stage, before passing the fused information to the Unet stream responsible for processing spectral information. In a parallel stream, a stereo digital surface model (DSM) is also processed by the Unet. Additionally, we demonstrate that our method generalizes for use in cities which are not included in the training data.

Journal ArticleDOI
TL;DR: A convolutional neural network architecture is proposed to validate landslide photos collected by citizens or nonexperts and integrated into a mobile- and web-based GIS environment designed specifically for a landslide CitSci project.
Abstract: Several scientific processes benefit from Citizen Science (CitSci) and VGI (Volunteered Geographical Information) with the help of mobile and geospatial technologies. Studies on landslides can also take advantage of these approaches to a great extent. However, the quality of the data collected by both approaches is often questionable, and automated procedures to check the quality are needed for this purpose. In the present study, a convolutional neural network (CNN) architecture is proposed to validate landslide photos collected by citizens or nonexperts and integrated into a mobile- and web-based GIS environment designed specifically for a landslide CitSci project. VGG16 was used as the base model since it allows fine-tuning, and high performance could be achieved by selecting the best hyper-parameters. Although the training dataset was small, the proposed CNN architecture was found to be effective as it could identify the landslide photos with 94% precision. The accuracy of the results is sufficient for the purpose and could be improved further using a larger amount of training data, which is expected to be obtained with the help of volunteers.

Journal ArticleDOI
TL;DR: This article considers the history of corporate involvement in the community and analyzes historical quarterly-snapshot OSM-QA-Tiles to show where and what these corporate editors are mapping, and raises questions about how the OSM community might proceed as corporate editing grows and evolves as a mechanism for expanding the map for multiple uses.
Abstract: OpenStreetMap (OSM), the largest Volunteered Geographic Information project in the world, is characterized both by its map as well as the active community of the millions of mappers who produce it. The discourse about participation in the OSM community largely focuses on the motivations for why members contribute map data and the resulting data quality. Recently, large corporations including Apple, Microsoft, and Facebook have been hiring editors to contribute to the OSM database. In this article, we explore the influence these corporate editors are having on the map by first considering the history of corporate involvement in the community and then analyzing historical quarterly-snapshot OSM-QA-Tiles to show where and what these corporate editors are mapping. Cumulatively, millions of corporate edits have a global footprint, but corporations vary in geographic reach, edit types, and quantity. While corporations currently have a major impact on road networks, non-corporate mappers edit more buildings and points-of-interest: representing the majority of all edits, on average. Since corporate editing represents the latest stage in the evolution of corporate involvement, we raise questions about how the OSM community—and researchers—might proceed as corporate editing grows and evolves as a mechanism for expanding the map for multiple uses.

Journal ArticleDOI
TL;DR: By means of a geographical information system (GIS) and spatial statistics, it is demonstrated that it is possible to better define the groupings of rural accommodation available in Extremadura, Spain, especially if these are conceptualized by dint of their lodging capacity.
Abstract: The importance of the distribution of accommodation businesses over a certain area has grown remarkably, especially if such distribution is mapped using tools and techniques that utilize the territory as a variable in the analysis. The purpose of this paper is to demonstrate, by means of a geographical information system (GIS) and spatial statistics, that it is possible to better define the groupings of rural accommodation available in Extremadura, Spain, especially if these are conceptualized by dint of their lodging capacity. To do so, two specific techniques have been used: hotspot analysis and outlier analysis, which yield results that prove the existence of homogeneous and heterogeneous groups of accommodation businesses, based not only on their spatial proximity but also on their lodging capacity. On the basis of this analysis, the regional administration can devise tourist policies and strategic plans in order to improve the management and efficiency of each business. Despite the applicability of the present results, this study also addresses the difficulties in using these techniques—where establishing the spatial relationships and the boundary distance are key concepts. In the case study here, the ideal configuration utilizes a fixed distance of six miles.
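
The hotspot technique referenced above, the Getis–Ord Gi* statistic with a fixed distance band, can be sketched with NumPy. The coordinates, lodging capacities, and band distance below are synthetic assumptions, not the Extremadura data.

```python
# Sketch: Getis–Ord Gi* hot-spot z-scores with a binary fixed-distance
# weight band, the statistic behind ArcGIS-style hotspot analysis.
import numpy as np

def gi_star(coords, values, band):
    """Gi* z-score per location; each point counts as its own neighbour,
    as the Gi* (star) formulation requires."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d <= band).astype(float)             # binary weights, includes self
    x_bar, s = values.mean(), values.std()
    sw = w.sum(axis=1)
    num = w @ values - x_bar * sw
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - sw ** 2) / (n - 1))
    return num / den

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, (200, 2))
values = rng.uniform(2, 10, 200)              # lodging capacity (beds)
near = np.linalg.norm(coords - [20, 20], axis=1) < 15
values[near] += 40                            # an artificial capacity cluster
z = gi_star(coords, values, band=15.0)
print("significant hot spots (z > 1.96):", int((z > 1.96).sum()))
```

Points inside the planted high-capacity cluster receive large positive z-scores; choosing the band distance (six miles in the paper's case study) is exactly the difficulty the authors highlight.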

Journal ArticleDOI
TL;DR: A new mesh-to-HBIM modeling workflow and an integrated BIM management system connecting HBIM elements and historical knowledge are developed; by extending the capability of the BIM platform, the obtained HBIM is a semantic model with object-oriented knowledge.
Abstract: Built heritage has been documented by reality-based modeling for geometric description and by ontology for knowledge management. The current challenge still involves the extraction of geometric primitives and the establishment of their connection to heterogeneous knowledge. As a recently developed 3D information modeling environment, building information modeling (BIM) entails both graphical and non-graphical aspects of the entire building, which has been increasingly applied to heritage documentation and generates a new issue of heritage/historic BIM (HBIM). However, HBIM needs to additionally deal with the heterogeneity of geometric shape and semantic knowledge of the heritage object. This paper developed a new mesh-to-HBIM modeling workflow and an integrated BIM management system to connect HBIM elements and historical knowledge. Using the St-Pierre-le-Jeune Church, Strasbourg, France, as a case study, this project employs Autodesk Revit as a BIM environment and Dynamo, a built-in visual programming tool of Revit, to extend the new HBIM functions. The mesh-to-HBIM process segments the surface mesh, thickens the triangle mesh to 3D volume, and transfers the primitives to BIM elements. The obtained HBIM is then converted to the ontology model to enrich the heterogeneous knowledge. Finally, HBIM geometric elements and ontology-based semantic knowledge are joined in a unified BIM environment. By extending the capability of the BIM platform, the HBIM modeling process can be conducted in a time-saving way, and the obtained HBIM is a semantic model with object-oriented knowledge.
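
One step of the mesh-to-HBIM workflow, thickening a surface triangle into a solid, reduces to offsetting its vertices along the face normal. This NumPy sketch assumes a single triangle and an arbitrary 20 cm thickness; the actual workflow runs inside Revit/Dynamo on segmented mesh patches.

```python
# Sketch of the "thicken triangle mesh to 3D volume" step: offset each
# surface triangle along its unit normal to form a solid prism.
import numpy as np

def thicken_triangle(tri, thickness):
    """Given a (3, 3) array of triangle vertices, return the (6, 3)
    vertices of the prism obtained by offsetting along the normal."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n = n / np.linalg.norm(n)
    return np.vstack([tri, tri + thickness * n])

tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
prism = thicken_triangle(tri, 0.2)        # e.g. a 20 cm thick wall element
print(prism.shape)
```

In the full workflow, each prism would then be instantiated as a BIM element (wall, vault segment, etc.) and linked to the ontology's historical knowledge.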

Journal ArticleDOI
TL;DR: ANNK is a promising approach for mapping SOC content at a local scale; when considering uncertainty and efficiency, machine learning and two-step hybrid approaches are more suitable than geostatistics in regional landscapes with high heterogeneity.
Abstract: Accurate digital soil mapping (DSM) of soil organic carbon (SOC) is still a challenging subject because of its spatial variability and dependency. This study is aimed at comparing six typical methods in three types of DSM techniques for SOC mapping in an area surrounding Changchun in Northeast China. The methods include ordinary kriging (OK) and geographically weighted regression (GWR) from geostatistics, support vector machines for regression (SVR) and artificial neural networks (ANN) from machine learning, and geographically weighted regression kriging (GWRK) and artificial neural networks kriging (ANNK) from hybrid approaches. The hybrid approaches, in particular, integrated GWR from geostatistics and ANN from machine learning, respectively, with the estimation of residuals by ordinary kriging. Environmental variables, including soil properties and climatic, topographic, and remote sensing data, were used for modeling. The mapping results of SOC content from the different models were validated by independent testing data based on values of the mean error, root mean squared error, and coefficient of determination. The prediction maps depicted the spatial variation and patterns of SOC content of the study area. The results showed that the accuracy ranking of the compared methods in decreasing order was ANNK, SVR, ANN, GWRK, OK, and GWR. Two-step hybrid approaches performed better than the corresponding individual models, and non-linear models performed better than the linear models. When considering uncertainty and efficiency, machine learning and two-step hybrid approaches are more suitable than geostatistics in regional landscapes with high heterogeneity. The study concludes that ANNK is a promising approach for mapping SOC content at a local scale.
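
The two-step hybrid idea (a trend model on covariates plus spatially interpolated residuals) can be sketched as follows. For brevity, inverse-distance weighting stands in for the ordinary kriging of residuals used by GWRK/ANNK, and the covariate and SOC values are synthetic.

```python
# Sketch of a two-step hybrid predictor: regression trend + interpolated
# residuals added back at new locations. IDW replaces ordinary kriging
# here purely to keep the example short.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
xy = rng.uniform(0, 10, (80, 2))                  # sample locations (km)
elev = rng.uniform(150, 300, 80)                  # covariate (stand-in)
soc = 20 - 0.03 * elev + np.sin(xy[:, 0]) + rng.normal(0, 0.2, 80)

# Step 1: trend model on the environmental covariate.
trend = LinearRegression().fit(elev[:, None], soc)
resid = soc - trend.predict(elev[:, None])        # spatially structured part

# Step 2: interpolate residuals to new locations and add them back.
def idw(xy_known, v_known, xy_new, power=2.0):
    d = np.linalg.norm(xy_new[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ v_known) / w.sum(axis=1)

xy_new = rng.uniform(0, 10, (20, 2))
elev_new = rng.uniform(150, 300, 20)
pred = trend.predict(elev_new[:, None]) + idw(xy, resid, xy_new)
print(pred.shape)
```

Replacing the linear trend with GWR or an ANN, and the IDW step with ordinary kriging of the residuals, recovers the GWRK and ANNK schemes compared in the study.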

Journal ArticleDOI
TL;DR: A classification model combining spectral indices-based band selection and a one-dimensional convolutional neural network is proposed to realize automatic oil film classification using hyperspectral remote sensing images, and it surpassed the accuracy of other machine learning algorithms such as SVM and RF.
Abstract: Spectral characteristics play an important role in the classification of oil film, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model that combines spectral indices-based band selection (SIs) and a one-dimensional convolutional neural network was proposed to realize automatic oil film classification using hyperspectral remote sensing images. Additionally, for comparison, minimum redundancy maximum relevance (mRMR) was tested for reducing the number of bands. The support vector machine (SVM), random forest (RF), and Hu’s convolutional neural networks (CNN) were trained and tested. The results show that the accuracy of classification with the one-dimensional convolutional neural network (1D CNN) models surpassed that of other machine learning algorithms such as SVM and RF. The SIs+1D CNN model produced a more accurate oil film distribution map in less time than the other models.
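
A greedy sketch of the mRMR band-selection idea used for comparison in the paper: rank bands by class relevance (here an F-statistic) penalized by correlation with already-selected bands. The synthetic data cube, the informative/redundant band indices, and the simple relevance-minus-redundancy trade-off are assumptions.

```python
# Sketch: greedy mRMR-style band selection. Relevance is the ANOVA
# F-statistic (normalized); redundancy is the mean absolute correlation
# with bands already selected.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(5)
n, n_bands = 300, 20
y = rng.integers(0, 2, n)                     # oil film vs. sea water
X = rng.normal(0, 1, (n, n_bands))
X[:, 3] += 2 * y                              # informative band
X[:, 4] = X[:, 3] + rng.normal(0, 0.1, n)     # redundant copy of band 3
X[:, 11] += 1.5 * y                           # second informative band

rel = f_classif(X, y)[0]
rel = rel / rel.max()                         # normalize relevance to [0, 1]
corr = np.abs(np.corrcoef(X.T))               # band-to-band redundancy
selected = [int(np.argmax(rel))]              # start with the best band
while len(selected) < 3:
    score = rel - corr[:, selected].mean(axis=1)
    score[selected] = -np.inf                 # never re-pick a band
    selected.append(int(np.argmax(score)))
print("selected bands:", selected)
```

Note how the near-duplicate of the best band is penalized: the second informative band is preferred over the redundant copy, which is the behaviour that motivates mRMR over pure relevance ranking.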