
Showing papers in "Earth Science Informatics in 2019"


Journal ArticleDOI
TL;DR: This study attempts to provide a comprehensive review of the fundamental processes required for change detection with a brief account of the main techniques of change detection and discusses the need for development of enhanced change detection methods.
Abstract: Change detection captures the spatial changes from multi-temporal satellite images due to man-made or natural phenomena. It is of great importance in remote sensing, monitoring environmental changes and land use/land cover change detection. Remote sensing satellites acquire satellite images at varying resolutions and use these for change detection. This paper briefly analyses various change detection methods and the challenges and issues faced as part of change detection. Over the years, a wide range of methods have been developed for analyzing remote sensing data, and newer methods are still being developed. Timely and accurate change detection of Earth’s surface features provides the basis for evaluating the relationships and interactions between human and natural phenomena for the better management of resources. In general, change detection applies multi-temporal datasets to quantitatively analyse the temporal effects of the phenomenon. As such, this study attempts to provide a comprehensive review of the fundamental processes required for change detection. The study also gives a brief account of the main techniques of change detection and discusses the need for development of enhanced change detection methods.
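As a minimal illustration of the image-differencing family of techniques such reviews cover, the sketch below flags pixels whose change between two co-registered acquisitions exceeds a standard-deviation threshold. The arrays and the threshold factor are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def detect_change(img_t1, img_t2, k=2.0):
    """Flag changed pixels where the difference image deviates from its
    mean by more than k standard deviations (simple thresholded
    image differencing)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    thresh = k * diff.std()
    return np.abs(diff - diff.mean()) > thresh

# Two toy single-band "acquisitions": a bright patch appears at time t2.
t1 = np.zeros((6, 6))
t2 = t1.copy()
t2[2:4, 2:4] = 10.0
mask = detect_change(t1, t2)
print(mask.sum())  # 4 pixels flagged as changed
```

Real workflows add radiometric normalisation and co-registration before differencing; the threshold choice is the main tuning knob.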

196 citations


Journal ArticleDOI
TL;DR: The MLP-PSO model as a hybrid ANN demonstrated superior accuracy and effectiveness compared to the other ROP-prediction algorithms evaluated, but its performance is rivalled by the SVR model.
Abstract: Predicting the drilling rate of penetration (ROP) is one approach to optimizing drilling performance. However, as ROP behavior is unique to specific geological conditions, its application is not straightforward. Moreover, ROP is typically affected by various operational factors (e.g. bit type, weight-on-bit, rotation rate, etc.) as well as the geological characteristics of the rocks being penetrated. This makes ROP prediction an intricate and multi-faceted problem. Here we compare data mining methods with several machine learning algorithms to evaluate their accuracy and effectiveness in predicting ROP. The algorithms considered are: artificial neural networks (ANN) applying a multi-layer perceptron (MLP); ANN applying a radial basis function (RBF); support vector regression (SVR); and a hybrid MLP trained using a particle swarm optimization algorithm (MLP-PSO). Data preparation prior to executing the algorithms involves applying a Savitzky–Golay (SG) smoothing filter to remove noise from petrophysical well-logs and drilling data from the mud-logs. A genetic algorithm is applied to tune the machine learning algorithms by identifying and ranking the most influential input variables on ROP. This tuning routine identified and selected the eight input variables with the greatest impact on ROP: weight on bit, bit rotational speed, pump flow rate, pump pressure, pore pressure, gamma ray, density log and sonic wave velocity. Results showed that the machine learning algorithms evaluated all predicted ROP accurately. Their performance was improved when applied to filtered data rather than raw well-log data. The MLP-PSO model, as a hybrid ANN, demonstrated superior accuracy and effectiveness compared to the other ROP-prediction algorithms evaluated, but its performance is rivalled by the SVR model.
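The Savitzky–Golay filtering step described above can be sketched with SciPy. The synthetic "well-log" curve and the filter parameters (window length, polynomial order) below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
depth = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * depth)          # stand-in for a smooth log trend
noisy = clean + rng.normal(0, 0.3, 200)    # noisy "well-log" readings

# window_length must be odd, and polyorder < window_length.
smooth = savgol_filter(noisy, window_length=21, polyorder=3)

# The filtered curve should sit closer to the underlying trend than the raw data.
print(np.mean((smooth - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The SG filter preserves local peak shapes better than a plain moving average, which is why it is popular for log data.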

66 citations


Journal ArticleDOI
TL;DR: Results show that per-superpixel MCNN can effectively avoid misclassification in complex urban areas compared with the per-superpixel classification method combining a single-scale CNN (Per-superpixel SCNN), and a series of classification results shows that using the pre-estimated scale parameter can guarantee high classification accuracy, so the arbitrary nature of scale estimation can be avoided to some extent.
Abstract: Traditional classification methods, which use low-level features, have failed to gain satisfactory classification results on very high spatial resolution (VHR) remote sensing images. Even though the per-pixel classification method based on a convolutional neural network (CNN) (Per-pixel CNN) achieved higher accuracy with the help of high-level features, this method still has limitations. The per-superpixel classification method based on CNN (Per-superpixel CNN) overcomes the limitations of per-pixel CNN; however, there are still some scale-related issues in per-superpixel CNN that need to be explored and addressed. Firstly, in order to avoid the misclassification of complex land cover objects caused by the scale effect, the per-superpixel classification method combining a multi-scale CNN (Per-superpixel MCNN) is proposed. Secondly, this paper analyzes how the scale parameter of the CNN impacts classification accuracy and uses spatial statistics to pre-estimate the scale parameter in per-superpixel CNN. This paper takes two VHR remote sensing images as experimental data and employs two superpixel segmentation algorithms to classify urban and suburban land covers. The experimental results show that per-superpixel MCNN can effectively avoid misclassification in complex urban areas compared with the per-superpixel classification method combining a single-scale CNN (Per-superpixel SCNN). A series of classification results also shows that using the pre-estimated scale parameter can guarantee high classification accuracy, so the arbitrary nature of scale estimation can be avoided to some extent. Additionally, through discussion of the influence of the accuracy evaluation method on CNN classification, it is stressed that randomly selecting ground-truth validation points from the whole study area is more defensible than reusing part of a reference dataset.
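A minimal sketch of the multi-scale input idea: patches of several window sizes are extracted around a superpixel centroid, one per CNN branch. The patch sizes, the zero-padding edge handling, and the toy image are assumptions for illustration; the paper's actual scales and network are not reproduced here:

```python
import numpy as np

def multiscale_patches(image, center, scales=(8, 16, 32)):
    """Extract square patches at several scales centred on one superpixel
    centroid; each patch would feed one branch of a multi-scale CNN.
    Image edges are handled by zero-padding."""
    r, c = center
    patches = []
    for s in scales:
        half = s // 2
        padded = np.pad(image, half, mode="constant")
        # In padded coordinates the centroid sits at (r + half, c + half),
        # so this slice is centred on the original pixel (r, c).
        patches.append(padded[r:r + s, c:c + s])
    return patches

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
for p in multiscale_patches(img, center=(10, 20)):
    print(p.shape)  # (8, 8) then (16, 16) then (32, 32)
```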

58 citations


Journal ArticleDOI
TL;DR: This work demonstrates the significance of textural features in improving the classification accuracy of heterogeneous landscapes, a significance that grows as spatial resolution improves, and reveals that textures are especially vital in the case of SAR data.
Abstract: Texture analysis of remote sensing images has received a substantial amount of attention as it plays a vital role in improving the classification accuracy of heterogeneous landscapes. However, how images from different sensors with varying spatial resolutions influence the choice of textural features has been inadequately studied. This study examines the textural features from the Landsat 8-OLI, RISAT-1, Resourcesat 2-LISS III, Sentinel-1A and Resourcesat 2-LISS IV satellite images, with spatial resolutions of 30, 25, 23.5, 5×20 and 5.8 m respectively, for improving land use/land cover (LULC) classification accuracy. The textural features were extracted from the aforesaid sensor data using the gray-level co-occurrence matrix (GLCM) with different moving window sizes. The best combination of textural features was recognized using standard deviations and correlation coefficients following separability analysis of LULC categories based on training samples. A supervised support vector machine (SVM) classifier was employed to perform LULC classification and the results were evaluated using ground truth information. This work demonstrates the significance of textural features in improving the classification accuracy of heterogeneous landscapes, and this significance grows as spatial resolution improves. It is also revealed that textures are especially vital in the case of SAR data.
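A GLCM and one derived statistic (contrast) can be computed directly in NumPy. The tiny quantised image, the single pixel offset and the grey-level count below are illustrative assumptions, not the paper's window sizes or sensor data:

```python
import numpy as np

def glcm(image, dr=0, dc=1, levels=4):
    """Grey-level co-occurrence matrix for one non-negative offset (dr, dc),
    counting pairs (image[i, j], image[i+dr, j+dc]) and normalising
    to joint probabilities."""
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for i in range(rows - dr):
        for j in range(cols - dc):
            m[image[i, j], image[i + dr, j + dc]] += 1
    return m / m.sum()

def contrast(m):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

quantised = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])
print(round(contrast(glcm(quantised)), 3))  # 0.333
```

In practice this is computed per moving window per band (e.g. via scikit-image's `graycomatrix`), yielding one texture layer per statistic.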

53 citations


Journal ArticleDOI
TL;DR: This paper introduces a group of information-centric ontologies that encompass the flood domain and describes how they can be benefited to access, analyze, and visualize flood-related data with natural language queries.
Abstract: Advancements and new techniques in information technologies are making it possible to manage, analyze and present large-scale environmental modeling results and spatial data acquired from various sources. However, it is a major challenge to make this data accessible because of its unstructured, incomplete and varied nature. Extracting information and making accurate inferences from various data sources rapidly is critical for natural disaster preparedness and response. Critical information about disasters needs to be provided in a structured and easily accessible way in a context-specific manner. This paper introduces a group of information-centric ontologies that encompass the flood domain and describes how they can be leveraged to access, analyze, and visualize flood-related data with natural language queries. The presented methodology enables the easy integration of domain knowledge into expert systems and voice-enabled intelligent applications that can be accessed through web-based information platforms, instant messaging apps, automated workflow systems, home automation devices, and augmented and virtual reality platforms. A case study is described to demonstrate the usage of the presented ontologies in such intelligent systems.

42 citations


Journal ArticleDOI
TL;DR: There is a strong positive correlation between excellent groundwater potential zones and high specific yield, diminishing to low specific yield in poor potential zones.
Abstract: The present study focuses on the exploration of groundwater potential zones in Vellore district, Tamil Nadu, India based on geospatial technologies. Various thematic layers such as land use/land cover, soil texture, slope, geology, geomorphology, drainage and lineament density, rainfall, transmissivity, permeability and specific capacity were derived from suitable sources. All the thematic layers and classified rasters were prepared using ArcGIS version 10.1 software. The values in each thematic layer were assigned a common scale, allowing all the thematic layers to be integrated into a final output layer. The score for each influencing factor was calculated based on the multi-influencing factor (MIF) technique. During weighted overlay analysis, a rank was given to each individual parameter of each thematic map, and weights were assigned according to the MIF. In this study, the groundwater potential zones were classified into four categories: poor, moderate, good and excellent. Excellent groundwater potential zones were observed in the eastern part of the study area due to the presence of good porosity and permeability. The obtained results were validated against the specific yield of the aquifer in the study area. A strong positive correlation was noted between excellent groundwater potential zones and high specific yield, diminishing to low specific yield in poor potential zones.
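The weighted overlay step can be sketched as a weighted sum of reclassified rasters followed by slicing the composite score into the four potential classes. The layers, ranks and MIF-style weights below are invented for illustration and are not the study's values:

```python
import numpy as np

# Hypothetical reclassified thematic layers (ranks 1-4, higher = more
# favourable for groundwater) and MIF-style weights summing to 100.
layers = {
    "geomorphology": np.array([[4, 3], [2, 1]]),
    "lineament":     np.array([[4, 4], [1, 1]]),
    "rainfall":      np.array([[3, 3], [2, 2]]),
}
weights = {"geomorphology": 50, "lineament": 30, "rainfall": 20}

# Composite suitability score per cell (weighted overlay).
score = sum(weights[k] * layers[k] for k in layers) / 100.0

# Slice the composite score into four classes: 0=poor ... 3=excellent.
classes = np.digitize(score, bins=[1.5, 2.5, 3.5])
print(score)    # [[3.8 3.3] [1.7 1.2]]
print(classes)  # [[3 2] [1 0]]
```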

41 citations


Journal ArticleDOI
TL;DR: The results indicate that the mechanisms for landslide occurrence varied for each cluster and that driving forces of the landslides operated differently at a country level compared to the cluster level, and the variation in the functionality of effective factors should be considered.
Abstract: In this paper, we clustered and analyzed landslides and investigated their underlying driving forces at two levels, country and cluster, all over Iran. Considering 12 conditioning factors, the landslides were clustered into nine relatively homogeneous regions using the Contextual Neural Gas (CNG) algorithm. Next, their underlying driving forces were ranked using the Random Forest (RF) algorithm at country and cluster levels. Our results indicate that the mechanisms for landslide occurrence varied for each cluster and that driving forces of the landslides operated differently at a country level compared to the cluster level. Moreover, slope, altitude, average annual rainfall, and distance to the main roads were identified as the most important causes of landslides within all clusters. Thus, for effective management and modelling of landslides on a large scale, the variation in the functionality of effective factors should be considered.
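Ranking driving forces with a Random Forest, as done at the country and cluster levels, can be sketched as follows. The synthetic factors and labels are illustrative assumptions (the CNG clustering step is omitted), with only "slope" made genuinely predictive:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Hypothetical stand-ins for conditioning factors; "slope" (column 0)
# drives the synthetic landslide label, the rest are pure noise.
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)
names = ["slope", "altitude", "rainfall", "dist_roads"]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(ranking[0][0])  # the informative factor ranks first: slope
```

Running the same ranking per cluster, instead of on the pooled data, is what exposes the cluster-to-cluster variation the paper reports.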

39 citations


Journal ArticleDOI
TL;DR: Two supervised machine learning algorithms, namely a radial basis function (RBF) neural network and a support vector machine (SVM) with RBF kernel, were used for generating data-driven predictive models of porphyry-Cu mineral prospectivity; comparison demonstrates that the former is more successful in delineating exploration targets than the latter.
Abstract: Definition of the efficient ore-forming processes, which are considered as mineralization controls, is a fundamental stage in mineral prospectivity modeling. In this contribution, four efficient targeting criteria of geochemical, geological and structural data related to porphyry-type Cu deposits in Varzaghan district, NW Iran, were integrated. For creation of the multi-element geochemical layer, a two-stage factor analysis was first conducted on ilr-transformed data of 18 selected elements, and it was found that factor 1 (F1) represents the Cu-Au-Mo-Bi elemental association in the study area. Then, the combined model of the multifractal inverse distance weighting (IDW) interpolation technique and the spectrum-area (S-A) fractal method of F1, as the significant mineralization-related multi-element geochemical layer, was integrated with geological-structural evidence layers. For this purpose, two supervised machine learning algorithms, namely a radial basis function (RBF) neural network and a support vector machine (SVM) with RBF kernel, were used for generating data-driven predictive models of porphyry-Cu mineral prospectivity. Comparison of the generated models demonstrates that the former is more successful in delineating exploration targets than the latter.
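The SVM-with-RBF-kernel branch of the comparison can be sketched with scikit-learn. The evidence-layer values and deposit/non-deposit labels below are synthetic stand-ins, not the Varzaghan data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical evidence-layer values (e.g. geochemical factor score and a
# structural-proximity measure) at known deposit (1) and barren (0) sites.
X = np.vstack([rng.normal(1.0, 0.3, (40, 2)),    # deposit sites
               rng.normal(-1.0, 0.3, (40, 2))])  # barren sites
y = np.array([1] * 40 + [0] * 40)

svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

# "Prospectivity score" for an unvisited cell near the deposit cluster:
score = svm.predict_proba([[0.9, 1.1]])[0, 1]
print(score > 0.5)  # True
```

Applying `predict_proba` to every grid cell yields the continuous prospectivity map that exploration targets are delineated from.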

38 citations


Journal ArticleDOI
TL;DR: The shift in vegetation and water-body extents has contributed to the drastic decline of wetland features in the Isimangaliso Wetland Park in recent years, and this development might have negative effects on the wetland ecosystem and biodiversity.
Abstract: Various forms of competition for water and intensified agricultural practices, as well as urban development in South Africa, have modified and destroyed natural wetlands and their biodiversity benefits. To conserve and protect wetland resources, it is important to document and monitor wetlands and their accompanying land features. Spatial science such as remote sensing has been used with various advantages for assessing wetland dynamics, especially over large areas. Four satellite images for 1987, 1997, 2007 (Landsat 5 Thematic Mapper) and 2017 (Landsat 8 Operational Land Imager) were used in this study for mapping wetland dynamics in the study area. The results revealed that the natural landscapes in the area have experienced changes in the last three decades. Dense vegetation, sparse vegetation and water body increased by about 14% (5976.495 km2), 23% (10,349.631 km2) and 1% (324.621 km2) respectively between 1987 and 2017, while wetland features (marshland and quag) in the same period experienced a drastic decrease with an area coverage of about 16,651.07 km2 (38%). This study revealed that the shift in vegetation and water-body extents has contributed to the drastic decline of the Isimangaliso Wetland Park in recent years. Consequently, this development might have negative effects on the wetland ecosystem and biodiversity, and the grave state of the wetland in the study area calls for urgent protection of the remaining wetland benefits.

38 citations


Journal ArticleDOI
TL;DR: This paper proposes a neural network approach, namely attention-based bidirectional long short-term memory with a conditional random field layer (Att-BiLSTM-CRF), for named entity recognition to extract information entities describing geoscience information from geoscience reports.
Abstract: Many detailed geoscience reports lie unused, offering both challenges and opportunities for information extraction. In geoscience research, geological named entity recognition (GNER) is an important task in the field of geoscience information extraction. Regarding numerical geoscience data, research on information extraction remains limited. Most conventional NER approaches are heavily dependent on feature engineering, and such sentence-level-based methods suffer from the tagging inconsistency problem. Based on the above observations, this paper proposes a neural network approach, namely, attention-based bidirectional long short-term memory with a conditional random field layer (Att-BiLSTM-CRF), for named entity recognition to extract information entities describing geoscience information from geoscience reports. This approach leverages global information learned from an attention mechanism to enforce tagging consistency across multiple instances of the same token in a document. Experiments on the constructed dataset show that our method achieves comparable performance to that of other state-of-the-art systems. Additionally, our method achieved an average F1 score of 91.47% in the NER extraction task.

38 citations


Journal ArticleDOI
TL;DR: Using GRAMAT to process the GRACE level-2 data products, global spatio-temporal mass variations can be efficiently and robustly estimated, which indicates the potential wide range of GRAMAT’s applications in hydrology, oceanography, the cryosphere, solid Earth and geophysical disciplines to interpret large-scale mass redistribution and transport in the Earth system.
Abstract: In this paper, we robustly analyze the noise reduction methods for processing spherical harmonic (SH) coefficient data products collected by the Gravity Recovery and Climate Experiment (GRACE) satellite mission and devise a comprehensive GRACE Matlab Toolbox (GRAMAT) to estimate spatio-temporal mass variations over land and oceans. Functions in GRAMAT include: (1) destriping of SH coefficients to remove “north-to-south” stripes, or geographically correlated high-frequency errors, and Gaussian smoothing, (2) spherical harmonic analysis and synthesis, (3) assessment and reduction of the leakage effect in GRACE-derived mass variations, and (4) harmonic analysis of regional time series of the mass variations and assessment of the uncertainty of the GRACE estimates. As a case study, we analyze the terrestrial water storage (TWS) variations in the Amazon River basin using the functions in GRAMAT. In addition to obvious seasonal TWS variations in the Amazon River basin, significant interannual TWS variations are detected by GRACE using GRAMAT, which are consistent with precipitation anomalies in the region. We conclude that by using GRAMAT to process the GRACE level-2 data products, global spatio-temporal mass variations can be efficiently and robustly estimated, which indicates the potential wide range of GRAMAT’s applications in hydrology, oceanography, the cryosphere, solid Earth and geophysical disciplines to interpret large-scale mass redistribution and transport in the Earth system. We postulate that GRAMAT will also be an effective tool for the analysis of data from the upcoming GRACE Follow-On mission.
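Gaussian smoothing of SH coefficients (GRAMAT function 1) is conventionally implemented with the Jekeli recursion for per-degree averaging weights. The sketch below assumes that standard formulation, which may differ in detail from GRAMAT's own code; the recursion is only numerically stable for modest maximum degrees:

```python
import numpy as np

def gaussian_weights(radius_km, lmax, a_km=6371.0):
    """Per-degree Gaussian averaging weights W_l (normalised so W_0 = 1)
    for a given half-response smoothing radius, via the standard
    Jekeli-style recursion."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
    w = np.zeros(lmax + 1)
    w[0] = 1.0
    w[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, lmax):
        w[l + 1] = -(2 * l + 1) / b * w[l] + w[l - 1]
    return w

w = gaussian_weights(300.0, 60)   # 300-km radius, degrees 0..60
print(w[0], w[60] < w[1])         # weights taper with increasing degree
```

Each SH coefficient of degree l is then multiplied by `w[l]`, damping the high-degree noise that produces the stripes.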

Journal ArticleDOI
TL;DR: Four state-of-the-art data mining models are explored for the spatially explicit prediction of landslide susceptibility across a landslide-prone landscape in the Zagros Mountains, Iran to obtain a better understanding of the capability of different predictive models.
Abstract: In recent years, increasing efforts have been made to predict the time, location, and magnitude of future landslides. This study explores the potential application of four state-of-the-art data mining models (logistic regression, random forest, support vector machine, and Naive Bayes tree) for the spatially explicit prediction of landslide susceptibility across a landslide-prone landscape in the Zagros Mountains, Iran. Fifteen conditioning factors and 272 historical landslide events were used to develop a geospatial database for the study area. A two-step factor analysis procedure based on the multicollinearity analysis and the Gain Ratio technique was performed to measure the predictive utility of the factors and to quantify their contribution to landslide occurrences across the study region. Once the models were successfully trained and validated using several performance metrics (i.e., ROC-AUC, sensitivity, specificity, accuracy, RMSE, and Kappa), they were applied to the entire study region to generate distribution maps of landslide susceptibilities. Overall, the random forest model demonstrated the highest training performance (AUC = 0.971; accuracy = 99%; RMSE = 0.120) and ability to predict future landslides (AUC = 0.942; accuracy = 87%; RMSE = 0.312), followed by the support vector machine, Naive Bayes tree, and logistic regression models. The Wilcoxon signed-rank test further proved the superiority of the random forest model for mapping landslide susceptibility in the Zagros region. The insights obtained from this research could be useful for the spatially explicit assessment of landslide-prone landscapes and obtaining a better understanding of the capability of different predictive models.
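Comparing susceptibility models by ROC-AUC, as above, can be sketched with scikit-learn on synthetic data. The conditioning factors and landslide labels below are invented, only two of the four models are shown, and the non-linear label rule is chosen so a tree ensemble has an advantage over a linear model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Hypothetical conditioning factors with a non-linear class boundary.
X = rng.normal(size=(600, 5))
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2) > 0.5).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(Xtr, ytr)
    results[name] = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {results[name]:.3f}")
```

Applying the fitted probabilities to every raster cell, rather than the held-out set, is what produces the susceptibility map itself.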

Journal ArticleDOI
TL;DR: The problem of semantic segmentation of big remote sensing images is addressed by proposing a top-down approach based on two main steps: computing features at the object level, and using the resulting structure to label every pixel in new images.
Abstract: The increasing amount of remote sensing data has opened the door to new challenging research topics. Nowadays, significant efforts are devoted to pixel- and object-based classification in the case of massive data. This paper addresses the problem of semantic segmentation of big remote sensing images. To do this, we propose a top-down approach based on two main steps. The first step aims to compute features at the object level. These features constitute the input of a multi-layer feed-forward network to generate a structure for classifying remote sensing objects. The goal of the second step is to use this structure to label every pixel in new images. Several experiments are conducted on real datasets, and the results show good classification accuracy for the proposed approach. In addition, the comparison with existing classification techniques proves the effectiveness of the proposed approach, especially for big remote sensing data.

Journal ArticleDOI
TL;DR: The proposed model was compared with models used in previous studies, and it was observed to perform better than the other studied models (Gaussian Process Regression and Artificial Neural Network).
Abstract: The mineral industry needs fast and efficient mineral quality monitoring equipment, and a machine vision system could be a suitable alternative to the traditional quality monitoring system. This study attempts to develop a machine vision-based expert system using a support vector machine regression (SVR) model for the online quality monitoring of iron ores (hereafter known as ore grades). The images of the ore samples were captured while the samples ran on the fabricated conveyor-belt transportation system. A total of 280 image features were extracted from each of the selected captured images in order to evaluate their suitability for object identification. A sequential forward floating selection (SFFS) algorithm was developed using SVR as a criterion function for selecting the optimum set of image features. The optimised feature subset was used as input, and the iron ore grade value was used as the output parameter for model development. The grade of iron ore corresponding to each captured image was analysed in the laboratory using X-Ray Fluorescence (XRF) for grade estimation. The model was trained using 70% of the dataset and tested using 30% of the sample dataset. The model performance was evaluated on the test dataset with five indices: the sum of squared errors (SSE), root mean squared error (RMSE), normalised mean squared error (NMSE), R-square (R2) and bias. The SSE, RMSE, NMSE and bias values of the model were obtained as 537.5367, 5.9863, 0.0063, and 0.8875, respectively. The R2 value of the model was obtained as 0.9402. The results indicate that the model performs satisfactorily for iron ore grade prediction from images collected in a controlled laboratory environment. The performance of the proposed model was compared with models used in previous studies, and it was observed that the proposed model performs better than the other studied models (Gaussian Process Regression and Artificial Neural Network).
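scikit-learn's `SequentialFeatureSelector` gives a close relative of the SFFS step above: plain forward selection with SVR as the criterion. The conditional backward ("floating") step of true SFFS is not included, and the six features below are synthetic, with only the first two actually driving the target:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Six hypothetical image features; only the first two drive "grade".
X = rng.normal(size=(120, 6))
grade = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, 120)

# Forward selection scored by cross-validated SVR performance.
sfs = SequentialFeatureSelector(SVR(kernel="rbf"), n_features_to_select=2,
                                direction="forward").fit(X, grade)
sel = np.flatnonzero(sfs.get_support()).tolist()
print(sel)  # indices of the selected image features
```

A true SFFS implementation (e.g. mlxtend's `SequentialFeatureSelector` with `floating=True`) additionally drops previously chosen features when that improves the criterion.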

Journal ArticleDOI
TL;DR: The findings show that there is a high and significant correlation between soil-spectral reflectance and soil salinity, and that this correlation decreases gradually from the blue band to the shortwave infrared band of ETM+ images.
Abstract: This study discusses measures for improving the precision of optical remote-sensing detection of soil salinity and the possibility of soil salinity detection at depths of 0–10 cm, 10–30 cm and 30–50 cm using optical remote-sensing data, and analyzes the mechanism by which deep-layer soil salinity influences the soil spectrum. The findings show that there is a high and significant correlation between soil-spectral reflectance and soil salinity, and that this correlation decreases gradually from the blue band to the shortwave infrared band of ETM+ images. The partial least squares regression model is used to estimate soil salinity in the 0–10-cm surface layer, confirming that the selected soil-salinity-detecting bands of Band 1 and Band 4, the established difference soil salinity index, the derivative of the normalized differential vegetation index, and the deep-layer soil moisture can improve the precision of remote-sensing detection of surface-layer soil salinity. The precise estimation of the 0–10-cm surface-layer soil salinity with these variables features an R2 = 0.752, an RMSE = 26.84 g/kg, and a p = 0.000. There is a strong mediating effect between deep-layer soil salinity, 0–10-cm surface-layer soil salinity, and soil spectral reflectance in the study area; namely, deep-layer soil salinity influences soil spectral reflectance by influencing surface-layer soil salinity. There is a significant and very strong power-function relation between the 0–10-cm surface-layer soil salinity and deep-layer soil salinity. Based on this relationship, this study estimates deep-layer soil salinity using optical remote-sensing images.

Journal ArticleDOI
TL;DR: An Automatic Identification System (AIS) is proposed which can protect fishermen by notifying them of the country’s border and provides collision avoidance using AIS/ultrasonic sensors.
Abstract: Maritime border collision is one of the vital concerns in coastal states, since the maritime boundaries of any two countries cannot be identified easily during fishing. Maritime domain awareness and border-line control are essential requirements, achieved via the recognition and observation of boats inside their country’s boundary. It is necessary to identify the maritime border and alert fishermen during fishing. In this paper, we propose an Automatic Identification System (AIS) which can protect fishermen by notifying them of the country’s border. If they near the International Maritime Border Line (IMBL), an alert is sent to coast guards via a VHF set. Using the inbuilt GPS, the AIS finds the location and transmits it to the embedded system, which obtains the current position by comparing latitude and longitude values with the existing assessment. The proposed system is validated in a case study on the maritime border between India and Sri Lanka, in the Gulf of Mannar. It is shown that fishermen can be made aware that they are about to near the nautical border by means of visual and audio alerts. Coast guards can then provide supplementary assistance to those fishermen. This system also provides collision avoidance by using AIS/ultrasonic sensors. It performs better than relevant methods such as RF (Charan et al. 2016), ECDIS (Vanparia and Ghodasara, International Journal of Computer Applications & Information Technology, 1:58–64, 2014), Android (Kumar et al. 2016), and GSM and GPS (Sivagnanam et al., International Journal of Innovative Research in Advanced Engineering (IJIRAE), 2:124–132, 2015).
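The proximity-alert logic can be sketched with a haversine distance check of the GPS fix against border vertices. The coordinates below are illustrative, not official IMBL positions, and a real system would measure distance to border segments rather than only vertices:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def border_alert(boat, border_points, warn_km=5.0):
    """Raise an alert when the GPS fix comes within warn_km of any
    border vertex; returns (alert, distance to nearest vertex)."""
    d = min(haversine_km(*boat, *p) for p in border_points)
    return d <= warn_km, round(d, 2)

# Illustrative (not official) IMBL vertices in the Gulf of Mannar region.
imbl = [(9.10, 79.50), (9.00, 79.60), (8.90, 79.70)]
print(border_alert((9.10, 79.46), imbl))  # ~4.4 km from the nearest vertex
```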

Journal ArticleDOI
TL;DR: The built-up land category proved to be the most influential or significant in the development of high land surface temperature levels, while vegetation had an opposite effect as a series of data sets illustrated that vegetated areas had a cooling effect on the surface.
Abstract: The world is currently experiencing unprecedented urban growth, industrialization, and the perceived higher standard of living that is often associated with access to better infrastructure. Surface Heat Island (SHI) is a phenomenon in which urban areas experience higher surface temperatures than the surrounding rural areas. The presence of the SHI in urban areas in most cases has a negative impact not only on city dwellers, but also on the environment and the economy. This study aimed at evaluating SHI in King Williams Town (KWT) by studying the relationship between land surface temperatures, land cover and land cover indices. The derived indices are the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Built-up Index (NDBI); these indices were selected because they are representative of the land surface features. The study was conducted for the KWT study area between the years 1995 and 2018, with land surface temperature derived from Landsat ETM+ high thermal band data. The findings provided insight into the correlation between satellite-derived land surface temperature and the land modification which occurred during the urbanization of King Williams Town over the 23-year period between 1995 and 2018. The built-up land category proved to be the most influential in the development of high land surface temperature levels, while vegetation had the opposite effect, as a series of datasets illustrated that vegetated areas had a cooling effect on the surface. Water bodies in the study area had an insignificant effect on surface temperature levels, while the grasslands, although not as cooling as the vegetation, did provide a cooling environment in the study area.
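NDVI and NDBI are simple normalized band ratios. The sketch below computes both for toy reflectance values; the band assignments follow the usual Landsat conventions (NIR/Red for NDVI, SWIR/NIR for NDBI), an assumption here rather than a detail from the paper:

```python
import numpy as np

def ndvi(nir, red):
    """(NIR - Red) / (NIR + Red); high for healthy vegetation."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """(SWIR - NIR) / (SWIR + NIR); high for built-up surfaces."""
    return (swir - nir) / (swir + nir)

# Toy reflectances for one vegetated pixel and one built-up pixel.
red  = np.array([0.05, 0.20])
nir  = np.array([0.50, 0.25])
swir = np.array([0.20, 0.35])
print(np.round(ndvi(nir, red), 2))   # [ 0.82  0.11]
print(np.round(ndbi(swir, nir), 2))  # [-0.43  0.17]
```

Regressing land surface temperature against these two indices per pixel is the standard way to quantify the warming role of built-up land and the cooling role of vegetation.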

Journal ArticleDOI
TL;DR: Of grid partitioning and subtractive clustering, subtractive clustering is found to be matchless in the prediction of earthquake magnitude for the data selected in this research.
Abstract: The present work emphasizes forecasting the occurrence of earthquakes using a smart and intelligent tool called the adaptive neuro-fuzzy inference system (ANFIS). ANFIS can be considered the fusion of an artificial neural network and a fuzzy inference system, yielding a smarter predicting tool. For this purpose, information regarding forty-five real earthquakes was collected from different regions: earthquakes of magnitude not less than 5, recorded at different stations during the period between 1933 and 1985, were assembled. Thereupon, two algorithms are used to develop a model with ANFIS, which tries to produce a better prediction of earthquake magnitude. Higher-magnitude earthquakes devastate life and the economy; hence, for the safety of the vicinity, the prediction of earthquake magnitude can be a life-saving approach, though it is quite challenging. Adopting this approach is a very fast and economical way of prediction. Of grid partitioning and subtractive clustering, subtractive clustering is found to be matchless in the prediction of earthquake magnitude for the data selected in this research.

Journal ArticleDOI
TL;DR: This paper uses a combination of GIS software to carry out the three-dimensional geological modeling of above-ground and underground integration in the study area, and puts forward application suggestions for the underground space planning of the area, providing support for decision-making support for the government and relevant departments.
Abstract: Since the development of underground space has the characteristics of being “difficult to recover, one-off, and difficult to budget”, the planning of underground space is particularly important. Traditional two-dimensional planning technology struggles to describe three-dimensional spatial information such as the complex underground geological environment, whereas three-dimensional GIS technology handles this problem better and can greatly enrich urban planning, construction and management control methods. This paper uses a combination of GIS software to carry out three-dimensional geological modeling that integrates the above-ground and underground portions of the study area, and puts forward application suggestions for underground space planning in the area, providing decision-making support for the government and relevant departments and a demonstration for other cities carrying out urban geological survey work.

Journal ArticleDOI
TL;DR: A connection cloud model coupled with extenics, taking into account of randomness, fuzziness and incompatibility of evaluation indicators, was presented here to analyze the water quality and shows that this model can not only quantitatively describe certainty and uncertainty relationship between evaluation indicators and classification standard in a unified way, but also make the evaluation result more reasonable.
Abstract: The evaluation of water quality is challenging because it involves various uncertainty factors. A connection cloud model coupled with extenics, taking into account the randomness, fuzziness and incompatibility of evaluation indicators, is presented here to analyze water quality. First, according to the classification standard, the left and right half-interval lengths of each evaluation indicator were specified to assign the digital features of the connection cloud at the various levels. Then, a matter element was built with the connection cloud model: connection clouds over finite intervals were simulated to analyze the certainty degree of a measured indicator with respect to each evaluation standard, the certainty degree of each indicator was calculated, and the extension matrix was constructed from the connection clouds. Next, combined with the weight vector of the indicators, the integrated certainty degree was calculated to determine the water quality class. Finally, a case study and comparisons with other methods were performed to confirm the validity and reliability of the proposed model. The results show that the model can not only describe, quantitatively and in a unified way, the certainty and uncertainty relationships between evaluation indicators and the classification standard, but also make the evaluation result more reasonable.
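The certainty degree of a measured indicator under a cloud with digital features (Ex, En, He) can be approximated by averaging over generated cloud drops; a minimal sketch of the standard normal-cloud calculation (the paper's connection cloud over finite intervals refines this, and the grade parameters below are hypothetical):

```python
import numpy as np

def certainty_degree(x, Ex, En, He, n_drops=2000, seed=0):
    """Average certainty (membership) degree of measurement x under a
    normal cloud with expectation Ex, entropy En and hyper-entropy He."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n_drops)       # randomized entropy per drop
    En_prime = np.where(np.abs(En_prime) < 1e-12, 1e-12, En_prime)
    return float(np.mean(np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))))

# Hypothetical grade for one water-quality indicator: interval midpoint
# Ex = 3.0, entropy En = 0.5, hyper-entropy He = 0.05.
mu_at_center = certainty_degree(3.0, Ex=3.0, En=0.5, He=0.05)
mu_far = certainty_degree(5.0, Ex=3.0, En=0.5, He=0.05)
```

Weighting such per-indicator certainty degrees by an indicator weight vector and picking the grade with the largest integrated degree mirrors the paper's classification step.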

Journal ArticleDOI
TL;DR: An online geoprocessing and analysis framework, which allows an inexperienced user to perform unsupervised classification of remote sensing data, and a prototype architecture based on participatory GIS is developed for field data collection, accuracy assessment and online dissemination.
Abstract: Online geoprocessing and analysis using state-of-the-art technology offer automated analytical tools to a large group of users. This paper describes an online geoprocessing and analysis framework that allows an inexperienced user to perform unsupervised classification of remote sensing data. The parallel-computing-based geoprocessing and analysis framework adopted in this work has been implemented using Free and Open Source Software for Geospatial (FOSS4G). The Web Processing Service (WPS) based geoprocessing framework facilitates deployment of the unsupervised classification algorithm on the web in a standardized way, and it is dynamic in nature, so other geoprocessing algorithms can be deployed as well. The developed system shows how to process remote sensing data (Sentinel-2, Landsat-8, etc.) for classification and share interoperable results in a distributed environment. To validate the classification results, a prototype architecture based on participatory GIS was developed for field data collection, accuracy assessment and online dissemination. The accuracy assessment (i.e., overall accuracy and Kappa coefficient) is performed to validate the derived classification results using the collected field data.
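Unsupervised classification of a multi-band scene reduces, in its simplest form, to clustering the pixel spectra; a minimal k-means sketch (with a deterministic brightness-based initialization purely for illustration — the paper's WPS deployment and FOSS4G stack are not reproduced here):

```python
import numpy as np

def kmeans_classify(image, k=3, n_iter=20):
    """Minimal k-means unsupervised classification of an image of shape
    (rows, cols, bands); returns a per-pixel class-label map. Initial
    centers are spread along sorted pixel brightness for determinism."""
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(float)
    order = np.argsort(X.sum(axis=1))
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)           # assign pixels to nearest center
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)  # update centers
    return labels.reshape(rows, cols)

# Synthetic 4-band "scene" whose two halves are spectrally distinct.
img = np.zeros((10, 10, 4))
img[:, :5, :] = 0.2
img[:, 5:, :] = 0.8
classes = kmeans_classify(img, k=2)
```

Wrapping such a function behind a WPS endpoint is what lets an inexperienced user trigger the classification without touching the algorithm itself.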

Journal ArticleDOI
TL;DR: An integrated approach based on semi-automatic method has been used over the Chhota-Shigri glacier and results indicate that the areal extent of clean ice has significantly decreased whereas the IMD and debris cover area has increased from 1989 to 2015.
Abstract: Glacier studies are important for monitoring the hydrological cycle as well as the impact of climate change. Recent studies indicate that the rate of glacier area shrinkage is accelerating, specifically in the Himalayan glaciers. In this study, an integrated semi-automatic approach has been applied to the Chhota-Shigri glacier. A combination of multispectral imagery (Landsat data from 1989 to 2015) and elevation data is used to delineate glacier features. This comprehensive approach integrates remote sensing indices, thermal information and morphometric parameters generated from elevation data to delineate the glacier into three categories: clean ice, ice mixed with debris (IMD) and debris. Results indicate that the areal extent of clean ice significantly decreased, whereas the IMD and debris-covered areas increased from 1989 to 2015. The accuracy assessment was performed using a pan-sharpened image and field data; the adopted approach achieved an overall accuracy of about 90.0% with a kappa coefficient of 0.895. This information on glacier areal change can be used for flow velocity measurements, assessment of glacier-related hazards, and studies of the impact of climate change on glaciers.
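Delineating clean ice from multispectral imagery typically starts with a snow/ice band ratio such as NDSI; a toy sketch in the spirit of the paper's three-class scheme (the thresholds, the simple thermal test and the tiny arrays are illustrative stand-ins, not the paper's calibrated rules, which also use morphometric parameters):

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: (Green - SWIR) / (Green + SWIR)."""
    green = green.astype(float)
    swir = swir.astype(float)
    return (green - swir) / np.maximum(green + swir, 1e-9)

def classify_glacier(green, swir, temp, ice_thr=0.4, warm_thr=280.0):
    """Toy three-class map: clean ice (high NDSI), ice mixed with
    debris (low NDSI but cool surface), debris (otherwise).
    temp is a brightness-temperature raster in kelvin."""
    n = ndsi(green, swir)
    out = np.full(n.shape, "debris", dtype=object)
    out[(n <= ice_thr) & (temp < warm_thr)] = "IMD"
    out[n > ice_thr] = "clean_ice"
    return out

# One toy scanline: bright ice, cool debris-covered ice, warm debris.
green = np.array([[0.8, 0.5, 0.3]])
swir = np.array([[0.1, 0.3, 0.3]])
temp = np.array([[265.0, 270.0, 285.0]])
labels = classify_glacier(green, swir, temp)
```

The thermal band is what separates IMD from plain debris here: debris resting on ice stays cooler than debris on bare ground.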

Journal ArticleDOI
TL;DR: Experimental results showed that the spatial index proposed has obvious advantages, which could solve the problem of storage redundancy and query results, and effectively improve spatial queries particularly when the data volume is large.
Abstract: As geospatial data become increasingly massive and complex, query retrieval across large data volumes and rich data sources is among the urgent issues in need of resolution. Spatial indices are widely used to organize data and optimize queries. However, tree-based indices are increasingly difficult to adapt to high-efficiency queries, and while the combination of a grid index and a space-filling curve can reduce dimensionality to improve query efficiency, it can also lead to data redundancy because one object can cover several grids. To solve these problems, this paper proposes a method to manage and query data using a grid-code-array spatial index based on the GeoSOT global subdivision model. For the first time, grid codes were organized in a code-array format and an inverted index was constructed on the code-array column. Having added the grid-code-array data structure, we verified its feasibility and efficiency against the R-tree index in Oracle Spatial and the grid index in the ArcSDE geodatabase for Oracle, the most widely used alternatives. Experimental results showed that the proposed spatial index has clear advantages: it solves the problems of storage redundancy and redundant query results, and it effectively improves spatial queries, particularly when the data volume is large.
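GeoSOT codes are quadtree-based, but the underlying dimensional-reduction idea — turn a 2-D grid cell into a 1-D code and keep an inverted index from code to objects — can be sketched with a Morton/Z-order code (a stand-in for illustration, not the paper's exact encoding):

```python
def interleave(x, y, bits=16):
    """Morton/Z-order code: interleave the bits of column x and row y,
    producing a 1-D grid code that preserves spatial locality."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def build_inverted_index(objects):
    """objects: {obj_id: [(x, y), ...]} grid cells covered by each object.
    Returns {grid_code: set of obj_ids} -- one posting list per code, so
    an object covering several cells is found from any of them without
    storing duplicate geometry."""
    index = {}
    for oid, cells in objects.items():
        for x, y in cells:
            index.setdefault(interleave(x, y), set()).add(oid)
    return index

# A road spanning two cells and a lake sharing one of them.
objects = {"road_7": [(2, 3), (3, 3)], "lake_1": [(3, 3)]}
idx = build_inverted_index(objects)
hits = idx[interleave(3, 3)]
```

A window query then reduces to enumerating the codes of the covered cells and unioning their posting lists, which is where the claimed speedup over tree traversal comes from.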

Journal ArticleDOI
TL;DR: This study reveals a positive impact of the incorporation of multiple ontologies and feature rules, which is meaningful for improving accuracy and comprehensiveness of information retrieval.
Abstract: Spatio-temporal geological big data contain large amounts of spatial and nonspatial data. Effectively managing and retrieving these existing data is important for geological research, and understanding the question represents the first step. This paper aims to better understand the question in order to improve retrieval efficiency. In geology, the organization of massive unstructured geological data and the discovery of implicit content based on knowledge and relationships have been realized. However, previous findings are primarily based on spatial and nonspatial dimensions, and the key words searched are often just segmented words. In geological research, the dimension of time is as important as the spatial and other nonspatial dimensions. In addition, an individual user's goal may be more than a superficial representation of the question. In this paper, we first construct a geological event ontology, organize spatio-temporal big data along this dimension, and expand the concept of geological time. Next, based on geological knowledge, we propose spatio-temporal rules, spatial characteristics and domain constraint rules to assess the consistency of the ontology, maximize the relationships among the information, and improve the efficiency of information retrieval. Then, the ontology question is extended, and the rules linking this question to other ontologies are expounded to deepen the understanding of the question. Finally, we evaluate our contribution on a real geology dataset within a knowledge-driven geologic survey information smart-service platform (GSISSP), which integrates a geological thematic ontology, a geological temporal ontology and a toponymy ontology. This study reveals a positive impact of incorporating multiple ontologies and feature rules, which is meaningful for improving the accuracy and comprehensiveness of information retrieval.
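One concrete use of a geological temporal ontology is expanding a query term to its sub-intervals; a toy sketch over a hypothetical fragment of the geological time scale (interval bounds in Ma are approximate, and the platform's actual ontology machinery is far richer):

```python
# Hypothetical fragment of a geological-time hierarchy, each entry an
# (older, younger) interval in millions of years ago (Ma).
GEOLOGIC_TIME = {
    "Mesozoic": (252.0, 66.0),
    "Triassic": (252.0, 201.0),
    "Jurassic": (201.0, 145.0),
    "Cretaceous": (145.0, 66.0),
}

def within(child, parent):
    """True if interval `child` lies inside interval `parent`
    (intervals run from older to younger, in Ma)."""
    c0, c1 = GEOLOGIC_TIME[child]
    p0, p1 = GEOLOGIC_TIME[parent]
    return p0 >= c0 and c1 >= p1

def expand_query(term):
    """Expand a temporal query term to all of its sub-intervals, so a
    search for 'Mesozoic' also retrieves records tagged 'Jurassic'."""
    return [t for t in GEOLOGIC_TIME if within(t, term)]
```

This is the kind of rule that lets a keyword query go beyond segmented words: the temporal ontology supplies implicit matches the raw text never mentions.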

Journal ArticleDOI
TL;DR: The design and implementation of a spatial database and a web-GIS application that allow the management, visualization and analysis of data that are directly or indirectly related to climate and its future projections in Greece are described.
Abstract: Climate information is important for scientific research and decision making. Nowadays web-based cartography technologies provide the means for wide dissemination of such information. This paper describes the design and implementation of a spatial database and a web-GIS application that allow the management, visualization and analysis of data that are directly or indirectly related to climate and its future projections in Greece. Emphasis is given to the design decisions made in order to fulfill the requirements of flexible data querying and reporting, and of maintenance cost minimization. Cartographic layers for climatic and other parameters are dynamically produced through database views and geographic web services. The web-GIS provides user-friendly cartographic operations for visualizing and manipulating thematic maps, as well as reporting services for selecting, displaying and downloading climatic values for selected areas.

Journal ArticleDOI
TL;DR: Experimental results show that foggy images containing bright areas such as the sky are processed well: color distortion in the bright areas is significantly reduced, and the restored images are clearer and more natural.
Abstract: When the dark channel prior dehazing algorithm restores a foggy image using the same transmittance for all color channels, the recovered result suffers from a serious color shift. An image defogging algorithm based on compensation for the different attenuation of each color wavelength is proposed. First, the median dark channel map is obtained by median filtering; then the optical attenuation coefficients of the different wavelengths are calculated to obtain separate transmittances for the Red-Green-Blue (RGB) channels. Finally, the revised parameters are substituted into the atmospheric scattering model to restore a fog-free image. Experimental results show that foggy images containing bright areas such as the sky are processed well: color distortion in the bright areas is significantly reduced, and the image is clearer and more natural.
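The classic dark-channel-prior recovery, extended with a per-channel transmittance, can be sketched as follows (the per-wavelength exponents are illustrative stand-ins for the paper's optical attenuation coefficients, and a plain minimum filter replaces the paper's median-filter variant):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, A, omega=0.95, t0=0.1, beta=(0.9, 0.95, 1.0), patch=3):
    """Dark-channel-prior dehazing with per-channel transmission
    t_c = t ** beta_c, approximating wavelength-dependent attenuation.
    img: float RGB image in [0, 1]; A: per-channel atmospheric light.
    Red attenuates least, so it gets the smallest (illustrative) exponent."""
    norm = img / A                                   # I / A, per channel
    dark = minimum_filter(norm.min(axis=2), size=patch)
    t = 1.0 - omega * dark                           # base transmission map
    out = np.empty_like(img)
    for c in range(3):
        tc = np.maximum(t ** beta[c], t0)            # floor avoids blow-up
        out[..., c] = (img[..., c] - A[c]) / tc + A[c]
    return np.clip(out, 0.0, 1.0)

# A flat grey "hazy" frame just to exercise the pipeline.
hazy = np.full((8, 8, 3), 0.7)
A = np.array([0.8, 0.8, 0.8])
restored = dehaze(hazy, A)
```

Because each channel gets its own exponent, bright near-sky regions (where t is small and the floor t0 engages) are corrected less aggressively per channel, which is the mechanism behind the reduced color shift.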

Journal ArticleDOI
TL;DR: This work employs the efficient k-d tree structure to store spatiotemporal data and adopts several machine learning methods to learn optimal parameters and significantly outperforms the previous work in terms of both speed and accuracy.
Abstract: To better assess the relationships between environmental exposures and health outcomes, an appropriate spatiotemporal interpolation is critical. Traditional spatiotemporal interpolation methods either consider the spatial and temporal dimensions separately or incorporate both dimensions simultaneously by simply treating time as another dimension in space. Such interpolation results suffer from relatively low accuracy as the true space-time domain is skewed inappropriately and the distance calculation in such domain is not accurate. We employ the efficient k-d tree structure to store spatiotemporal data and adopt several machine learning methods to learn optimal parameters. To overcome the computational difficulty with large data sets, we implement our method on an efficient cluster computing framework – Apache Spark. Real world PM2.5 data sets are utilized to test our implementation and the experimental results demonstrate the computational power of our method, which significantly outperforms the previous work in terms of both speed and accuracy.
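The core idea — scale the time axis by a learned factor, then treat it as an extra k-d tree dimension — can be sketched with an inverse-distance-weighted estimator (the scale factor c and neighbour count k would normally be learned, as the paper does with machine learning; here they are fixed for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def st_interpolate(coords, times, values, query_xy, query_t,
                   c=1.0, k=4, p=2.0):
    """IDW estimate in a scaled space-time domain: time is multiplied
    by factor c before being indexed as a third k-d tree dimension."""
    pts = np.column_stack([coords, c * np.asarray(times)[:, None]])
    tree = cKDTree(pts)
    q = np.append(query_xy, c * query_t)
    dist, idx = tree.query(q, k=min(k, len(pts)))
    dist = np.atleast_1d(dist)
    idx = np.atleast_1d(idx)
    if dist[0] < 1e-9:                      # exact space-time hit
        return float(np.asarray(values)[idx[0]])
    w = 1.0 / dist ** p                     # inverse-distance weights
    return float(np.sum(w * np.asarray(values)[idx]) / np.sum(w))

# Toy PM2.5 readings at (x, y) on two days; values grow with x.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
times = np.array([0.0, 0.0, 1.0, 1.0])
vals = np.array([10.0, 20.0, 10.0, 20.0])
est = st_interpolate(coords, times, vals, np.array([0.5, 0.5]), 0.5, c=0.5)
```

Tuning c (e.g. by cross-validation over held-out stations) is what corrects the skew introduced by naively treating time as just another spatial axis.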

Journal ArticleDOI
TL;DR: An adaptive convolutional neural network model using N-grams for spatial object recognition on satellite images; results show that associating relevance levels with human perception further helps to identify spatial objects.
Abstract: Remote sensing applications play a vital role in exploiting high-resolution commercial satellite imagery. In spatial information systems, object detection is a basic requirement for computing mathematical models, and geographic object-based analysis is used to gather data from remote sensing images. In this paper, we propose an adaptive convolutional neural network model using N-grams for spatial object recognition on satellite images. Our methodology requires a model that learns the structures in the images and gathers data using prior knowledge, with N-grams providing the functionality of the learning models. Spatial object recognition is performed using the learned model to segment the images in a manner consistent with human subjects, which can improve perception, cognition and decision-making. The results of the two stages of image processing are collected, and a relationship to psychological and mathematical bases is established. The results show that associating relevance levels with human perception further helps to identify spatial objects. The experiments were performed in MATLAB, and the results show that our methodology is well suited for precise object detection and recognition on satellite images at different levels.

Journal ArticleDOI
Liyu Tang1, Xianmin Peng1, Chongcheng Chen1, Hongyu Huang1, Ding Lin1 
TL;DR: The objectives of this study were to propose a strategy for integrating a three-dimensional (3D) geographic environment with growth models and to develop a 3D stand visualization software prototype, thus facilitating the participation of various stakeholders in management and education.
Abstract: Virtual geographic environments related to dynamic processes contribute to a human understanding of the real world. The results of growth simulations provide good estimations of the future status of forests, but they are typically expressed in plain text summaries, tables or static displays, making it difficult to analyse, understand and further apply the forecast data. The objectives of this study were to propose a strategy for integrating a three-dimensional (3D) geographic environment with growth models and to develop a 3D stand visualization software prototype. Forest growth increments were predicted using the growth models, whereas stand dynamics were simulated using detailed tree models to recognize the changes in the branch whorls and height of individual trees. The spatial structure of the stand was represented by linking each tree diameter class to a spatial distribution according to the features of a Voronoi diagram. The stand visualization system VisForest, which allows users to predict increments in the diameter and height of trees, was extended to estimate the number of trees in each diameter class and to visualize many aspects of a forest stand, e.g., individual tree structure, stem diameter at breast height (DBH, i.e., 1.3 m) distribution and height. The software system provides a specialized, intuitive tool for the visualization of a stand, thus facilitating the participation of various stakeholders in management and education.
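Linking trees to a spatial distribution via a Voronoi diagram can be sketched directly with SciPy; the stem positions and DBH classes below are hypothetical, and only the neighbourhood extraction (not the paper's full VisForest pipeline) is shown:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical stem positions (metres); each tree carries a DBH class.
positions = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
dbh_class = ["10-15", "15-20", "10-15", "20-25", "15-20"]

vor = Voronoi(positions)

# Trees whose Voronoi cells share a ridge are spatial neighbours --
# the basis for tying each diameter class to a spatial distribution.
neighbours = {i: set() for i in range(len(positions))}
for a, b in vor.ridge_points:
    neighbours[a].add(b)
    neighbours[b].add(a)

# DBH classes surrounding the centre tree (index 4).
centre_neighbour_classes = sorted(dbh_class[j] for j in neighbours[4])
```

Each Voronoi cell approximates a tree's growing space, so cell geometry plus the DBH class attached to each point gives the per-class spatial structure the visualization renders.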

Journal ArticleDOI
TL;DR: The proposed coupled detection-delineation was benchmarked against different feature descriptors and state-of-the-art supervised and unsupervised machine learning techniques and showed that it could detect and delineate the plantations with an accuracy of 92.29% and precision, recall and Kappa of 91.16%, 84.97%, and 0.81, respectively.
Abstract: Characterization of oil palm plantations is a crucial step toward many geographically based management strategies, ranging from determining regional planting and appropriate species to irrigation and logistics planning. Accurate and up-to-date plantation identification enables well-informed and effective measures for such schemes. This paper proposes a computerized method for detecting oil-palm plantations from remotely sensed imagery. Unlike existing approaches, where imaging features are retrieved from spectral data and then fed to a machine learning box for region-of-interest extraction, this paper employs two-stage detection. First, a deep learning network determines the presence of an oil-palm plantation in a generic Google satellite image. With irrelevant samples disregarded and the problem space thus contained, the images with detected oil palm have their plantations delineated at higher accuracy by a support vector machine based on a Gabor texture descriptor. The proposed coupled detection-delineation was benchmarked against different feature descriptors and state-of-the-art supervised and unsupervised machine learning techniques. Validation was performed by comparing the extraction results with ground surveys by an authority. The experiments showed that the method could detect and delineate the plantations with an accuracy of 92.29% and precision, recall and Kappa of 91.16%, 84.97% and 0.81, respectively.
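The Gabor texture descriptor feeding the SVM stage can be sketched as a small filter bank whose response statistics form the feature vector (the kernel parameters and the striped toy patch are illustrative, not the paper's configuration):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, lam=4.0, sigma=2.0, gamma=0.5, size=9):
    """Real part of a Gabor kernel: oriented cosine carrier under an
    elliptical Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)

def gabor_features(patch, n_theta=4):
    """Mean/variance of filter responses at n_theta orientations --
    the kind of texture vector an SVM delineation stage consumes."""
    feats = []
    for i in range(n_theta):
        k = gabor_kernel(np.pi * i / n_theta)
        resp = convolve2d(patch, k, mode="same", boundary="symm")
        feats += [resp.mean(), resp.var()]
    return np.array(feats)

# Vertically striped toy patch, loosely mimicking plantation row texture.
patch = np.tile([0.0, 0.0, 1.0, 1.0], (16, 4))
fv = gabor_features(patch)
```

Because plantation rows have a dominant orientation and spacing, the variance of the matched orientation dwarfs the others, which is what makes the descriptor discriminative for an SVM.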