Journal ArticleDOI

A geographically weighted artificial neural network

TL;DR: GWANN combines geographical weighting with artificial neural networks, which can learn complex nonlinear relationships in a data-driven manner without a priori assumptions about their form; the results show that GWANN's performance can also be superior in a practical setting.
Abstract: While recent developments have extended geographically weighted regression (GWR) in many directions, it is usually assumed that the relationships between the dependent and the independent variables...
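
As a rough illustration of the idea behind GWANN (not the authors' implementation), one can train a small network per calibration location on a loss weighted by a spatial kernel, analogous to the locally weighted least squares used in GWR. The kernel, bandwidth, network size, and function names below are assumptions for the sketch.

```python
# Illustrative sketch only: a locally weighted MLP in the spirit of GWR's local models.
import numpy as np
import torch
import torch.nn as nn

def gaussian_kernel(dists, bandwidth):
    # Spatial weights that decay with distance from the calibration location.
    return np.exp(-0.5 * (dists / bandwidth) ** 2)

def fit_local_mlp(X, y, coords, calib_pt, bandwidth, hidden=16, epochs=500, lr=1e-2):
    dists = np.linalg.norm(coords - calib_pt, axis=1)
    w = torch.tensor(gaussian_kernel(dists, bandwidth), dtype=torch.float32)
    Xt = torch.tensor(X, dtype=torch.float32)
    yt = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

    model = nn.Sequential(nn.Linear(X.shape[1], hidden), nn.Tanh(), nn.Linear(hidden, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True)
    for _ in range(epochs):
        opt.zero_grad()
        resid = (model(Xt) - yt).squeeze()
        loss = (w * resid ** 2).mean()   # geographically weighted squared error
        loss.backward()
        opt.step()
    return model
```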
Citations
Journal ArticleDOI
TL;DR: A geographically and temporally weighted neural network (GTWNN) model is proposed by integrating an artificial neural network into geographically and temporally weighted regression (GTWR), using publicly available data sources including satellite imagery and climate data, and provides evidence for the existence of spatial and temporal non-stationarity in winter wheat yield prediction.

26 citations

Journal ArticleDOI
TL;DR: The authors introduce the concept of weak replicability, discuss possible approaches to its measurement, and consider how the principle of spatial heterogeneity might be addressed in the context of artificial intelligence.
Abstract: Replicability takes on special meaning when researching phenomena that are embedded in space and time, including phenomena distributed on the surface and near surface of the Earth. Two principles, spatial dependence and spatial heterogeneity, are generally characteristic of such phenomena. Various practices have evolved in dealing with spatial heterogeneity, including the use of place-based models. We review the rapidly emerging applications of artificial intelligence to phenomena distributed in space and time and speculate on how the principle of spatial heterogeneity might be addressed. We introduce a concept of weak replicability and discuss possible approaches to its measurement.

26 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a route map to decide whether to use a GWR model or not and, if so, which of three core variants to apply: a standard GWR, a mixed GWR, or a multiscale GWR (MS-GWR).
Abstract: Geographically Weighted Regression (GWR) is increasingly used in spatial analyses of social and environmental data. It allows spatial heterogeneities in processes and relationships to be investigated through a series of local regression models rather than a single global one. Standard GWR assumes that relationships between the response and predictor variables operate at the same spatial scale, which is frequently not the case. To address this, several GWR variants have been proposed. This paper describes a route map to decide whether to use a GWR model or not, and if so which of three core variants to apply: a standard GWR, a mixed GWR or a multiscale GWR (MS-GWR). The route map comprises 3 primary steps that should always be undertaken: (1) a basic linear regression, (2) a MS-GWR, and (3) investigations of the results of these in order to decide whether to use a GWR approach, and if so for determining the appropriate GWR variant. The paper also highlights the importance of investigating a number of secondary issues at global and local scales including collinearity, the influence of outliers, and dependent error terms. Code and data for the case study used to illustrate the route map are provided.
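
The first two steps of the route map can be sketched with the Python mgwr package; the synthetic data, variable names, and the AICc comparison below are illustrative assumptions rather than the paper's case study.

```python
# Sketch of the route map, assuming the mgwr package and toy data.
import numpy as np
import statsmodels.api as sm
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
coords = list(zip(rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)))
X = rng.normal(size=(200, 2))
y = 1.0 + 0.5 * X[:, :1] - 0.3 * X[:, 1:] + rng.normal(scale=0.1, size=(200, 1))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # MGWR expects standardized variables
y = (y - y.mean()) / y.std()

# Step 1: a basic (global) linear regression
ols = sm.OLS(y, sm.add_constant(X)).fit()

# Step 2: a multiscale GWR with one bandwidth per covariate
selector = Sel_BW(coords, y, X, multi=True)
bandwidths = selector.search()
msgwr = MGWR(coords, y, X, selector).fit()

# Step 3: compare fits and covariate-specific bandwidths to decide between
# a global model, standard GWR, mixed GWR, or MS-GWR.
print(ols.aic, msgwr.aicc, bandwidths)
```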

23 citations

Journal ArticleDOI
TL;DR: iST-RF can improve predictive accuracy compared to the aspatial RF approach while enhancing interpretations of the trained model’s spatio-temporal relevance for its ensemble prediction and can help balance prediction and interpretation with fidelity in a spatial data science life cycle.
Abstract: Machine learning (ML) interpretability has become increasingly crucial for identifying accurate and relevant structural relationships between spatial events and factors that explain them. Methodolo...

8 citations


Cites background or methods from "A geographically weighted artificia..."

  • ...This procedure also differs from the ANN-based approaches that incorporate geographical weighting of connection weights between neurons (Hagenauer and Helbich 2021)....

    [...]

  • ...Recently, spatial and spatio-temporal non-stationarity was explicitly addressed in a few deep learning-based approaches, such as geographically weighted artificial neural networks (ANN) (Du et al. 2020, Hagenauer and Helbich 2021) and geographically and temporally neural networks (Wu et al. 2021)....

    [...]

Journal ArticleDOI
TL;DR: Zhang et al. constructed 50 sets of geographically weighted artificial neural network models for fractional vegetation coverage (FVC) and its driving factors in the Shengli Coalfield.
Abstract: Mining has caused considerable damage to vegetation coverage, especially in grasslands. It is of great significance to investigate the specific contributions of various factors to vegetation cover change. In this study, fractional vegetation coverage (FVC) is used as a proxy indicator for vegetation coverage. We constructed 50 sets of geographically weighted artificial neural network models for FVC and its driving factors in the Shengli Coalfield. Based on the idea of differentiation, we proposed the geographically weighted differential factors-artificial neural network (GWDF-ANN) to quantify the contributions of different driving factors on FVC changes in mining areas. The highlights of the study are as follows: (1) For the 50 models, the average RMSE was 0.052. The lowest RMSE was 0.007, and the highest was 0.112. For the MRE, the average value was 0.007, the lowest was 0.001, and the highest was 0.023. The GWDF-ANN model is suitable for quantifying FVC changes in mining areas. (2) Precipitation and temperature were the main driving factors for FVC change. The contributions were 32.45% for precipitation, 24.80% for temperature, 22.44% for mining, 14.44% for urban expansion, and 5.87% for topography. (3) Over time, the contributions of precipitation and temperature exhibited downward trends, while mining and urban expansion showed positive trajectories. For topography, the contribution remained generally unchanged. (4) As the distance from the mining area increased, the contribution of mining gradually decreased. At 200 m away, the contribution of mining was 26.69%; at 2000 m away, the value dropped to 17.8%. (5) Mining has a cumulative effect on vegetation coverage both interannually and spatially. This study provides important support for understanding the mechanism of vegetation coverage change in mining areas.

7 citations

References
Book
16 Jul 1998
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Abstract: From the Publisher: This book represents the most comprehensive treatment available of neural networks from an engineering perspective. Thorough, well-organized, and completely up to date, it examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks. Written in a concise and fluid manner, by a foremost engineering textbook author, to make the material more accessible, this book is ideal for professional engineers and graduate students entering this exciting field. Computer experiments, problems, worked examples, a bibliography, photographs, and illustrations reinforce key concepts.

29,130 citations


"A geographically weighted artificia..." refers background in this paper

  • ...An artificial neural network (ANN) consists of a set of neurons and unidirectional connections between them, which enables the imitation of the brain’s ability to detect patterns and learn relationships within data (Haykin 2008)....

    [...]

Journal ArticleDOI
01 Jan 1988-Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
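
A minimal sketch of the procedure described here, for a single hidden layer with a sigmoid nonlinearity and a squared-error measure; the shapes, learning rate, and random data are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                     # inputs
y = rng.normal(size=(64, 1))                     # desired output vectors
W1 = 0.1 * rng.normal(size=(3, 8))               # input -> hidden connection weights
W2 = 0.1 * rng.normal(size=(8, 1))               # hidden -> output connection weights

lr = 0.1
for _ in range(1000):
    h = sigmoid(X @ W1)                          # forward pass: hidden activations
    y_hat = h @ W2                               # actual output vector
    err = y_hat - y                              # difference to be minimized
    # backward pass: propagate the error and repeatedly adjust the weights
    grad_W2 = h.T @ err / len(X)
    grad_W1 = X.T @ ((err @ W2.T) * h * (1 - h)) / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```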

23,814 citations

Journal ArticleDOI
TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
Abstract: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
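
Stated compactly (assumed notation), the result says that for any continuous sigmoidal σ, sums of the following form are dense in the continuous functions on the unit hypercube:

```latex
G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + b_j\right), \qquad x \in [0,1]^n,
\qquad
\sup_{x \in [0,1]^n} \lvert f(x) - G(x) \rvert < \varepsilon
\quad \text{for some } G, \text{ for every } f \in C([0,1]^n) \text{ and } \varepsilon > 0.
```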

12,286 citations


"A geographically weighted artificia..." refers background in this paper

  • ...Given enough hidden neurons, ANNs with a single hidden layer are able to arbitrarily well approximate any continuous function on closed and bounded subsets of n-dimensional Euclidean space (Cybenko 1989)....

    [...]

Journal ArticleDOI
TL;DR: In this article, a theory of hedonic prices is formulated as a problem in the economics of spatial equilibrium in which the entire set of implicit prices guides both consumer and producer locational decisions in characteristics space.
Abstract: A class of differentiated products is completely described by a vector of objectively measured characteristics. Observed product prices and the specific amounts of characteristics associated with each good define a set of implicit or "hedonic" prices. A theory of hedonic prices is formulated as a problem in the economics of spatial equilibrium in which the entire set of implicit prices guides both consumer and producer locational decisions in characteristics space. Buyer and seller choices, as well as the meaning and nature of market equilibrium, are analyzed. Empirical implications for hedonic price regressions and index number construction are pointed out.
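
In symbols (assumed notation), the hedonic framework treats the observed price as a function of the characteristics vector, with each implicit price given by a partial derivative:

```latex
P = P(z_1, z_2, \ldots, z_n), \qquad p_i = \frac{\partial P}{\partial z_i}, \quad i = 1, \ldots, n.
```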

10,206 citations


"A geographically weighted artificia..." refers background in this paper

  • ...Hedonic theory assumes that a property represents a heterogeneous good that can be decomposed into its utility-bearing characteristics, and that the resulting benefit is reflected in the property price (Rosen 1974)....

    [...]

Proceedings Article
16 Jun 2013
TL;DR: It is shown that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs to levels of performance that were previously achievable only with Hessian-Free optimization.
Abstract: Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods.
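
The two momentum variants compared in the paper can be sketched as follows; the function names and the quadratic example are assumptions. Classical momentum evaluates the gradient at the current parameters, while Nesterov's variant evaluates it at the look-ahead point.

```python
import numpy as np

def sgd_momentum(theta, grad_fn, lr=0.01, mu=0.9, steps=200, nesterov=True):
    v = np.zeros_like(theta)
    for _ in range(steps):
        lookahead = theta + mu * v if nesterov else theta
        g = grad_fn(lookahead)      # gradient at the (possibly look-ahead) point
        v = mu * v - lr * g         # velocity: a decaying accumulation of gradients
        theta = theta + v           # parameter update
    return theta

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
theta_star = sgd_momentum(np.array([5.0, -3.0]), grad_fn=lambda x: x)
```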

4,121 citations


"A geographically weighted artificia..." refers methods in this paper

  • ...Also, using Nesterov’s accelerated gradient (Nesterov 1983) when adjusting the connection weights can substantially improve the training performance (Sutskever et al. 2013)....

    [...]