Journal ArticleDOI

Comparison of Self-Organizing Map, Artificial Neural Network, and Co-Active Neuro-Fuzzy Inference System Methods in Simulating Groundwater Quality: Geospatial Artificial Intelligence

TL;DR: In this paper, the authors used geospatial artificial intelligence approaches such as self-organizing map (SOM), artificial neural network (ANN), and co-active neuro-fuzzy inference system (CANFIS) to simulate groundwater quality in the Mazandaran plain in the north of Iran.
Abstract: Water quality experiments are difficult, costly, and time-consuming. Therefore, different modeling methods can be used as an alternative for these experiments. To achieve the research objective, geospatial artificial intelligence approaches such as the self-organizing map (SOM), artificial neural network (ANN), and co-active neuro-fuzzy inference system (CANFIS) were used to simulate groundwater quality in the Mazandaran plain in the north of Iran. Geographical information system (GIS) techniques were used as a pre-processor and post-processor. Data from 85 drinking water wells were used as secondary data and were separated into two splits: (a) 70 percent for training (60% for training and 10% for cross-validation) and (b) 30 percent for the test stage. The groundwater quality index (GWQI) and the effective water quality factors (distance from industries, groundwater depth, and transmissivity of aquifer formations) were implemented as the output and input variables, respectively. Statistical indices (i.e., R squared (R-sqr) and the mean squared error (MSE)) were utilized to compare the performance of the three methods. The results demonstrate the high performance of all three methods in groundwater quality simulation. However, in the test stage, CANFIS (R-sqr = 0.89) had a higher performance than the SOM (R-sqr = 0.8) and ANN (R-sqr = 0.73) methods. The tested CANFIS model was then used to estimate GWQI values across the plain. Finally, the groundwater quality was mapped in a GIS environment using the CANFIS simulation. The results can be used to manage groundwater quality as well as support and contribute to Sustainable Development Goals (SDGs) 6, 11, and 13.
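To make the evaluation setup described above concrete, the following is a minimal sketch of the split-and-score workflow, assuming scikit-learn is available. The well data are synthetic placeholders, MLPRegressor merely stands in for the ANN, and SOM or CANFIS would need dedicated implementations not shown here; this illustrates the 70/30 split (with a further 60/10 train/cross-validation split) and the R-sqr/MSE comparison, not the authors' actual code.

```python
# Sketch of the split-and-score setup: three input factors, GWQI as the target,
# 70/30 train/test split, 60/10 train/cross-validation split within the pool,
# and R-squared / MSE as the comparison metrics. All data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_wells = 85  # the study used data from 85 drinking-water wells

# Placeholder predictors: distance from industries (m), groundwater depth (m),
# aquifer transmissivity (m^2/day); placeholder target: GWQI.
X = rng.uniform([100.0, 2.0, 50.0], [5000.0, 60.0, 1500.0], size=(n_wells, 3))
y = 0.01 * X[:, 0] - 0.5 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 5, n_wells)

# 70% training pool / 30% test, then 60/10 split of the pool for cross-validation.
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
X_train, X_cv, y_train, y_cv = train_test_split(X_pool, y_pool, test_size=1/7, random_state=1)

# MLPRegressor is a stand-in for the ANN used in the study.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(X_train, y_train)

for name, Xs, ys in [("cross-validation", X_cv, y_cv), ("test", X_test, y_test)]:
    pred = model.predict(Xs)
    print(f"{name}: R^2 = {r2_score(ys, pred):.2f}, MSE = {mean_squared_error(ys, pred):.2f}")
```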
Citations
Journal ArticleDOI
TL;DR: In this article, several data-driven models, namely, multiple linear regression (MLR), multivariate adaptive regression splines (MARS), support vector machine (SVM), and random forest (RF), were used for rainfall-runoff prediction of the Gola watershed, located in the south-eastern part of Uttarakhand.
Abstract: Nowadays, great attention has been attributed to the study of runoff and its fluctuation over space and time. There is a crucial need for a good soil and water management system to overcome the challenges of water scarcity and other natural adverse events like floods and landslides, among others. Rainfall–runoff (R-R) modeling is an appropriate approach for runoff prediction, making it possible to take preventive measures to avoid damage caused by natural hazards such as floods. In the present study, several data-driven models, namely, multiple linear regression (MLR), multivariate adaptive regression splines (MARS), support vector machine (SVM), and random forest (RF), were used for rainfall–runoff prediction of the Gola watershed, located in the south-eastern part of Uttarakhand. The rainfall–runoff model analysis was conducted using daily rainfall and runoff data for 12 years (2009 to 2020) of the Gola watershed. The first 80% of the complete data was used to train the models, and the remaining 20% was used for testing. The performance of the models was evaluated based on the coefficient of determination (R2), root mean square error (RMSE), Nash–Sutcliffe efficiency (NSE), and percent bias (PBIAS) indices. In addition to the numerical comparison, the models' performances were evaluated graphically using a time-series line diagram, scatter plot, violin plot, relative error plot, and Taylor diagram (TD). The comparison results revealed that the heuristic methods gave higher accuracy than the MLR model. Among the machine learning models, the RF model (RMSE (m3/s), R2, NSE, and PBIAS (%) = 6.31, 0.96, 0.94, and −0.20, respectively, during the training period, and 5.53, 0.95, 0.92, and −0.20, respectively, during the testing period) surpassed the MARS, SVM, and MLR models in forecasting daily runoff for all cases studied, outperforming them in both the training and testing periods. It can be summarized that the RF model is best in class and offers strong potential for runoff prediction of the Gola watershed.
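Because NSE and PBIAS are hydrology-specific metrics not bundled with scikit-learn, a short sketch of how they are typically computed may help. The observed/simulated arrays below are placeholders, and the PBIAS sign convention (observed minus simulated, divided by the sum of observations) follows common usage but varies between papers.

```python
# Sketch of the two hydrology-specific skill scores named in the abstract.
import numpy as np

def nse(observed, simulated) -> float:
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the observations."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def pbias(observed, simulated) -> float:
    """Percent bias: with this convention, negative values indicate over-estimation on average."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 100.0 * np.sum(observed - simulated) / np.sum(observed)

obs = np.array([12.0, 30.5, 8.2, 45.1, 20.3])   # placeholder observed daily runoff (m3/s)
sim = np.array([11.4, 31.0, 9.0, 43.8, 21.1])   # placeholder simulated daily runoff (m3/s)
print(f"NSE = {nse(obs, sim):.3f}, PBIAS = {pbias(obs, sim):.2f}%")
```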

25 citations

Journal ArticleDOI
TL;DR: In this article, self-organizing maps (SOM) were developed to simulate monthly inflows to a reservoir based on satellite-estimated gridded precipitation time series.
Abstract: Hydrological data provide valuable information for the decision-making process in water resources management, where long and complete time series are always desired. However, it is common to deal with missing data when working on streamflow time series. Rainfall-streamflow modeling is an alternative to overcome such a difficulty. In this paper, self-organizing maps (SOM) were developed to simulate monthly inflows to a reservoir based on satellite-estimated gridded precipitation time series. Three different calibration datasets from the Três Marias Reservoir, composed of inflows (targets) and 91 TRMM-estimated rainfall series (inputs) from 1998 to 2019, were used. The results showed that the inflow data homogeneity pattern influenced the rainfall-streamflow modeling. The models generally showed superior performance during the calibration phase, whereas the outcomes in the testing phase varied depending on the data homogeneity pattern and the chosen SOM structure. Regardless of the input data homogeneity, the SOM networks showed excellent results for rainfall-runoff modeling, presenting Nash–Sutcliffe coefficients greater than 0.90.
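As an illustration of how a SOM can act as a rainfall-streamflow simulator, the following from-scratch sketch trains a small one-dimensional map on joint [rainfall, inflow] vectors and then reads a new month's inflow from the best-matching unit found using the rainfall components only. All data, the map size, and the learning schedules are placeholder assumptions, not the configuration used in the paper (which used 91 gridded TRMM rainfall inputs).

```python
# Minimal one-dimensional SOM used as a rainfall-to-inflow estimator (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_months, n_rain = 240, 2
rain = rng.gamma(2.0, 60.0, size=(n_months, n_rain))            # placeholder monthly rainfall (mm)
inflow = 0.4 * rain.sum(axis=1) + rng.normal(0, 10, n_months)   # placeholder inflow (m3/s)
data = np.column_stack([rain, inflow])
mean, std = data.mean(0), data.std(0)
z = (data - mean) / std                                         # normalize each variable

# Train a small 1-D SOM with a Gaussian neighbourhood and decaying learning rate.
n_units, n_iter = 20, 5000
weights = rng.normal(size=(n_units, z.shape[1]))
for t in range(n_iter):
    x = z[rng.integers(n_months)]
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # best-matching unit
    lr = 0.5 * (1.0 - t / n_iter)
    sigma = max(3.0 * (1.0 - t / n_iter), 0.5)
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2.0 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Simulate: match a new rainfall vector against the rainfall components only,
# then read the inflow component of the winning unit and de-normalize it.
new_rain = np.array([150.0, 90.0])                              # placeholder query (mm)
q = (new_rain - mean[:n_rain]) / std[:n_rain]
bmu = int(np.argmin(np.linalg.norm(weights[:, :n_rain] - q, axis=1)))
est_inflow = weights[bmu, n_rain] * std[n_rain] + mean[n_rain]
print(f"Estimated inflow for the new rainfall pattern: {est_inflow:.1f} m3/s")
```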

8 citations

Journal ArticleDOI
TL;DR: In this article, three machine learning methods, including deep neural network (DNN), extreme gradient boosting (EGB), and multiple linear regression (MLR), were used to predict nitrate contamination in groundwater in the north of Iran (Mazandaran plain), and finally the best method was selected for mapping.

8 citations

Journal ArticleDOI
TL;DR: In this article, multilayer perceptron (MLP) neural network and support vector regression (SVR) models were developed to assess the suitability of groundwater for drinking purposes in the northern Khartoum area, Sudan.
Abstract: In the present study, multilayer perceptron (MLP) neural network and support vector regression (SVR) models were developed to assess the suitability of groundwater for drinking purposes in the northern Khartoum area, Sudan. The groundwater quality was evaluated by predicting the groundwater quality index (GWQI). GWQI is a statistical model that uses sub-indices and accumulation functions to reduce the dimensionality of groundwater quality data. In the first stage, GWQI was calculated using 11 physiochemical parameters collected from 20 groundwater wells. These parameters include pH, EC, TDS, TH, Cl⁻, SO₄²⁻, NO₃⁻, Ca²⁺, Mg²⁺, Na⁺, and HCO₃⁻. The primary investigation confirmed that all parameters except for EC and NO₃⁻ are beyond the standard limits of the World Health Organization (WHO). The measured GWQI ranged from 21 to 396. As a result, groundwater samples were classified into three classes. The majority of the samples, roughly 75%, fell into the excellent water category; 20% were considered good water, and 5% were classified as unsuitable. GWQI models are powerful tools in groundwater quality assessment; however, the computation is lengthy, time-consuming, and often associated with calculation errors. To overcome these limitations, this study applied artificial intelligence (AI) techniques to develop a reliable model for the prediction of GWQI by employing MLP neural network and SVR models. In this stage, the input data were the detected physiochemical parameters, and the output was the computed GWQI. The dataset was divided into two groups with a ratio of 80% to 20% for model training and validation. The predicted (AI) and actual (calculated) GWQI values were compared using four statistical criteria, namely, mean square error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R²). Based on the obtained values of the performance measures, the results revealed the robustness and efficiency of the MLP and SVR models in modeling GWQI. Consequently, groundwater quality in the north Khartoum area is evaluated as suitable for human consumption except for BH 18, where highly mineralized water is observed. The developed approach is advantageous in groundwater quality evaluation and is recommended to be incorporated in groundwater quality modeling.
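For readers unfamiliar with how a GWQI is assembled from sub-indices and weights, here is a minimal weighted-arithmetic sketch. The guideline values, weights, parameter subset, and classification bands in the code are illustrative assumptions only; published GWQI formulations differ in their exact sub-index and weighting schemes.

```python
# Sketch of a weighted-arithmetic groundwater quality index (illustrative values).
from typing import Dict

# (Guideline value, assigned weight) for a few of the 11 parameters used in the study.
STANDARDS: Dict[str, tuple] = {
    "TDS": (1000.0, 4),   # mg/L
    "Cl":  (250.0, 3),    # mg/L
    "NO3": (50.0, 5),     # mg/L
    "Na":  (200.0, 2),    # mg/L
}

def gwqi(sample: Dict[str, float]) -> float:
    """Weighted-arithmetic index: sum over parameters of (relative weight) * (100 * C_i / S_i)."""
    total_w = sum(w for _, w in STANDARDS.values())
    score = 0.0
    for name, conc in sample.items():
        standard, weight = STANDARDS[name]
        sub_index = 100.0 * conc / standard          # quality rating for this parameter
        score += (weight / total_w) * sub_index
    return score

well = {"TDS": 850.0, "Cl": 180.0, "NO3": 22.0, "Na": 120.0}  # placeholder measurements
print(f"GWQI = {gwqi(well):.1f}")  # typical bands: <50 excellent, 50-100 good, >300 unsuitable
```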

8 citations

References
01 Jan 1998
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
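As a hedged illustration of the kind of result this theory delivers, one commonly quoted generalization bound for indicator functions (stated here in the notation of later expositions, not necessarily this book's, and quoted as an illustration rather than from the text) is:

```latex
% With probability at least 1 - \eta, simultaneously for all functions of a class
% with VC dimension h, given l training examples:
\[
  R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2l}{h} + 1\right) - \ln\frac{\eta}{4}}{l}} ,
\]
% where R is the expected risk and R_emp the empirical risk on the training data.
```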

26,531 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe a self-organizing system in which the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events.
Abstract: This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.
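In the notation that later became standard for this model (not necessarily the paper's original symbols), the ordering process described above amounts to repeatedly applying a best-matching-unit selection followed by a neighbourhood-weighted update:

```latex
% Select the best-matching unit c for the input x(t), then move it and its
% map neighbours toward the input.
\[
  c = \arg\min_i \lVert \mathbf{x}(t) - \mathbf{w}_i(t) \rVert ,
\qquad
  \mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \alpha(t)\, h_{ci}(t)\,\bigl[\mathbf{x}(t) - \mathbf{w}_i(t)\bigr],
\]
% where \alpha(t) is a decreasing learning rate and h_{ci}(t) is a neighbourhood
% function of the map distance between units c and i that shrinks over time,
% realizing the short-range lateral feedback described in the abstract.
```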

8,247 citations

Book
01 Jan 1984
TL;DR: The purpose and nature of biological memory, as well as various other aspects of memory, are explained.
Abstract: 1. Various Aspects of Memory.- 1.1 On the Purpose and Nature of Biological Memory.- 1.1.1 Some Fundamental Concepts.- 1.1.2 The Classical Laws of Association.- 1.1.3 On Different Levels of Modelling.- 1.2 Questions Concerning the Fundamental Mechanisms of Memory.- 1.2.1 Where Do the Signals Relating to Memory Act Upon?.- 1.2.2 What Kind of Encoding is Used for Neural Signals?.- 1.2.3 What are the Variable Memory Elements?.- 1.2.4 How are Neural Signals Addressed in Memory?.- 1.3 Elementary Operations Implemented by Associative Memory.- 1.3.1 Associative Recall.- 1.3.2 Production of Sequences from the Associative Memory.- 1.3.3 On the Meaning of Background and Context.- 1.4 More Abstract Aspects of Memory.- 1.4.1 The Problem of Infinite-State Memory.- 1.4.2 Invariant Representations.- 1.4.3 Symbolic Representations.- 1.4.4 Virtual Images.- 1.4.5 The Logic of Stored Knowledge.- 2. Pattern Mathematics.- 2.1 Mathematical Notations and Methods.- 2.1.1 Vector Space Concepts.- 2.1.2 Matrix Notations.- 2.1.3 Further Properties of Matrices.- 2.1.4 Matrix Equations.- 2.1.5 Projection Operators.- 2.1.6 On Matrix Differential Calculus.- 2.2 Distance Measures for Patterns.- 2.2.1 Measures of Similarity and Distance in Vector Spaces.- 2.2.2 Measures of Similarity and Distance Between Symbol Strings.- 2.2.3 More Accurate Distance Measures for Text.- 3. Classical Learning Systems.- 3.1 The Adaptive Linear Element (Adaline).- 3.1.1 Description of Adaptation by the Stochastic Approximation.- 3.2 The Perceptron.- 3.3 The Learning Matrix.- 3.4 Physical Realization of Adaptive Weights.- 3.4.1 Perceptron and Adaline.- 3.4.2 Classical Conditioning.- 3.4.3 Conjunction Learning Switches.- 3.4.4 Digital Representation of Adaptive Circuits.- 3.4.5 Biological Components.- 4. A New Approach to Adaptive Filters.- 4.1 Survey of Some Necessary Functions.- 4.2 On the "Transfer Function" of the Neuron.- 4.3 Models for Basic Adaptive Units.- 4.3.1 On the Linearization of the Basic Unit.- 4.3.2 Various Cases of Adaptation Laws.- 4.3.3 Two Limit Theorems.- 4.3.4 The Novelty Detector.- 4.4 Adaptive Feedback Networks.- 4.4.1 The Autocorrelation Matrix Memory.- 4.4.2 The Novelty Filter.- 5. Self-Organizing Feature Maps.- 5.1 On the Feature Maps of the Brain.- 5.2 Formation of Localized Responses by Lateral Feedback.- 5.3 Computational Simplification of the Process.- 5.3.1 Definition of the Topology-Preserving Mapping.- 5.3.2 A Simple Two-Dimensional Self-Organizing System.- 5.4 Demonstrations of Simple Topology-Preserving Mappings.- 5.4.1 Images of Various Distributions of Input Vectors.- 5.4.2 "The Magic TV".- 5.4.3 Mapping by a Feeler Mechanism.- 5.5 Tonotopic Map.- 5.6 Formation of Hierarchical Representations.- 5.6.1 Taxonomy Example.- 5.6.2 Phoneme Map.- 5.7 Mathematical Treatment of Self-Organization.- 5.7.1 Ordering of Weights.- 5.7.2 Convergence Phase.- 5.8 Automatic Selection of Feature Dimensions.- 6. 
Optimal Associative Mappings.- 6.1 Transfer Function of an Associative Network.- 6.2 Autoassociative Recall as an Orthogonal Projection.- 6.2.1 Orthogonal Projections.- 6.2.2 Error-Correcting Properties of Projections.- 6.3 The Novelty Filter.- 6.3.1 Two Examples of Novelty Filter.- 6.3.2 Novelty Filter as an Autoassociative Memory.- 6.4 Autoassociative Encoding.- 6.4.1 An Example of Autoassociative Encoding.- 6.5 Optimal Associative Mappings.- 6.5.1 The Optimal Linear Associative Mapping.- 6.5.2 Optimal Nonlinear Associative Mappings.- 6.6 Relationship Between Associative Mapping, Linear Regression, and Linear Estimation.- 6.6.1 Relationship of the Associative Mapping to Linear Regression.- 6.6.2 Relationship of the Regression Solution to the Linear Estimator.- 6.7 Recursive Computation of the Optimal Associative Mapping.- 6.7.1 Linear Corrective Algorithms.- 6.7.2 Best Exact Solution (Gradient Projection).- 6.7.3 Best Approximate Solution (Regression).- 6.7.4 Recursive Solution in the General Case.- 6.8 Special Cases.- 6.8.1 The Correlation Matrix Memory.- 6.8.2 Relationship Between Conditional Averages and Optimal Estimator.- 7. Pattern Recognition.- 7.1 Discriminant Functions.- 7.2 Statistical Formulation of Pattern Classification.- 7.3 Comparison Methods.- 7.4 The Subspace Methods of Classification.- 7.4.1 The Basic Subspace Method.- 7.4.2 The Learning Subspace Method (LSM).- 7.5 Learning Vector Quantization.- 7.6 Feature Extraction.- 7.7 Clustering.- 7.7.1 Simple Clustering (Optimization Approach).- 7.7.2 Hierarchical Clustering (Taxonomy Approach).- 7.8 Structural Pattern Recognition Methods.- 8. More About Biological Memory.- 8.1 Physiological Foundations of Memory.- 8.1.1 On the Mechanisms of Memory in Biological Systems.- 8.1.2 Structural Features of Some Neural Networks.- 8.1.3 Functional Features of Neurons.- 8.1.4 Modelling of the Synaptic Plasticity.- 8.1.5 Can the Memory Capacity Ensue from Synaptic Changes?.- 8.2 The Unified Cortical Memory Model.- 8.2.1 The Laminar Network Organization.- 8.2.2 On the Roles of Interneurons.- 8.2.3 Representation of Knowledge Over Memory Fields.- 8.2.4 Self-Controlled Operation of Memory.- 8.3 Collateral Reading.- 8.3.1 Physiological Results Relevant to Modelling.- 8.3.2 Related Modelling.- 9. Notes on Neural Computing.- 9.1 First Theoretical Views of Neural Networks.- 9.2 Motives for the Neural Computing Research.- 9.3 What Could the Purpose of the Neural Networks be?.- 9.4 Definitions of Artificial "Neural Computing" and General Notes on Neural Modelling.- 9.5 Are the Biological Neural Functions Localized or Distributed?.- 9.6 Is Nonlinearity Essential to Neural Computing?.- 9.7 Characteristic Differences Between Neural and Digital Computers.- 9.7.1 The Degree of Parallelism of the Neural Networks is Still Higher than that of any "Massively Parallel" Digital Computer.- 9.7.2 Why the Neural Signals Cannot be Approximated by Boolean Variables.- 9.7.3 The Neural Circuits do not Implement Finite Automata.- 9.7.4 Undue Views of the Logic Equivalence of the Brain and Computers on a High Level.- 9.8 "Connectionist Models".- 9.9 How can the Neural Computers be Programmed?.- 10. Optical Associative Memories.- 10.1 Nonholographic Methods.- 10.2 General Aspects of Holographic Memories.- 10.3 A Simple Principle of Holographic Associative Memory.- 10.4 Addressing in Holographic Memories.- 10.5 Recent Advances of Optical Associative Memories.- Bibliography on Pattern Recognition.- References.

8,197 citations

Book
01 Jan 1996
TL;DR: This text provides a comprehensive treatment of the methodologies underlying neuro-fuzzy and soft computing with equal emphasis on theoretical aspects of covered methodologies, empirical observations, and verifications of various applications in practice.
Abstract: Included in Prentice Hall's MATLAB Curriculum Series, this text provides a comprehensive treatment of the methodologies underlying neuro-fuzzy and soft computing. The book places equal emphasis on theoretical aspects of covered methodologies, empirical observations, and verifications of various applications in practice.

4,082 citations