Journal ArticleDOI

Neural Networks for Slope Movement Prediction

01 Apr 2002-International Journal of Geomechanics (American Society of Civil Engineers)-Vol. 2, Iss: 2, pp 153-173
TL;DR: In this article, the authors used two-layer perceptrons trained with the Levenberg–Marquardt algorithm, based on backpropagation of error, for the prediction of landslide velocity.
Abstract: This article presents the use of neural networks for the prediction of movement of natural slopes. The aim is to predict velocity changes of a moving soil mass using climatological and physical data, such as rainfall and pore water pressure, which are used as input parameters in an artificial neural network (ANN). The network is designed to function as an alarm and is a decision-making tool for persons in charge of landslide monitoring. The raw data were obtained from a continuously monitored landslide, located in Sallèdes, near Clermont-Ferrand (France), and include daily precipitation, evaporation, pore water pressure, and landslide velocity values. The various networks used in this study are two-layer perceptrons trained using the Levenberg–Marquardt algorithm, based on backpropagation of error. The most sophisticated model presented in this article was developed by cascading two recurrent networks of the same type. This model permits a satisfactory 3-day prediction of landslide velocity if qu...
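The abstract's core technique, a small feed-forward network fitted by Levenberg–Marquardt, can be sketched as follows. This is a minimal illustration, not the authors' model: the input features, network size, and data are invented for demonstration, and SciPy's generic `least_squares(method="lm")` solver stands in for whatever implementation the paper used.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical daily inputs (e.g. rainfall, pore water pressure), scaled to
# [0, 1]; target: landslide velocity. All data here are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 2))       # 60 days, 2 features
y = np.tanh(1.5 * X[:, 0] - 0.5 * X[:, 1])    # synthetic "velocity" signal

H = 4  # hidden units of the two-layer perceptron

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    W1 = p[:2 * H].reshape(H, 2)
    b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]
    b2 = p[4 * H]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1.T + b1)   # hidden layer with tanh activation
    return h @ W2 + b2           # linear output unit

def residuals(p):
    # Levenberg-Marquardt minimizes the sum of squared residuals.
    return forward(p, X) - y

p0 = rng.normal(scale=0.3, size=4 * H + 1)        # random initial weights
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
rmse = np.sqrt(np.mean(fit.fun ** 2))
```

Note that `method="lm"` requires at least as many residuals as free parameters (here 60 samples versus 17 weights), which is one reason LM suits small networks like the two-layer perceptrons described above.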
Citations
Journal ArticleDOI
TL;DR: In this paper, a back-propagation neural network model was used to predict the displacement behavior of colluvial landslides in the Three Gorges Reservoir; the model can be used for displacement prediction and early warning.
Abstract: The prediction of active landslide displacement is a critical component of an early warning system and helps prevent property damage and loss of human lives. For the colluvial landslides in the Three Gorges Reservoir, the monitored displacement, precipitation, and reservoir level indicated that the characteristics of the deformations were closely related to the seasonal fluctuation of rainfall and reservoir level and that the displacement curve versus time showed a stepwise pattern. Besides the geological conditions, landslide displacement also depended on the variation in the influencing factors. Two typical colluvial landslides, the Baishuihe landslide and the Bazimen landslide, were selected for case studies. To analyze the different response components of the total displacement, the accumulated displacement was divided into a trend and a periodic component using a time series model. For the prediction of the periodic displacement, a back-propagation neural network model was adopted with selected factors including (1) the accumulated precipitation during the last 1-month period, (2) the accumulated precipitation over a 2-month period, (3) change of reservoir level during the last 1 month, (4) the average elevation of the reservoir level in the current month, and (5) the accumulated displacement increment during 1 year. The prediction of the displacement showed a periodic response in the displacement as a function of the variation of the influencing factors. The prediction model provided a good representation of the measured slide displacement behavior at the Baishuihe and the Bazimen sites, which can be adopted for displacement prediction and early warning of colluvial landslides in the Three Gorges Reservoir.
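The decomposition step this abstract describes, splitting accumulated displacement into a trend and a periodic component, can be sketched with a centered moving average. The series below is synthetic and the window choice is an assumption; the paper's actual time-series model and site data are not reproduced here.

```python
import numpy as np

# Synthetic cumulative displacement: a linear trend plus a seasonal cycle,
# mimicking the stepwise, rainfall-driven pattern described in the abstract.
t = np.arange(120)                                          # months (hypothetical)
displacement = 2.0 * t + 15.0 * np.sin(2 * np.pi * t / 12)  # trend + seasonal

window = 12                       # average over one full seasonal cycle
kernel = np.ones(window) / window
trend = np.convolve(displacement, kernel, mode="same")      # edges are biased
periodic = displacement - trend   # what remains is the periodic component
```

Averaging over exactly one seasonal cycle cancels the periodic term in the interior of the series; the periodic residual is then what a model such as the back-propagation network above would be trained to predict from rainfall and reservoir-level factors.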

236 citations


Cites background from "Neural Networks for Slope Movement ..."

  • ...Neural networks are non-linear statistical data modeling tools, which are usually used to model complex relationships between multi-variables (inputs) and responses (outputs) (Mayoraz and Vulliet 2002; Ran et al. 2010; Pradhan and Buchroithner 2010; Biswajeet and Saro 2010)....


Journal Article
TL;DR: A state-of-the-art examination of ANNs in geotechnical engineering and insights into the modeling issues of ANNs are presented.
Abstract: Over the last few years, artificial neural networks (ANNs) have been used successfully for modeling almost all aspects of geotechnical engineering problems. Whilst ANNs provide a great deal of promise, they suffer from a number of shortcomings such as knowledge extraction, extrapolation and uncertainty. This paper presents a state-of-the-art examination of ANNs in geotechnical engineering and provides insights into the modeling issues of ANNs. The paper also discusses current research directions of ANNs that need further attention in the future.

167 citations


Cites background from "Neural Networks for Slope Movement ..."

  • ...…(Benardos and Kaliampakos 2004; Lee and Sterling 1992; Moon et al. 1995; Neaupane and Achet 2004; Shi et al. 1998; Shi 2000; Yoo and Kim 2007) and slope stability (Ferentinou and Sakellariou 2007; Goh and Kulhawy 2003; Mayoraz and Vulliet 2002; Neaupane and Achet 2004; Ni et al. 1996; Zhao 2008)....


Journal ArticleDOI
TL;DR: An overview of the operation of ANN modeling, investigates the current research directions of ANNs in geotechnical engineering, and discusses some ANN modeling issues that need further attention in the future, including model robustness; transparency and knowledge extraction; extrapolation; uncertainty.
Abstract: Artificial neural networks (ANNs) are a form of artificial intelligence that has proved to provide a high level of competency in solving many complex engineering problems that are beyond the computational capability of classical mathematics and traditional procedures. In particular, ANNs have been applied successfully to almost all aspects of geotechnical engineering problems. Despite the increasing number and diversity of ANN applications in geotechnical engineering, the contents of reported applications indicate that progress in ANN development and procedures has been marginal and has not moved forward since the mid-1990s. This paper presents a brief overview of ANN applications in geotechnical engineering, briefly provides an overview of the operation of ANN modeling, investigates the current research directions of ANNs in geotechnical engineering, and discusses some ANN modeling issues that need further attention in the future, including model robustness; transparency and knowledge extraction; extrapolation; and uncertainty.

120 citations

Journal ArticleDOI
TL;DR: An overview of the methods associated with this approach is presented and a unique expression encompassing most of the previously proposed equations for TSF prediction is proposed, thus offering a general framework useful for comparisons between different methods.
Abstract: The prediction of time to slope failure (TSF) is a goal of major importance for both landslide researchers and practitioners. A reasonably accurate prediction of TSF allows human losses to be avoided, damage to property to be reduced and adequate countermeasures to be designed. A purely "phenomenological" approach based on the observation and interpretation of the monitored data is generally employed in TSF prediction. Such an approach infers TSF mainly from the ground surface displacements using regression techniques based on empirical functions. These functions neglect the rheological soil parameters in order to reduce the prediction uncertainties. This paper presents an overview of the methods associated with this approach and proposes a unique expression encompassing most of the previously proposed equations for TSF prediction, thus offering a general framework useful for comparisons between different methods. The methods discussed in this paper provide an effective tool, and sometimes the only tool, for TSF prediction. The fundamental problem is always one of data quality. Full confidence in all assumptions and parameters used in the prediction model is rarely, if ever, achieved. Therefore, TSF prediction models should be applied with care and the results interpreted with caution. Documented case studies represent the most useful source of information to calibrate the TSF prediction models.
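One well-known member of the empirical-regression family this abstract surveys is the inverse-velocity method: if the reciprocal of surface velocity decreases roughly linearly during accelerating creep, TSF can be estimated by extrapolating that line to zero. The sketch below uses synthetic data constructed so that failure occurs at day 100; it illustrates the idea only and is not the paper's unified expression.

```python
import numpy as np

# Synthetic accelerating creep: velocity grows as 1 / (t_f - t), so the
# inverse velocity 1/v falls linearly to zero at the failure time t_f = 100.
t = np.linspace(0.0, 90.0, 30)        # observation times (days)
v = 1.0 / (100.0 - t)                 # monitored velocity (synthetic)

inv_v = 1.0 / v                       # inverse velocity series
slope, intercept = np.polyfit(t, inv_v, 1)   # straight-line fit to 1/v
t_failure = -intercept / slope               # zero crossing = predicted TSF
```

As the abstract cautions, real monitoring data are noisy and the linearity assumption often holds only near failure, so predictions of this kind should be interpreted with care.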

114 citations


Cites background from "Neural Networks for Slope Movement ..."

  • ...(1996) and Mayoraz and Vulliet (2002). The same authors predicted the velocity changes in a sliding soil mass based on meteorological and physical data and different neural network configurations....


  • ...5 Sallèdes 3-day prediction (after Mayoraz and Vulliet 2002) Fig....


  • ...4 Verhulst inverse function (after Li et al. 1996) Mayoraz et al. (1996) and Mayoraz and Vulliet (2002)....


Journal ArticleDOI
TL;DR: Application results demonstrate that the proposed multiple ANNs switched prediction method can significantly improve model generalization and perform similarly to, or better than, the best individual ANN predictor.

114 citations

References
Journal ArticleDOI
TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.

14,937 citations

Journal ArticleDOI
TL;DR: In this article, least squares problems that lead to non-linear normal equations are solved by an extension of the standard method that ensures improvement of the initial solution and can also be considered an extension of Newton's method.
Abstract: The standard method for solving least squares problems which lead to non-linear normal equations depends upon a reduction of the residuals to linear form by first order Taylor approximations taken about an initial or trial solution for the parameters. If the usual least squares procedure, performed with these linear approximations, yields new values for the parameters which are not sufficiently close to the initial values, the neglect of second and higher order terms may invalidate the process, and may actually give rise to a larger value of the sum of the squares of the residuals than that corresponding to the initial solution. This failure of the standard method to improve the initial solution has received some notice in statistical applications of least squares and has been encountered rather frequently in connection with certain engineering applications involving the approximate representation of one function by another. The purpose of this article is to show how the problem may be solved by an extension of the standard method which insures improvement of the initial solution. The process can also be used for solving non-linear simultaneous equations, in which case it may be considered an extension of Newton's method. Let the function to be approximated be h(x, y, z, …), and let the approximating function be H(x, y, z, …; α, β, γ, …), where α, β, γ, … are the unknown parameters. Then the residuals at the points (x_i, y_i, z_i, …), i = 1, 2, …, n, are
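The damped extension this abstract describes can be written concretely: at each step solve (JᵀJ + λI)δ = −Jᵀr, accept the step and relax the damping λ if the sum of squared residuals decreases, and otherwise increase λ. The one-parameter model, data, and damping schedule below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Fit f(x) = exp(a * x) to synthetic data with a hand-rolled
# Levenberg-Marquardt loop.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)            # data generated with a_true = 0.7

a, lam = 0.0, 1e-2             # initial parameter guess and damping factor
for _ in range(100):
    r = np.exp(a * x) - y                    # residual vector
    J = (x * np.exp(a * x)).reshape(-1, 1)   # Jacobian dr/da
    # Damped normal equations: (J^T J + lam*I) delta = -J^T r
    delta = np.linalg.solve(J.T @ J + lam * np.eye(1), -J.T @ r)
    a_new = a + delta[0]
    if np.sum((np.exp(a_new * x) - y) ** 2) < np.sum(r ** 2):
        a, lam = a_new, max(lam / 10.0, 1e-12)   # improvement: accept, relax
    else:
        lam *= 10.0                              # failure: reject, damp harder
```

Large λ shrinks the step toward gradient descent, which is what "insures improvement of the initial solution" in the abstract; small λ recovers the fast Gauss-Newton (Taylor-linearized) step.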

11,253 citations


"Neural Networks for Slope Movement ..." refers methods in this paper

  • ...The Levenberg–Marquardt algorithm [34, 35], often used for problems of nonlinear least squares, has also been adapted for the learning processes of perceptrons [35]....


Book
01 Jan 1984
TL;DR: The purpose and nature of biological memory, as well as several of its key aspects, are explained.
Abstract: 1. Various Aspects of Memory.- 1.1 On the Purpose and Nature of Biological Memory.- 1.1.1 Some Fundamental Concepts.- 1.1.2 The Classical Laws of Association.- 1.1.3 On Different Levels of Modelling.- 1.2 Questions Concerning the Fundamental Mechanisms of Memory.- 1.2.1 Where Do the Signals Relating to Memory Act Upon?.- 1.2.2 What Kind of Encoding is Used for Neural Signals?.- 1.2.3 What are the Variable Memory Elements?.- 1.2.4 How are Neural Signals Addressed in Memory?.- 1.3 Elementary Operations Implemented by Associative Memory.- 1.3.1 Associative Recall.- 1.3.2 Production of Sequences from the Associative Memory.- 1.3.3 On the Meaning of Background and Context.- 1.4 More Abstract Aspects of Memory.- 1.4.1 The Problem of Infinite-State Memory.- 1.4.2 Invariant Representations.- 1.4.3 Symbolic Representations.- 1.4.4 Virtual Images.- 1.4.5 The Logic of Stored Knowledge.- 2. Pattern Mathematics.- 2.1 Mathematical Notations and Methods.- 2.1.1 Vector Space Concepts.- 2.1.2 Matrix Notations.- 2.1.3 Further Properties of Matrices.- 2.1.4 Matrix Equations.- 2.1.5 Projection Operators.- 2.1.6 On Matrix Differential Calculus.- 2.2 Distance Measures for Patterns.- 2.2.1 Measures of Similarity and Distance in Vector Spaces.- 2.2.2 Measures of Similarity and Distance Between Symbol Strings.- 2.2.3 More Accurate Distance Measures for Text.- 3. Classical Learning Systems.- 3.1 The Adaptive Linear Element (Adaline).- 3.1.1 Description of Adaptation by the Stochastic Approximation.- 3.2 The Perceptron.- 3.3 The Learning Matrix.- 3.4 Physical Realization of Adaptive Weights.- 3.4.1 Perceptron and Adaline.- 3.4.2 Classical Conditioning.- 3.4.3 Conjunction Learning Switches.- 3.4.4 Digital Representation of Adaptive Circuits.- 3.4.5 Biological Components.- 4. 
A New Approach to Adaptive Filters.- 4.1 Survey of Some Necessary Functions.- 4.2 On the "Transfer Function" of the Neuron.- 4.3 Models for Basic Adaptive Units.- 4.3.1 On the Linearization of the Basic Unit.- 4.3.2 Various Cases of Adaptation Laws.- 4.3.3 Two Limit Theorems.- 4.3.4 The Novelty Detector.- 4.4 Adaptive Feedback Networks.- 4.4.1 The Autocorrelation Matrix Memory.- 4.4.2 The Novelty Filter.- 5. Self-Organizing Feature Maps.- 5.1 On the Feature Maps of the Brain.- 5.2 Formation of Localized Responses by Lateral Feedback.- 5.3 Computational Simplification of the Process.- 5.3.1 Definition of the Topology-Preserving Mapping.- 5.3.2 A Simple Two-Dimensional Self-Organizing System.- 5.4 Demonstrations of Simple Topology-Preserving Mappings.- 5.4.1 Images of Various Distributions of Input Vectors.- 5.4.2 "The Magic TV".- 5.4.3 Mapping by a Feeler Mechanism.- 5.5 Tonotopic Map.- 5.6 Formation of Hierarchical Representations.- 5.6.1 Taxonomy Example.- 5.6.2 Phoneme Map.- 5.7 Mathematical Treatment of Self-Organization.- 5.7.1 Ordering of Weights.- 5.7.2 Convergence Phase.- 5.8 Automatic Selection of Feature Dimensions.- 6. 
Optimal Associative Mappings.- 6.1 Transfer Function of an Associative Network.- 6.2 Autoassociative Recall as an Orthogonal Projection.- 6.2.1 Orthogonal Projections.- 6.2.2 Error-Correcting Properties of Projections.- 6.3 The Novelty Filter.- 6.3.1 Two Examples of Novelty Filter.- 6.3.2 Novelty Filter as an Autoassociative Memory.- 6.4 Autoassociative Encoding.- 6.4.1 An Example of Autoassociative Encoding.- 6.5 Optimal Associative Mappings.- 6.5.1 The Optimal Linear Associative Mapping.- 6.5.2 Optimal Nonlinear Associative Mappings.- 6.6 Relationship Between Associative Mapping, Linear Regression, and Linear Estimation.- 6.6.1 Relationship of the Associative Mapping to Linear Regression.- 6.6.2 Relationship of the Regression Solution to the Linear Estimator.- 6.7 Recursive Computation of the Optimal Associative Mapping.- 6.7.1 Linear Corrective Algorithms.- 6.7.2 Best Exact Solution (Gradient Projection).- 6.7.3 Best Approximate Solution (Regression).- 6.7.4 Recursive Solution in the General Case.- 6.8 Special Cases.- 6.8.1 The Correlation Matrix Memory.- 6.8.2 Relationship Between Conditional Averages and Optimal Estimator.- 7. Pattern Recognition.- 7.1 Discriminant Functions.- 7.2 Statistical Formulation of Pattern Classification.- 7.3 Comparison Methods.- 7.4 The Subspace Methods of Classification.- 7.4.1 The Basic Subspace Method.- 7.4.2 The Learning Subspace Method (LSM).- 7.5 Learning Vector Quantization.- 7.6 Feature Extraction.- 7.7 Clustering.- 7.7.1 Simple Clustering (Optimization Approach).- 7.7.2 Hierarchical Clustering (Taxonomy Approach).- 7.8 Structural Pattern Recognition Methods.- 8. 
More About Biological Memory.- 8.1 Physiological Foundations of Memory.- 8.1.1 On the Mechanisms of Memory in Biological Systems.- 8.1.2 Structural Features of Some Neural Networks.- 8.1.3 Functional Features of Neurons.- 8.1.4 Modelling of the Synaptic Plasticity.- 8.1.5 Can the Memory Capacity Ensue from Synaptic Changes?.- 8.2 The Unified Cortical Memory Model.- 8.2.1 The Laminar Network Organization.- 8.2.2 On the Roles of Interneurons.- 8.2.3 Representation of Knowledge Over Memory Fields.- 8.2.4 Self-Controlled Operation of Memory.- 8.3 Collateral Reading.- 8.3.1 Physiological Results Relevant to Modelling.- 8.3.2 Related Modelling.- 9. Notes on Neural Computing.- 9.1 First Theoretical Views of Neural Networks.- 9.2 Motives for the Neural Computing Research.- 9.3 What Could the Purpose of the Neural Networks be?.- 9.4 Definitions of Artificial "Neural Computing" and General Notes on Neural Modelling.- 9.5 Are the Biological Neural Functions Localized or Distributed?.- 9.6 Is Nonlinearity Essential to Neural Computing?.- 9.7 Characteristic Differences Between Neural and Digital Computers.- 9.7.1 The Degree of Parallelism of the Neural Networks is Still Higher than that of any "Massively Parallel" Digital Computer.- 9.7.2 Why the Neural Signals Cannot be Approximated by Boolean Variables.- 9.7.3 The Neural Circuits do not Implement Finite Automata.- 9.7.4 Undue Views of the Logic Equivalence of the Brain and Computers on a High Level.- 9.8 "Connectionist Models".- 9.9 How can the Neural Computers be Programmed?.- 10. Optical Associative Memories.- 10.1 Nonholographic Methods.- 10.2 General Aspects of Holographic Memories.- 10.3 A Simple Principle of Holographic Associative Memory.- 10.4 Addressing in Holographic Memories.- 10.5 Recent Advances of Optical Associative Memories.- Bibliography on Pattern Recognition.- References.

8,197 citations


"Neural Networks for Slope Movement ..." refers methods in this paper

  • ...In the following, automates based on nonsupervised learning, such as the Kohonen cards [28], will not be presented, but we will concentrate on ANNs with supervised learning....


Journal ArticleDOI
TL;DR: This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Abstract: Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
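The abstract's closing claim, that multi-layer feed-forward nets can generate decision regions beyond the reach of single-layer nets, is easy to demonstrate with hand-set weights. The tiny threshold network below (weights chosen by hand for illustration, not taken from the paper) computes XOR, a decision region no single-layer threshold unit can produce.

```python
def step(z):
    """Hard-threshold activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one OR-like unit and one AND-like unit.
    h_or = step(x1 + x2 - 0.5)    # fires when at least one input is 1
    h_and = step(x1 + x2 - 1.5)   # fires only when both inputs are 1
    # Output unit: OR minus AND carves out the XOR decision region.
    return step(h_or - h_and - 0.5)
```

The two hidden half-planes intersect to form a band in the input space, which is exactly the "straightforward manner" in which the abstract says multi-layer feed-forward nets generate arbitrary decision regions.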

7,798 citations


"Neural Networks for Slope Movement ..." refers methods in this paper

  • ...The perceptron is the type of neural network most commonly used for tasks of classification and function approximation [24]....
