
Showing papers on "Recurrent neural network published in 2002"


Journal ArticleDOI
TL;DR: A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks, based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time, from the current state of such a recurrent neural circuit, information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.

3,446 citations
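
The core architectural idea above — a fixed, high-dimensional recurrent "liquid" whose transient states are tapped by trained linear readouts — can be sketched in a few lines. The following is a minimal rate-based stand-in, not the paper's spiking integrate-and-fire circuit; the network sizes, scalings, and the delayed-recall task are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

# Fixed, randomly generated recurrent "liquid" (a rate-based stand-in for
# the paper's spiking circuit); only the readout below is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale down for fading memory

def run_liquid(u):
    """Collect liquid states x(t) for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)      # simple leakless update
        states.append(x.copy())
    return np.array(states)

# Readout learns a fading-memory task: reconstruct the input from 5 steps ago.
T, delay = 2000, 5
u = rng.uniform(-1, 1, (T, 1))
X = run_liquid(u)
y = np.roll(u[:, 0], delay)                  # first few targets wrap; fine here

# Linear readout fit by ridge regression on the transient states.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Only W_out is trained; the recurrent weights stay fixed, which is exactly why readout training reduces to a linear regression problem.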


Proceedings Article
01 Jan 2002
TL;DR: An online adaptation scheme based on the RLS algorithm known from adaptive linear systems is described; as an example, a 10th-order NARMA system is adaptively identified.
Abstract: Echo state networks (ESN) are a novel approach to recurrent neural network training. An ESN consists of a large, fixed, recurrent "reservoir" network, from which the desired output is obtained by training suitable output connection weights. Determination of optimal output weights becomes a linear, uniquely solvable task of MSE minimization. This article reviews the basic ideas and describes an online adaptation scheme based on the RLS algorithm known from adaptive linear systems. As an example, a 10th-order NARMA system is adaptively identified. The known benefits of the RLS algorithm carry over from linear systems to nonlinear ones; specifically, the convergence rate and misadjustment can be determined at design time.

562 citations
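
A hedged sketch of the scheme described above: a fixed random reservoir whose output weights are adapted online by standard recursive least squares, run on the usual 10th-order NARMA benchmark. The exact system, sizes, and parameter choices in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard 10th-order NARMA benchmark (assumed to match the paper's example;
# the benchmark can occasionally diverge, so re-seed if values explode).
def narma10(u):
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t] + 0.1)
    return y

T, n = 4000, 100
u = rng.uniform(0, 0.5, T)
d = narma10(u)

# Fixed random reservoir; sizes and scalings are illustrative choices.
W_in = rng.uniform(-0.1, 0.1, n)
W = rng.normal(0, 1, (n, n))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))

# Online RLS adaptation of the output weights only.
lam = 0.9995                         # forgetting factor
P = np.eye(n) / 1e-2                 # inverse-correlation estimate
w_out = np.zeros(n)
x = np.zeros(n)
for t in range(T - 1):
    x = np.tanh(W @ x + W_in * u[t])
    k = P @ x / (lam + x @ P @ x)            # RLS gain vector
    e = d[t + 1] - w_out @ x                 # a-priori error
    w_out += k * e
    P = (P - np.outer(k, x @ P)) / lam
print("final |error|:", abs(e))
```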


Journal ArticleDOI
TL;DR: The recurrent neural network with implicit dynamics is deliberately developed in such a way that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation.
Abstract: Presents a recurrent neural network for solving the Sylvester equation with time-varying coefficient matrices. The recurrent neural network with implicit dynamics is deliberately developed in such a way that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation. Theoretical results of convergence and sensitivity analysis are presented to show the desirable properties of the recurrent neural network. Simulation results of time-varying matrix inversion and online nonlinear output regulation via pole assignment for the ball-and-beam system and the inverted pendulum on a cart system are also included to demonstrate the effectiveness and performance of the proposed neural network.

464 citations
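
The "implicit dynamics" can be made concrete by the standard design route: define the residual E(t) = A(t)X(t) + X(t)B(t) + C(t) and impose dE/dt = -γE, which yields an implicit ODE in X that can be integrated numerically. The sketch below follows that recipe under assumed 2x2 time-varying matrices and an assumed sign convention (solving AX + XB = -C); it is an illustration, not the paper's exact formulation:

```python
import numpy as np

# Residual-forcing design: dE/dt = -gamma * E drives E(t) to zero, giving
# implicit dynamics A Xdot + Xdot B = -Adot X - X Bdot - Cdot - gamma E.
gamma, dt = 10.0, 1e-4
I2 = np.eye(2)

A = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
B = lambda t: I2
C = lambda t: np.array([[np.sin(t), np.cos(t)], [-np.cos(t), np.sin(t)]])

def ddt(f, t, h=1e-6):               # numerical time derivative of a matrix
    return (f(t + h) - f(t - h)) / (2 * h)

X = np.zeros((2, 2))
steps = int(2.0 / dt)
for k in range(steps):
    t = k * dt
    E = A(t) @ X + X @ B(t) + C(t)
    rhs = -ddt(A, t) @ X - X @ ddt(B, t) - ddt(C, t) - gamma * E
    # Solve A Xdot + Xdot B = rhs via Kronecker vectorization (column-major).
    M = np.kron(I2, A(t)) + np.kron(B(t).T, I2)
    Xdot = np.linalg.solve(M, rhs.flatten(order="F")).reshape(2, 2, order="F")
    X = X + dt * Xdot                # forward-Euler integration

t_end = steps * dt
print("residual norm:", np.linalg.norm(A(t_end) @ X + X @ B(t_end) + C(t_end)))
```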


Journal ArticleDOI
TL;DR: A TSK-type recurrent fuzzy network (TRFN), developed from a series of recurrent fuzzy if-then rules with TSK-type consequent parts, is proposed; the TRFN is designed by either neural-network or genetic-algorithm learning, depending on the learning environment.
Abstract: In this paper, a TSK-type recurrent fuzzy network (TRFN) structure is proposed. The proposal calls for the design of the TRFN by either neural network or genetic algorithms, depending on the learning environment. The recurrent fuzzy network is developed from a series of recurrent fuzzy if-then rules with TSK-type consequent parts. The recurrent property comes from feeding the internal variables, derived from fuzzy firing strengths, back to both the network input and output layers. In this configuration, each internal variable is responsible for memorizing the temporal history of its corresponding fuzzy rule. The internal variable is also combined with external input variables in each rule's consequence, which increases the network's learning ability. TRFN design under different learning environments is then addressed. For problems where supervised training data is directly available, the TRFN with supervised learning (TRFN-S) is proposed, and a neural network (NN) learning approach is adopted for TRFN-S design. An online learning algorithm with concurrent structure and parameter learning is proposed. With its flexible partitioning of the precondition part and its TSK-type consequents, the TRFN-S achieves both a small network size and high learning accuracy. For problems where gradient information for NN learning is costly to obtain or unavailable, as in reinforcement learning, the TRFN with genetic learning (TRFN-G) is put forward. The precondition parts of the TRFN-G are also partitioned in a flexible way, and all free parameters are designed concurrently by a genetic algorithm. Owing to the well-designed network structure of the TRFN, the TRFN-G, like the TRFN-S, is characterized by high learning accuracy. To demonstrate the superior properties of the TRFN, the TRFN-S is applied to dynamic system identification and the TRFN-G to dynamic system control. Comparisons with other types of recurrent networks and design configurations verify the efficiency of the TRFN.

449 citations
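
To make the rule structure concrete, here is a loose sketch of one TRFN-like forward step: Gaussian preconditions over external inputs plus internal variables, TSK linear consequents, and firing strengths fed back as the internal memory. All structural and parameter choices below are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

# TRFN-flavoured sketch: internal variables h derive from firing strengths
# and are fed back to both the precondition (input) and consequent sides.
rng = np.random.default_rng(0)
R, n_in = 3, 1                                # rules, external inputs
centers = rng.uniform(-1, 1, (R, n_in + R))   # preconditions see [x, h]
sigma = 1.0
A = rng.normal(0, 0.3, (R, 1 + n_in + R))     # TSK coefficients [bias, x, h]
h = np.zeros(R)                               # internal (memory) variables

def trfn_step(x, h):
    z = np.concatenate([x, h])
    phi = np.exp(-np.sum((centers - z) ** 2, axis=1) / (2 * sigma ** 2))
    phi_n = phi / phi.sum()                   # normalised firing strengths
    cons = A @ np.concatenate([[1.0], x, h])  # per-rule TSK consequents
    y = phi_n @ cons                          # defuzzified output
    h_new = phi                               # feed firing strengths back
    return y, h_new

for t in range(5):
    x = np.array([np.sin(0.5 * t)])
    y, h = trfn_step(x, h)
    print(t, round(float(y), 3))
```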


Journal ArticleDOI
TL;DR: It is shown here that the proposed neural network is stable in the sense of Lyapunov and is globally convergent, globally asymptotically stable, and globally exponentially stable, respectively, under different conditions.
Abstract: In this paper, we present a recurrent neural network for solving the nonlinear projection formulation. It is shown that the proposed neural network is stable in the sense of Lyapunov and is globally convergent, globally asymptotically stable, and globally exponentially stable, respectively, under different conditions. Compared with the existing neural network for solving the projection formulation, the proposed neural network has a single-layer structure and is amenable to parallel implementation. Moreover, the proposed neural network requires no Lipschitz condition and thus can be applied to a very broad class of constrained optimization problems that are special cases of the nonlinear projection formulation. Simulations show that the proposed neural network is effective in solving these constrained optimization problems.

302 citations
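
Projection networks of this kind are commonly written as the compact single-layer dynamics dx/dt = P(x − aF(x)) − x, where P projects onto the constraint set; equilibria then satisfy the projection equation. A minimal sketch on an assumed box-constrained quadratic program (the paper's model may differ in scaling and convergence conditions):

```python
import numpy as np

# Single-layer projection dynamics dx/dt = P(x - a*F(x)) - x; for a box
# constraint set the projection P is simply a componentwise clip.
# The quadratic objective below is an illustrative example, not the paper's.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-3.0, 1.0])
F = lambda x: Q @ x + b               # gradient of 0.5 x'Qx + b'x
lo, hi = np.zeros(2), np.ones(2)      # feasible box [0, 1]^2
P = lambda y: np.clip(y, lo, hi)

a, dt = 0.2, 1e-3
x = np.array([0.5, 0.5])
for _ in range(20000):                # forward-Euler integration
    x = x + dt * (P(x - a * F(x)) - x)
print("equilibrium:", x)              # minimises 0.5 x'Qx + b'x over the box
```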


Journal ArticleDOI
01 Jun 2002
TL;DR: In modeling the stochastic nature of reliability data, both the ARIMA and the recurrent neural network (RNN) models outperform the feed-forward model in terms of lower predictive errors and a higher percentage of correct reversal detections; however, both models perform better at short-term forecasting.
Abstract: This paper aims to investigate suitable time series models for repairable system failure analysis. A comparative study of the Box-Jenkins autoregressive integrated moving average (ARIMA) models and artificial neural network models in predicting failures is carried out. The neural network architectures evaluated are the multi-layer feed-forward network and the recurrent network. Simulation results on a set of compressor failures showed that, in modeling the stochastic nature of reliability data, both the ARIMA and the recurrent neural network (RNN) models outperform the feed-forward model in terms of lower predictive errors and a higher percentage of correct reversal detections. However, both models perform better at short-term forecasting. The effect of varying the damped feedback weights in the recurrent net is also investigated, and it was found that the RNN at the optimal weighting factor gives satisfactory performance compared to the ARIMA model.

300 citations


Journal ArticleDOI
01 Apr 2002
TL;DR: A new fuzzy model, the Dynamic Fuzzy Neural Network (DFNN), consisting of recurrent TSK rules, is developed; it compares favorably with competing models and can thus be considered for efficient system identification.
Abstract: This paper presents a fuzzy modeling approach for the identification of dynamic systems. In particular, a new fuzzy model, the Dynamic Fuzzy Neural Network (DFNN), consisting of recurrent TSK rules, is developed. The premise and defuzzification parts are static, while the consequent parts of the fuzzy rules are recurrent neural networks with internal feedback and time-delay synapses. The network is trained by means of a novel learning algorithm, named the Dynamic Fuzzy Neural Constrained Optimization Method (D-FUNCOM), based on the concept of constrained optimization. The proposed algorithm is general, since it can be applied to locally as well as fully recurrent networks, regardless of their structure. An adaptation mechanism for the maximum parameter change is presented as well. The proposed dynamic model, equipped with the learning algorithm, is applied to several temporal problems, including the modeling of a NARMA process and the noise cancellation problem. Performance comparisons are conducted with a series of static and dynamic systems and some existing recurrent fuzzy models. Simulation results show that the DFNN compares favorably with competing models and can thus be considered for efficient system identification.

272 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: Long short-term memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and the learning of context-sensitive languages, and it is shown that LSTM is also a good mechanism for learning to compose music.
Abstract: We consider the problem of extracting essential ingredients of music signals, such as a well-defined global temporal structure in the form of nested periodicities (or meter). We investigate whether we can construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style. Because recurrent neural networks (RNNs) can, in principle, learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard RNNs often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long short-term memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and the learning of context-sensitive languages. We show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure, it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.

238 citations
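
The gating mechanism that lets LSTM bridge the long time lags mentioned above fits in a few lines. Below is one forward step of a forget-gate LSTM cell in plain NumPy, run over a toy one-hot "note" sequence; the peephole connections of the era's formulation are omitted, and all sizes and weights are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of a forget-gate LSTM cell (peephole connections omitted)."""
    z = W @ np.concatenate([x, h]) + b       # all four gates in one matrix
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                        # gated cell state: the long memory
    h = o * np.tanh(c)                       # gated output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 12, 8                          # e.g. 12 pitch classes, 8 cells
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(16):                          # run over a toy note sequence
    x = np.eye(n_in)[t % n_in]               # one-hot "note" input
    h, c = lstm_step(x, h, c, W, b)
print("hidden state:", h.round(3))
```

The multiplicative forget gate f is the design choice that matters here: because the cell state c is carried forward by gating rather than squashed through a recurrent nonlinearity, error signals can survive over the long lags that defeat standard RNNs.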


Journal ArticleDOI
TL;DR: In this paper, an approach to freeway travel time prediction based on recurrent neural networks is presented; the approach is capable of dealing with complex nonlinear spatio-temporal relationships among flows, speeds, and densities.
Abstract: An approach to freeway travel time prediction based on recurrent neural networks is presented. Travel time prediction requires a modeling approach that is capable of dealing with complex nonlinear spatio-temporal relationships among flows, speeds, and densities. Based on the literature, feedforward neural networks are a class of mathematical models well suited for solving this problem. A drawback of the feedforward approach is that the size and composition of the input time series are inherently design choices and thus fixed for all input. This may lead to unnecessarily large models. Moreover, for different traffic conditions, different sizes and compositions of input time series may be required, a requirement not satisfied by any feedforward data-driven method. The recurrent neural network topology presented is capable of dealing with the spatio-temporal relationships implicitly. The topology of this neural net is derived from a state-space formulation of the travel time prediction problem, which is in l...

219 citations


Journal ArticleDOI
TL;DR: In this paper, the combination of self-organizing map (SOM) and feedback is used to represent sequences of inputs, and the resulting representations are adapted to the temporal statistics of the input series.

209 citations
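
One common way to realize "SOM plus feedback" is to give each unit a context codebook matched against the previous activation map, so winners come to encode recent input history. A minimal sketch along those lines (a recursive-SOM-style update with winner-only learning for brevity; the paper's exact formulation, neighborhood function, and parameters may differ):

```python
import numpy as np

# Each unit stores an input weight row in w and a context row in c; the
# context is compared against the previous activation map y_prev, so the
# best-matching unit depends on the input *and* on recent history.
rng = np.random.default_rng(0)
n_units, dim = 25, 1
alpha, beta, eta = 1.0, 0.5, 0.1              # illustrative weightings

w = rng.uniform(-1, 1, (n_units, dim))        # input codebook
c = rng.uniform(0, 1, (n_units, n_units))     # context codebook
y_prev = np.zeros(n_units)

seq = np.sin(0.3 * np.arange(2000))[:, None]  # toy temporal input
for x in seq:
    d = alpha * np.sum((w - x) ** 2, axis=1) \
        + beta * np.sum((c - y_prev) ** 2, axis=1)
    k = np.argmin(d)                          # best-matching unit
    y = np.exp(-d)                            # new activation map (feedback)
    w[k] += eta * (x - w[k])                  # winner-only update for brevity
    c[k] += eta * (y_prev - c[k])             # (a full SOM uses a neighborhood)
    y_prev = y
print("BMU for last input:", k)
```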


15 Mar 2002
TL;DR: Long Short-Term Memory is shown to be able to play the blues with good timing and proper structure as long as one is willing to listen, and once the network has found the relevant structure it does not drift from it.
Abstract: In general, music composed by recurrent neural networks (RNNs) suffers from a lack of global structure. Though networks can learn note-by-note transition probabilities and even reproduce phrases, attempts at learning an entire musical form and using that knowledge to guide composition have been unsuccessful. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and context-sensitive language (CSL) learning. In the current study I show that LSTM is also a good mechanism for learning to compose music. I compare this approach to previous attempts, with particular focus on issues of data representation. I present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and I believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.

Journal ArticleDOI
TL;DR: A vision-based system that can interpret a user's gestures in real time to manipulate windows and objects within a graphical user interface and users who tested it found the gestures intuitive and the application easy to use.

Journal Article
TL;DR: The state of the art of neural network ensemble is surveyed from three aspects including implementation methods, theoretical analysis, and applications.
Abstract: A neural network ensemble can significantly improve the generalization ability of a learning system by training a finite number of neural networks and then combining their results. It is helpful not only for scientists investigating machine learning and neural computing but also for engineers solving real-world problems with neural network techniques. Neural network ensembles have therefore been regarded as an engineering neural computing technology with great application prospects, and have become a hot topic in both the machine learning and neural computing communities. In this paper, the state of the art of neural network ensembles is surveyed from three aspects: implementation methods, theoretical analysis, and applications. Moreover, some issues valuable for future exploration in this area are indicated and discussed.
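
The basic implementation recipe the survey covers — train several networks, then combine their outputs — is simple to sketch. Below, a small bagging-style ensemble of scikit-learn MLPs whose predictions are averaged; the dataset and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # any small regressor works

# Minimal bagging-style ensemble: train several networks on bootstrap
# resamples of the data and average their outputs at prediction time.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 300)

nets = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                       random_state=seed).fit(X[idx], y[idx])
    nets.append(net)

X_test = np.linspace(-3, 3, 5)[:, None]
preds = np.mean([net.predict(X_test) for net in nets], axis=0)
print(preds.round(3))                            # combined (averaged) output
```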

Journal ArticleDOI
01 Jul 2002
TL;DR: A new set of flexible machine learning architectures for the prediction of contact maps, as well as other information processing and pattern recognition tasks, are developed and it is shown that these architectures can be trained from examples and yield contact map predictors that outperform previously reported methods.
Abstract: Motivation: Accurate prediction of protein contact maps is an important step in computational structural proteomics. Because contact maps provide a translation- and rotation-invariant topological representation of a protein, they can be used as a fundamental intermediary step in protein structure prediction. Results: We develop a new set of flexible machine learning architectures for the prediction of contact maps, as well as other information processing and pattern recognition tasks. The architectures can be viewed as recurrent neural network implementations of a class of Bayesian networks we call generalized input-output HMMs (GIOHMMs). For the specific case of contact maps, contextual information is propagated laterally through four hidden planes, one for each cardinal corner. We show that these architectures can be trained from examples and yield contact map predictors that outperform previously reported methods. While several extensions and improvements are in progress, the current version can accurately predict 60.5% of contacts at a distance cutoff of 8 Å and 45% of distant contacts at 10 Å.

Journal ArticleDOI
TL;DR: In this paper, the existence, uniqueness, and global exponential stability of the equilibrium point and of periodic solutions of recurrent neural networks with delays are analyzed, and sufficient conditions ensuring these properties are derived.

Journal ArticleDOI
TL;DR: Compared to other recurrent neural networks, the proposed dual network, with fewer neurons, can solve quadratic programming problems subject to equality, inequality, and bound constraints, and is shown to be globally exponentially convergent to optimal solutions of quadratic programming problems.

Journal ArticleDOI
TL;DR: Improved neural network and fuzzy models for exchange rate prediction are presented, using real daily exchange rate values of the US Dollar vs. the British Pound.
Abstract: Forecasting currency exchange rates is an important financial problem that is receiving increasing attention, especially because of its intrinsic difficulty and practical applications. During the last few years, a number of nonlinear models have been proposed for obtaining accurate prediction results, in an attempt to improve on the performance of the traditional linear approaches. Among them, neural network models have been used with encouraging results. This paper presents improved neural network and fuzzy models for exchange rate prediction. Several approaches, including multi-layer perceptrons, radial basis functions, dynamic neural networks, and neuro-fuzzy systems, are proposed and discussed. Their performance for one-step and multiple-step-ahead predictions has been evaluated in a study using real daily exchange rate values of the US Dollar vs. the British Pound.

Journal ArticleDOI
TL;DR: This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs) based on gradient descent optimization of a cost function, and applies it to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image.
Abstract: Spiked recurrent neural networks with "multiple classes" of signals have been recently introduced by Gelenbe and Fourneau (1999), as an extension of the recurrent spiked random neural network introduced by Gelenbe (1989). These new networks can represent interconnected neurons, which simultaneously process multiple streams of data such as the color information of images, or networks which simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O((nC)^3) complexity for the recurrent case, and O((nC)^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. This approach is illustrated with various synthetic and natural textures.

Reference BookDOI
01 Jan 2002
TL;DR: A reference text covering neural network models and the qualitative analysis of analogue Hopfield-type neural networks, including global and local results, stability analysis of linear systems operating on a closed hypercube, and the qualitative effects of parameter perturbations and time delays.
Abstract: Contents: some neural network models; qualitative analysis of analogue Hopfield-type neural networks - global results; stability analysis of linear systems operating on a closed hypercube - system (M); qualitative analysis of Hopfield-type neural networks - local results; qualitative effects of parameter perturbations; qualitative effects of time delays.

Journal ArticleDOI
TL;DR: The results demonstrate that the RTRL network learns with high efficiency, is an adequate model for time-series prediction, and can be applied with high accuracy to real-time stream-flow forecasting.
Abstract: Various types of neural networks have been proposed in previous papers for applications in hydrological events. However, most of these applied neural networks are classified as static neural networks, which are based on batch processes that update action only after the whole training data set has been presented. The time-variate characteristics in hydrological processes have not been modelled well. In this paper, we present an alternative approach using an artificial neural network, termed real-time recurrent learning (RTRL), for stream-flow forecasting. To define the properties of the RTRL algorithm, we first compare the predictive ability of RTRL with least-square estimated autoregressive integrated moving average models on several synthetic time series. Our results demonstrate that the RTRL network learns with high efficiency and is an adequate model for time-series prediction. We also investigated the RTRL network by using the rainfall–runoff data of the Da-Chia River in Taiwan. The results show that RTRL can be applied with high accuracy to the study of real-time stream-flow forecasting networks. Copyright © 2002 John Wiley & Sons, Ltd.
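
RTRL updates the weights at every time step by carrying a sensitivity tensor P[k,i,j] = ∂h_k/∂W[i,j] forward through time, which is what makes it "real-time" (at an O(n^3·m) per-step cost). A toy sketch on a delayed-recall task; the sizes, learning rate, and the task itself are illustrative assumptions, and convergence is not tuned:

```python
import numpy as np

# RTRL for a small fully recurrent network: weights update online at every
# step using the forward-propagated sensitivities P[k, i, j] = dh_k / dW[i, j].
rng = np.random.default_rng(0)
n, n_in = 6, 1
m = n + n_in + 1                      # recurrent + input + bias lines per unit
W = rng.normal(0, 0.3, (n, m))
P = np.zeros((n, n, m))               # sensitivity tensor
h = np.zeros(n)
eta = 0.05

u = rng.uniform(-1, 1, 1000)
d = np.roll(u, 2)                     # target: the input delayed by 2 steps
for t in range(2, len(u)):
    z = np.concatenate([h, [u[t], 1.0]])
    h_new = np.tanh(W @ z)
    # Forward sensitivity recursion (the heart of RTRL):
    dphi = 1 - h_new ** 2
    P_new = np.einsum('kl,lij->kij', W[:, :n], P)      # sum_l W_kl * P[l,i,j]
    P_new[np.arange(n), np.arange(n), :] += z          # delta_{ki} * z_j term
    P = dphi[:, None, None] * P_new
    h = h_new
    e = d[t] - h[0]                   # unit 0 serves as the readout
    W += eta * e * P[0]               # online gradient step
print("final error:", e)
```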

Journal ArticleDOI
TL;DR: In this paper, an adaptive backstepping control system using a recurrent neural network (RNN) is proposed to control the mover position of a linear induction motor (LIM) drive and to compensate for uncertainties, including the friction force.
Abstract: In this paper, an adaptive backstepping control system using a recurrent neural network (RNN) is proposed to control the mover position of a linear induction motor (LIM) drive and to compensate for uncertainties, including the friction force. First, the dynamic model of an indirect field-oriented LIM drive is derived. Then, a backstepping approach is proposed to compensate for the uncertainties, including the friction force, occurring in the motion control system. With the proposed backstepping control system, the mover position of the LIM drive possesses the advantages of good transient control performance and robustness to uncertainties in the tracking of periodic reference trajectories. Moreover, to further increase the robustness of the LIM drive, an RNN uncertainty observer is proposed to estimate the required lumped uncertainty in the backstepping control system. In addition, an online parameter training methodology, derived using the gradient-descent method, is proposed to increase the learning capability of the RNN. The effectiveness of the proposed control scheme is verified by both simulation and experimental results.

Journal ArticleDOI
TL;DR: This work proposes a new approach for the stability analysis of RNNs with sector-type monotone nonlinearities, one that readily permits accounting for the nonzero biases usually present in RNNs for improved approximation capabilities.
Abstract: We address the problem of global Lyapunov stability of discrete-time recurrent neural networks (RNNs) in the unforced (unperturbed) setting. It is assumed that network weights are fixed to some values, for example, those attained after training. Based on classical results of the theory of absolute stability, we propose a new approach for the stability analysis of RNNs with sector-type monotone nonlinearities and nonzero biases. We devise a simple state-space transformation to convert the original RNN equations to a form suitable for our stability analysis. We then present appropriate linear matrix inequalities (LMIs) to be solved to determine whether the system under study is globally exponentially stable. Unlike previous treatments, our approach readily permits one to account for non-zero biases usually present in RNNs for improved approximation capabilities. We show how recent results of others on the stability analysis of RNNs can be interpreted as special cases within our approach. We illustrate how to use our approach with examples. Though illustrated on the stability analysis of recurrent multilayer perceptrons, the approach proposed can also be applied to other forms of time-lagged RNNs.
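
The paper's LMI machinery is not reproduced here, but the classical small-gain condition it generalizes is easy to check numerically: for x(t+1) = Wφ(x(t)) + b with a 1-Lipschitz φ such as tanh, ||W||_2 < 1 makes the map a contraction, giving a unique, globally exponentially stable equilibrium for any bias b. A sketch under that simpler criterion:

```python
import numpy as np

# Not the paper's LMI approach: just the classical contraction (small-gain)
# sufficient condition.  ||W||_2 < 1 with 1-Lipschitz tanh implies a unique,
# globally exponentially stable equilibrium regardless of the bias b.
rng = np.random.default_rng(0)
W = rng.normal(0, 1, (5, 5))
W *= 0.8 / np.linalg.norm(W, 2)       # scale so the test passes, for the demo
b = rng.normal(0, 1.0, 5)

gain = np.linalg.norm(W, 2)           # largest singular value
print("||W||_2 =", gain, "-> GES" if gain < 1 else "-> test inconclusive")

# Iterating the map from two distant states exhibits the exponential
# contraction: the gap shrinks by at least a factor ||W||_2 per step.
x, z = rng.normal(0, 5, 5), rng.normal(0, 5, 5)
for _ in range(50):
    x, z = W @ np.tanh(x) + b, W @ np.tanh(z) + b
print("distance after 50 steps:", np.linalg.norm(x - z))
```

Note that this norm test is only sufficient and can be very conservative; the LMI conditions of the paper cover weight matrices this test rejects.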

Journal ArticleDOI
TL;DR: The primary purpose of this paper is to demonstrate the effectiveness of class-modular neural networks in terms of their convergence and recognition power.

Dissertation
19 Jun 2002
TL;DR: The continuous TD(lambda) algorithm is refined to handle situations with discontinuous states and controls, and the vario-eta algorithm is proposed as a simple but efficient method to perform gradient descent.
Abstract: This thesis is a study of practical methods to estimate value functions with feedforward neural networks in model-based reinforcement learning. Focus is placed on problems in continuous time and space, such as motor-control tasks. In this work, the continuous TD(lambda) algorithm is refined to handle situations with discontinuous states and controls, and the vario-eta algorithm is proposed as a simple but efficient method to perform gradient descent. The main contributions of this thesis are experimental successes that clearly indicate the potential of feedforward neural networks to estimate high-dimensional value functions. Linear function approximators have been often preferred in reinforcement learning, but successful value function estimations in previous works are restricted to mechanical systems with very few degrees of freedom. The method presented in this thesis was tested successfully on an original task of learning to swim by a simulated articulated robot, with 4 control variables and 12 independent state variables, which is significantly more complex than problems that have been solved with linear function approximators so far.
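
For reference, here is the discrete-time TD(λ) update with eligibility traces that the thesis's continuous-time algorithm refines, shown on the standard tabular random-walk example; the vario-eta gradient scaling and the feedforward-network function approximator are not reproduced here:

```python
import numpy as np

# Tabular TD(lambda) on the classic 7-state random walk: states 0 and 6 are
# terminal, reward 1 on reaching the right end.  True values are i/6.
rng = np.random.default_rng(0)
n_states = 7
w = np.zeros(n_states)                # tabular = linear with one-hot features
alpha, lam, gamma = 0.1, 0.8, 1.0

for episode in range(2000):
    s, e = 3, np.zeros(n_states)      # start mid-chain, reset traces
    while s not in (0, n_states - 1):
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 == n_states - 1 else 0.0
        v2 = 0.0 if s2 in (0, n_states - 1) else w[s2]
        delta = r + gamma * v2 - w[s]  # TD error
        e *= gamma * lam               # decay all eligibility traces ...
        e[s] += 1.0                    # ... and mark the visited state
        w += alpha * delta * e         # update every trace-weighted state
        s = s2
print("values:", w[1:-1].round(2))     # should approach 1/6 .. 5/6
```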

Journal ArticleDOI
TL;DR: An associative neural network (ASNN) is a combination of an ensemble of feed-forward neural networks and the K-nearest neighbor technique; it uses the correlation between ensemble responses as a measure of distance among the analyzed cases and provides improved predictions through bias correction of the neural network ensemble, both for function approximation and classification.
Abstract: An associative neural network (ASNN) is a combination of an ensemble of feed-forward neural networks and the K-nearest neighbor technique. The introduced network uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the nearest neighbor technique and provides an improved prediction by bias correction of the neural network ensemble, both for function approximation and classification. In effect, the proposed method corrects the bias of a global model for a considered data case by analyzing the biases of its nearest neighbors determined in the space of calculated models. An associative neural network has a memory that can coincide with the training set. If new data become available, the network can provide a reasonable approximation of such data without the need to retrain the neural network ensemble. Applications of the ASNN to the prediction of the lipophilicity of chemical compounds and to the classification of the UCI letter and satellite data sets are presented. The developed algorithm is available online at http://www.virtuallaboratory.org/lab/asnn.
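
A hedged sketch of the ASNN recipe as described above: find a query's nearest neighbors in the space of ensemble outputs, using the correlation between ensemble response vectors as similarity, and correct the ensemble mean by the neighbors' residuals. The models, data, and choice of k below are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# ASNN-style sketch: neighbours are found in the space of ensemble outputs
# (by correlation), and their training residuals correct the ensemble mean.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)

nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=s).fit(X, y) for s in range(8)]
train_out = np.array([n.predict(X) for n in nets]).T   # (cases, ensemble)
resid = y - train_out.mean(axis=1)                     # training-set biases

def asnn_predict(x_new, k=10):
    out = np.array([n.predict(x_new[None, :])[0] for n in nets])
    # Correlation between ensemble response vectors as the similarity measure.
    sims = np.array([np.corrcoef(out, row)[0, 1] for row in train_out])
    nn = np.argsort(-sims)[:k]                         # k most similar cases
    return out.mean() + resid[nn].mean()               # bias-corrected output

print(asnn_predict(np.array([1.0])), np.sin(1.0))
```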

Journal ArticleDOI
TL;DR: New results for recurrent neural networks applied to online computation of feedback gains of linear time-invariant multivariable systems via pole assignment are presented.
Abstract: Global exponential stability is the most desirable stability property of recurrent neural networks. The paper presents new results for recurrent neural networks applied to online computation of feedback gains of linear time-invariant multivariable systems via pole assignment. The theoretical analysis focuses on the global exponential stability, convergence rates, and selection of design parameters. The theoretical results are further substantiated by simulation results conducted for synthesizing linear feedback control systems with different specifications and design requirements.

Journal ArticleDOI
TL;DR: Several approaches, including Gaussian encoding backpropagation (BP), window random activation, radial basis function networks, real-time recurrent neural networks, and their innovative variations, are proposed, compared, and discussed in this paper.

Book ChapterDOI
TL;DR: SOMBIP serves as a new model of bilingual processing and provides a new perspective on connectionist bilingualism, which has the potential of explaining a wide variety of empirical and theoretical issues in bilingual research.
Abstract: Current connectionist models of bilingual language processing have been largely restricted to localist stationary models. To fully capture the dynamics of bilingual processing, we present SOMBIP, a self-organizing model of bilingual processing that has learning characteristics. SOMBIP consists of two interconnected self-organizing neural networks, coupled with a recurrent neural network that computes lexical co-occurrence constraints. Simulations with our model indicate that (1) the model can account for distinct patterns of the bilingual lexicon without the use of language nodes or language tags, (2) it can develop meaningful lexical-semantic categories through self-organizing processes, (3) it can account for a variety of priming and interference effects based on associative pathways between phonology and semantics in the lexicon, and (4) it can explain lexical representation in bilinguals with different levels of proficiency and working memory capacity. These capabilities of our model are due to its design characteristics in that (a) it combines localist and distributed properties of processing, (b) it combines representation and learning, and (c) it combines lexicon and sentences in bilingual processing. Thus, SOMBIP serves as a new model of bilingual processing and provides a new perspective on connectionist bilingualism. It has the potential of explaining a wide variety of empirical and theoretical issues in bilingual research.

Journal ArticleDOI
TL;DR: In this article, the authors investigated global asymptotic stability (GAS) and global exponential stability (GES) of a class of continuous-time recurrent neural networks and provided necessary and sufficient conditions for the existence and uniqueness of equilibrium of the neural networks with Lipschitz continuous activation functions.
Abstract: This paper investigates global asymptotic stability (GAS) and global exponential stability (GES) of a class of continuous-time recurrent neural networks. First, we introduce a necessary and sufficient condition for the existence and uniqueness of equilibrium of the neural networks with Lipschitz continuous activation functions. Next, we present two sufficient conditions to ascertain the GAS of the neural networks with globally Lipschitz continuous and monotone nondecreasing activation functions. We then give two GES conditions for the neural networks whose activation functions may not be monotone nondecreasing. We also provide a Lyapunov diagonal stability condition, without the nonsingularity requirement for the connection weight matrices, to ascertain the GES of the neural networks with globally Lipschitz continuous and monotone nondecreasing activation functions. This Lyapunov diagonal stability condition generalizes and unifies many of the existing GAS and GES results. Moreover, two higher exponential convergence rates are estimated.

Journal ArticleDOI
07 Aug 2002
TL;DR: Based on globally Lipschitz continuous activation functions, new conditions ensuring the existence, uniqueness, and global robust exponential stability of the equilibrium point of interval neural networks with delays are obtained.
Abstract: In this paper, based on globally Lipschitz continuous activation functions, new conditions ensuring existence, uniqueness and global robust exponential stability of the equilibrium point of interval neural networks with delays are obtained. The delayed Hopfield network, bidirectional associative memory network and cellular neural network are special cases of the network model considered. All the results obtained are generalizations of some recent results reported in the literature for neural networks with constant delays.