
Showing papers on "Recurrent neural network" published in 1999


Proceedings ArticleDOI
01 Jan 1999
TL;DR: This work identifies a weakness of LSTM networks processing continual input streams without explicitly marked sequence ends and proposes an adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources.
Abstract: Long short-term memory (LSTM) can solve many tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams without explicitly marked sequence ends. Without resets, the internal state values may grow indefinitely and eventually cause the network to break down. Our remedy is an adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review an illustrative benchmark problem on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve a continual version of that problem. LSTM with forget gates, however, easily solves it in an elegant way.

2,961 citations
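
A minimal sketch of a single time step of an LSTM cell with an adaptive forget gate, in the spirit of the abstract above, using NumPy only. The weight layout, gate names, and toy dimensions are assumptions for illustration, not the paper's notation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # One time step of an LSTM cell with a forget gate (hedged sketch).
    # W maps the concatenated [h_prev, x] to the four gate pre-activations.
    z = np.concatenate([h_prev, x])
    f = sigmoid(W["f"] @ z + b["f"])      # forget gate: learns when to reset the cell
    i = sigmoid(W["i"] @ z + b["i"])      # input gate
    o = sigmoid(W["o"] @ z + b["o"])      # output gate
    g = np.tanh(W["c"] @ z + b["c"])      # candidate cell update
    c = f * c_prev + i * g                # without f, c_prev could accumulate unboundedly
    h = o * np.tanh(c)
    return h, c

# toy dimensions and a continual input stream
n_in, n_hid = 3, 5
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(100):                      # no sequence ends: the forget gate keeps c bounded
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)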


BookDOI
01 Jan 1999
TL;DR: Recurrent Neural Networks: Design and Applications reflects the tremendous, worldwide interest in and virtually unlimited potential of RNNs - providing a summary of the design, applications, current research, and challenges of this dynamic and promising field.
Abstract: From the Publisher: With applications ranging from motion detection to financial forecasting, recurrent neural networks (RNNs) have emerged as an interesting and important part of neural network research. Recurrent Neural Networks: Design and Applications reflects the tremendous, worldwide interest in and virtually unlimited potential of RNNs - providing a summary of the design, applications, current research, and challenges of this dynamic and promising field.

551 citations


Journal ArticleDOI
01 Nov 1999
TL;DR: A family of novel architectures which can learn to make predictions based on variable ranges of dependencies are introduced, extending recurrent neural networks and introducing non-causal bidirectional dynamics to capture both upstream and downstream information.
Abstract: Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three-dimensional structure, as well as its function. Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids, centered at the prediction site. Although a fixed small window avoids overfitting problems, it does not permit capturing variable long-range information. Results: We introduce a family of novel architectures which can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the input and output levels. While our system currently achieves an overall performance close to 76% correct prediction, at least comparable to the best existing systems, the main emphasis here is on the development of new algorithmic ideas.

509 citations
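
A toy sketch of the non-causal, bidirectional idea described above: a forward recurrence accumulates upstream context, a backward recurrence accumulates downstream context, and both feed the per-position prediction. The shapes, the input encoding, and the simple softmax output layer are assumptions, not the authors' exact architecture.

import numpy as np

def brnn_predict(X, Wf, Wxf, Wb, Wxb, Wo):
    # X: (T, d) sequence of input vectors (e.g. amino-acid encodings)
    T, _ = X.shape
    n = Wf.shape[0]
    hf = np.zeros((T, n)); hb = np.zeros((T, n))
    for t in range(T):                       # forward (upstream) context
        prev = hf[t - 1] if t > 0 else np.zeros(n)
        hf[t] = np.tanh(Wf @ prev + Wxf @ X[t])
    for t in reversed(range(T)):             # backward (downstream) context
        nxt = hb[t + 1] if t < T - 1 else np.zeros(n)
        hb[t] = np.tanh(Wb @ nxt + Wxb @ X[t])
    logits = np.hstack([hf, hb]) @ Wo        # combine both directions at each position
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # per-residue class probabilities (helix/sheet/coil)

rng = np.random.default_rng(1)
T, d, n, classes = 20, 21, 8, 3
probs = brnn_predict(rng.standard_normal((T, d)),
                     rng.standard_normal((n, n)) * 0.1, rng.standard_normal((n, d)) * 0.1,
                     rng.standard_normal((n, n)) * 0.1, rng.standard_normal((n, d)) * 0.1,
                     rng.standard_normal((2 * n, classes)) * 0.1)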


Journal ArticleDOI
TL;DR: This research employs standard backpropagation training techniques for a recurrent neural network in the task of learning to predict the next character in a simple deterministic CFL (DCFL), and shows that an RNN can learn to recognize the structure of a simple DCFL.
Abstract: Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states reflect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its specific configuration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.

250 citations
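
A sketch of the kind of prediction task involved, using the deterministic context-free language a^n b^n as an assumed example: the network must predict the next character, which requires something like a counter in its state. Only data generation and an Elman-style forward pass are shown; the training loop (standard backpropagation through time) is omitted.

import numpy as np

def anbn_string(n):
    # one string from the DCFL {a^n b^n}, terminated by an end marker
    return "a" * n + "b" * n + "$"

chars = "ab$"
one_hot = {c: np.eye(len(chars))[i] for i, c in enumerate(chars)}

def elman_forward(s, Wxh, Whh, Why):
    # simple recurrent (Elman) network predicting the next character at each step
    h = np.zeros(Whh.shape[0])
    preds = []
    for c in s[:-1]:
        h = np.tanh(Wxh @ one_hot[c] + Whh @ h)   # hidden state must effectively count the a's
        y = Why @ h
        e = np.exp(y - y.max())
        preds.append(e / e.sum())                 # distribution over the next character
    return preds

rng = np.random.default_rng(2)
nh = 4
Wxh, Whh, Why = (rng.standard_normal((nh, 3)) * 0.5,
                 rng.standard_normal((nh, nh)) * 0.5,
                 rng.standard_normal((3, nh)) * 0.5)
print(elman_forward(anbn_string(3), Wxh, Whh, Why)[0])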


Journal ArticleDOI
TL;DR: The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.
Abstract: Artificial neural networks have been used to support applications across a variety of business and scientific disciplines in recent years. Artificial neural network applications are frequently viewed as black boxes which mystically determine complex patterns in data. Contrary to this popular view, neural network designers typically perform extensive knowledge engineering and incorporate a significant amount of domain knowledge into artificial neural networks. This paper details heuristics that utilize domain knowledge to produce an artificial neural network with optimal output performance. The effect of using the heuristics on neural network performance is illustrated by examining several applied artificial neural network systems. Identification of an optimal performance artificial neural network requires that a full factorial design with respect to the quantity of input nodes, hidden nodes, hidden layers, and learning algorithm be performed. The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.

185 citations
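
A sketch of what a full factorial architecture search over those design factors looks like, and why heuristics that prune it matter. The factor levels and the evaluation stub are placeholders, not the paper's settings.

from itertools import product

# hypothetical factor levels for the design
input_nodes   = [5, 10, 20]
hidden_nodes  = [2, 4, 8, 16]
hidden_layers = [1, 2]
algorithms    = ["backprop", "backprop+momentum", "quickprop"]

def evaluate(cfg):
    # placeholder: train a network with this configuration and return validation error
    return 0.0

full_design = list(product(input_nodes, hidden_nodes, hidden_layers, algorithms))
print(len(full_design), "configurations in the full factorial design")

# a domain-knowledge heuristic might fix the input encoding and bound the hidden size,
# leaving only a small fraction of the grid to be evaluated
pruned = [cfg for cfg in full_design if cfg[0] == 10 and cfg[1] <= 8]
print(len(pruned), "configurations after heuristic pruning")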


Journal ArticleDOI
TL;DR: A new gradient-based procedure called recursive backpropagation (RBP) is proposed whose online version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other online training methods.
Abstract: This paper focuses on online learning procedures for locally recurrent neural nets with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLN). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose online version, causal recursive backpropagation (CRBP), has some advantages over other online methods. CRBP includes as particular cases backpropagation (BP), temporal BP, the Back-Tsoi algorithm (1991), among others, thereby providing a unifying view on gradient calculation for recurrent nets with local feedback. The only learning method known for locally recurrent nets with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and faster convergence with respect to the Back-Tsoi algorithm. The computational complexity of the CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with CRBP. CRBP exhibits similar performances and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time while RTRL is not local in space.

170 citations
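
A sketch of the locally recurrent building block discussed above: a synapse that is an IIR filter rather than a single weight, so feedback stays local to each connection. Filter orders and coefficients are arbitrary illustrations, and the CRBP gradient computation itself is not reproduced here.

import numpy as np

class IIRSynapse:
    """Synapse realised as an IIR filter: y(t) = sum_k b_k x(t-k) + sum_k a_k y(t-k)."""
    def __init__(self, b, a):
        self.b = np.asarray(b)          # feedforward (MA) coefficients
        self.a = np.asarray(a)          # feedback (AR) coefficients -> local recurrence
        self.x_hist = np.zeros(len(b))
        self.y_hist = np.zeros(len(a))

    def step(self, x):
        self.x_hist = np.roll(self.x_hist, 1); self.x_hist[0] = x
        y = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1); self.y_hist[0] = y
        return y

# one neuron of an IIR-MLP: sum several IIR synapses, then apply a static nonlinearity
synapses = [IIRSynapse([0.5, 0.2], [0.3]), IIRSynapse([0.1, 0.4], [-0.2])]
u1 = np.sin(np.linspace(0, 6, 50))
u2 = np.cos(np.linspace(0, 6, 50))
out = [np.tanh(sum(s.step(u[t]) for s, u in zip(synapses, (u1, u2)))) for t in range(50)]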


Journal ArticleDOI
TL;DR: A novel artificial neural-network decision tree algorithm (ANN-DT) is proposed, which extracts binary decision trees from a trained neural network; its attribute selection criterion is shown to have significant benefits in certain cases when compared with the standard criterion of minimum weighted variance over the branches.
Abstract: Although artificial neural networks can represent a variety of complex systems with a high degree of accuracy, these connectionist models are difficult to interpret. This significantly limits the applicability of neural networks in practice, especially where a premium is placed on the comprehensibility or reliability of systems. A novel artificial neural-network decision tree algorithm (ANN-DT) is therefore proposed, which extracts binary decision trees from a trained neural network. The ANN-DT algorithm uses the neural network to generate outputs for samples interpolated from the training data set. In contrast to existing techniques, ANN-DT can extract rules from feedforward neural networks with continuous outputs. These rules are extracted from the neural network without making assumptions about the internal structure of the neural network or the features of the data. A novel attribute selection criterion based on a significance analysis of the variables on the neural-network output is examined. It is shown to have significant benefits in certain cases when compared with the standard criterion of minimum weighted variance over the branches. In three case studies the ANN-DT algorithm compared favorably with CART, a standard decision tree algorithm.

138 citations
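
A hedged sketch of the extraction idea: use the trained network as an oracle on points interpolated from the training data, then fit a binary decision tree to the network's outputs. scikit-learn models stand in for the paper's own network and tree-induction procedure, and the interpolation scheme is a simplification.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]          # toy continuous target

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)

# sample new points by interpolating between random pairs of training examples
i, j = rng.integers(0, len(X), (2, 1000))
lam = rng.uniform(0, 1, (1000, 1))
X_interp = lam * X[i] + (1 - lam) * X[j]

# the network labels the interpolated points; a tree is fitted to those labels
tree = DecisionTreeRegressor(max_depth=4).fit(X_interp, net.predict(X_interp))
print(tree.score(X, net.predict(X)))             # fidelity of the tree to the network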


Journal ArticleDOI
TL;DR: For combinatorial optimization with neural networks, this paper shows that TCNN's and DRNN's do have global searching ability and their attracting set encloses not only local minima, but also global minima for the commonly used objective functions.
Abstract: This paper aims to theoretically prove that both transiently chaotic neural networks (TCNN's) and discrete-time recurrent neural networks (DRNN's) have a global attracting set which ensures that the neural networks carry out a global search. A significant property of TCNN's and DRNN's is that their attracting sets are generated by a bounded fixed point, which is the unique repeller when absolute values of the self-feedback connection weights in TCNN and the difference time in DRNN are sufficiently large. We provide sufficient conditions under which the neural networks have a trapping region where the global unstable set of the fixed point actually evolves into a global attracting set. We also prove the coexistence of an attracting set and a transversal homoclinic orbit in the same region, which may result in complicated chaotic dynamics. For combinatorial optimization with neural networks, this paper shows that TCNN's and DRNN's do have global searching ability and their attracting set encloses not only local minima, but also global minima for the commonly used objective functions. To demonstrate the theoretical results of this paper, several numerical simulations are provided as illustrative examples.

126 citations
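
A rough sketch of transiently chaotic dynamics in the spirit described above, following the commonly cited Chen and Aihara formulation, which may differ in detail from this paper's models: the self-feedback term starts large enough to induce a chaotic search and decays, so the network eventually settles toward an attractor. All constants are illustrative.

import numpy as np

rng = np.random.default_rng(4)
n = 10
W = rng.standard_normal((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # toy symmetric weights
I = rng.standard_normal(n) * 0.1

k, alpha, eps, I0, beta = 0.9, 0.015, 0.004, 0.65, 0.002
y = rng.uniform(-1, 1, n)
z = 0.08                                     # self-feedback: large -> chaotic search, decays -> convergent

for t in range(2000):
    x = 1.0 / (1.0 + np.exp(-y / eps))
    y = k * y + alpha * (W @ x + I) - z * (x - I0)
    z *= (1.0 - beta)                        # the transient chaos dies out as z shrinks
print(np.round(x))                           # state after the search settles (approximately binary)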


Journal ArticleDOI
TL;DR: This work proposes DRAMA, a connectionist architecture for dynamic control and learning of autonomous robots: a time-delay recurrent neural network with Hebbian update rules. A teacher-learner scenario, based on mutual following of the two agents, enables transmission of a vocabulary from one robot to the other.
Abstract: Adaptation to their environment is a fundamental capability for living agents, from which autonomous robots could also benefit. This work proposes a connectionist architecture, DRAMA, for dynamic control and learning of autonomous robots. DRAMA stands for dynamical recurrent associative memory architecture. It is a time-delay recurrent neural network, using Hebbian update rules. It allows learning of spatio-temporal regularities and time series in discrete sequences of inputs, in the presence of a significant amount of noise. The first part of this paper gives the mathematical description of the architecture and analyses its performance theoretically and through numerical simulations. The second part of this paper reports on the implementation of DRAMA in simulated and physical robotic experiments. Training and rehearsal of the DRAMA architecture are computationally fast and inexpensive, which makes the model particularly suitable for controlling 'computationally challenged' robots. In the experiments, we use a basic hardware system with very limited computational capability and show that our robot can carry out real time computation and on-line learning of relatively complex cognitive tasks. In these experiments, two autonomous robots wander randomly in a fixed environment, collecting information about its elements. By mutually associating information of their sensors and actuators, they learn about physical regularities underlying their experience of varying stimuli. The agents learn also from their mutual interactions. We use a teacher-learner scenario, based on mutual following of the two agents, to enable transmission of a vocabulary from one robot to the other.

122 citations
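
A very small sketch of the core mechanism: a time-delay associative memory whose weights are updated by a Hebbian rule correlating delayed inputs with current ones, so temporal regularities in a discrete input sequence are stored. The update form, clipping, and parameters are assumptions for illustration, not the DRAMA equations.

import numpy as np

rng = np.random.default_rng(5)
n, delay, eta = 16, 1, 0.1
W = np.zeros((n, n))                       # associative weights from delayed units to current units

def random_pattern():
    return (rng.random(n) < 0.2).astype(float)

sequence = [random_pattern() for _ in range(5)] * 20   # repeated discrete sequence with regularities
history = []
for x in sequence:
    if len(history) >= delay:
        pre = history[-delay]
        W += eta * np.outer(x, pre)        # Hebbian association: earlier pattern -> later pattern
        W = np.clip(W, 0, 1)               # keep weights bounded (simplification)
    history.append(x)

# recall: given one pattern, the memory predicts the pattern that tended to follow it
cue = sequence[0]
prediction = (W @ cue > 0.5 * cue.sum()).astype(float)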


Journal ArticleDOI
TL;DR: It is argued that appropriately structured recurrent neural networks can provide conveniently parameterized dynamic models for many nonlinear systems for use in adaptive control.
Abstract: The paper reports the application of recently developed adaptive control techniques based on neural networks to induction motor control. This case study represents one of the more difficult control problems due to the complex, nonlinear, and time-varying dynamics of the motor and the unavailability of full-state measurements. A partial solution is first presented based on a single-input single-output (SISO) algorithm employing static multilayer perceptron (MLP) networks. A novel technique is subsequently described which is based on a recurrent neural network employed as a dynamical model of the plant. Recent stability results for this algorithm are reported. The technique is applied to multi-input multi-output (MIMO) control of the motor. A simulation study of both methods is presented. It is argued that appropriately structured recurrent neural networks can provide conveniently parameterized dynamic models for many nonlinear systems for use in adaptive control.

118 citations


Journal ArticleDOI
Liang Jin, M.M. Gupta
TL;DR: Two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network.
Abstract: To avoid unstable phenomena during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. Based on the explicit stability conditions, in the multiplier method these conditions are introduced into the iterative error index, and the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived using the stability conditions. With these stable dynamic backpropagation algorithms, any analog target pattern may be implemented by a steady output vector which is a nonlinear vector function of the stable equilibrium point. The applicability of the approaches presented is illustrated through both analog and binary pattern storage examples.

Journal ArticleDOI
01 Feb 1999
TL;DR: A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems.
Abstract: This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable neural networks, the number of basis functions can be either increased or decreased with time, according to specified design strategies, so that the network will not overfit or underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an adaptive control scheme is presented. The location of the centers and the determination of the widths of the GRBFs in the variable neural network are analyzed to make a compromise between orthogonality and smoothness. The weight-adaptive laws developed using the Lyapunov synthesis approach guarantee the stability of the overall control scheme, even in the presence of modeling error(s). The tracking errors converge to the required accuracy through the adaptive control algorithm derived by combining the variable neural network and Lyapunov synthesis techniques. The operation of an adaptive control scheme using the variable neural network is demonstrated using two simulated examples.
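
A compact sketch of the "variable" idea in a pure approximation setting: Gaussian radial basis functions are added only where the current approximation error is large, so the basis grows with the data rather than being fixed in advance. The thresholds, width, and growth rule are illustrative, and the paper's Lyapunov-based adaptive control laws are not shown.

import numpy as np

class VariableGRBF:
    def __init__(self, width=0.3, add_threshold=0.2, lr=0.5):
        self.centers, self.weights = [], []
        self.width, self.thr, self.lr = width, add_threshold, lr

    def _phi(self, x):
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.weights, self._phi(x)))

    def update(self, x, target):
        err = target - self.predict(x)
        if abs(err) > self.thr:                 # grow: add a basis function at the new point
            self.centers.append(np.atleast_1d(x).astype(float))
            self.weights = np.append(self.weights, err)
        else:                                   # otherwise adapt the existing weights only
            self.weights = self.weights + self.lr * err * self._phi(x)

net = VariableGRBF()
for x in np.linspace(-2, 2, 200):
    net.update(x, np.sin(2 * x))                # unknown nonlinearity being approximated
print(len(net.centers), "basis functions allocated")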

Journal ArticleDOI
TL;DR: A notion of approximation for interpretations is defined and it is proved that there exists a 3-layered feedforward neural network that approximates the calculation of T_P for a given first order acyclic logic program P with an injective level mapping arbitrarily well.
Abstract: In [1] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feedforward neural network that approximates the calculation of T_P for a given first order acyclic logic program P with an injective level mapping arbitrarily well. Extending the feedforward network by recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. This result is proven by taking advantage of the fact that for acyclic logic programs the function T_P is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space R with Euclidean distance, a real valued function f_P can be defined which corresponds to T_P and is continuous as well as a contraction. Consequently it can be approximated by an appropriately chosen class of feedforward neural networks.
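
For the propositional case referenced above ([1]), the operator being computed is the immediate consequence operator T_P. Below is a short sketch of iterating T_P to its fixed point for a toy acyclic program; the neural encoding itself (roughly, one hidden unit per clause and recurrent connections feeding outputs back to inputs) is only indicated in the closing comment, not implemented.

# A program is a set of clauses (head, [body atoms]); facts have an empty body.
program = [("a", []), ("b", ["a"]), ("c", ["a", "b"])]

def T_P(program, interpretation):
    # immediate consequence operator: heads whose bodies are true under the interpretation
    return {head for head, body in program if all(atom in interpretation for atom in body)}

# iterate T_P from the empty interpretation; for acyclic programs this reaches the fixed point
I = set()
while True:
    nxt = T_P(program, I)
    if nxt == I:
        break
    I = nxt
print(I)   # {'a', 'b', 'c'}
# In the neural encoding, each clause becomes a hidden unit that fires when its body atoms
# are active, and recurrent connections feed the output layer back to the input layer,
# so iterating the network mimics the iteration above.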

Journal ArticleDOI
TL;DR: The proposed Lagrangian network is shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.
Abstract: A recurrent neural network, called the Lagrangian network, is presented for the kinematic control of redundant robot manipulators. The optimal redundancy resolution is determined by the Lagrangian network through real-time solution to the inverse kinematics problem formulated as a quadratic optimization problem. While the signal for a desired velocity of the end-effector is fed into the inputs of the Lagrangian network, it generates the joint velocity vector of the manipulator in its outputs along with the associated Lagrange multipliers. The proposed Lagrangian network is shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.
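
A hedged sketch of the underlying computation: minimize the squared joint-velocity norm subject to J qdot = xdot, solved here by integrating simple primal-dual (Lagrangian saddle-point) dynamics. The gains, the Euler integration, and the toy Jacobian are illustrative, not the paper's network circuit.

import numpy as np

rng = np.random.default_rng(6)
J = rng.standard_normal((3, 7))        # Jacobian of a redundant 7-DOF arm (toy values)
xdot = np.array([0.1, -0.05, 0.02])    # desired end-effector velocity

qdot = np.zeros(7)                     # primal variable: joint velocities
lam = np.zeros(3)                      # dual variable: Lagrange multipliers
dt = 0.01
for _ in range(20000):                 # Euler integration of the network dynamics
    qdot += dt * (-(qdot + J.T @ lam)) # gradient descent on the Lagrangian in qdot
    lam  += dt * (J @ qdot - xdot)     # gradient ascent on the constraint residual

# compare with the closed-form minimum-norm (pseudoinverse) solution
print(np.allclose(qdot, np.linalg.pinv(J) @ xdot, atol=1e-3))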

Proceedings ArticleDOI
10 Jul 1999
TL;DR: An efficient second order algorithm for training feedforward neural networks that has a similar convergence rate as the Levenberg-Marquardt (LM) method but is less computationally intensive and requires less memory.
Abstract: An efficient second order algorithm for training feedforward neural networks is presented. The algorithm has a similar convergence rate as the Levenberg-Marquardt (LM) method but is less computationally intensive and requires less memory. This is especially important for large neural networks where the LM algorithm becomes impractical. The algorithm was verified with several examples.
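
For context, a sketch of the standard Levenberg-Marquardt step that such algorithms are compared against: the Jacobian of the residuals drives a damped Gauss-Newton update, and forming the full J^T J matrix is what becomes costly for large networks. The toy curve-fitting problem stands in for network training; it is not the paper's algorithm.

import numpy as np

def lm_step(residual_fn, jacobian_fn, w, mu):
    # one Levenberg-Marquardt update: (J^T J + mu I) dw = -J^T r
    r = residual_fn(w)
    J = jacobian_fn(w)
    H = J.T @ J + mu * np.eye(len(w))     # storing and factoring J^T J is the bottleneck
    return w + np.linalg.solve(H, -J.T @ r)

# toy fitting problem: residuals of a 1-D model y = w0 * exp(w1 * x)
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda w: w[0] * np.exp(w[1] * x) - y
jacobian = lambda w: np.stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)], axis=1)

w, mu = np.array([1.0, 0.0]), 1e-2
for _ in range(20):
    w = lm_step(residual, jacobian, w, mu)
print(w)   # should approach [2.0, -1.5]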

Journal ArticleDOI
01 Sep 1999
TL;DR: In this article, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks, which is suitable for direct implementation in very large scale integration (VLSI) systems.
Abstract: Neurofuzzy systems-the combination of artificial neural networks with fuzzy logic-have become useful in many application domains. However, conventional neurofuzzy models usually need enhanced representation power for applications that require context and state (e.g., speech, time series prediction, control). Some of these applications can be readily modeled as finite state automata. Previously, it was proved that deterministic finite state automata (DFA) can be synthesized by or mapped into recurrent neural networks by directly programming the DFA structure into the weights of the neural network. Based on those results, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks. Furthermore, this mapping is suitable for direct implementation in very large scale integration (VLSI), i.e., the encoding of FFA as a generalization of the encoding of DFA in VLSI systems. The synthesis method requires FFA to undergo a transformation prior to being mapped into recurrent networks. The neurons are provided with an enriched functionality in order to accommodate a fuzzy representation of FFA states. This enriched neuron functionality also permits fuzzy parameters of FFA to be directly represented as parameters of the neural network. We also prove the stability of fuzzy finite state dynamics of the constructed neural networks for finite values of network weight and, through simulations, give empirical validation of the proofs. Hence, we prove various knowledge equivalence representations between neural and fuzzy systems and models of automata.

Journal ArticleDOI
TL;DR: The results indicate that best performance can be achieved by the combination of the recurrent neural network and the linear error model for modeling the blood glucose metabolism of a diabetic.
Abstract: We study the application of neural networks to modeling the blood glucose metabolism of a diabetic. In particular we consider recurrent neural networks and time series convolution neural networks which we compare to linear models and to nonlinear compartment models. We include a linear error model to take into account the uncertainty in the system and for handling missing blood glucose observations. Our results indicate that best performance can be achieved by the combination of the recurrent neural network and the linear error model.

Journal ArticleDOI
TL;DR: The use of genetic algorithms (GAs) to train Elman and Jordan networks for dynamic systems identification is described; the GA is an efficient, guided, random search procedure that can simultaneously obtain the optimal weights of both the feedforward and feedback connections.
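
A compact sketch of the approach: each individual is a flat weight vector for a small Elman network, and a genetic algorithm (selection, crossover, mutation) searches weight space directly, so feedforward and recurrent weights are optimized together without gradients. The system being identified, the fitness function, and the GA parameters are all placeholders.

import numpy as np

rng = np.random.default_rng(7)
n_in, n_hid = 1, 4
n_w = n_hid * n_in + n_hid * n_hid + n_hid       # Wx, Wh (recurrent), Wo

def unpack(w):
    a = n_hid * n_in; b = a + n_hid * n_hid
    return w[:a].reshape(n_hid, n_in), w[a:b].reshape(n_hid, n_hid), w[b:]

def simulate(w, u):
    Wx, Wh, Wo = unpack(w)
    h, ys = np.zeros(n_hid), []
    for ut in u:
        h = np.tanh(Wx @ np.array([ut]) + Wh @ h)
        ys.append(Wo @ h)
    return np.array(ys)

u = np.sin(np.linspace(0, 12, 120))
target = 0.6 * np.roll(u, 1); target[0] = 0       # toy dynamic system to identify

def fitness(w):
    return -np.mean((simulate(w, u) - target) ** 2)

pop = rng.standard_normal((40, n_w))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]        # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(n_w) < 0.5               # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_w))  # mutation
    pop = np.array(children)
print(max(fitness(ind) for ind in pop))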

Journal ArticleDOI
TL;DR: A new approach for constrained multivariable predictive control is proposed, based on the use of a recurrent neural network as a non-linear prediction model of the plant under control; the model is a representation of the system in state-space form.

Proceedings ArticleDOI
10 Jul 1999
TL;DR: A method for designing multistep-ahead predictors using dynamic recurrent neural networks based on a dynamic gradient descent learning algorithm is presented and its effectiveness is demonstrated through applications to an open-loop unstable process system, namely a heat-exchanger.
Abstract: In numerous problems, such as in process control utilizing predictive control algorithms, it is required that a variable of interest be predicted multiple time-steps ahead into the future without having measurements of that variable in the horizon of interest. Additionally, in applications involving forecasting and fault diagnosis the availability of multistep-ahead predictors (MSPs) is desired. MSPs are difficult to design because the lack of measurements in the prediction horizon necessitates the recursive use of single-step-ahead predictors for reaching the final point in the horizon. Even small prediction errors resulting from noise at each point in the horizon accumulate and propagate, often resulting in poor prediction accuracy. We present a method for designing MSPs using dynamic recurrent neural networks. The method is based on a dynamic gradient descent learning algorithm and its effectiveness is demonstrated through applications to an open-loop unstable process system, namely a heat-exchanger.
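
A short sketch of why multistep-ahead prediction is hard when only a single-step predictor is available: the predictor must be applied recursively, feeding its own outputs back as inputs, so errors compound over the horizon. The one-step model here is a stand-in, not the paper's recurrent network.

import numpy as np

def one_step_model(window):
    # placeholder single-step-ahead predictor (a trained recurrent network would go here)
    return 0.9 * window[-1] - 0.2 * window[-2]

def multistep_predict(history, horizon):
    window = list(history)
    preds = []
    for _ in range(horizon):
        y = one_step_model(window)
        preds.append(y)
        window.append(y)          # no measurement available: feed the prediction back in
    return np.array(preds)

history = [1.0, 0.8, 0.55]
print(multistep_predict(history, horizon=5))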

Journal ArticleDOI
TL;DR: The network and training stability is addressed by exploiting the BDRNN structure to directly monitor and maintain stability during weight updates by developing a functional measure of system stability that augments the cost function being minimized.
Abstract: This paper deals with a discrete-time recurrent neural network (DTRNN) with a block-diagonal feedback weight matrix, called the block-diagonal recurrent neural network (BDRNN), that allows a simplified approach to online training and to addressing network and training stability issues. The structure of the BDRNN is exploited to modify the conventional backpropagation through time (BPTT) algorithm to reduce its storage requirement by a numerically stable method of recomputing the network state variables. The network and training stability is addressed by exploiting the BDRNN structure to directly monitor and maintain stability during weight updates by developing a functional measure of system stability that augments the cost function being minimized. Simulation results are presented to demonstrate the performance of the BDRNN architecture, its training algorithm, and the stabilization method.
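
A sketch of the block-diagonal structure and the kind of stability monitoring it makes cheap: the feedback matrix consists of independent 2x2 blocks, so checking and constraining each block's spectral radius during training is straightforward. The block values and the simple scaling rule are illustrative, not the paper's augmented cost function.

import numpy as np

def make_block_diag(blocks):
    n = 2 * len(blocks)
    W = np.zeros((n, n))
    for k, B in enumerate(blocks):
        W[2*k:2*k+2, 2*k:2*k+2] = B
    return W

def stabilize(blocks, rho_max=0.95):
    # per-block stability check: scale any 2x2 block whose spectral radius exceeds the bound
    out = []
    for B in blocks:
        rho = max(abs(np.linalg.eigvals(B)))
        out.append(B if rho <= rho_max else B * (rho_max / rho))
    return out

rng = np.random.default_rng(8)
blocks = stabilize([rng.standard_normal((2, 2)) for _ in range(3)])
W, Wx = make_block_diag(blocks), rng.standard_normal((6, 2)) * 0.1

x = np.zeros(6)
for t in range(50):                       # recurrent state update with block-diagonal feedback
    x = np.tanh(W @ x + Wx @ rng.standard_normal(2))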

Journal ArticleDOI
TL;DR: The conditions for absolute stability and dissipativity of continuous-time recurrent neural networks with two hidden layers with multilayer perceptron nonlinearity are presented and can be employed for nonlinear H∞ control and imposing closed-loop stability in dynamic backpropagation.
Abstract: Sufficient conditions for absolute stability and dissipativity of continuous-time recurrent neural networks with two hidden layers are presented. In the autonomous case this is related to a Lur'e system with multilayer perceptron nonlinearity. Such models are obtained after parametrizing general nonlinear models and controllers by a multilayer perceptron with one hidden layer and representing the control scheme in standard plant form. The conditions are expressed as matrix inequalities and can be employed for nonlinear H∞ control and imposing closed-loop stability in dynamic backpropagation.

BookDOI
01 Jan 1999
TL;DR: Fundamentals of Artificial Neural Networks (Z. Waszczyszyn) and Applications of Neural Networks in Modeling and Design of Structural Systems (P. Hajela).
Abstract: Contents: Fundamentals of Artificial Neural Networks (Z. Waszczyszyn); Genetic Algorithms and Neural Networks (W. M. Jenkins); Applications of Neural Networks in Modeling and Design of Structural Systems (P. Hajela); The Neural Network Approach in Plasticity and Fracture Mechanics (P. D. Panagiotopoulos, Z. Waszczyszyn); Neural Networks in Advanced Computational Problems (B. H. V. Topping et al.); Neural Networks and Fuzzy Logic in Active Control of Mechanical Systems (P. Venini).

Journal ArticleDOI
01 Jan 1999
TL;DR: A real-time iterative learning algorithm is developed and used to train the RNN and ensures that the learning error converges to zero, as a result, the stability of the control system is always assured.
Abstract: The paper discusses a class of nonlinear discrete-time sliding-mode control schemes. The control system is designed on the basis of a discrete Lyapunov function. Part of the equivalent control is estimated by an on-line estimator, which is realised by a recurrent neural network (RNN) because of its outstanding ability to model dynamical processes. A real-time iterative learning algorithm is developed and used to train the RNN. Unlike the conventional learning algorithms for RNNs, the proposed algorithm ensures that the learning error converges to zero. As a result, the stability of the control system is always assured. In addition, this learning algorithm can be applied for on-line estimation. The proposed controller eliminates chattering and provides sliding-mode motion on the selected manifolds in the state space. Numerical examples are given and simulation results strongly demonstrate that the control scheme is very effective.

Proceedings Article
29 Nov 1999
TL;DR: A form of plasticity in which synapses depress when a presynaptic spike is followed by a postsynaptic spike, and potentiate with the opposite temporal ordering can be approximated by learning rules based on firing rates.
Abstract: We analyze the conditions under which synaptic learning rules based on action potential timing can be approximated by learning rules based on firing rates. In particular, we consider a form of plasticity in which synapses depress when a presynaptic spike is followed by a postsynaptic spike, and potentiate with the opposite temporal ordering. Such differential anti-Hebbian plasticity can be approximated under certain conditions by a learning rule that depends on the time derivative of the postsynaptic firing rate. Such a learning rule acts to stabilize persistent neural activity patterns in recurrent neural networks.
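
A tiny sketch of the rate-based approximation described above: the weight change is proportional to the presynaptic rate times the negative time derivative of the postsynaptic rate, which is the form the spike-timing-dependent rule reduces to under the stated conditions. The rates, time step, and learning rate are invented for illustration.

import numpy as np

dt, eta = 0.01, 0.5
t = np.arange(0, 2, dt)
pre_rate  = 10 + 5 * np.sin(2 * np.pi * t)        # presynaptic firing rate (Hz)
post_rate = 12 + 4 * np.sin(2 * np.pi * t + 0.3)  # postsynaptic firing rate (Hz)

w = 0.5
for k in range(1, len(t)):
    dpost_dt = (post_rate[k] - post_rate[k - 1]) / dt
    # differential anti-Hebbian rule: depress while the postsynaptic rate is rising
    w += -eta * pre_rate[k] * dpost_dt * dt * 1e-3
print(w)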

Journal ArticleDOI
01 May 1999
TL;DR: Two neural network approaches to minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots are presented, with the second approach being better in terms of accuracy and optimality.
Abstract: This paper presents two neural network approaches to minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots. Three recurrent neural networks are applied for determining a joint velocity vector with its maximum absolute value component being minimal among all possible joint velocity vectors corresponding to the desired end-effector velocity. In each proposed neural network approach, two cooperating recurrent neural networks are used. The first approach employs two Tank-Hopfield networks for linear programming. The second approach employs two two-layer recurrent neural networks for quadratic programming and linear programming, respectively. Both the minimal 2-norm and infinity-norm of joint velocity vector can be obtained from the output of the recurrent neural networks. Simulation results demonstrate that the proposed approaches are effective with the second approach being better in terms of accuracy and optimality.
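
A sketch of the optimization problem the networks solve, posed directly as a linear program and handed to SciPy for comparison: minimize the largest joint speed t subject to J qdot = xdot and |qdot_i| <= t. The Jacobian and desired velocity are toy values, and the recurrent-network dynamics themselves are not reproduced.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(9)
J = rng.standard_normal((3, 7))                   # toy Jacobian of a redundant manipulator
xdot = np.array([0.1, -0.05, 0.02])

n = J.shape[1]
c = np.zeros(n + 1); c[-1] = 1.0                  # minimize t = max_i |qdot_i|
A_eq = np.hstack([J, np.zeros((3, 1))])           # J qdot = xdot
A_ub = np.block([[ np.eye(n), -np.ones((n, 1))],  # qdot_i - t <= 0
                 [-np.eye(n), -np.ones((n, 1))]]) # -qdot_i - t <= 0
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=xdot,
              bounds=[(None, None)] * n + [(0, None)])
qdot_inf = res.x[:n]
print(np.max(np.abs(qdot_inf)), "vs pseudoinverse:",
      np.max(np.abs(np.linalg.pinv(J) @ xdot)))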

01 Jan 1999
TL;DR: A survey of global minimization methods used for optimization of neural structures and network cost functions, including some aspects of genetic algorithms, is provided.
Abstract: Neural networks are usually trained using local, gradient-based procedures. Such methods frequently find suboptimal solutions, being trapped in local minima. Optimization of neural structures and global minimization methods applied to network cost functions have a strong influence on all aspects of network performance. Recently, genetic algorithms have frequently been combined with neural methods to select the best architectures and avoid the drawbacks of local minimization methods. Many other global minimization methods are suitable for that purpose, although they are used rather rarely in this context. This paper provides a survey of such global methods, including some aspects of genetic algorithms.

Journal ArticleDOI
TL;DR: Two improved discrete-time neural networks with faster convergence rate are proposed by use of scaling techniques, and can solve a linear inequality and equality system, and thus extend and modify existing neural networks for solving linear equations or inequalities.
Abstract: This paper presents two types of recurrent neural networks, continuous-time and discrete-time ones, for solving linear inequality and equality systems. In addition to the basic continuous-time and discrete-time neural-network models, two improved discrete-time neural networks with a faster convergence rate are proposed by use of scaling techniques. The proposed neural networks can solve a linear inequality and equality system, can solve a linear program and its dual simultaneously, and thus extend and modify existing neural networks for solving linear equations or inequalities. Rigorous proofs on the global convergence of the proposed neural networks are given. Digital realization of the proposed recurrent neural networks is also discussed.
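
A hedged discrete-time sketch of the kind of network involved: gradient dynamics on the energy (1/2)||max(Ax - b, 0)||^2 + (1/2)||Cx - d||^2 drive the state toward a point satisfying Ax <= b and Cx = d. The example system, step size, and stopping rule are illustrative rather than the paper's exact models.

import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 2.0]]);  b = np.array([4.0, 2.0])    # Ax <= b
C = np.array([[1.0, -1.0]]);              d = np.array([0.5])         # Cx  = d

x = np.zeros(2)
h = 0.1                                           # step size of the discrete-time network
for k in range(2000):
    grad = A.T @ np.maximum(A @ x - b, 0) + C.T @ (C @ x - d)
    x = x - h * grad                              # recurrent state update
    if np.linalg.norm(grad) < 1e-10:
        break

print(x, A @ x <= b + 1e-9, np.allclose(C @ x, d))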

Journal ArticleDOI
TL;DR: Parsimonious DRNN models are able to find an appropriate internal representation of various chaotic processes from the observation of a subset of the state variables of a dynamical system.

Journal ArticleDOI
TL;DR: An efficient technique that combines two popular adaptive filtering techniques, namely adaptive noise cancellation and adaptive signal enhancement, in a single recurrent neural network is proposed for the adaptive removal of ocular artifacts from EEG.
Abstract: The electroencephalogram (EEG) is susceptible to various large signal contaminations or artifacts. Ocular artifacts act as a major source of noise, making it difficult to distinguish normal brain activities from abnormal ones. In this letter, an efficient technique that combines two popular adaptive filtering techniques, namely adaptive noise cancellation and adaptive signal enhancement, in a single recurrent neural network is proposed for the adaptive removal of ocular artifacts from EEG. A real time recurrent learning algorithm is employed for training the proposed neural network, which converges faster to a lower mean squared error. This technique is suitable for real-time processing.
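
A minimal sketch of the adaptive-noise-cancellation half of such a scheme, using a plain normalized LMS filter rather than the letter's recurrent network trained with real-time recurrent learning: a measured EOG reference is adaptively filtered and subtracted from the contaminated EEG. The signals, sampling rate, and filter length are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(10)
T = 2000
t = np.arange(T) / 250.0                         # assumed 250 Hz sampling rate
eeg_clean = 20 * np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz brain activity (uV)
blinks = 100 * (rng.random(T) < 0.005)
eog = np.convolve(blinks, np.exp(-np.arange(100) / 20.0), mode="full")[:T]  # blink-like reference
eeg_measured = eeg_clean + 0.8 * eog             # EEG contaminated by the ocular artifact

# adaptive noise cancellation: adaptively filter the EOG reference and subtract it
L, mu = 8, 0.5
w = np.zeros(L)
cleaned = np.zeros(T)
for n in range(L - 1, T):
    ref = eog[n - L + 1:n + 1][::-1]             # most recent reference samples
    y = w @ ref                                  # estimated artifact in the EEG channel
    e = eeg_measured[n] - y                      # error = estimate of artifact-free EEG
    w += mu * e * ref / (1e-8 + ref @ ref)       # normalized LMS update (the letter uses RTRL on an RNN instead)
    cleaned[n] = e
print(np.std(eeg_measured - eeg_clean), "->", np.std(cleaned[L:] - eeg_clean[L:]))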