
Showing papers on "Recurrent neural network published in 2001"


Book
01 Aug 2001
TL;DR: This book shows researchers how recurrent neural networks can be implemented to expand the range of traditional signal processing techniques.
Abstract: From the Publisher: From mobile communications to robotics to space technology to medical instrumentation, new technologies are demanding increasingly complex methods of digital signal processing (DSP). This book shows researchers how recurrent neural networks can be implemented to expand the range of traditional signal processing techniques. Featuring original research on stability in neural networks, the book combines rigorous mathematical analysis with application examples. Experimental evidence as well as an overview of existing approaches are also included. Market: Engineers working in signal processing, neural networks, communications, nonlinear control, and time series analysis.

707 citations


Journal ArticleDOI
TL;DR: Long short-term memory (LSTM) variants are also the first RNNs to learn a simple context-sensitive language, namely a^n b^n c^n.
Abstract: Previous work on learning regular languages from exemplary training sequences showed that long short-term memory (LSTM) outperforms traditional recurrent neural networks (RNNs). We demonstrate LSTM's superior performance on context-free language benchmarks for RNNs, and show that it works even better than previous hardwired or highly specialized architectures. To the best of our knowledge, LSTM variants are also the first RNNs to learn a simple context-sensitive language, namely a^n b^n c^n.
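A minimal sketch of the next-symbol-prediction setup behind this benchmark (the encoding below, with explicit start/terminator symbols, is an assumption for illustration; the paper's exact alphabet handling may differ):

```python
# Generate a^n b^n c^n training sequences for next-symbol prediction.
import numpy as np

SYMBOLS = ["S", "a", "b", "c", "T"]   # S = start, T = terminator (assumed encoding)
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def anbncn_sequence(n):
    """Return the string S a^n b^n c^n T as a list of symbol indices."""
    seq = ["S"] + ["a"] * n + ["b"] * n + ["c"] * n + ["T"]
    return [IDX[s] for s in seq]

def one_hot(indices, size=len(SYMBOLS)):
    x = np.zeros((len(indices), size))
    x[np.arange(len(indices)), indices] = 1.0
    return x

# Inputs are the sequence, targets the sequence shifted by one step:
# the network must predict the next legal symbol at every position.
seq = anbncn_sequence(3)
inputs, targets = one_hot(seq[:-1]), one_hot(seq[1:])
print(inputs.shape, targets.shape)   # (10, 5) (10, 5)
```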

689 citations


Book ChapterDOI
21 Aug 2001
TL;DR: This paper makes meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems, and shows that the approach performs non-stationary time series prediction.
Abstract: This paper introduces the application of gradient descent methods to meta-learning. The concept of "meta-learning", i.e. of a system that improves or discovers a learning algorithm, has been of interest in machine learning for decades because of its appealing applications. Previous meta-learning approaches have been based on evolutionary methods and, therefore, have been restricted to small models with few free parameters. We make meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems. Our system derived complex well-performing learning algorithms from scratch. In this paper we also show that our approach performs non-stationary time series prediction.
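The key interface can be sketched as follows: the recurrent meta-learner receives, at each step, the current input together with the previous step's target, so that a learning algorithm can be encoded in the recurrent dynamics. The weights below are random placeholders; the outer loop that meta-trains them is omitted:

```python
# Sketch of the meta-learner's data flow (placeholder weights, no meta-training).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 16                      # input = (x_t, y_{t-1})
W_in = rng.normal(0, 0.3, (n_hid, n_in))
W_rec = rng.normal(0, 0.3, (n_hid, n_hid))
w_out = rng.normal(0, 0.3, n_hid)

def meta_rnn_episode(xs, ys):
    """Feed one whole task (xs, ys) through the meta-learner in a single pass."""
    h = np.zeros(n_hid)
    preds, y_prev = [], 0.0
    for x_t, y_t in zip(xs, ys):
        u = np.array([x_t, y_prev])       # current input + previous target
        h = np.tanh(W_in @ u + W_rec @ h) # recurrent state carries the
        preds.append(w_out @ h)           # "learning" across examples
        y_prev = y_t                      # target revealed after predicting
    return np.array(preds)

xs = rng.uniform(-1, 1, 20)
print(meta_rnn_episode(xs, np.sin(3 * xs))[:5])
```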

645 citations


MonographDOI
05 Sep 2001
TL;DR: Within this text neural networks are considered as massively interconnected nonlinear adaptive filters.
Abstract: Within this text neural networks are considered as massively interconnected nonlinear adaptive filters.

636 citations


Journal ArticleDOI
TL;DR: Neural and Adaptive Systems: Fundamentals Through Simulations.

552 citations


Journal ArticleDOI
TL;DR: It is shown that the symbolic representation aids the extraction of symbolic knowledge from the trained recurrent neural networks in the form of deterministic finite state automata which explain the operation of the system and are often relatively simple.
Abstract: Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The method proposed uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. We show that the symbolic representation aids the extraction of symbolic knowledge from the trained recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Automata rules related to well known behavior such as trend following and mean reversal are extracted.
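The first stage of the method, quantizing noisy returns into a small symbolic alphabet with a self-organizing map, can be sketched as follows (a one-dimensional SOM with hypothetical parameters and synthetic stand-in returns):

```python
# One-pass 1-D SOM quantizing daily returns into a 3-symbol alphabet.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.005, 1000)          # synthetic stand-in for FX log-returns

n_symbols, lr, sigma = 3, 0.1, 1.0
codebook = np.linspace(returns.min(), returns.max(), n_symbols)

for r in returns:                              # online SOM training
    winner = np.argmin(np.abs(codebook - r))
    dist = np.abs(np.arange(n_symbols) - winner)
    h = np.exp(-dist**2 / (2 * sigma**2))      # neighborhood function
    codebook += lr * h * (r - codebook)

# Map each return to its nearest codebook vector -> symbol sequence for the RNN.
symbols = np.argmin(np.abs(returns[:, None] - codebook[None, :]), axis=1)
print(codebook, symbols[:20])                  # e.g. 0=down, 1=flat, 2=up
```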

397 citations


Journal ArticleDOI
TL;DR: Simulation results suggest that the RNN is the most efficient of the ANN models tested for a calibration period as short as 7 years, and shows that RNN may offer a robust framework for improving water supply planning in semiarid areas where aquifer information is not available.
Abstract: Three types of functionally different artificial neural network (ANN) models are calibrated using a relatively short length of groundwater level records and related hydrometeorological data to simulate water table fluctuations in the Gondo aquifer, Burkina Faso. Input delay neural network (IDNN) with static memory structure and globally recurrent neural network (RNN) with inherent dynamical memory are proposed for monthly water table fluctuations modeling. The simulation performance of the IDNN and the RNN models is compared with results obtained from two variants of radial basis function (RBF) networks, namely, a generalized RBF model (GRBF) and a probabilistic neural network (PNN). Overall, simulation results suggest that the RNN is the most efficient of the ANN models tested for a calibration period as short as 7 years. The results of the IDNN and the PNN are almost equivalent despite their basically different learning procedures. The GRBF performs very poorly as compared to the other models. Furthermore, the study shows that RNN may offer a robust framework for improving water supply planning in semiarid areas where aquifer information is not available. This study has significant implications for groundwater management in areas with inadequate groundwater monitoring network.

374 citations


Journal ArticleDOI
TL;DR: Three alternative types of ANNs, namely multilayer feedforward neural networks, partial recurrent neural networks, and time delay neural networks, were identified, developed, and found to provide reasonable predictions of the rainfall depth one time-step in advance.

291 citations


Book
01 May 2001
TL;DR: This book introduces several neural network architectures and examples of how they have been used to solve a variety of electromagnetic problems, and demonstrates how neural networks can be used in conjunction with other standard methods used in electromagnetics.
Abstract: From the Book: Since the early 1990s, a plethora of electromagnetic problems have been tackled using neural networks, some more successfully than others. Because neural networks and electromagnetics are two different scientific fields, not too many electromagnetic scientists are aware of the capabilities of neural networks. This book's purpose is to bridge these two fields and make it easier for electromagnetics experts to understand how to use neural networks in their applications of interest. To achieve this goal, this book introduces several neural network architectures and examples of how they have been used to solve a variety of electromagnetic problems. These solutions are then compared to some of the classical solutions to demonstrate the merits of using neural networks. This book contains 10 chapters. Chapter 1 is an introduction to neural networks. The reader is introduced to the basic building blocks of a neural network and its functions. It is shown how simple processors (neurons) are interconnected massively with each other (as in the human brain). Based on information in this chapter, the engineer will realize how the inherent nature of neural networks to act as distributed or massively parallel computers can be employed to speed up complex optimization problems in electromagnetics. Chapters 2 through 5 introduce some of the main neural architectures used today in electromagnetics and other applications. These architectures include the single-layer perceptron, the multilayer perceptron, the radial basis function, the Kohonen network, the ART neural networks, and the recurrent neural networks. Chapters 2 through 5 examine the basics of these architectures, their respective strengths and limitations, and the algorithms that allow us to train these architectures to perform their required tasks. These chapters conclude with an application where one of these architectures has been used with success. Several simple MATLAB examples are included to show the reader how to effectively use MATLAB commands to train and test this architecture on a problem of interest. Chapters 6 through 10 discuss applications in electromagnetics that are solved by using neural networks. The emphasis in Chapter 6 is on problems related to antennas. The inherent nonlinearities associated with antenna radiation patterns make antennas suitable candidates for neural networks. Several examples dealing with reflector, microstrip, and other antennas are presented. Chapter 7 deals with applications in remote sensing and target classification. In this chapter, the neural network tasks of association, pattern classification, prediction, and clustering are used primarily to classify radar targets. It is shown how measured data from scaled models can be used to train neural networks for any possible scenarios that may exist in real life. Some of these scenarios may not be possible to model by existing analytical or even numerical techniques. In Chapter 8, the high-speed capability of the neural networks is utilized in problems where real-time performance is required. Examples with adaptive array antennas for beamforming and null steering are presented and discussed in detail. These applications can be incorporated into both military and civilian systems, including GPS, cellular, and mobile communications. Chapter 9 deals primarily with the modeling of microwave devices and circuits. Here, neural networks are used as a distributed computer employed to speed up optimization problems. It is shown how neural networks can be used to achieve a more practical and interactive optimization process. And finally, in Chapter 10 it is demonstrated how neural networks can be used in conjunction with other standard methods used in electromagnetics, such as the finite element method (FEM), the finite difference method, and the method of moments. This book is intended for students, engineers, and researchers in electromagnetics with minimal background in neural networks. We hope that these readers find in this book the necessary tools and examples that can help them in applying neural networks to some of their research problems. This book can also serve as a basic reference book for courses such as "Advanced Topics in Electromagnetics," "Applications of Neural Networks in Communications," and others.

274 citations


Proceedings Article
Bram Bakker1
03 Jan 2001
TL;DR: Model-free RL-LSTM using Advantage (λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events.
Abstract: This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free RL-LSTM using Advantage (λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as well as in a difficult variation of the pole balancing task.
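The T-maze task can be sketched as a minimal environment (corridor length and reward values below are illustrative assumptions): the cue is observable only on the first step, so a memoryless agent can do no better than guessing at the junction, while an LSTM agent can carry the cue across the delay:

```python
# Minimal T-maze environment: cue at the start, decision many steps later.
import random

class TMaze:
    def __init__(self, corridor_len=10):
        self.n = corridor_len

    def reset(self):
        self.pos = 0
        self.goal = random.choice(["up", "down"])    # cue, shown only now
        return "cue_" + self.goal

    def step(self, action):                          # actions: forward/up/down
        if self.pos < self.n:                        # still inside the corridor
            if action == "forward":
                self.pos += 1
            return "corridor", 0.0, False            # cue no longer visible
        reward = 4.0 if action == self.goal else -0.1
        return "terminal", reward, True              # junction ends the episode

env = TMaze()
obs, done, steps = env.reset(), False, 0
while not done:
    action = "forward" if steps < env.n else "up"    # memoryless guess at junction
    obs, r, done = env.step(action)
    steps += 1
print("final reward:", r)  # ~50% success without memory; RL-LSTM must recall the cue
```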

252 citations


Journal ArticleDOI
TL;DR: A Real-Coded Genetic Algorithm is presented that uses the appropriate operators for this encoding type to train recurrent neural networks, and is compared with the Real-Time Recurrent Learning algorithm on fuzzy grammatical inference.

01 Jan 2001
TL;DR: Backpropagation learning is described for feedforward networks, adapted to suit the authors' (probabilistic) modeling needs, and extended to cover recurrent networks.
Abstract: This paper provides guidance on some of the concepts surrounding recurrent neural networks. In contrast to feedforward networks, recurrent networks can be sensitive to, and adapt to, past inputs. Backpropagation learning is described for feedforward networks, adapted to suit our (probabilistic) modeling needs, and extended to cover recurrent networks. The aim of this brief paper is to set the scene for applying and understanding recurrent neural networks.

DOI
01 Jan 2001
TL;DR: PhD thesis, École polytechnique fédérale de Lausanne (EPFL), n° 2366 (2001).
Abstract: Thèse, École polytechnique fédérale de Lausanne (EPFL), n° 2366 (2001). Faculté informatique et communications. Jury: Paolo Frasconi, Roger Hersch, Martin Rajman, Jürgen Schmidhuber. Public defense: 2001-04-06. DOI: 10.5075/epfl-thesis-2366.

Journal ArticleDOI
01 Feb 2001
TL;DR: The dual network is presented, which is composed of a single layer of neurons, and the number of neurons is equal to the dimensionality of the workspace, and is proven to be globally exponentially stable.
Abstract: The inverse kinematics problem in robotics can be formulated as a time-varying quadratic optimization problem. A new recurrent neural network, called the dual network, is presented in this paper. The proposed neural network is composed of a single layer of neurons, and the number of neurons is equal to the dimensionality of the workspace. The proposed dual network is proven to be globally exponentially stable. The proposed dual network is also shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.
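Dropping the joint-velocity limits handled in the paper, the core of the dual network can be sketched as follows: one dual neuron per workspace dimension, with joint velocities read out as J^T u (the toy 3-joint planar arm and all gains below are assumptions). At equilibrium, eps * du/dt = r_dot - J J^T u = 0 yields the minimum-norm joint velocity satisfying J theta_dot = r_dot:

```python
# Sketch of the dual network for redundancy resolution (no joint limits).
import numpy as np

def jacobian(theta, lengths=(1.0, 0.8, 0.5)):
    """Planar 3R arm end-effector Jacobian (2x3) -- the arm is redundant."""
    c = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum([lengths[j] * np.sin(c[j]) for j in range(i, 3)])
        J[1, i] = np.sum([lengths[j] * np.cos(c[j]) for j in range(i, 3)])
    return J

theta = np.array([0.3, 0.4, 0.2])
u = np.zeros(2)                        # dual neurons: one per workspace dimension
eps, dt = 1e-3, 1e-4                   # fast neural dynamics vs. slow arm motion

for step in range(20000):
    t = step * dt
    r_dot = np.array([0.1 * np.cos(t), -0.1 * np.sin(t)])  # desired velocity
    J = jacobian(theta)
    u += (dt / eps) * (r_dot - J @ (J.T @ u))   # dual network dynamics
    theta_dot = J.T @ u                          # minimum-norm joint velocity
    theta += dt * theta_dot

print(theta)
```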

Journal ArticleDOI
TL;DR: This paper focuses on dynamic neural networks to address the temporal relationships of the hydrological series and suggests that the use of input time delays significantly improves the conventional multilayer perceptron (MLP) network but does not provide any improvement in the RNN model.
Abstract: An experiment on predicting multivariate water resource time series, specifically the prediction of hydropower reservoir inflow using temporal neural networks, is presented. This paper focuses on dynamic neural networks to address the temporal relationships of the hydrological series. Three types of temporal neural network architectures with different inherent representations of temporal information are investigated. An input delayed neural network (IDNN) and a recurrent neural network (RNN) with and without input time delays are proposed for multivariate reservoir inflow forecasting. The forecast results indicate that, overall, the RNN obtained the best performance. The results also suggest that the use of input time delays significantly improves the conventional multilayer perceptron (MLP) network but does not provide any improvement in the RNN model. However, the RNN with input time delays remains slightly more effective for multivariate reservoir inflow prediction than the IDNN model. Moreover, it is ...

Journal ArticleDOI
TL;DR: A brief tutorial on sequence learning is presented, which requires comparing, contrasting, and combining the existing techniques, approaches, and paradigms, to develop better, more powerful algorithms.
Abstract: Sequence learning is an important component of learning in many task domains of intelligent systems: inference, planning, reasoning, robotics, natural language processing, speech recognition, adaptive control, time series prediction, financial engineering, DNA sequencing, and so on. Naturally, the unique perspectives of these domains lead to different sequence-learning approaches. These approaches deal with somewhat differently formulated sequence-learning problems (for example, some with actions and some without) and with different aspects of sequence learning (for example, sequence prediction versus sequence recognition). Despite the plethora of approaches, sequence learning is still difficult. We believe that the right approach to improving sequence learning is to first better understand the state of the art in the different disciplines related to this topic. This requires comparing, contrasting, and combining the existing techniques, approaches, and paradigms, to develop better, more powerful algorithms. Toward that end, we present here a brief tutorial on sequence learning.

Journal ArticleDOI
TL;DR: An effective approach to studying global and local stability of the networks is proposed; many well-known existing results are unified in this framework, which gives much better test conditions for global and local stability.
Abstract: In this paper, we discuss dynamical behaviors of recurrently asymmetrically connected neural networks in detail. We propose an effective approach to study global and local stability of the networks. Many well-known existing results are unified in our framework, which gives much better test conditions for global and local stability. Sufficient conditions for the uniqueness of the equilibrium point and for its stability are given, too.

Journal ArticleDOI
TL;DR: A systematic description of key issues in the neural modeling approach, such as data generation, range and distribution of samples in model input parameter space, data scaling, etc., is presented.
Abstract: Artificial neural networks (ANN) recently gained attention as a fast and flexible vehicle for microwave modeling and design. Fast neural models trained from measured/simulated microwave data can be used during microwave design to provide instant answers to the task they have learned. We review two important aspects of neural-network-based microwave modeling, namely, model development issues and nonlinear modeling. A systematic description of key issues in the neural modeling approach, such as data generation, range and distribution of samples in model input parameter space, data scaling, etc., is presented. Techniques that pave the way for automation of neural model development could be of immense interest to microwave engineers whose knowledge about ANN is limited. As such, recent techniques that could lead to automatic neural model development, e.g., adaptive controllers and adaptive sampling, are discussed. Neural modeling of nonlinear device/circuit characteristics has emerged as an important research area. An overview of nonlinear techniques, including small/large-signal neural modeling of transistors and dynamic recurrent neural network (RNN) modeling of circuits, is presented. Practical microwave examples are used to illustrate the reviewed techniques. © 2001 John Wiley & Sons, Inc. Int J RF and Microwave CAE 11: 4–21, 2001.

Journal ArticleDOI
TL;DR: It is shown that the optimal control strategy based on the hybrid stacked neural network model offers much more robust performance than that based on a single neural network.
Abstract: This paper presents a novel nonlinear hybrid modeling approach aimed at obtaining improvements in model performance and robustness to new data in the optimal control of a batch MMA polymerization reactor. The hybrid model contains a simplified mechanistic model that does not consider the gel effect and stacked recurrent neural networks. Stacked recurrent neural networks are built to characterize the gel effect, which is one of the most difficult parts of polymerization modeling. Sparsely sampled data on polymer quality were interpolated using a cubic spline function to generate data for neural network training. Comparative studies with the use of a single neural network show that stacked networks give superior performance and improved robustness. Optimal reactor temperature control policies are then calculated using the hybrid stacked neural network model. It is shown that the optimal control strategy based on the hybrid stacked neural network model offers much more robust performance than that based on a...
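The spline-based data-generation step can be sketched directly (sample times and values below are made up for illustration):

```python
# Interpolating sparse polymer-quality samples to generate RNN training data.
import numpy as np
from scipy.interpolate import CubicSpline

t_sampled = np.array([0.0, 30.0, 60.0, 90.0, 120.0])        # minutes (hypothetical)
mw_sampled = np.array([1.0e5, 1.8e5, 3.1e5, 4.0e5, 4.4e5])  # e.g. molecular weight

spline = CubicSpline(t_sampled, mw_sampled)
t_dense = np.linspace(0.0, 120.0, 121)        # one point per minute
mw_dense = spline(t_dense)                    # dense targets for network training
print(mw_dense[:5])
```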

Proceedings ArticleDOI
15 Jul 2001
TL;DR: A supervised learning algorithm is derived for a spiking neural network which encodes information in the timing of spike trains; the algorithm is similar to the classical error backpropagation algorithm for sigmoidal neural networks, but the learning parameter is adaptively changed.
Abstract: We derive a supervised learning algorithm for a spiking neural network which encodes information in the timing of spike trains. This algorithm is similar to the classical error backpropagation algorithm for sigmoidal neural networks, but the learning parameter is adaptively changed. The algorithm is applied to a complex nonlinear classification problem, and the results show that the spiking neural network is capable of performing nonlinearly separable classification tasks. Several issues concerning the spiking neural network are discussed.

Journal ArticleDOI
Gustavo Deco1, Bernd Schürmann1
TL;DR: A recurrent neural network architecture of single cells in the primary visual cortex is derived that dynamically improves a 2D-Gabor wavelet based representation of an image by minimizing the corresponding reconstruction error via feedback connections.
Abstract: We derive a recurrent neural network architecture of single cells in the primary visual cortex that dynamically improves a 2D-Gabor wavelet based representation of an image by minimizing the corresponding reconstruction error via feedback connections. Furthermore, we demonstrate that the reconstruction error is a Lyapunov function of the herein proposed recurrent network. Our model of the primary visual cortex combines a modulatory feedforward strategy and a feedback subtractive correction for obtaining an optimal coding. The fed back error is used in our system for a dynamical improvement of the feedforward Gabor representation of the images, in the sense that the feedforward redundant representation due to the non-orthogonality of the Gabor wavelets is dynamically corrected. The redundancy of the Gabor feature representation is therefore dynamically eliminated by improving the reconstruction capability of the internal representation. The dynamics therefore introduce a nonlinear correction to the standard linear representation of Gabor filters that generates a more efficient predictive coding.
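The feedback correction loop can be sketched with a random overcomplete dictionary standing in for the 2D Gabor wavelets; each iteration feeds back the reconstruction error and reduces it, mirroring the Lyapunov-function property of the network's dynamics:

```python
# Iterative feedback correction of a redundant (non-orthogonal) code.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_filt = 64, 128                       # overcomplete: non-orthogonal G
G = rng.normal(0, 1, (n_pix, n_filt))
G /= np.linalg.norm(G, axis=0)                # unit-norm stand-in "Gabor" filters

image = rng.normal(0, 1, n_pix)
a = G.T @ image                               # feedforward (redundant) code

eta = 0.05
for _ in range(200):
    error = image - G @ a                     # fed-back reconstruction error
    a += eta * (G.T @ error)                  # dynamical correction step
print(np.linalg.norm(image - G @ a))          # ||I - G a|| shrinks over iterations
```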

Journal ArticleDOI
TL;DR: This work proves analytic results on the convergence and stable attractors of the CLM, which generalize earlier results on winner-take-all networks, and incorporate deterministic annealing for robustness against local minima.
Abstract: We present a recurrent neural network for feature binding and sensory segmentation: the competitive-layer model (CLM). The CLM uses topographically structured competitive and cooperative interactions in a layered network to partition a set of input features into salient groups. The dynamics is formulated within a standard additive recurrent network with linear threshold neurons. Contextual relations among features are coded by pairwise compatibilities, which define an energy function to be minimized by the neural dynamics. Due to the usage of dynamical winner-take-all circuits, the model gains more flexible response properties than spin models of segmentation by exploiting amplitude information in the grouping process. We prove analytic results on the convergence and stable attractors of the CLM, which generalize earlier results on winner-take-all networks, and incorporate deterministic annealing for robustness against local minima. The piecewise linear dynamics of the CLM allows a linear eigensubspace analysis, which we use to analyze the dynamics of binding in conjunction with annealing. For the example of contour detection, we show how the CLM can integrate figure-ground segmentation and grouping into a unified model.
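A schematic of the competitive-layer dynamics (hypothetical compatibilities and gains; the exact CLM equations and annealing schedule in the paper differ): each feature has one linear-threshold neuron per layer, columnar competition forces a feature to be active in a single layer, and pairwise compatibilities pull compatible features into the same layer:

```python
# Schematic competitive-layer grouping with linear-threshold neurons.
import numpy as np

rng = np.random.default_rng(6)
n_feat, n_layers = 6, 2
# Two ground-truth groups: features 0-2 and 3-5 (hypothetical compatibilities).
f = -0.4 * np.ones((n_feat, n_feat))
f[:3, :3] = f[3:, 3:] = 0.5                   # compatible within a group
np.fill_diagonal(f, 0.0)

J, h, dt = 2.0, 1.0, 0.05
x = rng.uniform(0, 0.1, (n_feat, n_layers))   # activity x[feature, layer]

for _ in range(2000):
    col = x.sum(axis=1, keepdims=True)        # columnar (between-layer) sum
    drive = J * (h - col) + f @ x             # competition + lateral support
    x = np.maximum(0.0, x + dt * (drive - x)) # linear-threshold update
print(np.argmax(x, axis=1))                   # layer assignment per feature
```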

Journal ArticleDOI
TL;DR: A range of language tasks are shown in which an SRN develops solutions that not only count but also copy and store counting information, demonstrating how SRNs may be an alternative psychological model of language or sequence processing.
Abstract: It has been shown that if a recurrent neural network (RNN) learns to process a regular language, one can extract a finite-state machine (FSM) by treating regions of phase-space as FSM states. However, it has also been shown that one can construct an RNN to implement Turing machines by using RNN dynamics as counters. But how does a network learn languages that require counting? Rodriguez, Wiles, and Elman (1999) showed that a simple recurrent network (SRN) can learn to process a simple context-free language (CFL) by counting up and down. This article extends that to show a range of language tasks in which an SRN develops solutions that not only count but also copy and store counting information. In one case, the network stores information like an explicit storage mechanism. In other cases, the network stores information more indirectly in trajectories that are sensitive to slight displacements that depend on context. In this sense, an SRN can learn analog computation as a set of interdependent counters. This demonstrates how SRNs may be an alternative psychological model of language or sequence processing.
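The counting solution can be illustrated with hand-wired rather than learned dynamics (a deliberate simplification of the SRN's phase-space trajectories): a single state variable counts 'a's up and 'b's down, so the trajectory returns to its start exactly when the string is a balanced a^n b^n:

```python
# Hand-wired counter standing in for the SRN's learned counting dynamics.
def accepts_anbn(string):
    h, seen_b = 0, False          # h plays the role of the counting unit
    for ch in string:
        if ch == "a":
            if seen_b:
                return False      # an 'a' after a 'b' is illegal
            h += 1                # count up while reading a's
        elif ch == "b":
            seen_b = True
            h -= 1                # count down while reading b's
            if h < 0:
                return False      # more b's than a's so far
        else:
            return False
    return h == 0                 # trajectory back at the start: accept

print(accepts_anbn("aaabbb"), accepts_anbn("aabbb"))  # True False
```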

Journal ArticleDOI
TL;DR: An investigation has been made into the use of stochastic arithmetic to implement an artificial neural network solution to a typical pattern recognition application, with results indicating an order of magnitude improvement over the floating-point implementation assuming clock frequency parity.
Abstract: For pt. I see ibid., p.891-905. An investigation has been made into the use of stochastic arithmetic to implement an artificial neural network solution to a typical pattern recognition application. Optical character recognition is performed on very noisy characters in the E-13B MICR font. The artificial neural network is composed of two layers, the first layer being a set of soft competitive learning subnetworks and the second a set of fully connected linear output neurons. The observed number of clock cycles in the stochastic case represents an order of magnitude improvement over the floating-point implementation assuming clock frequency parity. Network generalization capabilities were also compared based on the network squared error as a function of the amount of noise added to the input patterns. The stochastic network maintains a squared error within 10 percent of that of the floating-point implementation for a wide range of noise levels.
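The underlying stochastic-arithmetic trick can be sketched in a few lines: values in [0, 1] become random bitstreams whose density equals the value, and multiplication reduces to a bitwise AND of independent streams (the stream length below is an arbitrary choice; precision scales roughly as 1/sqrt(N)):

```python
# Stochastic arithmetic: multiply two values with a single AND gate per bit.
import numpy as np

rng = np.random.default_rng(3)
N = 4096                                   # stream length: precision ~ 1/sqrt(N)

def encode(p):
    return rng.random(N) < p               # Bernoulli(p) bitstream

def decode(stream):
    return stream.mean()                   # bit density recovers the value

a, b = 0.7, 0.4
product = decode(encode(a) & encode(b))    # AND of independent streams
print(product)                             # ~0.28, i.e. a*b plus stochastic noise
```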

Journal ArticleDOI
TL;DR: Non-adaptive and adaptive state filtering algorithms are presented with both off-line and online learning stages, and extended Kalman filters (EKFs) are developed and compared to the filter algorithms proposed.
Abstract: Practical algorithms are presented for adaptive state filtering in nonlinear dynamic systems when the state equations are unknown. The state equations are constructively approximated using neural networks. The algorithms presented are based on the two-step prediction-update approach of the Kalman filter. The proposed algorithms make minimal assumptions regarding the underlying nonlinear dynamics and their noise statistics. Non-adaptive and adaptive state filtering algorithms are presented with both off-line and online learning stages. The algorithms are implemented using feedforward and recurrent neural networks, and comparisons are presented. Furthermore, extended Kalman filters (EKFs) are developed and compared to the filter algorithms proposed. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filters. For another, more complex case study with unknown system dynamics and noise statistics, the developed EKFs do not converge. The off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. Online training further enhances the estimation accuracy of the developed adaptive filters, effectively decoupling the eventual filter accuracy from the accuracy of the process model.
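The shared two-step structure can be sketched on a toy scalar system (the trained network is replaced by a placeholder model, and the update gain is fixed rather than learned as in the paper):

```python
# Two-step predict/update neural-filter skeleton on a toy scalar system.
import numpy as np

rng = np.random.default_rng(5)

def f_model(x):                          # placeholder for the trained network
    return 0.9 * x + 0.1 * np.tanh(x)    # approximating the unknown dynamics

K = 0.5                                  # fixed gain (learned/adapted in the paper)
x_hat, x = 0.0, 1.0
xs, est = [], []
for _ in range(50):
    x = 0.9 * x + 0.1 * np.tanh(x) + rng.normal(0, 0.05)  # true system
    y = x + rng.normal(0, 0.1)                             # noisy measurement
    x_pred = f_model(x_hat)              # step 1: predict with the neural model
    x_hat = x_pred + K * (y - x_pred)    # step 2: update from the residual
    xs.append(x); est.append(x_hat)

print(np.mean((np.array(xs) - np.array(est))**2))  # filtering MSE
```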

Journal ArticleDOI
TL;DR: Two conditions are established that ensure the nondivergence of additive recurrent networks with unsaturating piecewise linear transfer functions; applied to an orientation-selectivity model, they identify regions of maximal orientation-selective amplification and symmetry breaking.
Abstract: We establish two conditions that ensure the nondivergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As Hahnloser, Sarpeshkar, Mahowald, Douglas, and Seung (2000) showed, networks of this type can be efficiently built in silicon and exhibit the coexistence of digital selection and analog amplification in a single circuit. To obtain this behavior, the network must be multistable and nondivergent, and our conditions allow determining the regimes where this can be achieved with maximal recurrent amplification. The first condition can be applied to nonsymmetric networks and has a simple interpretation of requiring that the strength of local inhibition match the sum over excitatory weights converging onto a neuron. The second condition is restricted to symmetric networks, but can also take into account the stabilizing effect of nonlocal inhibitory interactions. We demonstrate the application of the conditions on a simple example and the orientation-selectivity model of Ben-Yishai, Lev Bar-Or, and Sompolinsky (1995). We show that the conditions can be used to identify in their model regions of maximal orientation-selective amplification and symmetry breaking.

Proceedings ArticleDOI
15 Jul 2001
TL;DR: An evolutionary algorithm finds a neural network that maximizes the chance of generating good melodies; the model learns to generate melodies according to composition rules on tonality and rhythm, with interesting variations.
Abstract: Music composition is a domain well-suited for evolutionary reinforcement learning. Instead of applying explicit composition rules, a neural network is used to generate melodies. An evolutionary algorithm is used to find a neural network that maximizes the chance of generating good melodies. Composition rules on tonality and rhythm are used as a fitness function for the evolution. We observe that the model learns to generate melodies according to these rules with interesting variations.


Journal ArticleDOI
TL;DR: The proposed neurocontrol approach represents a novel application of recurrent neural networks to the nonlinear output regulation problem, and completely inherits the stability and asymptotic tracking properties guaranteed by original nonlinear output regulation systems, due to its global exponential convergence.

Journal ArticleDOI
TL;DR: It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution.
Abstract: This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes the box constrained quadratic programming and the linear complementarity problem as the special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
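The kind of network synthesized here can be sketched for a small box-constrained linear VIP (M, q, and the bounds below are hypothetical): the recurrent dynamics dx/dt = lam * (P(x - alpha*(Mx + q)) - x), where P clips to the box, converge to the unique solution when M is Lyapunov diagonally stable:

```python
# Projection-type recurrent network for a box-constrained linear VIP.
import numpy as np

M = np.array([[3.0, 1.0], [0.5, 2.0]])      # diagonally dominant example
q = np.array([-2.0, 1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def project(x):
    return np.clip(x, lo, hi)               # P_Omega for a box set

x, alpha, lam, dt = np.zeros(2), 0.2, 1.0, 0.01
for _ in range(5000):                        # Euler integration of the RNN
    x += dt * lam * (project(x - alpha * (M @ x + q)) - x)

print(x)  # approximate solution of the box-constrained linear VIP
```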