Book Chapter

Neural Networks with Online Sequential Learning Ability for a Reinforcement Learning Algorithm

TL;DR: A novel online sequential learning evolving neural network model design for RL is proposed: the minimal resource allocation neural network (mRAN) is explored, and an mRAN function approximation approach to RL systems is developed.
Abstract
Reinforcement learning (RL) algorithms that employ neural networks as function approximators have proven to be powerful tools for solving optimal control problems. However, neural network function approximators suffer from several problems: learning becomes difficult when the training data arrive sequentially, structural parameters are hard to determine, and training often results in local minima or overfitting. In this paper, a novel online sequential learning evolving neural network model design for RL is proposed. We explore the use of the minimal resource allocation neural network (mRAN) and develop an mRAN function approximation approach to RL systems. The potential of this approach is demonstrated through a case study. The mean square error accuracy, computational cost, and robustness properties of this scheme are compared with those of static structure neural networks.
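The following is a minimal sketch of how a resource-allocating RBF approximator in the spirit of mRAN could serve as an online value-function approximator for RL. It is not the paper's implementation: it uses plain LMS weight updates instead of mRAN's extended Kalman filter, and the class name, thresholds, learning rate, and toy Q-value targets are all illustrative assumptions.

```python
# Sketch of a growing/pruning RBF approximator (RAN/mRAN-style) for online RL targets.
# Assumptions: LMS updates instead of an extended Kalman filter; illustrative thresholds.
import numpy as np

class GrowingRBF:
    def __init__(self, eps=0.5, e_min=0.05, kappa=0.7, lr=0.05, prune_tol=0.01):
        self.eps = eps              # novelty distance threshold
        self.e_min = e_min          # novelty error threshold
        self.kappa = kappa          # width overlap factor for new units
        self.lr = lr                # LMS learning rate for existing units
        self.prune_tol = prune_tol  # relative-contribution pruning threshold
        self.centers, self.widths, self.weights = [], [], []

    def _phi(self, x):
        # Gaussian activations of all hidden units for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * s ** 2))
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.weights, self._phi(np.asarray(x, dtype=float))))

    def update(self, x, target):
        x = np.asarray(x, dtype=float)
        err = target - self.predict(x)
        dist = (min(np.linalg.norm(x - c) for c in self.centers)
                if self.centers else np.inf)
        if not self.centers or (abs(err) > self.e_min and dist > self.eps):
            # Novel input: allocate a new hidden unit centred on x
            self.centers.append(x.copy())
            self.widths.append(max(self.kappa * dist, 1e-3) if np.isfinite(dist) else 1.0)
            self.weights.append(err)
        else:
            # Otherwise adapt output weights of existing units (LMS step)
            phi = self._phi(x)
            self.weights = list(np.asarray(self.weights) + self.lr * err * phi)
            # Prune units whose relative contribution has become negligible
            contrib = np.abs(np.asarray(self.weights) * phi)
            keep = contrib >= self.prune_tol * (contrib.max() + 1e-12)
            self.centers = [c for c, k in zip(self.centers, keep) if k]
            self.widths = [s for s, k in zip(self.widths, keep) if k]
            self.weights = [w for w, k in zip(self.weights, keep) if k]

# Usage: fit Q-value targets arriving one at a time, as in online RL
approx = GrowingRBF()
for s, q_target in [([0.1, 0.0], 0.2), ([0.9, 0.5], 1.0), ([0.12, 0.02], 0.25)]:
    approx.update(s, q_target)
print(approx.predict([0.1, 0.0]), len(approx.centers))
```

The key design point reflected here is that the network structure is not fixed in advance: hidden units are added only when an input is both far from existing centers and poorly predicted, and units whose contribution stays negligible are pruned, which is what allows the approximator to adapt as training data arrive sequentially.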


References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Journal Article

A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks

TL;DR: The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance on benchmark problems drawn from the regression, classification and time series prediction areas.
Journal Article

A resource-allocating network for function interpolation

John Platt, 01 Jun 1991
TL;DR: A network that allocates a new computational unit whenever an unusual pattern is presented; it learns much faster than backpropagation networks while using a comparable number of synapses.
Proceedings Article

A combined SVM and LDA approach for classification

TL;DR: It is shown that existing SVM software can be used to solve the SVM/LDA formulation, and empirical comparisons of the proposed algorithm with SVM and LDA on both synthetic and real-world benchmark data are presented.
Journal Article

A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation

TL;DR: The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks; the resulting algorithm outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance, regardless of the sampling density function of the training data.