Journal ArticleDOI
Improving the generalization performance of RBF neural networks using a linear regression technique
TL;DR: The method uses a statistical linear regression technique based on the orthogonal least squares (OLS) algorithm, substituting a QR algorithm for the traditional Gram-Schmidt algorithm to find the connection weights of the hidden layer neurons.
Abstract: In this paper we present a method for improving the generalization performance of a radial basis function (RBF) neural network. The method uses a statistical linear regression technique based on the orthogonal least squares (OLS) algorithm. We first discuss a modified way to determine the centers and widths of the hidden layer neurons. Then, substituting a QR algorithm for the traditional Gram-Schmidt algorithm, we find the connection weights of the hidden layer neurons. Cross-validation is used to determine the criterion for stopping training. The generalization performance of the network is further improved using a bootstrap technique. Finally, the method is applied to a simulated and a real problem. The results demonstrate the improved generalization performance of our algorithm over existing methods.
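The abstract's core idea can be sketched in a few lines: build the hidden-layer design matrix from Gaussian basis functions, then solve for the output weights by QR-based least squares rather than classical Gram-Schmidt orthogonalization. This is a minimal illustration, not the paper's actual code; the Gaussian basis, the fixed centers, and the shared width are assumptions for the sketch.

```python
import numpy as np

def gaussian_design_matrix(X, centers, width):
    """Hidden-layer activations: one Gaussian basis function per center."""
    # Squared Euclidean distance from each sample to each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_output_weights(X, y, centers, width):
    """Solve the linear output layer by QR-based least squares."""
    H = gaussian_design_matrix(X, centers, width)
    Q, R = np.linalg.qr(H)           # numerically stabler than classical Gram-Schmidt
    w = np.linalg.solve(R, Q.T @ y)  # back-substitute R w = Q^T y
    return w

# Toy usage: approximate y = sin(x) with ten hand-picked centers.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(-3.0, 3.0, 10)[:, None]
w = fit_output_weights(X, y, centers, width=0.8)
pred = gaussian_design_matrix(X, centers, 0.8) @ w
```

The QR factorization plays the role the paper assigns it: it orthogonalizes the regressor columns implicitly and with better numerical behavior than explicit Gram-Schmidt, while the least-squares solution itself is unchanged.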
Citations
Journal ArticleDOI
Short-term wind speed forecasting based on a hybrid model
TL;DR: A novel approach, WTT-SAM-RBFNN, for short-term wind speed forecasting is proposed by applying the wavelet transform technique (WTT) to a hybrid model that combines the seasonal adjustment method (SAM) and the RBFNN.
Journal ArticleDOI
Applications of the fuzzy Lyapunov linear matrix inequality criterion to a chaotic structural system
TL;DR: The following articles are retracted because, after thorough investigation, evidence points toward them having at least one author, or being reviewed by at least one reviewer, implicated in the peer review ring and/or citation ring.
Journal ArticleDOI
RETRACTED: Applications of linear differential inclusion-based criterion to a nonlinear chaotic system: a critical review:
TL;DR: The following articles are retracted because, after thorough investigation, evidence points toward them having at least one author, or being reviewed by at least one reviewer, implicated in the peer review ring and/or citation ring.
Journal ArticleDOI
RETRACTED: Path planning for autonomous robots – a comprehensive analysis by a greedy algorithm
TL;DR: The following articles are retracted because, after thorough investigation, evidence points toward them having at least one author, or being reviewed by at least one reviewer, implicated in the peer review ring and/or citation ring.
Journal ArticleDOI
RETRACTED: Neural-network fuzzy control for chaotic tuned mass damper systems with time delays
TL;DR: The following articles are retracted because, after thorough investigation, evidence points toward them having at least one author, or being reviewed by at least one reviewer, implicated in the peer review ring and/or citation ring.
References
Book
An introduction to the bootstrap
Bradley Efron, Robert Tibshirani, et al.
TL;DR: Presents bootstrap methods for estimation, developed using simple arguments, together with Minitab macros implementing these methods and examples of their use.
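The bootstrap estimation idea summarized above is language-agnostic; as a hedged sketch (not the book's Minitab macros), a percentile bootstrap confidence interval can be written as follows, with the sample data, replicate count, and statistic all illustrative choices:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # Resample with replacement and recompute the statistic each time.
    reps = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage: interval estimate for the mean of a small sample.
sample = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4])
lo, hi = bootstrap_ci(sample)
```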
Book
Neural Networks: A Comprehensive Foundation
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Book
Robust Regression and Outlier Detection
TL;DR: This book presents the results of a two-year study of the statistical treatment of outliers in the context of one-dimensional location and its applications.
Journal ArticleDOI
Fast learning in networks of locally-tuned processing units
John Moody, Christian J. Darken
TL;DR: This work proposes a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988).
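A "locally-tuned processing unit" in the sense of this reference responds strongly only near its own center. The minimal sketch below, assuming Gaussian units and illustrative centers and widths, shows that local receptive-field behavior:

```python
import numpy as np

def locally_tuned_layer(x, centers, widths):
    """Responses of Gaussian locally-tuned units: each unit fires only
    near its own center, giving a local receptive field."""
    d2 = ((x - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

# Two units: one centered at the origin, one far away at (3, 3).
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
widths = np.array([1.0, 1.0])
r = locally_tuned_layer(np.array([0.1, -0.1]), centers, widths)
# The unit near the input responds strongly; the distant unit barely at all.
```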