Journal ArticleDOI

Smooth function approximation using neural networks

TLDR
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented; algebraic training is shown to offer faster execution and better generalization than contemporary optimization techniques.
Abstract
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
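
To make the linear-systems view concrete, here is a minimal sketch of exact input-output matching, assuming a single hidden layer of sigmoidal (tanh) nodes with one node per training sample; once the input-side weights are fixed, the nonlinear weight equations reduce to a single linear solve for the output weights. The variable names, random input weights, and test function are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: q samples of a smooth scalar function of two variables.
X = rng.uniform(-1.0, 1.0, size=(40, 2))   # inputs  (q x n)
u = np.sin(X[:, 0]) * np.cos(X[:, 1])      # outputs (q,)
q = X.shape[0]                             # one hidden node per sample

# Fix the input-side weights and biases (here drawn at random), so the
# hidden-layer responses become known numbers.
W = rng.normal(size=(q, 2))
d = rng.normal(size=q)
S = np.tanh(X @ W.T + d)                   # q x q matrix of sigmoid responses

# The output-weight equations S v = u are now an ordinary linear system;
# exact matching of the training set is one linear solve (S is generically
# nonsingular for distinct inputs and random input weights).
v = np.linalg.solve(S, u)

print(np.max(np.abs(S @ v - u)))           # residual near machine precision
```

Treating the cascade of weight equations this way is what allows the training process and approximation properties to be studied with linear algebra rather than iterative optimization.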


Citations
Journal ArticleDOI

Extreme learning machine: Theory and applications

TL;DR: A new learning algorithm called the extreme learning machine (ELM) is proposed for single-hidden-layer feedforward neural networks (SLFNs); it randomly chooses hidden nodes and analytically determines the output weights, and it tends to provide good generalization performance at extremely fast learning speed.
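
The recipe this TL;DR describes fits in a few lines; assumptions here include tanh hidden nodes, a 1-D regression target, and NumPy's pseudoinverse for the analytic output-weight solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Regression data: noisy samples of a 1-D target function.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=200)

L = 25                                   # hidden nodes, far fewer than samples

# ELM step 1: pick hidden-node parameters at random and never tune them.
W = rng.normal(size=(L, 1))
b = rng.normal(size=L)
H = np.tanh(X @ W.T + b)                 # hidden-layer output matrix

# ELM step 2: the output weights are the minimum-norm least-squares
# solution, obtained analytically via the Moore-Penrose pseudoinverse.
beta = np.linalg.pinv(H) @ y

print(np.sqrt(np.mean((H @ beta - y) ** 2)))   # training RMSE
```

Because no weight is tuned iteratively, "training" amounts to the single pseudoinverse computation, which is where the extreme learning speed comes from.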
Journal ArticleDOI

A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks

TL;DR: The results show that OS-ELM is faster than other sequential learning algorithms and produces better generalization performance on benchmark problems drawn from regression, classification, and time-series prediction.
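
A sketch of the online-sequential idea, under the same illustrative assumptions as above: after an initial batch solve, each new chunk of data updates the output weights by a recursive least-squares step, so earlier data never has to be stored or revisited.

```python
import numpy as np

rng = np.random.default_rng(2)

L = 20
W = rng.normal(size=(L, 1))              # fixed random hidden layer
b = rng.normal(size=L)

def hidden(X):
    return np.tanh(X @ W.T + b)

# Boot phase: ordinary least-squares ELM on an initial chunk.
X0 = rng.uniform(-3.0, 3.0, size=(50, 1))
y0 = np.sinc(X0[:, 0])
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0)             # inverse correlation matrix
beta = P @ H0.T @ y0

# Sequential phase: fold in each new chunk with a recursive
# least-squares update; past chunks are never reused.
for _ in range(10):
    Xk = rng.uniform(-3.0, 3.0, size=(20, 1))
    yk = np.sinc(Xk[:, 0])
    Hk = hidden(Xk)
    G = np.linalg.inv(np.eye(len(yk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ G @ Hk @ P
    beta = beta + P @ Hk.T @ (yk - Hk @ beta)
```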
Journal ArticleDOI

Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations

TL;DR: In this Perspective, the current status of NN potentials is reviewed, and their advantages and limitations are discussed.
Journal ArticleDOI

What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt’s Dream and John von Neumann’s Puzzle

TL;DR: ELM theories address an open problem that has puzzled the neural-network, machine-learning, and neuroscience communities for 60 years: whether hidden nodes/neurons need to be tuned during learning. They prove that, contrary to common knowledge and conventional neural-network learning tenets, hidden nodes/neurons do not need to be iteratively tuned in a wide range of neural networks and learning models.
Journal ArticleDOI

Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm

TL;DR: A hybrid Taguchi-genetic algorithm (HTGA) is applied to tune both the structure and the parameters of a feedforward neural network, and it obtains better results than existing methods recently reported in the literature.
References
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal hidden units come to represent important features of the task domain.
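
In symbols (standard notation, not quoted from the paper), the procedure is gradient descent on the summed squared output error:

```latex
E = \frac{1}{2} \sum_{k} \lVert \mathbf{y}_k - \mathbf{d}_k \rVert^2,
\qquad
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}
```

where \(\mathbf{y}_k\) is the network output for pattern \(k\), \(\mathbf{d}_k\) the desired output, and \(\eta\) the learning rate; back-propagation is the chain-rule bookkeeping that computes the partial derivatives cheaply, layer by layer.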
Journal ArticleDOI

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Journal ArticleDOI

Approximation by superpositions of a sigmoidal function

TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
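
The finite linear combinations in question take the following form (standard statement of the result; the symbol names are ours):

```latex
G(\mathbf{x}) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(\mathbf{y}_j^{\top} \mathbf{x} + \theta_j\right)
```

where \(\sigma\) is a fixed continuous sigmoidal function; the theorem states that such sums are dense in \(C([0,1]^n)\), i.e., they can approximate any continuous function on the unit hypercube to arbitrary uniform accuracy.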
Journal ArticleDOI

Identification and control of dynamical systems using neural networks

TL;DR: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems and the models introduced are practically feasible.
Journal ArticleDOI

Training feedforward networks with the Marquardt algorithm

TL;DR: The Marquardt algorithm for nonlinear least squares is presented and incorporated into the backpropagation algorithm for training feedforward neural networks; it is found to be much more efficient than conjugate-gradient and variable-learning-rate methods when the network contains no more than a few hundred weights.
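
The update at the heart of this scheme is the standard Levenberg-Marquardt step (notation ours):

```latex
\Delta \mathbf{w} = -\left( J^{\top} J + \mu I \right)^{-1} J^{\top} \mathbf{e}
```

where \(J\) is the Jacobian of the error vector \(\mathbf{e}\) with respect to the weights and \(\mu\) is adapted during training: large \(\mu\) gives small gradient-descent-like steps, while small \(\mu\) approaches the fast Gauss-Newton step.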