Open Access Proceedings Article
The Cascade-Correlation Learning Architecture
Scott E. Fahlman, Christian Lebiere
Vol. 2, pp. 524–532
TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract:
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
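The candidate-training phase described in the abstract can be sketched as follows: a candidate hidden unit is trained to maximize the magnitude of the covariance between its output and the network's residual error, after which its input-side weights are frozen. This is a minimal illustrative sketch in NumPy using plain gradient ascent (the paper itself uses the Quickprop update, and trains a pool of candidates rather than one); the function names and the single-candidate setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_candidate(X, residual, lr=0.5, epochs=300):
    """Train one candidate hidden unit to maximize |S|, the covariance
    between the unit's output and the residual error signal, as in the
    candidate phase of Cascade-Correlation. Plain gradient ascent is
    used here in place of the paper's Quickprop update."""
    w = rng.normal(scale=0.1, size=X.shape[1])  # candidate input weights
    e = residual - residual.mean()              # centered residual error
    for _ in range(epochs):
        v = np.tanh(X @ w)                      # candidate activations
        vc = v - v.mean()
        s = vc @ e                              # covariance S to maximize
        # Gradient of |S| w.r.t. w through the tanh nonlinearity;
        # the mean term drops out because e is centered.
        grad = np.sign(s) * (X.T @ (e * (1.0 - v ** 2)))
        w += lr * grad / len(X)
    return w  # these input weights would now be frozen

def covariance(X, w, residual):
    """|S| for a candidate with input weights w."""
    v = np.tanh(X @ w)
    return abs((v - v.mean()) @ (residual - residual.mean()))
```

After the candidate's weights are frozen, the unit's output is treated as a fixed feature available to the output layer (and to later candidates), and only the output-side weights are retrained.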
Citations
Proceedings Article
Finding the number of hidden neurons for an MLP neural network using coarse to fine search technique
TL;DR: The coarse to fine search method is employed to find the number of neurons in the hidden layer of the multi-layer perceptron (MLP) neural network, using the YCbCr colour space due to its capability to separate the luminance and chrominance components explicitly.
Patent
Automatic neural-net model generation and maintenance
Meng Zhuo, Pao Yoh-Han
TL;DR: In this article, a function approximation node is incrementally added to the neural net model and function parameters of other nodes in the neural network model are updated by using the function parameter of the other nodes prior to the addition of the function approximation.
Journal Article
Biometrics on smart cards: an approach to keyboard behavioral signature
Journal Article
Min-max predictive control of a heat exchanger using a neural network solver
TL;DR: The use of a neural network (NN) to approximate the solution of the min-max problem is proposed and the number of inputs of the NN is determined by the order and time delay of the model together with the control horizon.
Journal Article
An Optimized Artificial Neural Network Structure to Predict Clay Sensitivity in a High Landslide Prone Area Using Piezocone Penetration Test (CPTu) Data: A Case Study in Southwest of Sweden
TL;DR: In this paper, a developed and optimized five-layer feed-forward back-propagation neural network with a 4-4-4-3-1 topology, a network error of 0.00201, and R2 = 0.941 under the conjugate gradient descent ANN training algorithm was introduced to predict the clay sensitivity parameter in a specified area in the southwest of Sweden.
References
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Monograph
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Journal ArticleDOI
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are presented.