Open Access Proceedings Article
The Cascade-Correlation Learning Architecture
Scott E. Fahlman, Christian Lebiere
Advances in Neural Information Processing Systems, Vol. 2, pp. 524-532
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
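The candidate-training step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the paper trains a pool of candidate units with Quickprop, whereas this toy uses plain gradient ascent on the correlation score S (the summed magnitude of covariance between a candidate's activation and the residual output errors); the sigmoid activation, learning rate, and the `x`/`err` toy data are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def candidate_score(v, err):
    """Cascade-Correlation candidate score S: sum over output units of the
    magnitude of the covariance between the candidate's activation v (one
    value per training pattern) and the residual error err (patterns x outputs)."""
    v_c = v - v.mean()                   # centered candidate activations
    e_c = err - err.mean(axis=0)         # centered residual errors
    return np.abs(v_c @ e_c).sum()

def train_candidate(x, err, steps=300, lr=0.1, seed=0):
    """Train one candidate unit's input-side weights to maximize S by
    gradient ascent.  x holds the activations feeding the candidate
    (network inputs plus any previously frozen hidden units)."""
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.normal(size=x.shape[1])
    e_c = err - err.mean(axis=0)
    for _ in range(steps):
        v = sigmoid(x @ w)
        corr = (v - v.mean()) @ e_c      # covariance with each output's error
        # dS/dw: sign of each covariance times d(covariance)/dw,
        # where dv/dw = f'(net) * x and f'(net) = v * (1 - v) for the sigmoid
        grad = ((e_c @ np.sign(corr)) * v * (1.0 - v)) @ x
        w += lr * grad
    return w

# toy demo with random inputs and residual errors
rng = np.random.default_rng(1)
x = rng.normal(size=(30, 4))             # 30 patterns, 4 incoming activations
err = rng.normal(size=(30, 2))           # residual error at 2 output units
w0 = 0.1 * np.random.default_rng(0).normal(size=4)
before = candidate_score(sigmoid(x @ w0), err)
w = train_candidate(x, err)
after = candidate_score(sigmoid(x @ w), err)
```

In the full algorithm, once the candidate's score stops improving its input weights are frozen, the unit is installed as a new hidden layer, and only the output-side weights are then retrained.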
Citations
Journal Article
A systematic review on overfitting control in shallow and deep neural networks
TL;DR: A systematic review of overfitting-control methods that categorizes them into passive, active, and semi-active subsets, covering the theoretical and experimental backgrounds of these methods, their strengths and weaknesses, and emerging techniques for overfitting detection.
Journal Article
Solving differential equations with constructed neural networks
TL;DR: A novel hybrid method for the solution of ordinary and partial differential equations is presented, which creates trial solutions in neural network form using a scheme based on grammatical evolution.
Journal Article
Robust Full Bayesian Learning for Radial Basis Networks
TL;DR: It is shown that, by calibrating the full hierarchical Bayesian prior, the classical Akaike information criterion, Bayesian information criterion, and minimum description length model-selection criteria can be obtained within a penalized likelihood framework.
Journal Article
Process modeling using stacked neural networks
TL;DR: A new technique for neural-network-based modeling of chemical processes is proposed, inspired by the technique of stacked generalization proposed by Wolpert, and results obtained demonstrate the promise of this approach for improved neural-network-based plant-process modeling.
Journal Article
A Fast Simplified Fuzzy ARTMAP Network
TL;DR: An algorithmic variant of the simplified fuzzy ARTMAP (SFAM) network, whose structure resembles that of feed-forward networks, is presented; it is shown to be much faster than Kasuba's algorithm, and the difference in speed grows enormously as the number of training samples increases.
References
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Monograph
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Journal Article
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent and an analysis of why it can be slow to converge are presented, and four heuristics for achieving faster rates of convergence are proposed.