Open Access · Proceedings Article
The Cascade-Correlation Learning Architecture
Scott E. Fahlman, Christian Lebiere
Vol. 2, pp. 524–532
TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract:
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
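The two-phase loop the abstract describes — train the output weights, then train and freeze a candidate hidden unit that correlates with the remaining error — can be sketched in a few dozen lines. This is a compressed illustration, not Fahlman and Lebiere's actual implementation: the paper uses Quickprop and a pool of competing candidate units, whereas here plain gradient steps and a single candidate stand in for both, and the function names (`train_output_weights`, `train_candidate`) are chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_output_weights(H, y, epochs=3000, lr=1.0):
    # Phase 1: train only the weights feeding the output unit,
    # treating the current feature columns H as fixed.
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        p = sigmoid(H @ w)
        w -= lr * H.T @ (p - y) / len(y)
    return w

def train_candidate(H, residual, epochs=3000, lr=1.0):
    # Phase 2: train a candidate unit to maximize the magnitude of the
    # covariance between its activation and the residual output error.
    v = rng.normal(scale=0.5, size=H.shape[1])
    e = residual - residual.mean()
    for _ in range(epochs):
        a = sigmoid(H @ v)
        s = (a - a.mean()) @ e
        sign = 1.0 if s >= 0 else -1.0
        v += lr * sign * (H.T @ (e * a * (1.0 - a))) / len(e)
    return v  # input-side weights are frozen from here on

# Classic XOR demo: start with no hidden units at all.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
H = np.hstack([X, np.ones((4, 1))])  # raw inputs plus a bias column

for _ in range(3):  # grow at most three hidden units
    w = train_output_weights(H, y)
    residual = y - sigmoid(H @ w)
    if np.max(np.abs(residual)) < 0.1:
        break
    v = train_candidate(H, residual)  # candidate sees every existing feature
    H = np.hstack([H, sigmoid(H @ v)[:, None]])  # install the frozen unit

w = train_output_weights(H, y)
pred = (sigmoid(H @ w) > 0.5).astype(float)
```

Note how each installed unit becomes a new column of `H`, so later candidates receive it as an input — this is what produces the cascaded, deepening topology without any error back-propagation through frozen connections.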
Citations
Journal Article (DOI)
A Robust Evolutionary Algorithm for Training Neural Networks
Jinn-Moon Yang, Cheng-Yan Kao, et al.
TL;DR: Experimental results indicate that the new approach, called the Family Competition Evolutionary Algorithm (FCEA), is able to stably solve problems, and is very competitive with the comparative evolutionary algorithms.
Journal Article (DOI)
Forecasting output using oil prices: A cascaded artificial neural network approach
Farooq Malik, Mahdi Nasereddin, et al.
TL;DR: In this paper, an application of artificial neural networks to short-term forecasting of GDP using oil prices and cascaded learning is proposed; the authors find that both the mean absolute forecasting error and the mean squared forecasting error are reduced by the cascaded neural network relative to conventional artificial neural networks and popular linear models.
Journal Article (DOI)
Divide and conquer neural networks
TL;DR: An algorithm called Divide and Conquer Neural Networks is described, which creates a feedforward neural network architecture during training, based upon the training examples, and the results show the algorithm effectively learns viable architectures that can generalize.
Posted Content
Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing
TL;DR: The Contextual Graph Markov Model (CGMM) combines ideas from generative models and neural networks for processing graph data; it is based on a constructive methodology that builds a deep architecture from layers of probabilistic models.
Journal Article (DOI)
A genetic approach to automatic neural network architecture optimization
TL;DR: This work introduces a novel strategy capable of generating a network topology while avoiding overfitting in the majority of cases, at affordable computational cost.
References
Book Chapter (DOI)
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Monograph (DOI)
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Journal Article (DOI)
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are presented.
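The idea behind such learning-rate heuristics — grow a parameter's step size while successive gradients agree in sign, and cut it when they disagree — can be sketched as follows. This is a simplified sign-based variant for illustration, not a faithful reproduction of the four heuristics in the cited paper; `adaptive_gd` and its `up`/`down` factors are names chosen here.

```python
import numpy as np

def adaptive_gd(grad_fn, x0, lr0=0.1, up=1.1, down=0.5, steps=300):
    """Steepest descent where each parameter keeps its own learning rate:
    grown by `up` when successive gradient components agree in sign,
    cut by `down` when they disagree."""
    x = np.asarray(x0, dtype=float).copy()
    lr = np.full_like(x, lr0)
    prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        agree = g * prev > 0          # same sign as last step
        flip = g * prev < 0           # sign reversed: we overshot
        lr = np.where(agree, lr * up, np.where(flip, lr * down, lr))
        x -= lr * g
        prev = g
    return x

# Quadratic bowl with very different curvature per axis — the case
# where a single global rate is forced to crawl along the flat axis.
grad = lambda x: np.array([0.2 * x[0], 20.0 * x[1]])
xmin = adaptive_gd(grad, [5.0, 5.0])
```

The flat axis (curvature 0.2) sees its rate grow geometrically until progress is fast, while the steep axis (curvature 20) triggers sign flips that shrink its rate until it stops oscillating — per-parameter adaptation handles both at once.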