Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
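The abstract describes the core mechanism: a candidate hidden unit is trained to maximize the correlation between its output and the network's residual error, then installed with its input-side weights frozen. Below is a minimal NumPy sketch of that candidate-training step. The function name, the tanh activation, and the plain gradient-ascent loop are illustrative assumptions rather than details taken from the paper; the original work trains a pool of candidates (typically with Quickprop) and installs the best-scoring one.

import numpy as np

def train_candidate(X, E_resid, n_steps=200, lr=0.1, seed=0):
    # X       : (P, N) per-pattern inputs to the candidate (external inputs
    #           plus the outputs of all previously installed hidden units)
    # E_resid : (P, O) residual error of the current network at each output
    rng = np.random.default_rng(seed)
    P, N = X.shape
    w = rng.normal(scale=0.1, size=N)        # candidate's input-side weights
    for _ in range(n_steps):
        v = np.tanh(X @ w)                   # candidate activation per pattern
        e = E_resid - E_resid.mean(axis=0)   # errors, centered over patterns
        cov = (v - v.mean()) @ e             # covariance with each output's error, shape (O,)
        S = np.abs(cov).sum()                # correlation score being maximized
        dS_dv = e @ np.sign(cov)             # gradient of S w.r.t. the activations, shape (P,)
        grad = X.T @ (dS_dv * (1.0 - v**2))  # chain rule through tanh
        w += lr * grad / P                   # gradient ascent on S
    return w, S                              # w is frozen once the unit is installed

After installation, only the weights from the new unit (and the existing inputs) to the output units are retrained, which is why no error signals need to be propagated backwards through the network.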


Citations
Book Chapter

A Cascade Network Algorithm Employing Progressive RPROP

TL;DR: Casper, like Cascor, is a constructive learning algorithm which builds cascade networks, but instead of using weight freezing and a correlation measure to install new neurons, Casper uses a variation of RPROP to train the whole network.
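The summary above contrasts Casper's use of RPROP with Cascade-Correlation's weight freezing. As a point of reference, here is a minimal sketch of one RPROP update (the variant that suppresses the step when the gradient changes sign); the function name and constants are illustrative, not taken from the cited paper.

import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    # Each weight keeps its own step size: it grows when the gradient keeps
    # its sign between iterations and shrinks when the sign flips.
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    effective_grad = np.where(sign_change < 0, 0.0, grad)   # skip update where sign flipped
    w = w - np.sign(effective_grad) * step
    return w, effective_grad, step                           # feed effective_grad back as prev_grad

Because the update uses only the sign of the gradient, not its magnitude, RPROP can retrain all weights of the growing cascade network without the per-weight learning-rate tuning that plain gradient descent would need.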
Journal Article

Learning flexible sensori-motor mappings in a complex network

TL;DR: This work studies different reinforcement learning rules in a multilayer network in order to reproduce monkey behavior in a visuomotor association task, and shows that learning performance does not substantially degrade as layers are added to the network, even for a complex problem.
Journal Article

Fast initialization for cascade-correlation learning

TL;DR: Empirical simulations show that the new initialization method can significantly speed up cascade-correlation learning compared to candidate training, while overall performance remains similar to, or even better than, that obtained with candidate training.
Journal Article

Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods

TL;DR: This review explores sequential learning, where information to be learned and retained arrives in separate episodes over time, in the context of artificial neural networks, and examines the pseudorehearsal mechanism, an effective solution to the catastrophic forgetting problem in back-propagation networks.
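For readers unfamiliar with pseudorehearsal: the idea is to generate "pseudo-items" by passing random inputs through the already-trained network and recording its own outputs as targets, then interleaving those items with the new episode's data. The sketch below illustrates that buffer construction; net_fn, the uniform input range, and the buffer size are illustrative assumptions, not details from the cited review.

import numpy as np

def make_pseudo_items(net_fn, n_items, n_inputs, seed=0):
    # net_fn is a stand-in for the already-trained network (callable on a batch).
    # Its own responses to random inputs become the rehearsal targets, which
    # helps preserve the old input-output mapping while new data is learned.
    rng = np.random.default_rng(seed)
    X_pseudo = rng.uniform(0.0, 1.0, size=(n_items, n_inputs))
    Y_pseudo = net_fn(X_pseudo)
    return X_pseudo, Y_pseudo

# usage sketch: interleave pseudo-items with the new episode's data
# X_train = np.vstack([X_new, X_pseudo]); Y_train = np.vstack([Y_new, Y_pseudo])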
Dissertation

Rapid learning in robotics

TL;DR: A new learning algorithm, the Parameterized Self-Organizing Map, is derived from a model of neural self-organization and has a number of properties that make it particularly well suited to applications in robotics.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are presented.
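One well-known outcome of this line of work is per-weight learning-rate adaptation of the delta-bar-delta kind: each weight's rate grows additively while the current gradient agrees in sign with an exponential average of past gradients, and shrinks multiplicatively when the signs disagree. The sketch below illustrates that idea; the function name and the constants kappa, phi, and theta are illustrative choices, not values from the cited paper.

import numpy as np

def delta_bar_delta_step(w, grad, lr, bar_delta, kappa=0.05, phi=0.5, theta=0.7):
    agree = grad * bar_delta                    # sign agreement with the gradient average
    lr = np.where(agree > 0, lr + kappa, lr)    # grow the rate additively on agreement
    lr = np.where(agree < 0, lr * phi, lr)      # shrink it multiplicatively on disagreement
    w = w - lr * grad                           # per-weight gradient descent step
    bar_delta = (1 - theta) * grad + theta * bar_delta   # update the gradient average
    return w, lr, bar_delta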