Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
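The following is a minimal sketch of the training loop the abstract describes, assuming a single sigmoid output, one candidate unit per round, and plain gradient steps in place of the Quickprop updates used in the original work; the function names, constants, and the XOR example are illustrative.

import numpy as np

# Minimal Cascade-Correlation sketch (single output, sigmoid units).
# Assumptions not taken from the paper: plain gradient steps instead of
# Quickprop, fixed iteration counts, and a single candidate per round.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(X, y, epochs=2000, lr=1.0):
    # Train output weights on the current inputs (original inputs plus
    # the activations of all frozen hidden units).
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = sigmoid(X @ w)
        grad = X.T @ ((pred - y) * pred * (1 - pred)) / len(y)
        w -= lr * grad
    return w

def train_candidate(X, residual, epochs=2000, lr=1.0):
    # Train one candidate unit to maximize the magnitude of the covariance
    # between its activation and the residual network error; its input-side
    # weights are frozen afterwards.
    v = np.random.randn(X.shape[1]) * 0.1
    for _ in range(epochs):
        a = sigmoid(X @ v)
        cov = np.dot(a - a.mean(), residual - residual.mean())
        sign = np.sign(cov) if cov != 0 else 1.0
        # Gradient of |covariance| with respect to the candidate's input weights.
        grad = sign * X.T @ ((residual - residual.mean()) * a * (1 - a))
        v += lr * grad / len(residual)
    return v

def cascade_correlation(X, y, max_hidden=5, tol=0.05):
    X = np.hstack([X, np.ones((len(X), 1))])   # bias column
    hidden = []                                 # frozen input-side weight vectors
    for _ in range(max_hidden + 1):
        w = train_outputs(X, y)
        residual = sigmoid(X @ w) - y
        if np.mean(residual ** 2) < tol or len(hidden) == max_hidden:
            break
        v = train_candidate(X, residual)
        hidden.append(v)
        # The new unit becomes a permanent feature detector: append its
        # activations as an extra input available to the output layer
        # and to later candidates.
        X = np.hstack([X, sigmoid(X @ v)[:, None]])
    return hidden, w

if __name__ == "__main__":
    # XOR: not linearly separable, so at least one hidden unit is needed.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    hidden, w = cascade_correlation(X, y)
    print("hidden units added:", len(hidden))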



Citations
Journal Article

Cooperative-competitive genetic evolution of radial basis function centers and widths for time series prediction

TL;DR: In this paper, a cooperative-competitive genetic algorithm is proposed to evolve radial basis function (RBF) networks, jointly evolving the RBF centers and widths for time series prediction.
Journal Article

Toward global optimization of neural networks: a comparison of the genetic algorithm and backpropagation

TL;DR: It is demonstrated that the genetic algorithm can not only serve as a global search algorithm but, by appropriately defining the objective function, can simultaneously achieve a parsimonious architecture.
Journal Article

Neural networks for wave forecasting

TL;DR: A simple 3-layered feed-forward network is developed; the results show that an appropriately trained network can provide satisfactory forecasts in open, wider areas, in deep water, and when the sampling and prediction interval is large, such as a week.
Journal Article

Précis of Beyond modularity: A developmental perspective on cognitive science

TL;DR: Considers what is special about human cognition by speculating on the status of the representations underlying the structure of behavior in other species, in light of Fodor's anticonstructivist nativism and Piaget's antinativist constructivism.
Journal Article

The effect of internal parameters and geometry on the performance of back-propagation neural networks: an empirical study

TL;DR: The results obtained indicate that learning rate, momentum, the gain of the transfer function, epoch size and network geometry have a significant impact on training speed, but not on generalisation ability.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Book

Learning internal representations by error propagation

TL;DR: In this work, the generalized delta rule is introduced for learning internal representations by error propagation, and simulation results illustrating the rule are presented.
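Since both of the entries above concern the generalized delta rule, a minimal sketch of that rule may be helpful: a single-hidden-layer sigmoid network trained by propagating the output error backwards through the layers. The layer sizes, learning rate, and the XOR task are illustrative choices, not taken from the chapter.

import numpy as np

# Minimal sketch of the generalized delta rule (error back-propagation)
# for a single-hidden-layer sigmoid network trained on mean squared error.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, n_hidden=3, epochs=10000, lr=2.0, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the output error through the layers.
        delta_out = (out - y[:, None]) * out * (1 - out)
        delta_h = (delta_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ delta_out / len(X)
        b2 -= lr * delta_out.mean(axis=0)
        W1 -= lr * X.T @ delta_h / len(X)
        b1 -= lr * delta_h.mean(axis=0)
    return W1, b1, W2, b2

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    W1, b1, W2, b2 = train_backprop(X, y)
    print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel().round(2))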
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are presented.
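As an illustration of learning-rate adaptation, here is a minimal sketch of a per-weight step-size heuristic in that spirit: each weight's rate grows additively when successive gradients agree in sign and shrinks multiplicatively when they disagree. The constants and the toy quadratic objective are illustrative and not taken from the cited paper.

import numpy as np

# Minimal sketch of per-weight learning-rate adaptation: each weight keeps
# its own step size, increased additively when the current gradient agrees
# in sign with an average of past gradients and decreased multiplicatively
# when the signs disagree.

def adaptive_gradient_descent(grad_fn, w, steps=200,
                              kappa=0.0005, phi=0.5, decay=0.7):
    rates = np.full_like(w, 0.001)    # per-weight learning rates
    avg_grad = np.zeros_like(w)       # exponential average of past gradients
    for _ in range(steps):
        g = grad_fn(w)
        agree = g * avg_grad
        rates = np.where(agree > 0, rates + kappa,           # same sign: additive increase
                 np.where(agree < 0, rates * phi, rates))    # sign flip: multiplicative decrease
        w = w - rates * g
        avg_grad = decay * avg_grad + (1 - decay) * g
    return w

if __name__ == "__main__":
    # Quadratic bowl with very different curvatures per coordinate, where a
    # single global learning rate converges slowly.
    A = np.array([1.0, 100.0])
    grad_fn = lambda w: 2 * A * w
    print(adaptive_gradient_descent(grad_fn, np.array([1.0, 1.0])))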