Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
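The growth procedure described in the abstract lends itself to a compact illustration. Below is a minimal sketch of the cascade idea in Python with NumPy; it is not the authors' implementation (the paper trains units with Quickprop, while this sketch uses plain gradient steps and a single output), and all function and variable names are illustrative assumptions.

import numpy as np

def train_outputs(X, y, epochs=300, lr=0.1):
    # Train a single linear output unit on the current feature matrix X by
    # gradient descent on squared error; return weights and residual errors.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        residual = X @ w - y
        w -= lr * X.T @ residual / len(y)
    return w, X @ w - y

def train_candidate(X, residual, epochs=300, lr=0.1):
    # Train one candidate hidden unit so that the covariance between its tanh
    # activation and the residual output error is as large as possible.
    rng = np.random.default_rng(1)
    v = 0.1 * rng.standard_normal(X.shape[1])
    e = residual - residual.mean()
    for _ in range(epochs):
        h = np.tanh(X @ v)
        cov = np.mean((h - h.mean()) * e)
        sign = 1.0 if cov >= 0 else -1.0
        grad = sign * X.T @ (e * (1 - h ** 2)) / len(e)   # gradient of |cov| w.r.t. v
        v += lr * grad                                    # gradient ascent on |cov|
    return v

def cascade_correlation(X, y, max_hidden=3, tol=1e-3):
    # Grow the network one frozen hidden unit at a time.
    feats = np.hstack([X, np.ones((len(X), 1))])          # inputs plus a bias column
    frozen = []                                           # frozen input-side weight vectors
    while True:
        w, residual = train_outputs(feats, y)
        if np.mean(residual ** 2) < tol or len(frozen) >= max_hidden:
            return frozen, w, feats
        v = train_candidate(feats, residual)              # input-side weights frozen from here on
        frozen.append(v)
        # The new unit's output becomes an extra input for later units and for the output.
        feats = np.hstack([feats, np.tanh(feats @ v)[:, None]])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]                       # a small nonlinear target
hidden, w_out, feats = cascade_correlation(X, y)
print("hidden units added:", len(hidden), "final MSE:", np.mean((feats @ w_out - y) ** 2))

Each installed unit keeps its input-side weights fixed and contributes its activation as a new input feature, so after every addition only the output weights are retrained; this is the sense in which no error signals are back-propagated through the network's connections.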



Citations
Journal ArticleDOI

Ensemble Learning Using Decorrelated Neural Networks

Bruce E. Rosen
01 Dec 1996
TL;DR: Empirical results show that when individual networks are forced to be decorrelated with one another, the resulting ensembles have lower mean squared errors than ensembles whose individual networks are trained independently, without decorrelation training.
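As a rough illustration of the idea summarized in that TL;DR, the Python snippet below sketches a per-member objective that penalizes correlated errors across ensemble members; the function name, the exact penalty form, and the weighting lam are illustrative assumptions, not the formulation published in that paper.

import numpy as np

def decorrelated_objectives(preds, y, lam=0.5):
    # preds: (n_members, n_samples) predictions; y: (n_samples,) targets.
    errors = preds - y                                    # per-member error signals
    mse = np.mean(errors ** 2, axis=1)                    # individual squared errors
    penalty = np.zeros(len(preds))
    for i in range(len(preds)):
        for j in range(len(preds)):
            if i != j:
                # Positive when two members tend to err in the same direction.
                penalty[i] += np.mean(errors[i] * errors[j])
    return mse + lam * penalty                            # training objective for each member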
Journal ArticleDOI

Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

TL;DR: The R package RSNNS is described, which provides a convenient interface to the popular Stuttgart Neural Network Simulator (SNNS) and encapsulates the relevant SNNS parts in a C++ class for sequential and parallel usage of different networks.
Journal ArticleDOI

Trading spaces: Computation, representation, and the limits of uninformed learning

TL;DR: The most distinctive features of human cognition – language and culture – may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.
Journal ArticleDOI

Connectionist modelling in psychology: A localist manifesto

TL;DR: A generalized localist model is proposed that exhibits many of the properties of fully distributed models and can be applied to a number of problems that are difficult for fully distributed models.
Journal ArticleDOI

Comparing neural network and autoregressive moving average techniques for the provision of continuous river flow forecasts in two contrasting catchments

TL;DR: In this article, the forecasting power of neural network (NN) and autoregressive moving average (ARMA) models is compared based on 3-year continuous river flow data for two contrasting catchments: the Upper River Wye and the River Ouse.
References
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: Presents a study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence.
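The TL;DR does not spell out the four heuristics. As a hedged illustration, the Python snippet below sketches one widely cited per-weight adaptation rule in the spirit of that line of work (a delta-bar-delta-style update): a weight's learning rate grows additively while successive gradients agree in sign and shrinks multiplicatively when they disagree. Parameter names and default values are illustrative assumptions.

import numpy as np

def delta_bar_delta_step(w, grad, lr, avg_grad, kappa=0.01, phi=0.5, theta=0.7):
    # Per-weight learning rates: additive increase when the current gradient agrees
    # in sign with a running average of past gradients, multiplicative decrease otherwise.
    agree = grad * avg_grad > 0
    lr = np.where(agree, lr + kappa, lr * phi)
    avg_grad = (1.0 - theta) * grad + theta * avg_grad    # exponential average of past gradients
    return w - lr * grad, lr, avg_grad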