Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
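The training loop described in the abstract can be summarized in a short sketch. The NumPy code below is a minimal, illustrative reconstruction, not the authors' implementation: it assumes a single candidate unit per round (the original trains a pool of candidates), plain gradient descent/ascent in place of Quickprop, and hypothetical helper names (train_outputs, train_candidate).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(H, y, epochs=2000, lr=0.5):
    """Train only the output-side weights on the current feature matrix H."""
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = sigmoid(H @ w) - y               # residual error of the network
        w -= lr * (H.T @ err) / len(y)
    return w, sigmoid(H @ w) - y

def train_candidate(H, err, epochs=2000, lr=0.5):
    """Train a candidate unit to maximize the covariance between its output
    and the residual error; its input-side weights are frozen afterwards."""
    v = np.random.default_rng(H.shape[1]).normal(size=H.shape[1])
    for _ in range(epochs):
        a = sigmoid(H @ v)
        cov = (a - a.mean()) @ (err - err.mean())
        grad = np.sign(cov) * (H.T @ ((err - err.mean()) * a * (1 - a)))
        v += lr * grad / len(err)              # gradient *ascent* on |cov|
    return v

def cascade_correlation(X, y, max_hidden=8, tol=0.05):
    H = np.hstack([X, np.ones((len(X), 1))])   # start minimal: inputs + bias only
    w, err = train_outputs(H, y)
    hidden = []
    while np.mean(err ** 2) > tol and len(hidden) < max_hidden:
        v = train_candidate(H, err)            # candidate sees inputs + all prior units
        hidden.append(v)                       # input-side weights frozen from here on
        H = np.hstack([H, sigmoid(H @ v)[:, None]])  # unit becomes a permanent feature
        w, err = train_outputs(H, y)           # only the output weights are retrained
    return hidden, w, err

# Toy usage: the XOR problem, a classic test for constructive architectures.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
hidden, w, err = cascade_correlation(X, y)
print("hidden units added:", len(hidden), " final MSE:", np.mean(err ** 2))
```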



Citations

A Review of Dimension Reduction Techniques

TL;DR: A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves, and methods based on topologically continuous maps, such as Kohonen's maps or the generative topographic mapping.
Journal Article

River flow forecasting using recurrent neural networks

TL;DR: Recurrent neural networks were trained to forecast the monthly flows of a river in India, with a catchment area of 5189 km2 up to the gauging site, and performed better than feed-forward networks.
Journal Article

Bootstrapping with Noise: An Effective Regularization Technique

Yuval Raviv et al.
01 Dec 1996
TL;DR: The two-spiral problem, a highly non-linear, noise-free data set, is used to demonstrate the effectiveness of bootstrap samples with added noise for training feedforward networks and for other statistical methods such as generalized additive models.
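As a rough sketch of the idea (not the authors' exact setup), the snippet below trains an ensemble of small feed-forward networks, each on a bootstrap resample whose inputs are perturbed with Gaussian noise, and averages their predictions. The network size, noise level, toy regression task, and the use of scikit-learn's MLPRegressor are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def noisy_bootstrap_ensemble(X, y, n_models=10, noise_std=0.1, seed=0):
    """Bagging with noise: each member sees a resampled, input-jittered data set."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))               # bootstrap resample
        Xb = X[idx] + rng.normal(0.0, noise_std, X[idx].shape)   # add noise to inputs
        models.append(MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0).fit(Xb, y[idx]))
    return models

def ensemble_predict(models, X):
    return np.mean([m.predict(X) for m in models], axis=0)       # average the members

# Toy usage: noisy 1-D regression rather than the two-spiral task.
rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 100)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 100)
models = noisy_bootstrap_ensemble(X, y)
print(ensemble_predict(models, X)[:5])
```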
Journal Article

Neural network-based screening for groundwater reclamation under uncertainty

TL;DR: In this article, the authors used the pattern-classification capability of a neural network and its ability to learn from examples to identify two important features that determine the criticality of a realization.
Journal Article

A new pruning heuristic based on variance analysis of sensitivity information

TL;DR: A new pruning algorithm is presented that uses sensitivity analysis to quantify the relevance of input and hidden units, and a statistical pruning heuristic based on variance analysis is proposed to decide which units to prune.
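The snippet below is a loose sketch of that kind of heuristic, not the paper's exact statistical test: for a small single-hidden-layer network with a sigmoid output, it computes per-pattern sensitivities of the output with respect to each hidden unit and each input, then flags units whose sensitivities have both a small mean and a small variance as pruning candidates. The network form and the thresholds are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sensitivities(X, W1, b1, w2, b2):
    """Per-pattern derivatives of a sigmoid output w.r.t. hidden units and inputs."""
    h = sigmoid(X @ W1 + b1)                     # hidden activations, shape (n, H)
    o = sigmoid(h @ w2 + b2)                     # scalar output per pattern, shape (n,)
    d_o = (o * (1.0 - o))[:, None]               # output-unit derivative
    s_hidden = d_o * w2[None, :]                 # d o / d h_j, shape (n, H)
    G = (h * (1.0 - h)) * w2[None, :]
    s_input = d_o * (G @ W1.T)                   # d o / d x_i, shape (n, I)
    return s_input, s_hidden

def prune_candidates(S, mean_tol=1e-2, var_tol=1e-3):
    """Units whose sensitivity is both small and nearly constant across patterns."""
    small_mean = np.abs(S.mean(axis=0)) < mean_tol
    small_var = S.var(axis=0) < var_tol
    return np.where(small_mean & small_var)[0]

# Toy usage: one hidden unit is made irrelevant and should be flagged.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)
w2, b2 = rng.normal(size=6), 0.0
w2[5] = 0.0                                      # zero outgoing weight -> irrelevant unit
s_in, s_hid = sensitivities(X, W1, b1, w2, b2)
print("prunable hidden units:", prune_candidates(s_hid))
print("prunable inputs:", prune_candidates(s_in))
```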
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Book

Learning internal representations by error propagation

TL;DR: This work presents the generalized delta rule for learning internal representations by error propagation, reports simulation results, and discusses some further generalizations of the rule.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: A study of Steepest Descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are presented.
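For illustration, here is a compact sketch of one family of such heuristics, per-weight learning-rate adaptation in the spirit of the delta-bar-delta rule: each weight keeps its own step size, which is increased additively while successive gradient components agree in sign and decreased multiplicatively when they disagree. The constants and the ill-conditioned quadratic test function are assumptions, not taken from the paper.

```python
import numpy as np

def delta_bar_delta(grad_fn, w, steps=100, kappa=0.01, phi=0.5, theta=0.7, lr0=0.05):
    """Gradient descent with a separate, adaptively tuned learning rate per weight."""
    lr = np.full_like(w, lr0)          # one learning rate per parameter
    bar = np.zeros_like(w)             # exponential average of past gradients
    for _ in range(steps):
        g = grad_fn(w)
        agree = bar * g                            # positive when recent gradients share a sign
        lr = np.where(agree > 0, lr + kappa, lr)   # consistent sign: grow additively
        lr = np.where(agree < 0, lr * phi, lr)     # sign flip: shrink multiplicatively
        bar = (1.0 - theta) * g + theta * bar
        w = w - lr * g
    return w

# Toy usage: a poorly conditioned quadratic, where fixed-rate steepest descent is slow.
A = np.diag([1.0, 50.0])
grad = lambda w: A @ w
print(delta_bar_delta(grad, np.array([5.0, 5.0])))   # both coordinates approach 0
```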