Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
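The abstract describes the training loop only at a high level, so a compact sketch may help. The NumPy code below is a minimal illustration under several simplifying assumptions that differ from the paper: tanh candidate units, linear output units refit by ordinary least squares (the original work trains with Quickprop), a single candidate rather than a pool of candidates, and plain gradient ascent on the candidate-residual correlation. All identifiers, such as `CascadeCorrelationSketch`, are hypothetical.

```python
import numpy as np


def tanh(x):
    return np.tanh(x)


def tanh_deriv(x):
    return 1.0 - np.tanh(x) ** 2


class CascadeCorrelationSketch:
    """Hypothetical, simplified sketch of the Cascade-Correlation training loop."""

    def __init__(self, n_inputs, n_outputs, rng=None):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.hidden_weights = []      # frozen input-side weights, one vector per hidden unit
        self.output_weights = None    # retrained after every installed unit
        self.rng = rng or np.random.default_rng(0)

    def _features(self, X):
        """Inputs, a bias column, and the cascaded outputs of all frozen hidden units."""
        cols = [X, np.ones((X.shape[0], 1))]
        for w in self.hidden_weights:
            prev = np.hstack(cols)            # each unit sees the inputs, bias, and earlier units
            cols.append(tanh(prev @ w)[:, None])
        return np.hstack(cols)

    def _fit_outputs(self, X, Y):
        """Retrain only the output-side weights (here by least squares)."""
        F = self._features(X)
        self.output_weights, *_ = np.linalg.lstsq(F, Y, rcond=None)

    def predict(self, X):
        return self._features(X) @ self.output_weights

    def _train_candidate(self, X, Y, steps=500, lr=0.05):
        """Maximize the correlation between the candidate's activation and the residual error."""
        F = self._features(X)                 # the candidate sees the same cascaded features
        E = Y - F @ self.output_weights       # residual error per pattern and output unit
        E = E - E.mean(axis=0)
        w = self.rng.normal(scale=0.1, size=F.shape[1])
        for _ in range(steps):
            net = F @ w
            v = tanh(net)
            corr = (v - v.mean()) @ E         # one correlation value per output unit
            signs = np.sign(corr)
            # Gradient of S = sum_o |corr_o| with respect to the candidate's input weights.
            grad = F.T @ (tanh_deriv(net) * (E @ signs))
            w += lr * grad / len(X)
        return w

    def fit(self, X, Y, max_hidden=10, tol=1e-3):
        self._fit_outputs(X, Y)
        for _ in range(max_hidden):
            if np.mean((self.predict(X) - Y) ** 2) < tol:
                break
            w = self._train_candidate(X, Y)
            self.hidden_weights.append(w)     # freeze: this unit's input weights never change again
            self._fit_outputs(X, Y)           # only the output-side weights are retrained
        return self


if __name__ == "__main__":
    # Two-input XOR, the kind of small benchmark discussed in the paper.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    net = CascadeCorrelationSketch(2, 1).fit(X, Y)
    print(net.predict(X).round(2), "hidden units:", len(net.hidden_weights))
```

The two properties emphasized in the abstract appear directly in the sketch: each installed unit receives connections from the inputs and from every previously installed unit (the cascade), and once a unit is appended to `hidden_weights` its input-side weights are never touched again; only the output-side weights are refit.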

Citations
Journal Article

Pruning using parameter and neuronal metrics

TL;DR: The distance from the original network to the pruned network, measured in a metric defined by the probability distributions of all possible networks, is proposed as a measure of optimality for architecture selection algorithms for neural networks.
Journal Article

Neuro-genetic system for stock index prediction

TL;DR: The proposed neuro-genetic system for short-term stock index prediction works well in both upward and downward trends, and its results are compared with those of four other stock market trading models.
Book Chapter

A new cuckoo search based Levenberg-Marquardt (CSLM) algorithm

TL;DR: An improved Levenberg-Marquardt back-propagation (LMBP) algorithm, integrated and trained with the Cuckoo Search (CS) algorithm to avoid the local-minima problem and achieve fast convergence, is proposed.
Journal Article

Geometrical synthesis of MLP neural networks

TL;DR: The presented procedure is shown to yield single-hidden-layer neural networks able to solve any classification problem over a finite number of patterns.
Proceedings Article

Learning by gradient descent in function space

TL;DR: A modified backpropagation algorithm for connectionist networks, which performs gradient descent in function space, is presented, and its advantages include faster learning and ease of interpretation of the trained network.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: Steepest descent is studied, the reasons it can be slow to converge are analyzed, and four heuristics for achieving faster rates of convergence are proposed.