Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
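To make the procedure concrete, here is a minimal NumPy sketch of the Cascade-Correlation loop under stated simplifications: plain gradient steps instead of the paper's Quickprop, sigmoid units throughout, and a single candidate unit rather than a pool of candidates. All function names and hyperparameters are illustrative, not the authors' implementation.

import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(feats, y, w, lr=2.0, epochs=2000):
    # Train only the output weights; all hidden-unit input weights stay frozen.
    for _ in range(epochs):
        p = sigmoid(feats @ w)
        w += lr * feats.T @ ((y - p) * p * (1 - p)) / len(feats)
    return w

def train_candidate(feats, residual, lr=2.0, epochs=2000):
    # Train a candidate whose activation correlates with the residual error.
    v_w = np.random.randn(feats.shape[1]) * 0.5
    E = residual - residual.mean()
    for _ in range(epochs):
        V = sigmoid(feats @ v_w)
        Vc = V - V.mean()
        sign = np.sign(Vc @ E)                  # sign of the covariance
        v_w += lr * sign * feats.T @ (E * V * (1 - V)) / len(feats)
    return v_w

# XOR demo: bias + 2 inputs feed the output; hidden units are added one
# at a time, each seeing the inputs and every previously installed unit.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
feats, w = X.copy(), np.zeros(3)
w = train_outputs(feats, y, w)
for _ in range(2):                              # install two hidden units
    residual = y - sigmoid(feats @ w)
    h = sigmoid(feats @ train_candidate(feats, residual))  # now frozen
    feats = np.column_stack([feats, h])         # unit becomes a new feature
    w = train_outputs(feats, y, np.append(w, 0.0))
print(np.round(sigmoid(feats @ w), 2))          # approaches [0, 1, 1, 0]

In the paper, a pool of candidates is trained in parallel and the best-correlating one is installed; freezing its input weights is what lets each unit remain a permanent feature detector.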



Citations
Journal ArticleDOI

Employee turnover

TL;DR: This research found that an NNSOA-trained NN, evaluated in a 10-fold cross-validation experimental design, can predict with a high degree of accuracy the turnover rate for a small mid-west manufacturing company.
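For readers unfamiliar with the evaluation design mentioned above, a generic 10-fold cross-validation skeleton looks like the following; the fit and score callables are placeholders, not the paper's model.

import numpy as np

def cross_validate(X, y, fit, score, k=10, seed=0):
    # Shuffle indices once, split into k folds, hold each fold out in turn.
    idx = np.random.RandomState(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = fit(X[train], y[train])                 # train on k-1 folds
        scores.append(score(model, X[test], y[test]))   # score held-out fold
    return float(np.mean(scores))                       # average over k folds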
Patent

Hierarchical deep convolutional neural network for image classification

TL;DR: In this article, a hierarchical branching deep convolutional neural network (HD-CNN) is proposed to improve the performance of CNNs by using multinomial logistic loss and a temporal sparsity penalty.
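As a rough illustration of the loss described above, here is a numerically stable softmax cross-entropy (multinomial logistic) loss with an added sparsity penalty on branch activations; the exact form of HD-CNN's temporal sparsity term differs, so the mean-activation penalty below is only an assumed stand-in.

import numpy as np

def softmax_xent(logits, label):
    # Multinomial logistic (softmax cross-entropy) loss for one example.
    z = logits - logits.max()
    return -(z[label] - np.log(np.exp(z).sum()))

def hd_loss(logits, label, branch_probs, lam=0.1):
    # branch_probs: routing probabilities over coarse-category branches.
    # The penalty encourages sparse routing; lam is an illustrative weight.
    return softmax_xent(logits, label) + lam * branch_probs.mean()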
Proceedings ArticleDOI

Classification using hierarchical mixtures of experts

TL;DR: This paper extends the hierarchical mixture of experts to classification; results are reported for three common classification benchmark tests: exclusive-OR, N-input parity, and two spirals.
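A hierarchical mixture of experts composes softmax "gates" that softly route the input down a tree of simple experts. A minimal two-level forward pass, with linear softmax experts and all weight shapes assumed for illustration, could look like:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hme_predict(x, top_gate, gates, experts):
    # top_gate: (branches, d); gates[i]: (experts_i, d);
    # experts[i][j]: (classes, d). All weights are illustrative.
    g_top = softmax(top_gate @ x)                 # pick a branch softly
    out = 0.0
    for i, branch in enumerate(experts):
        g = softmax(gates[i] @ x)                 # pick an expert softly
        mix = sum(g[j] * softmax(E @ x) for j, E in enumerate(branch))
        out = out + g_top[i] * mix                # blend branch predictions
    return out                                    # class probabilities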
Journal ArticleDOI

Constructive learning of recurrent neural networks: limitations of recurrent cascade correlation and a simple solution

TL;DR: This work proves that one method, recurrent cascade correlation, has fundamental limitations in representation and thus in its learning capabilities, and gives a "preliminary" approach on how to get around these limitations by devising a simple constructive training method.
Journal ArticleDOI

Translation initiation start prediction in human cDNAs with high accuracy.

TL;DR: This work improves upon current methods and provides a performance-guaranteed prediction of the Translation Initiation Start in cDNA sequences by using two modules, one sensitive to the conserved motif and the other sensitive to the coding/non-coding potential around the start codon.
References
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
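The generalized delta rule is the weight update at the heart of error propagation. A minimal one-hidden-layer sketch, assuming sigmoid units and squared error with illustrative names, is:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, W2, lr=0.5):
    h = sigmoid(W1 @ x)                       # hidden activations
    o = sigmoid(W2 @ h)                       # output activations
    delta_o = (t - o) * o * (1 - o)           # output delta: (t - o) f'(net)
    delta_h = (W2.T @ delta_o) * h * (1 - h)  # propagate deltas backward
    W2 += lr * np.outer(delta_o, h)           # delta rule: dw = lr * delta * input
    W1 += lr * np.outer(delta_h, x)
    return W1, W2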
Journal ArticleDOI

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: This paper presents a study of steepest descent, an analysis of why it can be slow to converge, and four proposed heuristics for achieving faster rates of convergence.
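Jacobs' paper is commonly associated with the delta-bar-delta rule: each weight gets its own learning rate, which grows additively when the current gradient agrees in sign with an exponential average of past gradients and shrinks multiplicatively when it disagrees. A hedged NumPy sketch follows; the kappa, phi, and theta values are illustrative, not the paper's settings.

import numpy as np

def delta_bar_delta(grad, bar, lr, kappa=0.01, phi=0.5, theta=0.7):
    # grad: current gradient; bar: exponential average of past gradients;
    # lr: per-weight learning rates (all arrays of the same shape).
    agree = grad * bar
    lr = np.where(agree > 0, lr + kappa,          # same sign: grow additively
         np.where(agree < 0, lr * phi, lr))       # sign flip: shrink multiplicatively
    bar = (1 - theta) * grad + theta * bar        # update the running average
    return lr, bar                                # then apply w -= lr * grad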