Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
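
The growth loop described in the abstract can be made concrete with a short sketch. The NumPy code below is a minimal illustration under simplifying assumptions, not Fahlman and Lebiere's reference implementation: plain gradient descent stands in for the Quickprop updates used in the paper, a single candidate unit replaces the candidate pool, and there is one sigmoid output unit. The names train_outputs, train_candidate, and cascade_correlation are illustrative.

# A minimal Cascade-Correlation sketch (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(F, y, epochs=2000, lr=0.5):
    # Train output weights on the current features F (inputs, bias, frozen hidden units).
    w = rng.normal(scale=0.1, size=F.shape[1])
    for _ in range(epochs):
        pred = sigmoid(F @ w)
        grad = F.T @ ((pred - y) * pred * (1.0 - pred)) / len(y)
        w -= lr * grad
    return w, sigmoid(F @ w) - y              # weights and residual errors

def train_candidate(F, err, epochs=2000, lr=0.5):
    # Train one candidate unit to maximize the magnitude of the covariance
    # between its activation and the residual error (the "correlation" step).
    v = rng.normal(scale=0.1, size=F.shape[1])
    for _ in range(epochs):
        a = sigmoid(F @ v)
        ac, ec = a - a.mean(), err - err.mean()
        sign = np.sign(ac @ ec)               # ascend |covariance|
        v += lr * sign * (F.T @ (ec * a * (1.0 - a))) / len(err)
    return v                                  # input-side weights, frozen from now on

def cascade_correlation(X, y, max_hidden=5, tol=0.05):
    F = np.hstack([X, np.ones((len(X), 1))])  # minimal network: inputs + bias only
    for _ in range(max_hidden):
        w, err = train_outputs(F, y)
        if np.mean(err ** 2) < tol:
            break
        v = train_candidate(F, err)           # new hidden unit
        F = np.hstack([F, sigmoid(F @ v)[:, None]])  # its output feeds all later units
    w, err = train_outputs(F, y)
    return F, w, np.mean(err ** 2)

# Example: XOR, which the minimal network (no hidden units) cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
F, w, mse = cascade_correlation(X, y)
print("hidden units added:", F.shape[1] - 3, "final MSE:", round(float(mse), 4))

In this sketch, a hidden unit's activations are appended to the feature matrix F only after its input-side weights v have been trained, and v is never updated again; later output training and later candidates see the unit as a fixed feature detector, which mirrors the freezing step described in the abstract.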


Citations
Journal Article

From neural networks to the brain: autonomous mental development

TL;DR: Discusses why autonomous development is necessary, based on a concept called task muddiness, and introduces results for a series of research issues, including a new paradigm for autonomous development, mental architectures, a developmental algorithm, a refined classification of types of machine learning, and spatial and time complexity.
Journal Article

Training a single sigmoidal neuron is hard

TL;DR: Proves that even the simplest architecture, a single neuron that applies a sigmoidal activation function sigma (satisfying certain natural axioms) to the weighted sum of n inputs, is hard to train.
Journal Article

Progressive Feature Extraction with a Greedy Network-growing Algorithm.

Ryotaro Kamimura, 01 Jan 2003
TL;DR: Experimental results confirm that the new method can acquire significant information, that more explicit features can be extracted, and that the new model can cope with inappropriate feature detection in the early stage of learning.
Proceedings Article

Neural Network Trainer with Second Order Learning Algorithms

TL;DR: Describes a software tool (NNT) developed for neural network training; in addition to the traditional Error Back Propagation (EBP) algorithm, several second-order algorithms were implemented, and they are able to train arbitrarily connected feedforward neural networks.
Journal Article

A New Data Mining Scheme Using Artificial Neural Networks

TL;DR: Proposes a novel algorithm to extract symbolic rules from ANNs; the extracted rules are easily explainable and comparable with other methods in terms of the number of rules, the average number of conditions per rule, and accuracy.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: Presents a study of steepest descent, an analysis of why it can be slow to converge, and four proposed heuristics for achieving faster rates of convergence.