Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
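To make the training procedure summarized in the abstract concrete, here is a minimal NumPy sketch of the Cascade-Correlation loop: train the output weights, measure the residual error, train a candidate unit to maximize the correlation between its output and that error, then freeze the candidate's input-side weights and install it as a permanent feature detector. The plain gradient steps (rather than Quickprop), the single candidate instead of a candidate pool, and the XOR example are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # logistic activation used throughout this sketch
    return 1.0 / (1.0 + np.exp(-x))

def train_outputs(H, T, epochs=2000, lr=1.0):
    # Train only the output-side weights on the current feature matrix H
    # (inputs, bias, and any frozen hidden units), using the delta rule.
    W = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = f(H @ W)
        W += lr * H.T @ ((T - Y) * Y * (1 - Y)) / len(H)
    return W

def train_candidate(H, E, epochs=2000, lr=1.0):
    # Train a candidate unit's input-side weights to maximize the magnitude of
    # the correlation between its output V and the residual error E.
    w = rng.normal(scale=0.5, size=H.shape[1])
    for _ in range(epochs):
        V = f(H @ w)
        Ec = E - E.mean(axis=0)
        sign = np.sign(((V - V.mean())[:, None] * Ec).sum(axis=0))
        grad = H.T @ ((Ec * sign).sum(axis=1) * V * (1 - V))
        w += lr * grad / len(H)
    return w

def cascade_correlation(X, T, max_hidden=8, tol=0.01):
    H = np.hstack([X, np.ones((len(X), 1))])    # start minimal: inputs plus bias
    for _ in range(max_hidden):
        W = train_outputs(H, T)
        E = T - f(H @ W)                         # residual error signal
        if np.mean(E ** 2) < tol:
            break
        w = train_candidate(H, E)                # input weights trained, then frozen
        H = np.hstack([H, f(H @ w)[:, None]])    # unit becomes a permanent feature
    return H, train_outputs(H, T)

# Illustrative usage on XOR; may need more units or epochs depending on the seed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
H, W = cascade_correlation(X, T)
print(np.round(f(H @ W), 2))
```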



Citations
Journal Article

Application of the cascade correlation algorithms (CCA) to bearing fault classification problems

TL;DR: The experiments reported in the paper show that the CCA is a viable approach for identifying several different bearing conditions using a minimal network structure without compromising accuracy.
Proceedings Article

G-Prop-II: global optimization of multilayer perceptrons using GAs

TL;DR: The G-Prop-II algorithm is proposed, which combines the advantages of the global search performed by the GA over the MLP parameter space and the local search of the BP algorithm to train MLPs with a single hidden layer.
Journal Article

MLP iterative construction algorithm

TL;DR: A novel multi-layer perceptron neural network architecture selection and weight training algorithm for classification problems that improves the effectiveness of Backprop and enables Backprop to solve a new class of problems, i.e., problems with areas of low mean-squared error.
Journal Article

A parallel growing architecture for self-organizing maps with unsupervised learning

TL;DR: This work presents a SOM that processes the whole input in parallel and organizes itself over time, and develops networks that do not reorganize their structure from scratch every time a new set of input vectors is presented, but rather adjust their internal architecture in accordance with previous mappings.
Journal Article

The application of intelligent and soft-computing techniques to software engineering problems: a review

TL;DR: It is found that NNs are the most often used non-parametric method in SE, and that there is immense scope to apply other equally well-known methods such as fuzzy logic, decision trees and rough sets.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
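As a concrete illustration of the generalized delta rule covered in this chapter, here is a minimal NumPy sketch of back-propagation for a single hidden layer; the network size, learning rate, and XOR task are assumptions made for the example, not details taken from the chapter itself.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # illustrative XOR task
T = np.array([[0], [1], [1], [0]], float)

addb = lambda A: np.hstack([A, np.ones((len(A), 1))])   # append a bias input
f = lambda x: 1.0 / (1.0 + np.exp(-x))                  # logistic activation

W1 = rng.normal(scale=0.5, size=(3, 4))   # input + bias -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))   # hidden + bias -> output weights

for _ in range(5000):
    H = f(addb(X) @ W1)                    # forward pass
    Y = f(addb(H) @ W2)
    dY = (T - Y) * Y * (1 - Y)             # error term (delta) at the output units
    dH = (dY @ W2[:-1].T) * H * (1 - H)    # delta propagated back to hidden units
    W2 += 0.5 * addb(H).T @ dY             # generalized delta rule weight updates
    W1 += 0.5 * addb(X).T @ dH

print(np.round(Y, 2))   # typically close to [0, 1, 1, 0]; depends on the seed
```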
Book

Learning internal representations by error propagation

TL;DR: This work introduces the generalized delta rule for training multi-layer networks by back-propagating error signals, and demonstrates through simulations that such networks can learn useful internal representations.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: This paper presents a study of steepest descent, analyzes why it can be slow to converge, and proposes four heuristics for achieving faster rates of convergence.
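The learning-rate adaptation idea summarized above can be sketched as a per-weight rule in the delta-bar-delta style: each weight keeps its own rate, which grows additively while successive gradients agree in sign and shrinks multiplicatively when they disagree. The constants and the exact form below are illustrative assumptions, not necessarily the four heuristics proposed in the paper.

```python
import numpy as np

def adapt_rates(grad, avg_grad, rates, kappa=0.05, phi=0.5, theta=0.7):
    # Per-weight learning-rate adaptation (delta-bar-delta style sketch):
    # grow a rate additively when the current gradient agrees in sign with the
    # running average of past gradients, shrink it multiplicatively otherwise.
    agree = grad * avg_grad
    rates = np.where(agree > 0, rates + kappa, rates)
    rates = np.where(agree < 0, rates * phi, rates)
    avg_grad = theta * avg_grad + (1 - theta) * grad   # exponential trace of gradients
    return rates, avg_grad

# Usage inside an ordinary gradient-descent loop (sketch):
#   rates, avg_grad = adapt_rates(grad, avg_grad, rates)
#   weights -= rates * grad
```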