Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
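The growth loop the abstract describes can be sketched in a few dozen lines. The NumPy snippet below is a simplified illustration, not the authors' implementation: the toy task, the single candidate unit per round, and all hyperparameters are assumptions, and the candidate is trained by gradient ascent on the covariance between its output and the residual error, the quantity Cascade-Correlation maximizes.

```python
# Hedged sketch of the Cascade-Correlation idea: start minimal, repeatedly
# train the output weights, train one candidate hidden unit to track the
# residual error, then freeze its input weights and install it.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(H, y, lr=0.1, epochs=500):
    """Gradient training of the output weights only (linear output unit)."""
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = H @ w - y
        w -= lr * H.T @ err / len(y)
    return w

def train_candidate(H, residual, lr=0.5, epochs=500):
    """Adjust a candidate unit's input weights to maximize the covariance
    between its activation and the residual error (the correlation score,
    up to normalisation)."""
    v = rng.normal(scale=0.5, size=H.shape[1])
    centred_err = residual - residual.mean()
    for _ in range(epochs):
        a = sigmoid(H @ v)                               # candidate activations
        cov = np.dot(a - a.mean(), centred_err)
        grad = np.sign(cov) * H.T @ (centred_err * a * (1 - a))
        v += lr * grad / len(residual)                   # gradient ascent on |cov|
    return v

# Toy regression task; the network starts minimal (inputs plus a bias unit).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]
H = np.column_stack([X, np.ones(len(X))])

for unit in range(5):
    w = train_outputs(H, y)                              # train output weights
    residual = H @ w - y                                 # current error signal
    print(f"units added: {unit}, RMSE = {np.sqrt(np.mean(residual ** 2)):.4f}")
    v = train_candidate(H, residual)                     # train one candidate...
    H = np.column_stack([H, sigmoid(H @ v)])             # ...then freeze and install it
```

Each installed unit receives connections from the inputs and from every previously installed unit, which is what produces the cascaded, self-determined topology the abstract refers to.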



Citations
Journal Article

Artificial neural network modeling for predicting final moisture content of individual Sugi (Cryptomeria japonica) samples during air-drying

TL;DR: In this article, an ANN model was developed based on initial moisture content, basic density, annual ring orientation and heartwood ratio to predict the final moisture content (MCf) of individual wood samples; the model showed good agreement with the experimentally measured MCf, with a high correlation coefficient and a low root mean square error (RMSE).
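As a rough illustration of the kind of model this summary describes, the sketch below fits a small feed-forward regressor to four placeholder features named in the summary (initial moisture content, basic density, annual ring orientation, heartwood ratio). The synthetic data, network size, and library choice are assumptions, not the study's.

```python
# Hedged sketch: a small MLP regressor predicting final moisture content (MCf)
# from four wood properties. All data here is a synthetic placeholder.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Placeholder features: initial MC (%), basic density, ring orientation (deg),
# heartwood ratio -- synthetic stand-ins, not the study's measurements.
n = 300
X = rng.uniform([30, 0.3, 0.0, 0.0], [200, 0.5, 90.0, 1.0], size=(n, 4))
y = 8 + 0.02 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, n)   # toy MCf target (%)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:200], y[:200])

pred = model.predict(X[200:])
rmse = np.sqrt(mean_squared_error(y[200:], pred))
r = np.corrcoef(pred, y[200:])[0, 1]
print(f"toy RMSE = {rmse:.3f}, correlation = {r:.3f}")
```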
Journal Article

RBF networks training using a dual extended Kalman filter

Iulian B. Ciocoiu
01 Oct 2002
TL;DR: A new supervised learning procedure for training RBF networks is proposed, which uses a pair of parallel running Kalman filters to sequentially update both the output weights and the centres of the network.
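The sketch below shows only the linear half of such a scheme: a Kalman-style sequential update of an RBF network's output weights with the centres held fixed. The paper's dual arrangement runs a second, extended Kalman filter over the centres in parallel; that part is omitted here, and the toy task and parameter values are assumptions.

```python
# Hedged sketch: sequential Kalman-filter update of an RBF network's output
# weights (centres fixed). The centre-update half of the dual scheme is omitted.
import numpy as np

rng = np.random.default_rng(2)

def rbf_features(x, centres, width=0.5):
    """Gaussian basis activations for a single input sample."""
    return np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * width ** 2))

# Toy 1-D regression task with fixed, evenly spaced centres.
centres = np.linspace(-1, 1, 10)[:, None]
w = np.zeros(len(centres))              # output weights (state estimate)
P = np.eye(len(centres)) * 10.0         # state covariance
R = 0.05                                # assumed observation-noise variance

for _ in range(500):                    # sample-by-sample (sequential) updates
    x = rng.uniform(-1, 1, size=1)
    y = np.sin(3 * x[0]) + rng.normal(0, 0.1)
    h = rbf_features(x, centres)
    K = P @ h / (h @ P @ h + R)         # Kalman gain
    w += K * (y - h @ w)                # innovation-driven weight update
    P -= np.outer(K, h) @ P             # covariance update

x_test = np.linspace(-1, 1, 5)
print([round(float(rbf_features(np.array([v]), centres) @ w), 3) for v in x_test])
```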
Proceedings Article

Understanding the Evolutionary Process of Grammatical Evolution Neural Networks for Feature Selection in Genetic Epidemiology

TL;DR: The evolutionary characteristics of GENN are compared to those of a random search neural network strategy to better understand the benefits provided by the evolutionary learning process, including advantages with respect to chromosome size and the representation of functional versus non-functional features within the models generated by the two approaches.
Journal Article

Practical complexity control in multilayer perceptrons

TL;DR: The dependency of overfitting on neural network complexity is analysed, and, from the perspective of the bias-variance trade-off, the evolution of the error and the effects of complexity-control techniques are characterised.
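For context, the sketch below contrasts an unregularised multilayer perceptron with two widely used complexity-control techniques, an L2 weight penalty and early stopping on a validation split. The specific techniques analysed in the paper may differ, and the data here is a synthetic placeholder.

```python
# Hedged sketch of complexity control in MLPs: no control vs. weight decay
# (L2 penalty) vs. early stopping on a held-out validation fraction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(0, 0.3, 400)    # noisy target invites overfitting

X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

models = {
    "no control":     MLPRegressor(hidden_layer_sizes=(50,), max_iter=3000, random_state=0),
    "weight decay":   MLPRegressor(hidden_layer_sizes=(50,), alpha=1e-2,
                                   max_iter=3000, random_state=0),
    "early stopping": MLPRegressor(hidden_layer_sizes=(50,), early_stopping=True,
                                   validation_fraction=0.2, max_iter=3000, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name:15s} test R^2 = {m.score(X_te, y_te):.3f}")
```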
Journal Article

Combinatorial evolution of regression nodes in feedforward neural networks

TL;DR: A novel algorithm (CERN) is proposed that uses a special form of combinatorial search to optimise groups of neural nodes; it achieves significantly better accuracy with fewer nodes than spherical basis nodes optimised by k-means clustering.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
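The generalized delta rule named in these section titles is ordinary error back-propagation. A minimal sketch for a one-hidden-layer network on the XOR task is shown below; the task, initialisation, and learning rate are illustrative choices, not taken from the chapter.

```python
# Minimal sketch of the generalized delta rule (back-propagation) for a
# one-hidden-layer network trained on XOR.
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: delta terms for output and hidden layers
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    # weight changes proportional to (delta x activation), the delta rule
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 3))   # should approach [0, 1, 1, 0]
```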
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: The paper presents a study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence.
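One well-known scheme in this spirit keeps a separate learning rate for each weight, increasing it additively while the current gradient agrees in sign with a running average of past gradients and shrinking it multiplicatively when the sign flips. The sketch below applies such a heuristic to an ill-conditioned quadratic; the constants and the test function are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of per-parameter learning-rate adaptation: grow a rate while
# the gradient sign agrees with its running average, shrink it on a sign flip.
import numpy as np

def grad(w):
    """Gradient of an ill-conditioned quadratic, f(w) = 0.5 * w^T A w."""
    A = np.diag([1.0, 10.0])
    return A @ w

w = np.array([5.0, 5.0])
rates = np.full_like(w, 0.02)           # one learning rate per parameter
bar = np.zeros_like(w)                  # running average of past gradients
kappa, phi, theta = 0.005, 0.3, 0.7     # grow step, shrink factor, averaging

for step in range(300):
    g = grad(w)
    rates = np.where(g * bar > 0, rates + kappa, rates)         # additive increase
    rates = np.where(g * bar < 0, rates * (1 - phi), rates)     # multiplicative decrease
    bar = (1 - theta) * g + theta * bar
    w -= rates * g

print(np.round(w, 4))                   # should approach [0, 0]
```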