Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
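The abstract outlines the full training loop: train the output weights, train a candidate hidden unit to correlate with the remaining error, freeze the candidate's input-side weights, install it as a new feature detector, and repeat. The sketch below illustrates that loop for a single-output regression task, using tanh hidden units and plain gradient steps in place of the faster Quickprop updates and candidate pools used in the paper; all function and variable names here are illustrative, not taken from the paper.

import numpy as np

def train_cascade_correlation(X, y, max_hidden=5, epochs=500, lr=0.1,
                              rng=np.random.default_rng(0)):
    n = X.shape[0]
    inputs = np.hstack([X, np.ones((n, 1))])             # external inputs plus bias
    w_out = rng.normal(scale=0.1, size=inputs.shape[1])  # output-side weights

    def train_output(inputs, w_out):
        # Phase 1: adjust only the output weights on whatever inputs exist now.
        for _ in range(epochs):
            resid = inputs @ w_out - y
            w_out -= lr * inputs.T @ resid / n
        return w_out, inputs @ w_out - y

    w_out, resid = train_output(inputs, w_out)
    for _ in range(max_hidden):
        # Phase 2: train one candidate unit to maximize the magnitude of the
        # covariance between its activation and the residual output error.
        w_cand = rng.normal(scale=0.1, size=inputs.shape[1])
        for _ in range(epochs):
            act = np.tanh(inputs @ w_cand)
            cov = np.sum((act - act.mean()) * (resid - resid.mean()))
            grad = np.sign(cov) * inputs.T @ ((resid - resid.mean()) * (1.0 - act ** 2))
            w_cand += lr * grad / n                       # gradient ascent on |cov|
        # Freeze the candidate's input-side weights: its activation becomes one
        # more input column, available to the output layer and to every later
        # candidate, which is what produces the cascaded (deepening) structure.
        inputs = np.hstack([inputs, np.tanh(inputs @ w_cand)[:, None]])
        w_out = np.append(w_out, 0.0)
        w_out, resid = train_output(inputs, w_out)
    return w_out, inputs

# Toy usage: fit a nonlinear 1-D target and report the final training error.
X = np.linspace(-1.0, 1.0, 50)[:, None]
y = np.sin(3.0 * X[:, 0])
w_out, features = train_cascade_correlation(X, y)
print("final MSE:", np.mean((features @ w_out - y) ** 2))

Because the matrix of available inputs grows with every installed unit, each new candidate sees the outputs of all earlier hidden units as well as the original inputs, and no error signal ever needs to be propagated backwards through frozen connections.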



Citations
Journal Article

A Comparative Performance Study on Hybrid Swarm Model for Micro array Data

TL;DR: A Flexible Neural Tree (FNT) model for microarray data is constructed using nature-inspired algorithms; it is superior to existing metaheuristic algorithms and solves multimodal optimization problems.
Proceedings Article · DOI

Comparative analysis of RSSI, SNR and Noise level parameters applicability for WLAN positioning purposes

TL;DR: In this article, an extensive experimental analysis of the usefulness of RSSI, SNR, and Noise level parameters for WLAN positioning purposes was conducted; the obtained results indicate that the spatial distribution of the noise level parameter contains less location-dependent information than RSSI or SNR.
Posted Content

The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning

TL;DR: It is discovered that there is a straightforward way, however expensive, to serially prune 40-70% of the neurons in a trained network with minimal effect on the learning representation and without any re-training.
Proceedings Article · DOI

Network of dynamic neurons in fault detection systems

TL;DR: The cascade network of dynamic neurons (CNDN) is considered as a neural residual generator for fault detection in dynamic systems, and is useful in neural modelling of dynamic systems for FDI (Fault Detection and Isolation).
Proceedings Article · DOI

On lateral connections in feed-forward neural networks

TL;DR: A new algorithm based on gradient descent, aimed at reducing the herd effect in a feed-forward network, is derived for the proposed architecture and its efficacy is evaluated through simulations.
References
Book Chapter · DOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article · DOI

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: A study of Steepest Descent is presented with an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence are proposed.
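The heuristics this reference proposes adapt a separate learning rate for every weight. As a loose illustration of that idea (a sketch only, not the paper's exact rule), the fragment below grows a weight's step size additively while successive gradient components agree in sign and shrinks it multiplicatively when they disagree; the constants and names are assumptions made for the example.

import numpy as np

def adapt_step(w, grad, prev_grad, rates, kappa=0.005, phi=0.5):
    # Per-weight learning rates: additive increase on sign agreement,
    # multiplicative decrease on sign disagreement.
    agree = np.sign(grad) == np.sign(prev_grad)
    rates = np.where(agree, rates + kappa, rates * phi)
    return w - rates * grad, rates

# Usage on a poorly scaled quadratic f(w) = 0.5 * w @ A @ w, where a single
# global learning rate would have to be tuned to the stiffest direction.
A = np.diag([1.0, 10.0])
w = np.array([1.0, 1.0])
rates = np.full(2, 0.01)
prev_grad = np.zeros(2)
for _ in range(500):
    grad = A @ w
    w, rates = adapt_step(w, grad, prev_grad, rates)
    prev_grad = grad
print("final w:", w)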