Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
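The abstract describes the growth procedure only at a high level, so a minimal sketch of the idea may help make it concrete. The sketch below is not the paper's implementation: it uses plain gradient descent rather than Quickprop, trains a single candidate unit instead of a pool of candidates, uses a linear output unit, and runs on a toy XOR task; all of those specifics are assumptions made for illustration.

```python
# Minimal NumPy sketch of the growth loop described above (assumptions noted
# in the lead-in): train the output weights, then repeatedly train a candidate
# hidden unit to covary with the residual error, freeze its input-side
# weights, and make its activation available as a new input feature.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(H, y, epochs=2000, lr=0.5):
    """Fit linear output weights on the current feature matrix H."""
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = H @ w - y                      # residual error over the training set
        w -= lr * H.T @ err / len(y)         # plain gradient descent (not Quickprop)
    return w

def train_candidate(H, residual, epochs=2000, lr=0.5):
    """Train one candidate unit to maximize the covariance between its
    activation and the residual output error (the correlation step)."""
    rng = np.random.default_rng(0)
    v = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(epochs):
        a = sigmoid(H @ v)
        cov = np.sum((a - a.mean()) * (residual - residual.mean()))
        # gradient ascent on |covariance| w.r.t. the candidate's input weights
        grad = np.sign(cov) * H.T @ ((residual - residual.mean()) * a * (1.0 - a))
        v += lr * grad / len(residual)
    return v

# Toy task: XOR, which the initial single-layer network cannot represent.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
H = np.hstack([X, np.ones((len(X), 1))])     # inputs plus a bias column

for _ in range(3):                           # grow at most three hidden units
    w = train_outputs(H, y)
    residual = H @ w - y
    if np.max(np.abs(residual)) < 0.1:
        break                                # error small enough, stop growing
    v = train_candidate(H, residual)         # input-side weights, frozen afterwards
    H = np.hstack([H, sigmoid(H @ v)[:, None]])  # unit feeds all later units

print(np.round(H @ train_outputs(H, y), 2))  # network outputs for the four patterns
```

After each new unit is added, only the output weights are retrained; the frozen unit's activations simply become one more column of the feature matrix, which is the cascading structure the abstract describes.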



Citations
Journal Article

Neurogenetic learning: an integrated method of designing and training neural networks using genetic algorithms

TL;DR: The proposed neurogenetic learning scheme provides an integrated means to design and train neural networks, and uses the gradient-descent approach for fine-tuning of the network weights and biases.
Journal Article

Deep Cascade Learning

TL;DR: The features learned by the deep cascade learning algorithm are investigated, and it is found that better, domain-specific representations are learned in early layers when compared to what is learned with end-to-end training.
Proceedings Article

Transfer functions: hidden possibilities for better neural networks.

TL;DR: Several possibilities for using transfer functions of different types in neural models are discussed, including enhancement of input features, selection of functions from a fixed pool, optimization of the parameters of a general type of function, regularization of large networks with heterogeneous nodes, and constructive approaches.
Journal Article

A Decade of Kasabov's Evolving Connectionist Systems: A Review

TL;DR: This paper reviews the current state of the art in the field of ECoS networks via a substantial literature review and offers some suggestions for future directions of research into ECoS networks.
Journal Article

Dealing with Missing Values in Data

TL;DR: This paper presents simple methods for missing-value imputation, such as using the most common value, the mean or median, and the closest-fit approach, as well as methods based on data-mining algorithms such as k-nearest neighbors, neural networks, and association rules; it discusses their usability and illustrates issues with their applicability through examples.
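Since the summary lists the imputation strategies only by name, a small Python illustration of the two simplest ones (column mean for numeric data, most common value for categorical data) may be useful; the toy data below is invented for the example and does not come from the paper.

```python
# Toy illustration of simple missing-value imputation: column mean for numeric
# data and the most common value (mode) for categorical data.
import numpy as np
from collections import Counter

numeric = np.array([[1.0, 2.0],
                    [np.nan, 3.0],
                    [4.0, np.nan],
                    [5.0, 6.0]])

imputed = numeric.copy()
for col in range(numeric.shape[1]):
    col_mean = np.nanmean(numeric[:, col])    # mean of the observed values only
    missing = np.isnan(imputed[:, col])
    imputed[missing, col] = col_mean          # fill the gaps with the column mean
print(imputed)

colors = ["red", "blue", None, "red", None]
mode = Counter(c for c in colors if c is not None).most_common(1)[0][0]
print([c if c is not None else mode for c in colors])   # mode imputation
```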
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: This paper presents a study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster rates of convergence.
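As a rough illustration of what learning-rate adaptation means in practice, the sketch below keeps a separate step size per parameter, increases it while successive gradients agree in sign, and shrinks it when they disagree. This shows the general heuristic only; the specific constants and update rule are assumptions made for the example and are not the four heuristics proposed in the paper.

```python
# Hedged sketch of per-parameter learning-rate adaptation: grow a weight's
# step size while its gradient keeps the same sign, shrink it when the sign
# flips.  Constants and the exact rule are illustrative assumptions.
import numpy as np

def adaptive_gd(grad_fn, w, steps=200, lr0=0.01, up=0.005, down=0.5):
    lr = np.full_like(w, lr0)              # one learning rate per parameter
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        agree = g * prev_grad > 0          # same sign as the previous gradient?
        lr = np.where(agree, lr + up, lr)              # grow additively on agreement
        lr = np.where(g * prev_grad < 0, lr * down, lr)  # shrink on disagreement
        w = w - lr * g
        prev_grad = g
    return w

# Example: a poorly scaled quadratic, where a single global step size is
# either too small for one coordinate or unstable for the other.
grad = lambda w: np.array([0.1 * w[0], 10.0 * w[1]])
print(adaptive_gd(grad, np.array([5.0, 5.0])))
```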