Open Access Proceedings Article

The Cascade-Correlation Learning Architecture

TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
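The growth loop described in the abstract can be sketched in a few lines of code. The sketch below is an illustrative simplification, not the authors' implementation: it assumes a single sigmoid output unit, trains one candidate unit per round to maximize the covariance between its activation and the residual output error, and uses plain gradient steps where the original work uses a faster update rule; all function names and constants are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_output(H, y, w, lr=1.0, epochs=2000):
    """Adjust only the output weights on the current pool of inputs and frozen units."""
    for _ in range(epochs):
        p = sigmoid(H @ w)
        w -= lr * H.T @ (p - y) / len(y)
    return w

def train_candidate(H, residual, rng, lr=1.0, epochs=2000):
    """Train one candidate unit to maximize the covariance between its
    activation and the residual error; its input weights are then frozen."""
    v = rng.normal(scale=0.5, size=H.shape[1])
    e_c = residual - residual.mean()
    for _ in range(epochs):
        a = sigmoid(H @ v)
        sign = 1.0 if (a - a.mean()) @ e_c >= 0 else -1.0
        # Gradient ascent on |covariance| w.r.t. the candidate's input weights.
        v += lr * H.T @ (sign * e_c * a * (1.0 - a)) / len(residual)
    return v

def cascade_correlation(X, y, max_hidden=3, seed=0):
    rng = np.random.default_rng(seed)
    H = np.hstack([X, np.ones((len(X), 1))])        # start minimal: inputs + bias only
    w = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(max_hidden):
        w = train_output(H, y, w)
        residual = sigmoid(H @ w) - y
        if np.mean(residual ** 2) < 1e-3:            # good enough, stop growing
            break
        v = train_candidate(H, residual, rng)        # input-side weights, frozen from now on
        H = np.hstack([H, sigmoid(H @ v)[:, None]])  # the new unit feeds all later units
        w = np.append(w, 0.0)                        # fresh output weight for the new unit
    return train_output(H, y, w), H

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)          # XOR: not linearly separable
    w, H = cascade_correlation(X, y)
    print(np.round(sigmoid(H @ w), 2))
```

Because each new unit's input weights are frozen once it is installed, every earlier unit remains a fixed feature detector that later candidate units and the output weights can build on.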


Citations
Journal Article

Bone age cluster assessment and feature clustering analysis based on phalangeal image rough segmentation

TL;DR: Experimental results reveal that the presented FNN system is very effective at assigning a hand radiograph to an appropriate bone age cluster and demonstrates the rationality of the newly defined stages.
Journal Article

Combining linear discriminant functions with neural networks for supervised learning

TL;DR: A novel supervised learning method that combines linear discriminant functions with neural networks is proposed, resulting in a tree-structured hybrid architecture that provides an efficient way to apply existing neural networks to large-scale problems.
Journal Article

An Experimentation Platform for On-Chip Integration of Analog Neural Networks: A Pathway to Trusted and Robust Analog/RF ICs

TL;DR: The design of an experimentation platform intended for prototyping low-cost analog neural networks for on-chip integration with analog/RF circuits is discussed, along with a robust learning strategy, and the system's performance is evaluated on several benchmark problems, such as the XOR2-6 and two-spirals classification tasks.
Journal Article

Modeling tongue-palate contact patterns in the production of speech

TL;DR: An attempt to reduce EPG data to a small number of articulatorily relevant parameters in an empirical way, and to model the configuration of the linguo-palatal contacts in speech as a combination of these parameters, provided evidence for the hypothesis that the tongue tip/blade and the tongue dorsum are two independently controllable articulators.
Proceedings Article

Mining multivariate time-series sensor data to discover behavior envelopes

TL;DR: This paper addresses large-scale regression tasks using a novel combination of greedy input selection and asymmetric cost that can be more effective than traditional techniques, such as static red-line limits, variance-based error bars, and general probability density estimation.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Increased Rates of Convergence Through Learning Rate Adaptation

TL;DR: A study of steepest descent analyzes why it can be slow to converge, and four heuristics for achieving faster rates of convergence are proposed.
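As an aside, the kind of per-weight learning-rate adaptation studied in that reference can be illustrated with a short sketch. The heuristic shown here (grow a weight's rate additively while successive gradient components keep their sign, shrink it multiplicatively when the sign flips) is one common variant written with assumed constants and a hypothetical function name; it is not a transcription of the paper's four heuristics.

```python
import numpy as np

def adaptive_rate_descent(grad_fn, w, steps=200, base_lr=0.001, kappa=0.001, phi=0.5):
    """Gradient descent with a per-weight learning rate that grows additively while
    the gradient keeps its sign and shrinks multiplicatively when the sign flips.
    The constants (kappa, phi) are illustrative assumptions."""
    lr = np.full_like(w, base_lr)
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        agree = g * prev_grad > 0                 # per weight: did the gradient keep its sign?
        lr = np.where(agree, lr + kappa, lr)      # consistent direction: speed up
        lr = np.where(g * prev_grad < 0, lr * phi, lr)  # sign flip: back off
        w = w - lr * g
        prev_grad = g
    return w

if __name__ == "__main__":
    # Minimize a poorly scaled quadratic, a case where plain steepest descent is slow.
    A = np.diag([1.0, 100.0])
    grad = lambda w: A @ w
    print(adaptive_rate_descent(grad, np.array([1.0, 1.0])))
```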