Open Access Proceedings Article
The Cascade-Correlation Learning Architecture
Scott E. Fahlman, Christian Lebiere
Advances in Neural Information Processing Systems, Vol. 2, pp. 524-532
TL;DR: The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Abstract:
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
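The abstract's candidate-training step can be illustrated with a small sketch. In Cascade-Correlation, each candidate hidden unit is trained to maximize S, the summed magnitude of the covariance between the candidate's activation V over the training patterns and the network's residual output errors E; the best-scoring candidate is then installed and its input weights frozen. The helper name below is hypothetical, and this is a sketch of the scoring quantity only, not the full training loop:

```python
import numpy as np

def candidate_score(V, E):
    """S = sum over outputs o of |sum over patterns p of
    (V_p - mean(V)) * (E_{p,o} - mean(E_o))| -- the quantity a
    Cascade-Correlation candidate unit is trained to maximize."""
    Vc = V - V.mean()            # centred candidate activations, shape (P,)
    Ec = E - E.mean(axis=0)      # centred residual errors, shape (P, O)
    return np.abs(Vc @ Ec).sum() # sum of |covariance| across outputs

# Tiny illustration: a candidate whose activation tracks the residual
# error scores higher than an uncorrelated candidate, so the pool's
# winner is the unit most useful for cancelling the remaining error.
rng = np.random.default_rng(0)
E = rng.normal(size=(50, 2))                 # residual errors: 50 patterns, 2 outputs
good = E[:, 0] + 0.1 * rng.normal(size=50)   # correlated with the error
bad = rng.normal(size=50)                    # uncorrelated
assert candidate_score(good, E) > candidate_score(bad, E)
```

Because only the candidate's own input weights are trained against this score, no error signal ever needs to be back-propagated through the rest of the network, which is the property the abstract highlights.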
Citations
Journal Article
A neural network for classifying the financial health of a firm
TL;DR: This research is the first to use Cascade-Correlation for corporate health estimation, and it sidesteps the problem of choosing a hidden-layer architecture encountered with other types of neural networks.
Journal Article
A penalty-function approach for pruning feedforward neural networks
TL;DR: The effectiveness of this penalty function for pruning feedforward neural networks by weight elimination is tested on three well-known problems: the contiguity problem, the parity problems, and the MONK's problems.
Journal Article
Comparative evaluation of pattern recognition techniques for detection of microcalcifications in mammography
K. Woods, Christopher C. Doss, Kevin W. Bowyer, Jeffrey L. Solka, Carey E. Priebe, W. Philip Kegelmeyer, et al.
TL;DR: This paper focuses on the classification of segmented local bright spots as either calcification or noncalcification in mammographic images; seven classifiers (linear and quadratic classifiers, binary decision trees, a standard backpropagation network, two dynamic neural networks, and a K-nearest-neighbor classifier) are compared.
Posted Content
An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution
Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, Jason Yosinski
TL;DR: Shows preliminary evidence that swapping convolution for CoordConv, which gives convolution access to its own input coordinates through extra coordinate channels without sacrificing the computational and parametric efficiency of ordinary convolution, can improve models on a diverse set of tasks.
Journal Article
Global optimization for neural network training
Yi Shang, Benjamin W. Wah
TL;DR: A novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), is proposed, and its superior performance on neural network learning problems is demonstrated.
References
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Monograph
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Journal Article
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four proposed heuristics for achieving faster rates of convergence.
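The learning-rate-adaptation reference above describes heuristics in which each weight keeps its own step size, grown when successive gradient components agree in sign and shrunk when they disagree. A minimal sketch of that sign-agreement rule, with hypothetical function and parameter names (this is an illustration of the general heuristic, not the cited paper's exact update):

```python
import numpy as np

def adapt_rates(rates, grad, prev_grad, kappa=0.05, phi=0.5):
    """Per-weight learning-rate update: add kappa where successive
    gradients agree in sign, multiply by phi where they flip."""
    agree = np.sign(grad) == np.sign(prev_grad)
    return np.where(agree, rates + kappa, rates * phi)

rates = np.full(3, 0.1)
g_prev = np.array([1.0, -1.0, 2.0])
g_now = np.array([0.5, 1.0, 3.0])   # sign flip on the middle weight
rates = adapt_rates(rates, g_now, g_prev)
# rates -> [0.15, 0.05, 0.15]
```

The additive growth keeps step sizes from exploding on flat error surfaces, while the multiplicative shrink reacts quickly when a weight overshoots a minimum.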