Open Access · Proceedings Article
The Cascade-Correlation Learning Architecture
Scott E. Fahlman, Christian Lebiere
Vol. 2, pp. 524–532
TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.

Abstract:
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
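The training loop described in the abstract (train outputs, grow a candidate unit that correlates with the residual error, freeze it, repeat) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper uses Quickprop for weight updates, while this sketch substitutes plain gradient descent, and the sigmoid output, tanh candidate units, XOR task, and all hyperparameters here are illustrative assumptions.

```python
# Minimal Cascade-Correlation sketch on the XOR toy task.
# Assumptions (not fixed by the abstract): sigmoid output unit, tanh
# candidates, plain gradient descent instead of Quickprop, one candidate
# per growth step, at most 3 hidden units.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def train_outputs(F, y, epochs=5000, lr=1.0):
    """Retrain only the output weights on the frozen feature matrix F."""
    w = rng.normal(scale=0.1, size=F.shape[1])
    for _ in range(epochs):
        w -= lr * F.T @ (sigmoid(F @ w) - y) / len(y)
    return w

def train_candidate(F, err, epochs=5000, lr=1.0):
    """Train a candidate unit to maximize the magnitude of the covariance
    between its output and the residual error (the correlation criterion)."""
    v = rng.normal(scale=1.0, size=F.shape[1])
    ec = err - err.mean()                        # centred residual error
    for _ in range(epochs):
        h = np.tanh(F @ v)
        s = np.sign(ec @ h) or 1.0               # ascend |covariance|
        v += lr * s * F.T @ (ec * (1.0 - h**2)) / len(err)
    return v

F = np.hstack([X, np.ones((len(X), 1))])         # inputs plus bias column
hidden = []                                      # frozen hidden-unit weights
for _ in range(3):                               # grow at most 3 hidden units
    w = train_outputs(F, y)
    err = sigmoid(F @ w) - y
    if np.max(np.abs(err)) < 0.1:                # good enough: stop growing
        break
    v = train_candidate(F, err)                  # candidate sees ALL features
    hidden.append(v)                             # freeze its input weights...
    F = np.hstack([F, np.tanh(F @ v)[:, None]])  # ...and expose it as a feature
    w = train_outputs(F, y)                      # retrain outputs with new unit

pred = (sigmoid(F @ w) > 0.5).astype(float)
print(len(hidden), pred)
```

Note how each frozen unit's activations are simply appended as a new column of `F`, so later candidates (and the output) receive inputs from every earlier unit, which is what produces the cascaded multi-layer structure without ever back-propagating error through existing connections.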
Citations
Journal Article
A Review of Automated Methods for Detection of Myocardial Ischemia and Infarction Using Electrocardiogram and Electronic Health Records
Sardar Ansari, Negar Farzaneh, Marlena Duda, Kelsey Horan, Hedvig Bille Andersson, Zachary D. Goldberger, Brahmajee K. Nallamothu, Kayvan Najarian +7 more
TL;DR: An overview of methods proposed for automatic detection of ischemia and myocardial infarction using computer algorithms, focusing on their historical evolution, the publicly available datasets used to evaluate their performance, and the details of their algorithms for ECG and EHR analysis.
Journal Article
Bioelectronic noses: a status report. Part II (Co-ordinating authors: Ch. Ziegler and W. Göpel; all other authors are contributing authors.)
Ch. Ziegler, Wolfgang Göpel, H. Hämmerle, H. Hatt, Günther Jung, L. Laxhuber, H.-L. Schmidt, S. Schütz, F. Vögtle, A. Zell +9 more
Journal Article
Rule extraction by successive regularization
TL;DR: A novel approach to rule extraction, named successive regularization, is proposed; it generates a small number of dominant rules at an earlier stage and less dominant rules or exceptions at later stages. Results indicate superior rule-extraction performance in terms of the number and size of the rules needed to explain the data.
Journal Article
Correction of AI systems by linear discriminants: Probabilistic foundations
Alexander N. Gorban, A. M. Golubkov, Bogdan Grechuk, Eugenij Moiseevich Mirkes, Ivan Tyukin +5 more
TL;DR: In this article, a series of new stochastic separation theorems is proven for fast, non-destructive correction of AI systems, including binary classifiers; the theorems separate situations with a high risk of error from situations where the AI systems work properly.
Journal Article
Recovery of forest canopy characteristics through inversion of a complex 3D model
TL;DR: In this article, a 3D model (Discrete anisotropic radiative transfer, DART) was inverted for a wide range of simulated forest canopies using POLDER-like data.
References
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Monograph
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Journal Article
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent, an analysis of why it can be slow to converge, and four heuristics for achieving faster convergence are presented.