Proceedings ArticleDOI

The generalization capabilities of ARTMAP

Abstract
Bounds are derived on the number of training examples needed to guarantee a given level of generalization performance in the ARTMAP architecture. Conditions are established under which ARTMAP achieves a specified performance level for any unknown but fixed probability distribution on the training data.
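Results of this kind are usually stated in the probably-approximately-correct (PAC) form. The display below sketches the generic shape of such a guarantee; m(ε, δ) is a placeholder for the paper's actual bound, which is not reproduced on this page:

\[
\Pr_{S \sim D^{m}}\!\left[\operatorname{err}_{D}(h_{S}) \le \varepsilon\right] \;\ge\; 1 - \delta
\quad \text{whenever } m \ge m(\varepsilon, \delta),
\]

where D is the unknown but fixed distribution, S is a training sample of m examples, h_S is the classifier ARTMAP learns from S, and err_D(h_S) is its error probability under D.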


Citations
Journal ArticleDOI

Ontologies and Worlds in Category Theory: Implications for Neural Systems

M.J. Healy, +1 more
01 Mar 2006
TL;DR: Categorical logic and model theory are applied to define ontologies in an unambiguous language with analytical and constructive features, viewing an ontology as a sub-category of a category of theories expressed in a formal logic.
BookDOI

Recent advances in artificial neural networks: design and applications

TL;DR: Topics include a neuro-symbolic hybrid intelligent architecture with applications and an efficient neural-network-based methodology for the design of multiple classifiers.

Neural Networks, Knowledge and Cognition: A Mathematical Semantic Model Based upon Category Theory

TL;DR: The categorical semantic model described here explains the learning process as the derivation of colimits and limits in a concept category, as part of a system of functors and natural transformations that constrains neural network designs capable of the most important aspects of cognitive behavior.
Journal ArticleDOI

Guaranteed two-pass convergence for supervised and inferential learning

TL;DR: A main result is a proof that the new architecture, called LAPART 2, converges in two passes through a fixed training set of inputs; it is also proved not to suffer from template proliferation.

Generalized Lattices Express Parallel Distributed Concept Learning.

TL;DR: Using categorical constructs based upon composition, together with mappings that preserve compositional structure, a recently developed semantic theory shows how abstract and specialized concepts are learned by a neural network.
References
Proceedings ArticleDOI

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
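For reference, the sufficient sample size established in this line of work (Blumer, Ehrenfeucht, Haussler, and Warmuth) is usually quoted in the following form; the exact constants vary between presentations and should be treated as indicative:

\[
m \;\ge\; \max\!\left( \frac{4}{\varepsilon} \log_{2} \frac{2}{\delta},\; \frac{8d}{\varepsilon} \log_{2} \frac{13}{\varepsilon} \right),
\]

where d is the Vapnik-Chervonenkis dimension of the concept class; a matching Ω(d) lower bound shows that finiteness of d is also necessary for distribution-free learnability.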
Journal ArticleDOI

ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network

TL;DR: A new neural network architecture, called ARTMAP, autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories; it acts as a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success.
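To make the architecture concrete, here is a minimal sketch of a fuzzy-ARTMAP-style training loop (complement coding, category choice, vigilance test, match tracking, fast learning). It is a simplification for exposition, not the paper's reference implementation: the class and function names are illustrative, the parameter defaults are assumptions, and the match-tracking rule follows common textbook descriptions of fuzzy ARTMAP.

import numpy as np

def complement_code(a):
    """Complement coding: map a in [0,1]^d to [a, 1-a], the standard ART preprocessing."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

class SimpleFuzzyARTMAP:
    """Illustrative fuzzy-ARTMAP-style classifier (fast learning, one ART module plus a label map)."""

    def __init__(self, rho=0.75, alpha=0.001, eps=0.001):
        self.rho_base = rho   # baseline vigilance
        self.alpha = alpha    # choice parameter
        self.eps = eps        # match-tracking increment
        self.w = []           # category weight vectors
        self.labels = []      # class label attached to each category

    def _train_one(self, a, y):
        I = complement_code(a)
        rho = self.rho_base                        # vigilance resets to baseline per input
        # Rank categories by choice value T_j = |I ^ w_j| / (alpha + |w_j|).
        order = sorted(range(len(self.w)),
                       key=lambda j: -(np.minimum(I, self.w[j]).sum()
                                       / (self.alpha + self.w[j].sum())))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match < rho:
                continue                           # fails vigilance: try next category
            if self.labels[j] == y:
                self.w[j] = np.minimum(I, self.w[j])   # fast learning: w <- I ^ w
                return
            rho = match + self.eps                 # match tracking: raise vigilance, search on
        self.w.append(I.copy())                    # no category fits: commit a new one
        self.labels.append(y)

    def fit(self, X, Y, epochs=1):
        for _ in range(epochs):
            for a, y in zip(X, Y):
                self._train_one(a, y)
        return self

    def predict(self, a):
        I = complement_code(a)
        j = max(range(len(self.w)),
                key=lambda j: np.minimum(I, self.w[j]).sum()
                              / (self.alpha + self.w[j].sum()))
        return self.labels[j]

For example, SimpleFuzzyARTMAP().fit([[0.1, 0.9], [0.8, 0.2]], [0, 1]).predict([0.15, 0.85]) returns 0: the input is closest, in the choice-function sense, to the category committed for the first training pair.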
Journal ArticleDOI

Properties of learning in ARTMAP

TL;DR: It is shown that if ARTMAP is repeatedly presented with a list of input/output pairs, it establishes the required mapping in at most M_a − 1 list presentations, where M_a is the total number of ones in each input pattern.