Open Access · Journal ArticleDOI

On learning a union of half spaces

E. B. Baum
01 Mar 1990 · Vol. 6, Iss. 1, pp. 67-101
TLDR
A new, fast algorithm is given for learning unions of half spaces in fixed dimension, and a generalization is suggested that would naively avoid the credit assignment problem (CAP) and learn in time polynomial in the dimension; it is proved, however, that no such approach to evading the CAP can work.
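For concreteness, here is a minimal Python sketch of the concept class the TLDR refers to: a union of half spaces labels a point positive exactly when at least one half space contains it. The normals, thresholds, and sampling distribution below are hypothetical illustrations; this is not Baum's algorithm.

```python
# Minimal sketch (not Baum's algorithm): a union of half spaces labels a point
# positive iff at least one half space w·x >= b contains it.
import numpy as np

def union_of_halfspaces(W, b):
    """Classifier for the union of the half spaces {x : W[i]·x >= b[i]}."""
    def classify(X):
        return (X @ W.T >= b).any(axis=1).astype(int)
    return classify

rng = np.random.default_rng(0)
W = np.array([[1.0, 0.0], [0.0, 1.0]])   # normals of two half spaces in dimension d = 2
b = np.array([0.5, 0.5])                 # their thresholds
target = union_of_halfspaces(W, b)

X = rng.uniform(-1, 1, size=(1000, 2))   # i.i.d. sample of points
y = target(X)                            # labels the learner would receive
print("fraction of positive examples:", y.mean())

# A PAC learner must output a hypothesis whose error on fresh points from the
# same distribution is at most epsilon, with probability at least 1 - delta.
```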
About
This article was published in the Journal of Complexity on 1990-03-01 and is currently open access. It has received 126 citations to date. The article focuses on the topics: Time complexity and Polynomial.


Citations
Journal ArticleDOI

The Strength of Weak Learnability

TL;DR: A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy, showing that the notions of weak and strong learnability are equivalent.
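As an illustration of the weak-to-strong idea, the sketch below runs AdaBoost, a later standard boosting algorithm, with decision stumps as the weak learner. It is not the paper's original construction, and the stump learner and toy data are hypothetical.

```python
# Hedged sketch: boosting combines many weak hypotheses into a strong one.
# This is AdaBoost with decision stumps, shown only to illustrate the idea.
import numpy as np

def train_stump(X, y, w):
    """Weak learner: the best single-feature threshold under weights w."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best  # (weighted error, feature, threshold, sign)

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # example weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, sign = train_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak hypothesis
        pred = np.where(sign * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)         # re-weight: focus on mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    def strong(Xq):
        score = sum(a * np.where(s * (Xq[:, j] - t) >= 0, 1, -1)
                    for a, j, t, s in ensemble)
        return np.sign(score)
    return strong

# Hypothetical toy data: a diagonal boundary, which no single axis-aligned stump matches well.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)
strong = adaboost(X, y)
print("training accuracy:", np.mean(strong(X) == y))
```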
MonographDOI

The random projection method

TL;DR: This monograph presents the random projection method, in which high-dimensional points are mapped to a low-dimensional Euclidean space by a random linear map that approximately preserves distances, and surveys its algorithmic applications.
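A minimal sketch of a random projection, assuming a Gaussian projection matrix in the spirit of the Johnson-Lindenstrauss lemma; the dimensions and data below are hypothetical.

```python
# Hedged sketch: Gaussian random projection approximately preserves pairwise distances.
import numpy as np

def random_projection(X, k, rng):
    """Project rows of X from dimension d down to k with a random Gaussian map."""
    d = X.shape[1]
    R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaling keeps norms roughly unchanged
    return X @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))               # 100 points in dimension 1000
Y = random_projection(X, k=200, rng=rng)       # the same points in dimension 200

# Pairwise distances survive up to a small distortion.
i, j = 3, 7
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
print(f"original distance {orig:.2f}, projected distance {proj:.2f}")
```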
Journal ArticleDOI

Almost optimal set covers in finite VC-dimension

TL;DR: A deterministic polynomial-time method for finding a set cover in a set system (X, ℛ) of dual VC-dimension d, such that the size of the cover is at most a factor of O(d log(dc)) larger than the optimal size c.
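For contrast, the sketch below is the classical greedy set-cover heuristic, shown only as a baseline illustration of the problem; the cited paper's deterministic method, which exploits bounded dual VC-dimension, is different. The instance is hypothetical.

```python
# Hedged sketch: the classical greedy set-cover heuristic (not the paper's method).
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Hypothetical instance.
universe = range(1, 11)
sets = [set(range(1, 6)), set(range(4, 9)), {7, 8, 9, 10}, {1, 10}]
print(greedy_set_cover(universe, sets))
```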
Journal ArticleDOI

Does the nervous system use equilibrium-point control to guide single and multiple joint movements?

TL;DR: The hypothesis that the central nervous system generates movement as a shift of the limb's equilibrium posture has been corroborated experimentally in studies involving single- and multijoint motions and can now be investigated in the neurophysiological machinery of the spinal cord.
Journal ArticleDOI

An algorithmic theory of learning: robust concepts and random projection

TL;DR: This work provides a novel algorithmic analysis via a model of robust concept learning (closely related to “margin classifiers”), and shows that a relatively small number of examples are sufficient to learn rich concept classes.
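A rough sketch of the margin-plus-random-projection idea: data separable with a margin are projected to a low dimension, and a perceptron is trained there. The dimensions, margin, and data are hypothetical; this is not the paper's algorithm or analysis.

```python
# Hedged sketch: a margin survives a random projection well enough for a
# simple margin classifier (here, a perceptron) to be trained in low dimension.
import numpy as np

def perceptron(X, y, epochs=100):
    """Classic perceptron; converges when the data are linearly separable."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
    return w

rng = np.random.default_rng(0)
d, k = 200, 50                                      # original and projected dimensions
u = rng.normal(size=d); u /= np.linalg.norm(u)      # true separating direction
X = rng.normal(size=(400, d))
y = np.where(X @ u >= 0, 1, -1)
X += 3.0 * y[:, None] * u                           # enforce a margin along u

R = rng.normal(size=(d, k)) / np.sqrt(k)            # random projection matrix
Xp = X @ R                                          # the margin is roughly preserved
w = perceptron(Xp[:200], y[:200])                   # train on 200 projected examples
print("held-out accuracy:", np.mean(np.sign(Xp[200:] @ w) == y[200:]))
```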
References
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
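A minimal sketch of the generalized delta rule (backpropagation) on a one-hidden-layer network learning XOR; the architecture, data, and learning rate are illustrative choices, not the chapter's simulations.

```python
# Hedged sketch: the generalized delta rule (backpropagation) for a tiny
# one-hidden-layer sigmoid network trained on XOR.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: deltas for squared error with sigmoid units
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    # weight updates (the generalized delta rule)
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;   b1 -= lr * delta_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```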
Book

Pattern classification and scene analysis

TL;DR: In this book, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Proceedings ArticleDOI

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
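As a worked example of the sample sizes the PAC framework deals with, the sketch below evaluates the standard sample-complexity bound for a finite hypothesis class. This is a later textbook form of the framework rather than a result stated in this paper, and the numbers are hypothetical.

```python
# Hedged sketch: standard PAC sample-complexity bound for a finite hypothesis
# class: m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice for a
# consistent learner to have error <= epsilon with probability >= 1 - delta.
import math

def pac_sample_size(hypothesis_count, epsilon, delta):
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Hypothetical numbers: |H| = 2**20 hypotheses, epsilon = 0.05, delta = 0.01.
print(pac_sample_size(2**20, 0.05, 0.01))   # roughly 370 examples
```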