Open Access Journal Article (DOI)

Learning with Restricted Focus of Attention

TL;DR: A formal framework is introduced for analyzing learning tasks in which the learner faces restrictions on the amount of information he can extract from each example he encounters, and a new, simple noise-tolerant algorithm is obtained by constructing an intuitive k-wRFA algorithm for this task.
About
This article was published in the Journal of Computer and System Sciences on 1998-06-01 and is open access. It has received 66 citations to date. The article focuses on the topics: Learnability & VC dimension.
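
To make the restricted-focus-of-attention idea concrete, here is a minimal sketch (my illustration, not the paper's algorithm), assuming a toy task of learning a monotone conjunction under the uniform distribution: the learner sees each example's label but may read only k of its attributes.

```python
import random

def krfa_learn_conjunction(examples, n, k=1):
    """Toy k-RFA-style learner for a monotone conjunction.

    For each example the learner sees the label but may read only
    k of the n attributes (chosen round-robin, before seeing x).
    An attribute stays in the hypothesis unless it is ever observed
    to be 0 in a positive example.
    """
    candidate = set(range(n))            # attributes possibly in the target
    for t, (x, y) in enumerate(examples):
        view = [(t * k + j) % n for j in range(k)]   # restricted view
        if y == 1:
            for i in view:
                if x[i] == 0:            # i cannot be in the target
                    candidate.discard(i)
    return candidate

# usage: target conjunction x0 AND x2 over 5 attributes
random.seed(0)
target = {0, 2}
data = []
for _ in range(2000):
    x = [random.randint(0, 1) for _ in range(5)]
    data.append((x, int(all(x[i] == 1 for i in target))))
print(krfa_learn_conjunction(data, n=5, k=1))   # converges toward {0, 2}
```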


Citations
Journal Article (DOI)

Statistical Algorithms and a Lower Bound for Detecting Planted Cliques

TL;DR: The main application is a nearly optimal lower bound on the complexity of any statistical query algorithm for detecting planted bipartite clique distributions when the planted clique has size $O(n^{1/2-\delta})$ for any constant $\delta > 0$.
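
The statistical query model behind this lower bound is easy to sketch: the algorithm never inspects individual samples, only expectations of bounded query functions, each answered to within a tolerance τ. A minimal simulated oracle (my sketch; the model's real oracle may answer adversarially within the tolerance):

```python
import random

def make_sq_oracle(sample, tau, rng=random.Random(1)):
    """Simulated statistical-query oracle: given a query function q,
    return E[q(x)] over the data, perturbed by at most tau. The
    learner never touches individual samples."""
    def oracle(q):
        true_mean = sum(q(x) for x in sample) / len(sample)
        return true_mean + rng.uniform(-tau, tau)
    return oracle

# usage: one statistical query against an assumed toy distribution
rng = random.Random(0)
pairs = [(rng.randint(0, 99), rng.randint(0, 99)) for _ in range(500)]
oracle = make_sq_oracle(pairs, tau=0.01)
print(oracle(lambda e: 1.0 if e[0] < e[1] else 0.0))   # close to 0.5
```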
Posted Content

On-Device Machine Learning: An Algorithms and Learning Theory Perspective.

TL;DR: This survey reformulates on-device learning as resource-constrained learning, where the resources are compute and memory; this framing allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably.
Journal Article (DOI)

Every Linear Threshold Function has a Low-Weight Approximator

TL;DR: This work shows that any linear threshold function f is specified to within error ε by estimates of its Chow parameters (degree-0 and degree-1 Fourier coefficients) that are accurate to within an additive error, and gives the first polynomial bound on the number of examples required for learning linear threshold functions in the “restricted focus of attention” framework.
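
The Chow parameters mentioned here are the degree-0 and degree-1 Fourier coefficients, $E[f(x)]$ and $E[f(x)\,x_i]$ for uniform $x \in \{-1,1\}^n$. A plain Monte Carlo estimator (my sketch, not the paper's approximator construction) shows how little of each example is needed: every degree-1 estimate uses only the pair $(x_i, f(x))$, which is what places the task in the restricted-focus-of-attention setting.

```python
import random

def estimate_chow_parameters(f, n, num_samples=20000, rng=random.Random(0)):
    """Monte Carlo estimates of the Chow parameters of
    f: {-1,1}^n -> {-1,1}: index 0 holds E[f(x)], index i holds
    E[f(x) * x_i], all under the uniform distribution."""
    sums = [0.0] * (n + 1)
    for _ in range(num_samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        sums[0] += fx
        for i in range(n):
            sums[i + 1] += fx * x[i]
    return [s / num_samples for s in sums]

# usage: majority, a linear threshold function whose (roughly equal)
# degree-1 coefficients identify it
maj = lambda x: 1 if sum(x) > 0 else -1
print(estimate_chow_parameters(maj, n=5))
```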
Posted Content

On the Complexity of Random Satisfiability Problems with Planted Solutions

TL;DR: In this article, the problem of identifying a planted assignment given a random $k$-SAT formula consistent with the assignment exhibits a large algorithmic gap: while the planted solution becomes unique and can be identified given a formula with $O(n \log n)$ clauses, there are distributions over clauses for which the best known efficient algorithms require $n^{k/2}$ clauses.
Journal Article

Efficient Learning with Partially Observed Attributes

TL;DR: In this article, the authors investigate three variants of budgeted learning, a setting in which the learner may access only a limited number of attributes from each training or test example, and design and analyze an efficient algorithm for learning linear predictors that actively samples the attributes of each training instance.
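
One way to see why such budgeted learning of linear predictors is possible: sampling k coordinates uniformly and rescaling by n/k yields an unbiased estimate of the full example. The sketch below feeds that estimate to plain SGD on the squared loss; it is a simplification under my own assumptions, not the paper's algorithm, which uses a more careful unbiased gradient construction.

```python
import random

def sgd_with_sampled_attributes(data, n, k, eta=0.01, rng=random.Random(0)):
    """Budgeted SGD sketch: per example, read only k of n attributes.
    Rescaling the sampled coordinates by n/k makes x_hat an unbiased
    estimate of x, which then drives a squared-loss gradient step."""
    w = [0.0] * n
    for x, y in data:
        idx = rng.sample(range(n), k)       # the only attributes we read
        x_hat = [0.0] * n
        for i in idx:
            x_hat[i] = x[i] * n / k         # rescale => E[x_hat] == x
        pred = sum(w[i] * x_hat[i] for i in idx)
        grad_scale = 2.0 * (pred - y)       # d/dpred of (pred - y)^2
        for i in idx:
            w[i] -= eta * grad_scale * x_hat[i]
    return w

# usage on synthetic data with (assumed) true weights [1, -1, 0, 0]
rng = random.Random(1)
w_star = [1.0, -1.0, 0.0, 0.0]
data = []
for _ in range(5000):
    x = [rng.gauss(0, 1) for _ in range(4)]
    data.append((x, sum(a * b for a, b in zip(w_star, x))))
print(sgd_with_sampled_attributes(data, n=4, k=2))  # roughly [1, -1, 0, 0]
```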
References
Proceedings Article (DOI)

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
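
In modern notation, the learnability criterion this paper introduced asks (a standard PAC formulation, not a quotation of the 1984 paper) that for every target concept $c$ and distribution $D$, a hypothesis $h_S$ learned from $m$ examples satisfies

$$\Pr_{S \sim D^m}\big[\mathrm{err}_D(h_S) \le \varepsilon\big] \ge 1 - \delta, \qquad \mathrm{err}_D(h) = \Pr_{x \sim D}[h(x) \ne c(x)],$$

with $m$ polynomial in $1/\varepsilon$, $1/\delta$, and the relevant size parameters.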
Book Chapter (DOI)

On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities

TL;DR: This chapter reproduces B. Seckler's English translation of the paper in which Vapnik and Chervonenkis gave proofs of the innovative results they had obtained in draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady.
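
One standard statement of the uniform convergence theorem proved there (constants vary across presentations; treat these as indicative) bounds the deviation of empirical frequencies $\nu_A^{(l)}$ from probabilities $P(A)$ simultaneously over all events $A$ in a class $S$, in terms of the growth function $m^S$:

$$\Pr\Big[\sup_{A \in S} \big|\nu_A^{(l)} - P(A)\big| > \varepsilon\Big] \le 4\, m^S(2l)\, e^{-\varepsilon^2 l/8}.$$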
Journal Article (DOI)

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
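
Up to constants, the positive direction of this result gives the familiar sample-complexity bound: for a concept class of VC dimension $d$,

$$m = O\!\left(\frac{1}{\varepsilon}\left(d \log \frac{1}{\varepsilon} + \log \frac{1}{\delta}\right)\right)$$

examples suffice for $(\varepsilon, \delta)$-learning, while no class of infinite VC dimension is distribution-free learnable.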
Journal Article (DOI)

Learning Decision Lists

TL;DR: This paper introduces a new representation for Boolean functions, called decision lists, and shows that they are efficiently learnable from examples, and strictly increases the set of functions known to be polynomially learnable, in the sense of Valiant (1984).
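
A decision list is an ordered sequence of (condition, output) pairs closed by a default; evaluation takes the output of the first condition that fires. A minimal sketch of the representation (Rivest's learning algorithm, not shown here, greedily picks a pair consistent with the remaining examples and recurses):

```python
def evaluate_decision_list(rules, default, x):
    """Evaluate a decision list: the first condition that fires on x
    determines the output; if none fires, the default applies."""
    for condition, output in rules:
        if condition(x):
            return output
    return default

# usage: a small 1-decision list over Boolean attributes x[0..2]
rules = [
    (lambda x: x[0] == 1, True),    # if x0 then true
    (lambda x: x[2] == 0, False),   # else if not x2 then false
]
print(evaluate_decision_list(rules, default=True, x=[0, 1, 1]))  # -> True
```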
Journal Article (DOI)

Learning From Noisy Examples

TL;DR: This paper shows that when the teacher may make independent random errors in classifying the example data, the strategy of selecting the most consistent rule for the sample is sufficient, and usually requires a feasibly small number of examples, provided noise affects less than half the examples on average.
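
For a finite class of $N$ rules and classification noise rate $\eta_b < 1/2$, the sample-size bound in this analysis has the following flavor (constants as I recall them; treat them as indicative):

$$m \ge \frac{2}{\varepsilon^2 (1 - 2\eta_b)^2} \ln \frac{2N}{\delta},$$

after which the rule minimizing disagreements with the sample is $\varepsilon$-accurate with probability at least $1 - \delta$.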