Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are given.
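
To make the combinatorial parameter concrete, here is a minimal Python sketch (our illustration, not from the paper) that computes the VC dimension of a small finite concept class by brute-force shattering checks; the interval class and all names are hypothetical examples.

from itertools import combinations

def shatters(concept_class, points):
    # A point set is shattered if the class realizes all 2^|points| labelings of it.
    labelings = {tuple(p in c for p in points) for c in concept_class}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concept_class, domain):
    # Largest d such that some d-subset of the domain is shattered
    # (exponential brute force; only sensible for tiny examples).
    d = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(concept_class, s) for s in combinations(domain, size)):
            d = size
        else:
            break
    return d

# Example: closed intervals over a 10-point domain. Intervals shatter any
# 2 points but no 3 (the labeling +,-,+ is unrealizable), so this prints 2.
domain = list(range(10))
intervals = [frozenset(range(a, b + 1)) for a in domain for b in domain if a <= b]
print(vc_dimension(intervals, domain))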


Citations
01 Jan 1993
TL;DR: This work designs a learning algorithm that learns any union of k rectangles with O(k^3 log n) queries, while the time complexity of this algorithm is bounded by O(k^5 log n).
Abstract: We investigate the efficient learnability of unions of k rectangles in the discrete plane {1, ..., n}^2 with equivalence and membership queries. We exhibit a learning algorithm that learns any union of k rectangles with O(k^3 log n) queries, while the time complexity of this algorithm is bounded by O(k^5 log n). We design our learning algorithm by finding "corners" and "edges" for rectangles contained in the target concept and then constructing the target concept from those "corners" and "edges". Our result provides a first approach to on-line learning of nontrivial subclasses of unions of intersections of halfspaces with equivalence and membership queries.
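
To fix intuition about the query model (this is only the setup, not the paper's corner-and-edge algorithm), here is a hypothetical Python sketch of a target concept and its membership-query oracle; Rect, membership_oracle, and the constants are our own.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    # Axis-aligned rectangle [x1, x2] x [y1, y2] in the grid {1, ..., n}^2.
    x1: int
    y1: int
    x2: int
    y2: int

    def contains(self, x, y):
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

def membership_oracle(target):
    # MQ(x, y): does the queried point lie in the union of the target rectangles?
    return lambda x, y: any(r.contains(x, y) for r in target)

# A target concept: union of k = 2 rectangles in {1, ..., 100}^2.
target = [Rect(3, 3, 10, 8), Rect(40, 40, 60, 90)]
mq = membership_oracle(target)
print(mq(5, 5), mq(50, 50), mq(20, 20))  # True True False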

11 citations

Journal ArticleDOI
TL;DR: This paper presents a general setup for obtaining sample-size lower bounds for learning concept classes under fixed distribution laws, in an extended PAC learning framework, based on incompressibility methods drawn from Kolmogorov complexity and algorithmic probability theories.
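
For orientation, the generic distribution-free PAC sample-size lower bound that such incompressibility arguments refine has the textbook form (our statement of the standard bound, not this paper's theorem):

    m = \Omega\left( \frac{d + \log(1/\delta)}{\varepsilon} \right)

where m is the number of samples, d the Vapnik-Chervonenkis dimension of the concept class, \varepsilon the accuracy parameter, and \delta the confidence parameter; fixed-distribution variants replace d by a distribution-dependent capacity term.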

11 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...This section describes our learning framework and a few further notational conventions we adopt throughout the paper; see [2, 6, 5, 21, 22] for reference....

  • ...For instance, we may want to analyze consistent (P, P”)-learning algorithms [6] or disagreement minimization (P, P” × R)-learning algorithms [2]....

01 Oct 2013
TL;DR: This work investigates the insight that the right kind of interaction is the key to making the intractable tractable, in the context of learning theory.
Abstract: The key insight underlying this thesis is that the right kind of interaction is the key to making the intractable tractable. This work specifically investigates this insight in the context of learning theory. While much of the learning theory literature has traditionally focused on protocols that are either non-interactive or involve unrealistically strong forms of interaction, there have recently been several exciting advances in the design and analysis of methods for realistic interactive learning protocols. Perhaps one of the most interesting of these is active learning. In active learning, a learning algorithm is given access to a large pool of unlabeled examples and is allowed to sequentially request their labels so as to learn how to accurately predict the labels of new examples. This thesis contains a number of advances in our understanding of the capabilities of active learning methods.
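
As a minimal illustration of the pool-based protocol described above (our sketch; the thesis covers far more general settings), consider learning a one-dimensional threshold: binary search over a sorted unlabeled pool needs only O(log n) label queries where passive learning would inspect O(n) labels.

def active_learn_threshold(pool, label):
    # Find the leftmost pool point labeled positive, querying labels only at
    # binary-search midpoints: O(log n) queries versus O(n) for passive learning.
    lo, hi = 0, len(pool)
    while lo < hi:
        mid = (lo + hi) // 2
        if label(pool[mid]):   # a label query to the annotator
            hi = mid
        else:
            lo = mid + 1
    return pool[lo] if lo < len(pool) else None

pool = sorted(0.05 * i for i in range(200))   # large unlabeled pool
oracle = lambda x: x >= 3.7                   # hidden target threshold
print(active_learn_threshold(pool, oracle))   # ~3.7, after about 8 queries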

11 citations

Proceedings ArticleDOI
09 Jun 1997
TL;DR: Bounds on the number of training examples needed to guarantee a certain level of generalization performance in the ARTMAP architecture are derived.
Abstract: Bounds on the number of training examples needed to guarantee a certain level of generalization performance in the ARTMAP architecture are derived. Conditions are derived under which ARTMAP can achieve a specific level of performance assuming any unknown, but fixed, probability distribution on the training data.
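
The cited paper derives ARTMAP-specific bounds; as a sketch of what such a bound yields numerically, here is the generic VC-based sufficient sample size from the Blumer et al. paper this page covers (our illustration; the ARTMAP constants differ).

import math

def pac_sample_size(eps, delta, d):
    # Sufficient training-set size for (eps, delta)-PAC learning a class of
    # VC dimension d: the classic bound
    # max((4/eps) log2(2/delta), (8d/eps) log2(13/eps)),
    # whose constants are known to be loose.
    return math.ceil(max(
        (4 / eps) * math.log2(2 / delta),
        (8 * d / eps) * math.log2(13 / eps),
    ))

# 90% accuracy (eps = 0.1), 95% confidence (delta = 0.05), VC dimension 10:
print(pac_sample_size(0.1, 0.05, 10))  # on the order of a few thousand examples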

11 citations

Journal ArticleDOI
TL;DR: It is shown that if the bound on the accuracy is taken into account, quantum machine learning algorithms for supervised learning, for which statistical guarantees are available, cannot achieve polylogarithmic runtimes in the input dimension.
Abstract: Within the framework of statistical learning theory it is possible to bound the minimum number of samples required by a learner to reach a target accuracy. We show that if the bound on the accuracy is taken into account, quantum machine learning algorithms for supervised learning, for which statistical guarantees are available, cannot achieve polylogarithmic runtimes in the input dimension. We conclude that, when no further assumptions on the problem are made, quantum machine learning algorithms for supervised learning can have at most polynomial speedups over efficient classical algorithms, even in cases where quantum access to the data is naturally available.
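
The shape of the argument (our paraphrase) is a chain of inequalities: any learner with statistical guarantees must at least read its samples, so its runtime T obeys

    T \ge m \ge \Omega\left( \frac{d + \log(1/\delta)}{\varepsilon} \right)

and whenever the VC dimension d grows polynomially with the input dimension, T cannot be polylogarithmic in that dimension, however the data is accessed.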

11 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness" (W. H. Freeman & Co., San Francisco, 1979).
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
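
As a one-line taste of the book's starting point, Bayesian decision theory, the Bayes rule assigns an observation to the class maximizing prior times class-conditional likelihood; a minimal hypothetical Python sketch:

import math

def bayes_decide(priors, likelihoods, x):
    # Bayes decision rule: argmax over classes c of P(c) * p(x | c).
    return max(priors, key=lambda c: priors[c] * likelihoods[c](x))

def gauss(mu, sigma):
    # 1-D Gaussian class-conditional density (illustrative choice).
    return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

priors = {"A": 0.6, "B": 0.4}
likelihoods = {"A": gauss(0.0, 1.0), "B": gauss(2.0, 1.0)}
print(bayes_decide(priors, likelihoods, 1.5))  # "B": its posterior wins despite the smaller prior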

13,647 citations