Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are given.
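
As a concrete, hedged illustration of the combinatorial parameter (this sketch is not from the paper; the interval class, the grid, and the helper names are assumptions chosen for illustration), the following Python snippet brute-forces the shattering check that defines the Vapnik-Chervonenkis dimension, using closed intervals on the real line, a class of VC dimension 2:

```python
def shatters(points, concepts):
    """True iff `concepts` realizes all 2^n labelings of `points`
    (the defining property behind the Vapnik-Chervonenkis dimension)."""
    labelings = {tuple(c(x) for x in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def interval(a, b):
    """Concept 'x lies in the closed interval [a, b]'."""
    return lambda x: a <= x <= b

# Enumerate intervals over a small grid; enough to witness shattering here.
grid = [i * 0.5 for i in range(-2, 9)]
concepts = [interval(a, b) for a in grid for b in grid if a <= b]

print(shatters([1.0, 2.0], concepts))       # True  -> VC dimension >= 2
print(shatters([1.0, 2.0, 3.0], concepts))  # False -> the +,-,+ labeling
                                            # is impossible for one interval
```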


Citations
Proceedings ArticleDOI
15 Jun 2021
TL;DR: In this article, the authors present a coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean spaces and doubling metrics to minor-free and general metrics.
Abstract: Given a metric space, the (k,z)-clustering problem consists of finding k centers that minimize the sum, over all points, of the distance to the closest center raised to the power z. This encapsulates the famous k-median (z=1) and k-means (z=2) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of solutions, also known as coresets, has been an important research direction over the last 15 years. In this paper, we present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean spaces and doubling metrics to minor-free and general metrics.
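
To make the objective concrete, here is a minimal sketch (function names and data are illustrative assumptions, not from the paper) that computes the (k,z)-clustering cost for a fixed set of centers:

```python
import math

def kz_cost(points, centers, z):
    """(k,z)-clustering cost: the sum over all points of the distance
    to the nearest center raised to the power z. z=1 gives k-median,
    z=2 gives k-means."""
    return sum(
        min(math.dist(p, c) for c in centers) ** z
        for p in points
    )

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
centers = [(0.5, 0.0), (10.5, 0.0)]
print(kz_cost(points, centers, z=1))  # 2.0  (k-median cost)
print(kz_cost(points, centers, z=2))  # 1.0  (k-means cost)
```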

30 citations

Journal ArticleDOI
TL;DR: The paper uses the task of learning to formally discriminate among several emotional states as both a working example and a test bench for a comparison with previous symbolic and subsymbolic methods in the field.
Abstract: With the aim of obtaining understandable symbolic rules that explain a given phenomenon, we split the task of learning these rules from sensory data into two phases: a multilayer perceptron maps features into propositional variables, and a set of subsequent layers operated by a PAC-like algorithm learns Boolean expressions on these variables. The special features of this procedure are that: i) the neural network is trained to produce a Boolean output whose principal task is discriminating between classes of inputs; ii) the symbolic part is directed to compute rules within a family that is not known a priori; iii) the welding point between the two learning systems is a feedback based on a suitability evaluation of the computed rules. The procedure we propose is based on a computational learning paradigm set up recently in papers in the fields of theoretical computer science, artificial intelligence, and cognitive systems. The present article focuses on the information-management aspects of the procedure. We deal with the lack of prior information about the rules through learning strategies that affect both the meaning of the variables and the description length of the rules into which they combine. The paper uses the task of learning to formally discriminate among several emotional states as both a working example and a test bench for a comparison with previous symbolic and subsymbolic methods in the field.
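
As a hedged sketch of the second, symbolic phase only (this is not the authors' algorithm; the restriction to conjunctions and the toy data are assumptions made for brevity), a consistency search over Boolean variables might look like this:

```python
from itertools import combinations

def learn_monomial(samples):
    """Toy consistency search for the symbolic phase: find the shortest
    conjunction of literals matching every labeled example.
    samples: list of (boolean_tuple, label) pairs."""
    n = len(samples[0][0])
    literals = [(i, v) for i in range(n) for v in (True, False)]
    for size in range(n + 1):
        for mono in combinations(literals, size):
            h = lambda b: all(b[i] == v for i, v in mono)
            if all(h(b) == y for b, y in samples):
                return mono
    return None  # no conjunction is consistent with the samples

# Boolean variables as phase one might produce them; label = v0 AND NOT v2.
data = [((True, False, False), True),
        ((True, True,  False), True),
        ((True, False, True),  False),
        ((False, False, False), False)]
print(learn_monomial(data))  # ((0, True), (2, False))
```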

30 citations

Proceedings ArticleDOI
05 Jul 1995
TL;DR: This work shows that if a class of {0,1}-valued functions is efficiently agnostically learnable (using the same function class) with the discrete loss function, then the class of linear combinations of functions from the class is efficiently agnostically learnable with the quadratic loss function.
Abstract: We consider efficient agnostic learning of linear combinations of basis functions when the sum of absolute values of the weights of the linear combinations is bounded. With the quadratic loss function, we show that the class of linear combinations of a set of basis functions is efficiently agnostically learnable if and only if the class of basis functions is efficiently agnostically learnable. We also show that the sample complexity for learning the linear combinations grows polynomially if and only if a combinatorial property of the class of basis functions, called the fat-shattering function, grows at most polynomially. We also relate the problem to agnostic learning of {0,1}-valued function classes by showing that if a class of {0,1}-valued functions is efficiently agnostically learnable (using the same function class) with the discrete loss function, then the class of linear combinations of functions from the class is efficiently agnostically learnable with the quadratic loss function.
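
For orientation, here is a minimal sketch of the setting (the L1 bound of 1.0, the basis functions, and the sample data are illustrative assumptions, not from the paper):

```python
def quadratic_loss(h, samples):
    """Average quadratic loss of hypothesis h on (x, y) samples."""
    return sum((h(x) - y) ** 2 for x, y in samples) / len(samples)

def linear_combination(weights, basis, bound=1.0):
    """Linear combination of basis functions; the sum of the absolute
    values of the weights must respect the stated L1 bound."""
    assert sum(abs(w) for w in weights) <= bound
    return lambda x: sum(w * g(x) for w, g in zip(weights, basis))

basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]   # illustrative basis
h = linear_combination([0.2, 0.5, 0.3], basis)          # L1 norm = 1.0
data = [(0.0, 0.1), (1.0, 1.0), (2.0, 2.5)]
print(quadratic_loss(h, data))  # ~0.0067
```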

30 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...k-term DNF is not properly PAC learnable but is learnable when k-CNF is used as the hypothesis class; see [7]....


  • ...If G is agnostically PAC learnable, then the fat-shattering function (which is the same as the VC-dimension for {0,1}-valued functions) is polynomial in the complexity parameters [7]....


Journal ArticleDOI
TL;DR: With this new notion, it is shown that H-EFS(m, k, t, r) is polynomial-time learnable; this is the class of languages definable by EFSs consisting of at most m hereditary definite clauses with predicate symbols of arity at most r, where k and t bound the number of variable occurrences in the head and the number of atoms in the body, respectively.
Abstract: An elementary formal system (EFS) is a logic program consisting of definite clauses whose arguments have patterns instead of first-order terms. We investigate EFSs for polynomial-time PAC-learnability. A definite clause of an EFS is hereditary if every pattern in the body is a subword of a pattern in the head. With this new notion, we show that H-EFS(m, k, t, r) is polynomial-time learnable, which is the class of languages definable by EFSs consisting of at most m hereditary definite clauses with predicate symbols of arity at most r, where k and t bound the number of variable occurrences in the head and the number of atoms in the body, respectively. The class defined by all finite unions of EFSs in H-EFS(m, k, t, r) is also polynomial-time learnable. We also show an interesting series of NC-learnable classes of EFSs. As hardness results, the class of regular pattern languages is shown not to be polynomial-time learnable unless RP = NP. Furthermore, the related problem of deciding whether there is a common subsequence consistent with given positive and negative examples is shown NP-complete.
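
The hereditary condition is easy to state operationally. Here is a minimal sketch, under the assumption that patterns are encoded as plain strings (the clause examples are illustrative, not from the paper), that checks whether a clause is hereditary:

```python
def is_hereditary(head_patterns, body_patterns):
    """A definite clause is hereditary if every pattern appearing in
    its body is a (contiguous) subword of some pattern in its head."""
    return all(any(b in h for h in head_patterns) for b in body_patterns)

# Patterns as strings over constants {a, b} and variables {x, y}.
# Clause p(axb) :- q(x): body pattern "x" occurs inside head pattern "axb".
print(is_hereditary(["axb"], ["x"]))  # True
# Clause p(ab) :- q(x): "x" is not a subword of "ab", so not hereditary.
print(is_hereditary(["ab"], ["x"]))   # False
```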

29 citations

Proceedings ArticleDOI
05 Jul 1995
TL;DR: A general upper bound on the expected absolute error of this algorithm is proved in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler.
Abstract: We present a new general-purpose algorithm for learning classes of [0,1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each ε > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0,1]-valued functions to be agnostically learnable to within ε, to be a uniform Glivenko-Cantelli class, and to be agnostically learnable to within ε by an algorithm using only hypotheses from the class.
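
To make the scale-sensitive notion concrete, a brute-force check of the fat-shattering condition at scale gamma might look like the following sketch (the witness levels and the tiny function class are illustrative assumptions, not from the paper):

```python
from itertools import product

def fat_shatters(points, witnesses, functions, gamma):
    """Check gamma-fat-shattering: for every +/- labeling of the points,
    some f in `functions` must satisfy f(x_i) >= r_i + gamma on '+'
    points and f(x_i) <= r_i - gamma on '-' points, where the r_i are
    fixed witness levels."""
    for labeling in product([+1, -1], repeat=len(points)):
        realized = any(
            all(f(x) >= r + gamma if s > 0 else f(x) <= r - gamma
                for x, r, s in zip(points, witnesses, labeling))
            for f in functions
        )
        if not realized:
            return False
    return True

# Four simple [0,1]-valued functions on [0,1].
fs = [lambda x: x, lambda x: 1 - x, lambda x: 1.0, lambda x: 0.0]
# Two points are fat-shattered at scale 0.25 with witness levels 0.5.
print(fat_shatters([0.0, 1.0], [0.5, 0.5], fs, gamma=0.25))  # True
```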

29 citations
