Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are given.
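As a concrete illustration of the combinatorial parameter in question, the following Python sketch (written for this summary, not code from the paper) brute-forces the VC dimension of a small finite concept class by testing which subsets of the domain are shattered; the one-sided threshold class at the end is an assumed toy example.

from itertools import combinations

def shatters(concepts, points):
    # True if the concepts realize every possible +/- labeling of the given points.
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    # Largest size of a subset of the domain shattered by the concepts (brute force).
    # If no subset of size k is shattered, no larger subset can be, so we may stop early.
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, k)):
            d = k
        else:
            break
    return d

# Toy class: one-sided thresholds {x : x <= t} over a five-point domain.
domain = [1, 2, 3, 4, 5]
thresholds = [frozenset(x for x in domain if x <= t) for t in range(0, 6)]
print(vc_dimension(thresholds, domain))  # prints 1: thresholds shatter only single points

A class of finite VC dimension, like this one, is exactly the kind the abstract identifies as distribution-free learnable; a class that shatters arbitrarily large sets is not.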


Citations
19 Apr 1990
TL;DR: In this paper, the authors introduce a notion of teachability with which they establish a relationship between learnability and teachability, and discuss the complexity of a teacher in relation to learning.
Abstract: This paper considers computational learning from the viewpoint of teaching. We introduce a notion of teachability with which we establish a relationship between learnability and teachability. We also discuss the complexity issues of a teacher in relation to learning.

90 citations

Journal ArticleDOI
TL;DR: A new framework for constructing learning algorithms built around master algorithms that use learning algorithms for intersection-closed concept classes as subroutines; these master algorithms are shown to be optimal or nearly optimal with respect to several different criteria.
Abstract: This paper introduces a new framework for constructing learning algorithms. Our methods involve master algorithms which use learning algorithms for intersection-closed concept classes as subroutines. For example, we give a master algorithm capable of learning any concept class whose members can be expressed as nested differences (for example, c1 – (c2 – (c3 – (c4 – c5)))) of concepts from an intersection-closed class. We show that our algorithms are optimal or nearly optimal with respect to several different criteria. These criteria include: the number of examples needed to produce a good hypothesis with high confidence, the worst case total number of mistakes made, and the expected number of mistakes made in the first t trials.
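As a rough illustration of the closure-based idea behind such master algorithms, here is a short Python sketch, written for this summary rather than taken from the paper, that fits a nested difference of axis-aligned boxes (an intersection-closed class whose closure operator is simply the bounding box of a point set). The helper names and the fixed depth are assumptions.

import numpy as np

def bounding_box(points):
    # Closure operator for axis-aligned boxes: the smallest box containing the points.
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def in_box(box, x):
    lo, hi = box
    return bool(np.all(lo <= x) and np.all(x <= hi))

def learn_nested_boxes(X, y, depth=5):
    # Fit a nested difference b1 - (b2 - (b3 - ...)) of boxes. At each level, take the
    # closure of the examples that are "positive" at that level; opposite-label examples
    # falling inside it become the positives of the next level.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    boxes, target = [], 1
    for _ in range(depth):
        pos = X[y == target]
        if len(pos) == 0:
            break
        box = bounding_box(pos)
        boxes.append(box)
        inside = np.array([in_box(box, x) for x in X])
        X, y = X[inside], y[inside]
        target = 1 - target
    return boxes

def predict(boxes, x):
    # The boxes are nested, so x is positive iff the number of boxes containing it is odd.
    count = 0
    for box in boxes:
        if not in_box(box, x):
            break
        count += 1
    return count % 2

The bounding box is the smallest member of the class containing a given point set, which is exactly the property of intersection-closed classes that this style of construction relies on.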

90 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...Since (a) holds for the Total Recall algorithm, we can apply the following result of (Blumer et al. 1989): THEOREM 2....

    [...]

  • ...A very important combinatorial parameter used to estimate the complexity of learning a concept class is its Vapnik-Chervonenkis dimension (see (Vapnik and Chervonenkis, 1971; Haussler and Welzl, 1987; Pearl, 1978; Blumer et al. 1989))....

    [...]

  • ...…1988; Littlestone, 1988; Blumer, Ehrenfeucht, Haussler & Warmuth, 1989; Haussler, 1989), and learnability of concept classes has been characterized (Blumer et al., 1989) using the Vapnik-Chervonenkis (VC) dimension (Vapnik and Chervonenkis, 1971), no practical algorithms have been found for many…...

    [...]

Proceedings Article
13 Jun 2013
TL;DR: It is proved that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions, and a computationally efficient PAC algorithm with optimal sample complexity for such problems is provided.
Abstract: We provide new results concerning label efficient, polynomial time, passive and active learning of linear separators. We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sample complexity for such problems. This resolves an open question of (Long, 1995, 2003; Bshouty et al., 2009) concerning the sample complexity of efficient PAC algorithms under the uniform distribution in the unit ball. Moreover, it provides the first bound for a polynomial-time PAC algorithm that is tight for an interesting infinite class of hypothesis functions under a general and natural class of data distributions, providing significant progress towards a longstanding open question of (Ehrenfeucht et al., 1989; Blumer et al., 1989). We also provide new bounds for active and passive learning in the case that the data might not be linearly separable, both in the agnostic case and under the Tsybakov low-noise condition. To derive our results, we provide new structural results for (nearly) log-concave distributions, which might be of independent interest as well.
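The abstract above reports sample-complexity results rather than pseudocode, but a minimal Python sketch of the general kind of margin-based active learning loop studied for homogeneous linear separators may help fix ideas: labels are requested only for points close to the current hypothesis boundary, and the query margin shrinks each round. Everything below (the function name, the halving schedule, the least-squares refit) is an illustrative assumption, not the authors' algorithm.

import numpy as np

def margin_based_active_learner(X, label_oracle, rounds=8, batch=50, seed=0):
    # X: unlabeled pool (n x d); label_oracle(points) returns labels in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    labeled_idx, labels = [], []
    margin = 1.0
    for _ in range(rounds):
        # Query only points near the current decision boundary.
        near = np.where(np.abs(X @ w) <= margin)[0]
        if near.size == 0:
            break
        picked = rng.choice(near, size=min(batch, near.size), replace=False)
        labeled_idx.extend(picked.tolist())
        labels.extend(label_oracle(X[picked]).tolist())
        # Refit on all labels seen so far (least-squares stand-in for ERM), then renormalize.
        w, *_ = np.linalg.lstsq(X[labeled_idx], np.asarray(labels, dtype=float), rcond=None)
        w /= np.linalg.norm(w) + 1e-12
        margin /= 2.0  # shrink the query region each round
    return w

# Toy usage with a known target separator:
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
w_star = rng.normal(size=5); w_star /= np.linalg.norm(w_star)
w_hat = margin_based_active_learner(X, lambda pts: np.sign(pts @ w_star))

The shrinking query margin is the ingredient that yields label savings in analyses of this kind: after a few rounds, most of the pool falls outside the region where labels are still requested.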

90 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...These both imply cap_{w*,D}(ε) = O(C^{1/2} √d log(1/ε))....

    [...]

  • ...…bound can be improved to match the lower bound via a polynomial-time algorithm has been a long-standing open question, both for general distributions (Ehrenfeucht et al., 1989; Blumer et al., 1989) and for the case of the uniform distribution in the unit ball (Long, 1995, 2003; Bshouty et al., 2009)....

    [...]

  • ...In this section we consider a variant of the Tsybakov noise condition (Mammen and Tsybakov, 1999)....

    [...]

  • ...Keywords: Active learning, PAC learning, ERM, nearly log-concave distributions, Tsybakov low-noise condition, agnostic learning....

    [...]

  • ...(Blumer et al., 1989) achieved polynomial-time learning by finding a consistent hypothesis (i.e., a hypothesis which correctly classifies all training examples); this is a special case of ERM (Vapnik, 1982)....

    [...]

Journal ArticleDOI
TL;DR: The known lower bound on the number of quantum examples required for (ε, δ)-PAC learning any concept class of Vapnik-Chervonenkis dimension d is improved from Ω(d/n) to Ω((1/ε)log(1/δ) + d + √d/ε), which comes closer to matching the known upper bounds.
Abstract: In this article we give several new results on the complexity of algorithms that learn Boolean functions from quantum queries and quantum examples.

89 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Since the lower bound of [11] is known to be nearly optimal for classical PAC learning algorithms (an upper bound of O((1/ε)log(1/δ) + (d/ε)log(1/ε)) was given by [6]), our new quantum lower bound is not far from being the best possible....

    [...]

  • ...(ii) [6] Any concept class C of VC dimension d can be (ε, δ)-PAC learned by a classical algorithm with sample complexity O((1/ε)log(1/δ) + (d/ε)log(1/ε))....

    [...]

  • ...For nontrivial concept classes [6] gave a classical sample complexity lower bound of Ω((1/ε)log(1/δ))....

    [...]
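The sample-complexity expressions quoted in the excerpts above are compact enough to evaluate directly; the small Python helper below spells out the O((1/ε)log(1/δ) + (d/ε)log(1/ε)) upper-bound form and the Ω((1/ε)log(1/δ)) lower-bound form for concrete values of d, ε and δ. The leading constants are set to 1 purely for illustration; the cited results are asymptotic.

from math import ceil, log

def pac_upper_bound(d, eps, delta):
    # (1/eps) log(1/delta) + (d/eps) log(1/eps), with illustrative unit constants.
    return ceil((1 / eps) * log(1 / delta) + (d / eps) * log(1 / eps))

def pac_lower_bound(eps, delta):
    # (1/eps) log(1/delta), again with illustrative unit constants.
    return ceil((1 / eps) * log(1 / delta))

# Example: VC dimension 10, error 0.05, confidence parameter 0.01.
print(pac_upper_bound(d=10, eps=0.05, delta=0.01))  # 692 in this normalization
print(pac_lower_bound(eps=0.05, delta=0.01))        # 93 in this normalization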

Proceedings ArticleDOI
24 Oct 1988
TL;DR: The authors consider the problem of predicting {0, 1}-valued functions on R^n and smaller domains, based on their values on randomly drawn points, and construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions.
Abstract: The authors consider the problem of predicting {0, 1}-valued functions on R^n and smaller domains, based on their values on randomly drawn points. Their model is related to L.G. Valiant's learnability model (1984), but does not require the hypotheses used for prediction to be represented in any specified form. The authors first disregard computational complexity and show how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions. These prediction strategies use the 1-inclusion graph structure from N. Alon et al.'s work on geometric range queries (1987) to minimize the probability of incorrect prediction. They then turn to computationally efficient algorithms. For indicator functions of axis-parallel rectangles and halfspaces in R^n, they demonstrate how their techniques can be applied to construct computationally efficient prediction strategies that are optimal to within a constant factor. They compare the general performance of prediction strategies derived by their method to those derived from existing methods in Valiant's learnability theory.
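For the axis-parallel rectangle case mentioned at the end of the abstract, a very simple consistent-hypothesis predictor can be written down directly; the Python sketch below (an illustration for this summary, not the paper's 1-inclusion-graph strategy) predicts with the smallest axis-parallel rectangle containing the positive examples seen so far and updates it online.

import numpy as np

class RectanglePredictor:
    # Predicts with the smallest axis-parallel rectangle containing all positive
    # examples observed so far; before any positives are seen, it predicts 0.

    def __init__(self):
        self.lo = None  # per-coordinate minima of positive examples
        self.hi = None  # per-coordinate maxima of positive examples

    def predict(self, x):
        if self.lo is None:
            return 0
        x = np.asarray(x, dtype=float)
        return int(bool(np.all(self.lo <= x) and np.all(x <= self.hi)))

    def update(self, x, label):
        # Called with the true label after each prediction (online protocol).
        if label == 1:
            x = np.asarray(x, dtype=float)
            if self.lo is None:
                self.lo, self.hi = x.copy(), x.copy()
            else:
                self.lo = np.minimum(self.lo, x)
                self.hi = np.maximum(self.hi, x)

If the data really are labeled by an axis-parallel rectangle, this hypothesis is always contained in the target and consistent with every positive seen so far, so it can only err on positives falling in the gap between the two; the paper's point is that prediction strategies can be constructed whose error probability is optimal to within a constant factor, and this naive closure predictor is only meant to make the prediction protocol concrete.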

89 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: This book provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations