Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
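To make the combinatorial parameter concrete: a concept class shatters a point set if it realizes every possible labeling of it, and the VC dimension is the size of the largest shattered set. The brute-force sketch below (an illustrative addition, not code from the paper; the threshold class and grid domain are assumptions) computes this directly for small finite cases.

    from itertools import combinations

    def shatters(concepts, points):
        """True iff the concept class realizes all 2^|points| labelings of points."""
        labelings = {tuple(c(x) for x in points) for c in concepts}
        return len(labelings) == 2 ** len(points)

    def vc_dimension(concepts, domain):
        """Largest d such that some d-subset of `domain` is shattered (brute force)."""
        d = 0
        for k in range(1, len(domain) + 1):
            if any(shatters(concepts, list(s)) for s in combinations(domain, k)):
                d = k
            else:
                break
        return d

    # Threshold concepts on a line, c_t(x) = (x >= t), have VC dimension 1:
    # any single point can be labeled both ways, but for x < y the labeling
    # (x in concept, y not in concept) is never realized.
    domain = list(range(10))
    thresholds = [lambda x, t=t: x >= t for t in range(11)]
    print(vc_dimension(thresholds, domain))  # -> 1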


Citations
Proceedings Article
01 Jan 2003
TL;DR: It is proved that a natural generalization of these classes, the class of renamable Horn CNF functions, does not have polynomial size certificates of non-membership, thus answering an open question of Feigelson.
Abstract: This paper studies the query complexity of learning classes of expressions in propositional logic from equivalence and membership queries. We give new constructions of polynomial size certificates of non-membership for monotone, unate and Horn CNF functions. Our constructions yield quantitatively different bounds from previous constructions of certificates for these classes. We prove lower bounds on certificate size which show that for some parameter settings the certificates we construct for these classes are exactly optimal. Finally, we also prove that a natural generalization of these classes, the class of renamable Horn CNF functions, does not have polynomial size certificates of non-membership, thus answering an open question of Feigelson.
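For context on the classes named above: a CNF is Horn when every clause contains at most one positive literal, and renamable Horn when some flipping of variable polarities makes it Horn. The syntactic Horn check itself is straightforward; this small sketch (illustrative, not from the paper, using the usual signed-integer clause encoding) shows it.

    def is_horn_cnf(clauses):
        """A CNF is Horn iff every clause has at most one positive literal.
        Clauses are lists of nonzero ints; positive ints are positive literals."""
        return all(sum(1 for lit in clause if lit > 0) <= 1 for clause in clauses)

    # (x1 v ~x2) ^ (~x1 v ~x3) is Horn; (x1 v x2) is not.
    print(is_horn_cnf([[1, -2], [-1, -3]]))  # -> True
    print(is_horn_cnf([[1, 2]]))             # -> False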

11 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...Results in [12] show that VC-dimension [13] cannot resolve this question....


Journal Article
TL;DR: The results essentially establish that the possession of a suitable exploration policy for collecting the necessary examples is the fundamental obstacle to learning to act in such environments.
Abstract: We formulate a new variant of the problem of planning in an unknown environment, for which we can provide algorithms with reasonable theoretical guarantees in spite of large state spaces and time horizons, partial observability, and complex dynamics. In this variant, an agent is given a collection of example traces produced by a reference policy, which may, for example, capture the agent's past behavior. The agent is (only) asked to find policies that are supported by regularities in the dynamics that are observable on these example traces. We describe an efficient algorithm that uses such "common sense" knowledge reflected in the example traces to construct decision tree policies for goal-oriented factored POMDPs. More precisely, our algorithm (provably) succeeds at finding a policy for a given input goal when (1) there is a CNF that is almost always observed to be satisfied on the traces of the POMDP, capturing a sufficient approximation of its dynamics, and (2) for a decision tree policy of bounded complexity, there exist small-space resolution proofs that the goal is achieved on each branch using the aforementioned CNF capturing the "common sense rules." Such a CNF always exists for noisy STRIPS domains, for example. Our results thus essentially establish that the possession of a suitable exploration policy for collecting the necessary examples is the fundamental obstacle to learning to act in such environments.

11 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ..., the AND of the literals that lead to that branch; the former quantity is the VC-dimension (Vapnik and Chervonenkis, 1971) of the class of conjunctions, which characterizes the number of examples needed to estimate their fit (Vapnik and Chervonenkis, 1971; Blumer et al., 1989; Ehrenfeucht et al., 1989)....


Proceedings ArticleDOI
04 Dec 2017
TL;DR: A novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data, as well as growing needs for accurate and confident predictions in critical applications, and is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units.
Abstract: We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question on efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation is in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.
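The abstract does not spell out the scheme itself; as a hedged illustration of the subsample-and-aggregate pattern this line of work builds on, the sketch below trains independent copies of a black-box learner on disjoint subsamples in parallel and combines predictions by majority vote. Majority voting is a stand-in assumption here, not the cited paper's aggregation rule, which combines the hypotheses themselves.

    from concurrent.futures import ThreadPoolExecutor
    from collections import Counter

    def parallel_learn(learn, data, n_parts):
        """Train one copy of a black-box learner per disjoint subsample.
        `learn` maps a dataset to a hypothesis (a callable classifier).
        A thread pool keeps the sketch simple; a process pool would need
        picklable learners but gives true CPU parallelism in Python."""
        parts = [data[i::n_parts] for i in range(n_parts)]
        with ThreadPoolExecutor(max_workers=n_parts) as pool:
            return list(pool.map(learn, parts))

    def aggregate(hypotheses, x):
        """Majority vote over the trained hypotheses at a query point x."""
        return Counter(h(x) for h in hypotheses).most_common(1)[0][0]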

11 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...In this setting, they proved positive and negative learnability results for a number of concept classes that were previously known to be PAC-learnable in polynomial time....


  • ...Early theoretical treatments of parallel learning with respect to NC considered probably approximately correct (PAC) [5, 39] concept learning....



Journal ArticleDOI
TL;DR: In this paper, the authors provide analytical bounds on convergence rates for a class of hydrologic models and derive a complexity measure based on the Vapnik-Chervonenkis (VC) generalization theory.
Abstract: We provide analytical bounds on convergence rates for a class of hydrologic models and consequently derive a complexity measure based on the Vapnik-Chervonenkis (VC) generalization theory. The class of hydrologic models is a spatially explicit interconnected set of linear reservoirs with the aim of representing globally nonlinear hydrologic behavior by locally linear models. Here, by convergence rate, we mean convergence of the empirical risk to the expected risk. The derived measure of complexity measures a model's propensity to overfit data. We explore how data finiteness can affect model selection for this class of hydrologic models and provide theoretical results on how model performance on a finite sample converges to its expected performance as data size approaches infinity. These bounds can then be used for model selection, as they provide a tradeoff between model complexity and model performance on finite data. The convergence bounds for the considered hydrologic models depend on the magnitude of their parameters, which are the recession parameters of the constituting linear reservoirs. Further, the complexity of hydrologic models not only varies with the magnitude of their parameters but also depends on the network structure of the models (in terms of the spatial heterogeneity of parameters and the nature of hydrologic connectivity).
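The convergence referred to, of empirical risk to expected risk, is of the standard VC form. For reference, one classical bound from the general theory (stated here as background, not as this paper's specific derivation): for a model class of VC dimension d and m i.i.d. samples, with probability at least 1 - δ, simultaneously for every hypothesis h in the class,

    R(h) \;\le\; \widehat{R}_m(h)
        \;+\; \sqrt{\frac{d\left(\ln\frac{2m}{d} + 1\right) + \ln\frac{4}{\delta}}{m}}

so the penalty term, which grows with d and shrinks with m, is exactly the complexity-versus-data tradeoff the abstract describes.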

11 citations

01 Nov 1992
TL;DR: It is proved that an algorithm of Rivest, Meyer, Kleitman, Winklmann and Spencer for searching in an n-element list using a comparison oracle that lies E times requires at most O(log n + E) comparisons, improving the best previously known bound.
Abstract: We study sorting and searching using a comparison oracle that "lies." First, we prove that an algorithm of Rivest, Meyer, Kleitman, Winklmann and Spencer for searching in an n-element list using a comparison oracle that lies E times requires at most O(log n + E) comparisons, improving the best previously known bound of log n + E log log n + O(E log E). A lower bound, easily obtained from their results, establishes that the number of comparisons used by their algorithm is within a constant factor of optimal. We apply their search algorithm to obtain an algorithm for sorting an n-element list with E lies that requires at most O(n log n + En) comparisons, improving on the algorithm of Lakshmanan, Ravikumar and Ganesan, which required at most O(n log n + En + E^2) comparisons. A lower bound proved by Lakshmanan, Ravikumar and Ganesan establishes that the number of comparisons used by our sorting algorithm is optimal to within a constant factor.
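For contrast with the O(log n + E) result above, the naive baseline is easy to state: repeat each comparison 2E + 1 times and take the majority, which tolerates E lies in total but costs O(E log n) comparisons. The sketch below (illustrative only, not the algorithm from the paper) applies that idea to a predicate-based binary search.

    def search_with_lies(ask, lo, hi, E):
        """Binary search for the first index i in [lo, hi) with ask(i) == True,
        where `ask` may lie at most E times overall. Naive O(E log n) baseline:
        any 2E + 1 repetitions of one query contain at least E + 1 honest
        answers, so the majority answer is always correct."""
        def majority(i):
            yes = sum(1 for _ in range(2 * E + 1) if ask(i))
            return yes > E
        while lo < hi:
            mid = (lo + hi) // 2
            if majority(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo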

11 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column as discussed by the authors provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations