Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
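For readers who want the parameter in concrete terms: a class C shatters a point set S if every binary labeling of S is realized by some concept in C, and the VC dimension is the size of the largest shattered set. The following Python sketch is illustrative only; the brute-force search and the interval/initial-segment classes are assumptions of this example, not material from the paper.

from itertools import combinations

def shatters(concepts, points):
    # A class shatters `points` if its concepts induce all 2^|points|
    # membership labelings on them.
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    # Largest d such that some d-subset of `domain` is shattered (brute force).
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, pts) for pts in combinations(domain, k)):
            d = k
        else:
            break
    return d

# Initial segments {0..t-1} have VC dimension 1; intervals {a..b-1} have 2.
domain = list(range(6))
initial_segments = [set(range(t)) for t in range(7)]
intervals = [set(range(a, b)) for a in range(6) for b in range(a, 7)]
print(vc_dimension(initial_segments, domain))  # -> 1
print(vc_dimension(intervals, domain))         # -> 2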


Citations
Journal ArticleDOI
TL;DR: This paper addresses the problem of computing the minimum number and placement of sensors so that the localization uncertainty at every point in the workspace is less than a given threshold and presents a solution framework based on integer linear programming and an approximation algorithm with a constant factor performance guarantee.
Abstract: Robots operating in a workspace can localize themselves by querying nodes of a sensor-network deployed in the same workspace. This paper addresses the problem of computing the minimum number and placement of sensors so that the localization uncertainty at every point in the workspace is less than a given threshold. We focus on triangulation-based state estimation, where measurements from two sensors must be combined for an estimate. This problem is NP-hard in its most general form. For the general version, we present a solution framework based on integer linear programming and demonstrate its application in a fire-tower placement task. Next, we study the special case of bearing-only localization and present an approximation algorithm with a constant factor performance guarantee.
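As a rough sketch of the covering structure behind such placement problems (not the paper's ILP formulation and not its constant-factor bearing-only algorithm), the Python fragment below casts placement as set cover and applies the classical greedy rule with its ln(n)-factor guarantee. The predicate covers(sensor, point) is a stand-in assumption for the paper's localization-uncertainty test.

def greedy_cover(points, candidate_sensors, covers):
    # Greedy set cover: repeatedly pick the candidate sensor that covers
    # the most still-uncovered workspace points.
    uncovered = set(points)
    chosen = []
    while uncovered:
        best = max(candidate_sensors,
                   key=lambda s: sum(covers(s, p) for p in uncovered))
        newly = {p for p in uncovered if covers(best, p)}
        if not newly:
            raise ValueError("remaining points cannot be covered")
        chosen.append(best)
        uncovered -= newly
    return chosen

# Toy usage (assumed model): sensors on a line cover points within unit distance.
points = [0.0, 1.0, 2.0, 3.5]
sensors = [0.5, 2.0, 3.5]
print(greedy_cover(points, sensors, lambda s, p: abs(s - p) <= 1.0))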

100 citations

Proceedings ArticleDOI
06 Jul 2001
TL;DR: It is shown that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} \log s), and this upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in 1968.
Abstract: Using techniques from learning theory, we show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^{1/3} \log s). This upper bound matches, up to a logarithmic factor, the longstanding lower bound given by Minsky and Papert in their 1968 book Perceptrons. As a consequence of this upper bound we obtain the fastest known algorithm for learning polynomial size DNF, one of the central problems in computational learning theory.
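For concreteness, a polynomial threshold function (PTF) computes sign(p(x)) for a real polynomial p. The sketch below is the naive construction, not the paper's: it sums one indicator monomial per DNF term and thresholds at 1/2, giving degree equal to the widest term rather than O(n^{1/3} log s).

from itertools import product

def ptf_from_dnf(terms):
    # Each term is a list of signed literals: +i means x_i, -i means NOT x_i
    # (variables indexed from 1). The term indicator is a product of x_i and
    # (1 - x_i) factors, so deg(p) = widest term.
    def p(x):  # x is a 0/1 tuple, 0-indexed
        total = 0.0
        for term in terms:
            ind = 1.0
            for lit in term:
                v = x[abs(lit) - 1]
                ind *= v if lit > 0 else 1 - v
            total += ind
        return total - 0.5  # positive iff at least one term is satisfied

    return lambda x: 1 if p(x) > 0 else 0

# f = (x1 AND x2) OR (NOT x3): a 2-term DNF over 3 variables.
f = ptf_from_dnf([[1, 2], [-3]])
for x in product((0, 1), repeat=3):
    assert f(x) == (1 if (x[0] and x[1]) or not x[2] else 0)
print("PTF agrees with the DNF on all inputs")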

100 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...The problem of learning a linear threshold function over {0, 1}^n can be formulated as a linear programming problem and thus can be solved in poly(n) time in both the PAC model of learning from random examples and in the model of exact learning from equivalence queries [10, 31]....

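The linear-programming formulation mentioned in the excerpt above can be made concrete as a feasibility program: find w and b with w·x - b >= 1 on positive examples and w·x - b <= -1 on negative ones. A minimal sketch using scipy.optimize.linprog follows; it assumes the sample is consistent with some linear threshold function (any strict separator can be rescaled to the unit margin), and the data is illustrative.

import numpy as np
from scipy.optimize import linprog

def learn_ltf(X, y):
    # Variables z = (w_1..w_n, b); each example contributes one row of
    # A_ub z <= b_ub encoding its margin constraint.
    X = np.asarray(X, dtype=float)
    n = X.shape[1]
    rows, rhs = [], []
    for x, label in zip(X, y):
        if label == 1:                      # -(w.x - b) <= -1
            rows.append(np.append(-x, 1.0))
        else:                               # w.x - b <= -1
            rows.append(np.append(x, -1.0))
        rhs.append(-1.0)
    res = linprog(c=np.zeros(n + 1), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * (n + 1))  # w, b are free variables
    if not res.success:
        raise ValueError("no consistent linear threshold function found")
    return res.x[:n], res.x[n]              # (w, b)

# Toy usage: learn OR of two bits.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = learn_ltf(X, y)
print([int(np.dot(w, x) - b > 0) for x in X])  # -> [0, 1, 1, 1]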

Journal ArticleDOI
TL;DR: A new proof of a result due to Vapnik is given, and its implications for the theory of PAC learnability are discussed, with particular reference to the learnability of functions taking values in a countable set.

100 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Theorem 3.1 Let H be a hypothesis space of {0, 1}-valued functions defined on an input space X....


  • ...An application to the theory of artificial neural networks is then given....


  • ...…possibility of classification errors during training, and to allow for ill-defined or stochastic concepts, the theory has been extended [4] to discuss not the learnability of functions from X to {0, 1} with an underlying distribution µ, but instead the learnability of probability distributions…...


  • ...The number of input nodes will be denoted s + 1 and the number of output nodes t....


Book
13 Aug 2008
TL;DR: This wide-ranging, thoroughly detailed volume is self-contained and intended for researchers and graduate students, and will prove an invaluable reference tool.
Abstract: The maturing of the field of data mining has brought about an increased level of mathematical sophistication. Disciplines such as topology, combinatorics, partially ordered sets and their associated algebraic structures (lattices and Boolean algebras), and metric spaces are increasingly applied in data mining research. This book presents these mathematical foundations of data mining integrated with applications to provide the reader with a comprehensive reference. Mathematics is presented in a thorough and rigorous manner, offering a detailed explanation of each topic, with applications to data mining such as frequent item sets, clustering, and decision trees also being discussed. More than 400 exercises are included, and they form an integral part of the material. Some of the exercises are in reality supplemental material, and their solutions are included. The reader is assumed to have a knowledge of elementary analysis. Features and topics:
  • Study of functions and relations
  • Applications are provided throughout
  • Presents graphs and hypergraphs
  • Covers partially ordered sets, lattices and Boolean algebras
  • Finite partially ordered sets
  • Focuses on metric spaces
  • Includes combinatorics
  • Discusses the theory of the Vapnik-Chervonenkis dimension of collections of sets
This wide-ranging, thoroughly detailed volume is self-contained and intended for researchers and graduate students, and will prove an invaluable reference tool.

99 citations


Additional excerpts

  • ...Its main interest for data mining is related to one of the basic models of machine learning, the probably approximately correct (PAC) learning paradigm, as was shown in [16]....


Journal ArticleDOI
03 Dec 1996
TL;DR: It is shown that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and a new type of upper bound for the VC-dimension of computational models with analog noise is proved.
Abstract: We introduce a model for noise-robust analog computations with discrete time that is flexible enough to cover the most important concrete cases, such as computations in noisy analog neural nets and networks of noisy spiking neurons. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise.

99 citations


Cites methods from "Learnability and the Vapnik-Chervon..."

  • ...chosen examples ⟨x, g(x)⟩ for g by a learning algorithm that uses functions from F as hypotheses (see Haussler, 1992; Vapnik & Chervonenkis, 1971; Blumer, Ehrenfeucht, Haussler, & Warmuth, 1989; Maass, 1995)....


References
Book
01 Jan 1979
TL;DR: This quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented in Garey and Johnson's book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: Minsky and Papert's Perceptrons analyzes the computational power and limitations of single-layer perceptrons (linear threshold devices), establishing, among other results, the lower bound on polynomial threshold function degree that the citing work above matches up to a logarithmic factor.
Abstract: Perceptrons: An Introduction to Computational Geometry studies which predicates can be computed by perceptrons of bounded order, proving degree lower bounds for specific Boolean functions such as parity and connectedness.

17,939 citations

Book
01 Jan 1973
TL;DR: A unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations