Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
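Concretely, the paper ties the sample complexity of distribution-free (PAC) learning to the VC dimension d of the concept class. The sufficient sample size is usually quoted in the following form (a sketch of the standard statement; treat the exact constants as indicative rather than as the paper's verbatim theorem):

```latex
% Sufficient sample size for PAC learning a class of VC dimension d:
% any hypothesis consistent with m examples is, with probability at
% least 1 - \delta, within error \epsilon of the target concept,
% for every distribution over the domain.
m(\epsilon,\delta) \;=\; \max\!\left(
    \frac{4}{\epsilon}\log_2\frac{2}{\delta},\;
    \frac{8d}{\epsilon}\log_2\frac{13}{\epsilon}
\right)
```

Conversely, if the VC dimension is infinite, no finite sample size suffices for every distribution; that is the sense in which finiteness of the dimension is the essential condition.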


Citations
Journal Article
TL;DR: This paper studies α-CoAgnostic learnability of classes of boolean formulas and finds the first constant lower bounds for decision lists, exclusive-or, halfspaces (over the boolean domain), 2- term DNF and 2-term multivariate polynomials.
Abstract: This paper studies α-CoAgnostic learnability of classes of boolean formulas. To α-CoAgnostic learn C from H, the learner seeks a hypothesis h ∈ H that agrees (rather than disagrees as in Agnostic learning) within a factor α of the best agreement of any f ∈ C. Although 1-CoAgnostic learning is equivalent to Agnostic learning, this is not true for α-CoAgnostic learning for α < 1. It is known that α-CoAgnostic learning algorithms are equivalent to α-approximation algorithms for maximum agreement problems. Many studies have been done on maximum agreement problems, for classes such as monomials, monotone monomials, antimonotone monomials, halfspaces and balls. We study these problems further, along with some extensions of them. For the above classes we improve the best previously known factors α for the hardness of α-CoAgnostic learning. We also find the first constant lower bounds for decision lists, exclusive-or, halfspaces (over the boolean domain), 2-term DNF and 2-term multivariate polynomials.
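As a toy illustration of the maximum-agreement problems mentioned above (the brute-force search and all names here are illustrative, not taken from the paper): given labeled boolean examples, the maximum agreement of a class is the best fraction of labels any single hypothesis matches, and an α-approximation algorithm only has to reach a factor α of that optimum.

```python
from itertools import combinations

def agreement(hyp_vars, examples):
    """Fraction of (x, y) pairs on which the monotone monomial
    AND of the variables in hyp_vars agrees with the label y."""
    hits = sum(int(all(x[i] for i in hyp_vars)) == y for x, y in examples)
    return hits / len(examples)

def best_monomial(examples, n):
    """Brute-force maximum agreement over all monotone monomials
    on n variables (exponential in n; fine only for tiny n)."""
    best_h, best = (), 0.0
    for k in range(n + 1):
        for hyp in combinations(range(n), k):
            a = agreement(hyp, examples)
            if a > best:
                best_h, best = hyp, a
    return best_h, best

# Noisy labels for the target x0 AND x1 over 3 variables;
# the last label is deliberately wrong.
examples = [((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 1, 1), 0),
            ((1, 0, 0), 0), ((0, 0, 1), 1)]
h, opt = best_monomial(examples, 3)
print(h, opt)  # an alpha-approximation need only achieve alpha * opt
```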

9 citations


Cites result from "Learnability and the Vapnik-Chervon..."

  • ...By the VC dimension Theorem [7], for S of size poly(1/ε, 1/δ, VCD(Hn)), any function f ∈ Hn satisfies, with probability 1 − δ, | Pr_{(x,y)∈D_X×{0,1}}[f(x) = y] − Pr_{(x,y)∈U_S}[f(x) = y] | ≤ ε/2....

    [...]

  • ...If R ⊂ A then by the VC-dimension Theorem [7], with probability at least 1/2 a hypothesis h consistent with R agrees with (1 − ε) of the points of A....

    [...]
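Both quotations above invoke the same uniform-convergence guarantee: on a large enough sample, the empirical agreement of every hypothesis in the class is simultaneously close to its true agreement. A minimal simulation of that statement for a small finite class of threshold functions (all names and parameters here are illustrative):

```python
import random

random.seed(0)

# Finite hypothesis class: thresholds h_t(x) = [x >= t] on [0, 1].
thresholds = [i / 20 for i in range(21)]

def label(x):
    return x >= 0.3        # target concept: threshold at 0.3

def true_error(t, trials=100_000):
    xs = (random.random() for _ in range(trials))
    return sum((x >= t) != label(x) for x in xs) / trials

sample = [random.random() for _ in range(500)]

def empirical_error(t):
    return sum((x >= t) != label(x) for x in sample) / len(sample)

# Largest empirical-vs-true gap over the whole class; uniform
# convergence says this shrinks as the sample grows.
worst_gap = max(abs(empirical_error(t) - true_error(t))
                for t in thresholds)
print(f"worst gap over the class: {worst_gap:.3f}")
```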

Journal ArticleDOI
TL;DR: In this paper, the authors define a measure of testing complexity known as VCP-dimension, which is similar to the Vapnik–Chervonenkis dimension, and apply it to classes of programs, where all programs in a class share the same syntactic structure.
Abstract: We examine the complexity of testing different program constructs. We do this by defining a measure of testing complexity known as VCP-dimension, which is similar to the Vapnik–Chervonenkis dimension, and applying it to classes of programs, where all programs in a class share the same syntactic structure. VCP-dimension gives bounds on the number of test points needed to determine that a program is approximately correct, so by studying it for a class of programs we gain insight into the difficulty of testing the program construct represented by the class. We investigate the VCP-dimension of straight line code, if-then-else statements, and for loops. We also compare the VCP-dimension of nested and sequential if-then-else statements as well as that of two types of for loops with embedded if-then-else statements. Finally, we perform an empirical study to estimate the expected complexity of straight line code.
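The practical content of such a bound is a test-budget calculation: if a class of programs has VCP-dimension d, then on the order of (1/ε)(d log(1/ε) + log(1/δ)) random test points suffice to conclude a program is approximately correct. A hedged sketch of that arithmetic (the constants follow the generic VC bound above, not the paper's own theorem):

```python
import math

def test_points_needed(d, eps, delta, c=8):
    """Generic VC-style sufficient sample size, roughly
    (c/eps) * (d * log2(13/eps) + log2(2/delta)).
    Constants are illustrative, not taken from the VCP paper."""
    return math.ceil((c / eps) * (d * math.log2(13 / eps)
                                  + math.log2(2 / delta)))

# e.g. straight-line code with VCP-dimension 4, tested to 5% error
# with 99% confidence:
print(test_points_needed(d=4, eps=0.05, delta=0.01))
```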

9 citations

Dissertation
01 Jan 1996
TL;DR: This thesis presents algorithms for handling imperfect data in several projects that range from the theoretical to the practical, and shows that an approach combining theoretical models and practical search heuristics can yield excellent results in a real application of learning from imperfect data.
Abstract: This thesis explores several problems of learning with noisy or incomplete data. Most machine learning applications need to infer correct conclusions from available information, although some data may be incorrect and other important data may be missing. In this thesis, we describe algorithms for handling imperfect data in several projects that range from the theoretical to the practical. In Chapter 2 we present new formal models of learning with a teacher who makes mistakes or fails to answer some questions, and we show that learning can succeed in these models. We first consider learning with a "randomly fallible teacher" who is unable to answer a random subset of the learner's questions, and we present a probabilistic algorithm for learning monotone DNF formulas in this model. We then introduce a learning model in which queries on "borderline" examples may receive incorrect answers. We describe efficient algorithms for learning intersections of halfspaces and subclasses of DNF formulas in this new model. Our results in Chapter 3 show how teams of learners can work together to learn graphs in the absence of key information that distinguishes nodes. On a graph with indistinguishable nodes, a robot cannot tell if it is placed on a node that it has previously seen. We describe a probabilistic polynomial-time algorithm for two cooperating robots to learn any strongly-connected directed graph, even graphs that would most likely require exponential time to explore by walking randomly. We also present a random-walk algorithm that is faster for a special class of graphs. In Chapter 4 we examine the application of machine learning techniques and algorithm design to a real problem in molecular biology: building large-scale human gene maps using the new technique of radiation hybrid mapping. We represent uncertainty about noise in the data with a hidden Markov model. We introduce new search methods for finding good maps, and we use these methods to build the first radiation hybrid map of the entire human genome. Our work demonstrates that an approach combining theoretical models and practical search heuristics can yield excellent results in a real application of learning from imperfect data.
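To see why plain random walking can be exponentially slow on directed graphs, as the thesis claims, consider a small simulator that counts the steps a single random walk needs to visit every node (purely illustrative; the thesis's two-robot algorithm is far more involved):

```python
import random

def cover_steps(adj, start=0, max_steps=10**7):
    """Steps for a uniform random walk on a directed graph (given as
    adjacency lists) to visit every node; None if the budget runs out."""
    seen, node, steps = {start}, start, 0
    while len(seen) < len(adj) and steps < max_steps:
        node = random.choice(adj[node])
        seen.add(node)
        steps += 1
    return steps if len(seen) == len(adj) else None

# Strongly connected digraph where each node points back to 0 and on
# to its successor: reaching the last node needs a run of n-1 lucky
# "+1" choices, so the expected cover time grows like 2**n.
n = 12
adj = {i: [0, (i + 1) % n] for i in range(n)}
random.seed(1)
print(cover_steps(adj))
```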

9 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...However, one can use the PAC model to approximate an equivalence query without formally specifying the hypothesis h. Blumer, Ehrenfeucht, Haussler and Warmuth [27] proved that any hypothesis consistent with a labeled random sample of size (log(1/δ) + log|C|)/ε is a PAC-hypothesis; i.e., with probability at least 1 − δ, the hypothesis is ε-good....

    [...]

  • ...Blumer, Ehrenfeucht, Haussler and Warmuth [28] have shown that a sample of size polynomial in the VC-dimension of the hypothesis class is sufficient for PAC-learning, so the number of examples needed is polynomial in n....

    [...]

  • ...[57] David Haussler, Michael Kearns, Nick Littlestone, and Manfred K. Warmuth....

    [...]

  • ...[58] David Haussler, Nick Littlestone, and Manfred K. Warmuth....

    [...]

  • ...[86] Leonard Pitt and Manfred K. Warmuth....

    [...]
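The sample size quoted in the first snippet above is the standard bound for consistent hypotheses from a finite class C: roughly (ln|C| + ln(1/δ))/ε examples. A one-liner to evaluate it (a sketch using the textbook form with natural logarithms, which may differ from the thesis's exact constants):

```python
import math

def consistent_sample_size(class_size, eps, delta):
    """Occam-style bound: a hypothesis from a finite class that is
    consistent with this many random examples is, with probability
    at least 1 - delta, epsilon-good."""
    return math.ceil((math.log(class_size) + math.log(1 / delta)) / eps)

# e.g. |C| = 2**20 candidate formulas, eps = 0.1, delta = 0.05:
print(consistent_sample_size(2**20, 0.1, 0.05))
```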

Book ChapterDOI
01 Jan 2011
TL;DR: Computational models of language acquisition must begin and end as an integral part of the empirical study of child language, and efforts in language research are no exception.
Abstract: All models strive to represent reality, and efforts in language research are no exception. Computational models of language acquisition must begin and end as an integral part of the empirical study of child language.

9 citations

Journal ArticleDOI
TL;DR: This dissertation aims to analyze statistical databases from a new perspective using Probably Approximately Correct (PAC) learning theory which attempts to discover the true function of a database by learning from examples.
Abstract: With the rapid development of information technology, massive data collection is easier and cheaper than ever before. Thus, the efficient and safe exchange of information becomes the renewed focus of database management as a pervasive issue. The challenge we face today is to provide users with reliable and useful data while protecting the privacy of confidential information contained in the database. Our research concentrates on statistical databases, which usually store a large number of data records and are open to the public, where users are allowed to ask only limited types of queries, such as Sum, Count and Mean. Responses to those queries are aggregate statistics intended to prevent disclosing the identity of a unique record in the database. This dissertation analyzes these problems from a new perspective using Probably Approximately Correct (PAC) learning theory, which attempts to discover the true function by learning from examples. Unlike traditional methods, in which database administrators apply security methods to protect the privacy of statistical databases, we regard the true database as the target concept that an adversary tries to discover using a limited number of queries, in the presence of some systematic perturbations of the true answer. We extend previous work and introduce a new data perturbation method, variable data perturbation, which protects the database by adding random noise to the confidential field. This method uses a parametrically driven algorithm that can be viewed as generating random perturbations from some (unknown) discrete distribution with known parameters, such as the mean and standard deviation. The bounds we derive for this new method show how much protection is necessary to prevent the adversary from discovering the database with high probability and small error. Put in PAC-learning terms, we derive bounds on the amount of error an adversary makes given a general perturbation scheme, a number of queries and a confidence level.
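The threat model can be made concrete with a small simulation (entirely illustrative, not the dissertation's construction): if each repeated Sum query returns the true answer plus zero-mean noise of known standard deviation, then averaging the answers lets an adversary pin down the confidential value, and the number of queries needed grows with the noise level; this is the trade-off the derived bounds quantify.

```python
import random
import statistics

random.seed(42)

TRUE_SUM = 1234.0   # confidential statistic the adversary targets
NOISE_SD = 50.0     # known standard deviation of the perturbation

def noisy_sum_query():
    """Variable data perturbation: true answer plus random noise."""
    return TRUE_SUM + random.gauss(0.0, NOISE_SD)

for n_queries in (1, 10, 100, 1000):
    estimate = statistics.fmean(noisy_sum_query()
                                for _ in range(n_queries))
    print(f"{n_queries:5d} queries -> estimate {estimate:8.1f} "
          f"(error {abs(estimate - TRUE_SUM):5.1f})")
```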

9 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Blumer et al. (1989) proved that there exists a class that cannot be efficiently learned by SQ, but is actually efficiently learnable....

    [...]

References
Book
01 Jan 1979
TL;DR: The second installment of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness" (W. H. Freeman & Co., San Francisco, 1979).
Abstract: This is the second edition of a quarterly column whose purpose is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations