Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
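
The combinatorial definition at the heart of this result is easy to make concrete. Below is a minimal sketch (my illustration, not code from the paper) that checks shattering for the class of closed intervals [a, b] on the real line: intervals shatter any two points but no three, so their VC dimension is 2.

```python
# Shattering check for the class of intervals [a, b] on the real line.
# A point set is shattered if intervals realize all 2^n of its labelings;
# the VC dimension is the size of the largest shattered set.

def interval_labels(points, a, b):
    """Label each point 1 if it lies in [a, b], else 0."""
    return tuple(1 if a <= x <= b else 0 for x in points)

def is_shattered(points):
    """Check whether intervals realize all 2^n labelings of `points`."""
    # Endpoints drawn from the sample points suffice to enumerate every
    # achievable labeling; [1.0, 0.0] is a degenerate empty interval.
    achievable = {interval_labels(points, 1.0, 0.0)}
    for a in points:
        for b in points:
            if a <= b:
                achievable.add(interval_labels(points, a, b))
    return len(achievable) == 2 ** len(points)

print(is_shattered([1.0, 2.0]))       # True: VC dimension is at least 2
print(is_shattered([1.0, 2.0, 3.0]))  # False: labeling (1, 0, 1) is unachievable
```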


Citations
DissertationDOI
01 Jan 1996
TL;DR: The sample complexity for agnostic learning is determined from properties of the function class such as the pseudo-dimension and the fat-shattering function, and it is shown that for certain function classes the sample complexity for agnostic learning can be worse than the sample complexity for learning with additive noise when hypotheses are restricted to the same class.
Abstract: This thesis is concerned with some theoretical aspects of supervised learning of real-valued functions. We study a formal model of learning called agnostic learning. The agnostic learning model assumes a joint probability distribution on the observations (inputs and outputs) and requires the learning algorithm to produce a hypothesis with performance close to that of the best function within a specified class of functions. It is a very general model of learning which includes function learning, learning with additive noise and learning the best approximation in a class of functions as special cases. Within the agnostic learning model, we concentrate on learning functions which can be well approximated by single hidden layer neural networks. Artificial neural networks are often used as black box models for modelling phenomena for which very little prior knowledge is available. Agnostic learning is a natural model for such learning problems. The class of single hidden layer neural networks possesses many interesting properties, which we explore in this thesis, within the agnostic learning model. Two main aspects of learning studied here are the amount of information required (the sample complexity) and the amount of computation required (computational complexity) for agnostic learning. We determine the sample complexity for agnostic learning based on properties of the function class such as the pseudo-dimension and the fat-shattering function, and show that for certain function classes, if the closure of the function class is not convex, the sample complexity for agnostic learning (with squared loss) can be worse than the sample complexity for learning with additive noise if we are restricted to hypotheses from the same class. We also show that if the closure of the function class is convex, then the sample complexity bound is similar to that for learning with noise. This motivates learning convex hulls of non-convex function classes. For many function classes, the convex hull can be represented by single hidden layer neural networks with an unbounded number of hidden units and a bound on the sum of the absolute values of the output weights.
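
As a concrete illustration of the agnostic setting the thesis studies (a sketch under my own simplifications, not the thesis's algorithms): samples come from an arbitrary joint distribution on (x, y), and the learner runs empirical risk minimization with squared loss over a fixed hypothesis class, here one-dimensional thresholds.

```python
# Agnostic learning sketch: ERM with squared loss over threshold functions.
import random

def empirical_risk(h, sample):
    """Average squared loss of hypothesis h on the sample."""
    return sum((h(x) - y) ** 2 for x, y in sample) / len(sample)

def erm_threshold(sample):
    """ERM over thresholds h_t(x) = 1 if x >= t else 0. Only thresholds at
    sample points (plus +infinity, the all-zero hypothesis) matter."""
    candidates = [x for x, _ in sample] + [float("inf")]
    best_t = min(candidates,
                 key=lambda t: empirical_risk(lambda x: float(x >= t), sample))
    return lambda x: float(x >= best_t)

# Labels follow a threshold at 0.5 with 10% label noise, so no hypothesis
# fits perfectly -- exactly the situation agnostic learning is designed for.
random.seed(0)
sample = [(x, float((x >= 0.5) != (random.random() < 0.1)))
          for x in (random.random() for _ in range(200))]
h = erm_threshold(sample)
print(round(empirical_risk(h, sample), 3))  # near the 0.1 noise rate
```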

10 citations


Additional excerpts

  • ...in (Vapnik 1982, Blumer, Ehrenfeucht, Haussler & Warmuth 1989)....


Journal ArticleDOI
TL;DR: Evidence is provided that there is no polynomial-time optimal mistake bound learning algorithm and that the VC-dimension decision problem is polynomially reducible to the K-decision problem.
Abstract: This paper provides evidence that there is no polynomial-time optimal mistake bound learning algorithm. This conclusion is reached via several reductions as follows. Littlestone (1988, Machine Learning 2, 285–318) has introduced a combinatorial function K from classes to integers and has shown that, given a subroutine computing K, one can construct a polynomial-time optimal MB learning algorithm. We establish the reverse reduction. That is, given an optimal MB learning algorithm as a subroutine, one can compute K in polynomial time. Our result combines with Littlestone's to establish that the two tasks above have the same time complexity up to a polynomial. Next, we show that the VC-dimension decision problem is polynomially reducible to the K-decision problem. Papadimitriou and Yannakakis [PY93] have provided strong evidence that the VC-dimension decision problem is not in P. Therefore, it is very unlikely that there is a polynomial-time optimal mistake bound learning algorithm.
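
For intuition about the objects involved, here is a small sketch (my illustration; the paper itself works with reductions, not code) of Littlestone's combinatorial dimension for a finite class and the Standard Optimal Algorithm (SOA) whose mistake bound it characterizes; the function K above is closely related to this dimension.

```python
# Littlestone dimension of a finite class and the SOA prediction rule.
# Hypotheses are tuples of labels over a finite domain {0, ..., n-1}.
from functools import lru_cache

@lru_cache(maxsize=None)
def ldim(H):
    """Littlestone dimension of a finite class H (a frozenset of tuples)."""
    if len(H) <= 1:
        return 0
    n = len(next(iter(H)))
    best = 0
    for x in range(n):
        H0 = frozenset(h for h in H if h[x] == 0)
        H1 = frozenset(h for h in H if h[x] == 1)
        if H0 and H1:  # x splits H: one more mistake can be forced
            best = max(best, 1 + min(ldim(H0), ldim(H1)))
    return best

def soa_predict(V, x):
    """SOA: predict the label whose consistent subclass has the larger
    dimension; it makes at most ldim(H) mistakes on realizable sequences."""
    V0 = frozenset(h for h in V if h[x] == 0)
    V1 = frozenset(h for h in V if h[x] == 1)
    return 1 if ldim(V1) >= ldim(V0) else 0

# Thresholds on 4 domain points: 5 hypotheses, Littlestone dimension 2.
H = frozenset(tuple(int(i >= t) for i in range(4)) for t in range(5))
print(ldim(H))            # 2
print(soa_predict(H, 2))  # 1: the subclass labeling point 2 with 1 is no poorer
```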

10 citations

Posted Content
TL;DR: In this article, the authors introduce the idea of learning programs through play, where a program induction system (the learner) is given a set of tasks and initial background knowledge, before solving the tasks, the learner enters an unsupervised playing stage where it creates its own tasks to solve, tries to solve them, and saves any solutions (programs) to the background knowledge.
Abstract: Children learn through play. We introduce the analogous idea of learning programs through play. In this approach, a program induction system (the learner) is given a set of tasks and initial background knowledge (BK). Before solving the tasks, the learner enters an unsupervised playing stage where it creates its own tasks to solve, tries to solve them, and saves any solutions (programs) to the background knowledge. After the playing stage is finished, the learner enters the supervised building stage where it tries to solve the user-supplied tasks and can reuse solutions learnt whilst playing. The idea is that playing allows the learner to discover reusable general programs on its own which can then help solve the user-supplied tasks. We claim that playing can improve learning performance. We show that playing can reduce the textual complexity of target concepts, which in turn reduces the sample complexity of a learner. We implement our idea in Playgol, a new inductive logic programming system. We experimentally test our claim on two domains: robot planning and real-world string transformations. Our experimental results suggest that playing can substantially improve learning performance. We think that the idea of playing (or, more verbosely, unsupervised bootstrapping for supervised program induction) is an important contribution to the problem of developing program induction approaches that self-discover BK.
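
The two-stage loop the abstract describes is simple to sketch. The following is a schematic paraphrase in Python (Playgol itself is an inductive logic programming system; `induce` and `make_play_task` are hypothetical stand-ins for its program-induction machinery, not its actual API):

```python
# Schematic play-then-build loop for program induction.
def play_then_build(user_tasks, background, induce, make_play_task, n_play=100):
    # Playing stage: invent tasks, try to solve them, and keep any
    # solutions (programs) as new background knowledge.
    for _ in range(n_play):
        play_task = make_play_task()           # self-generated, unsupervised
        program = induce(play_task, background)
        if program is not None:
            background = background | {program}
    # Building stage: solve the user-supplied tasks, reusing programs
    # discovered during play.
    return [induce(task, background) for task in user_tasks]
```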

9 citations

Dissertation
01 Jan 2003
TL;DR: This thesis investigates which properties are and are not testable with sublinear query complexity, initiates the study of property testing as applied to images, and shows that testing properties defined by 2CNF formulas is equivalent, with respect to the number of required queries, to several other function and graph testing problems.
Abstract: Property testers are algorithms that distinguish inputs with a given property from those that are far from satisfying the property. Far means that many characters of the input must be changed before the property arises in it. Property testing was introduced by Rubinfeld and Sudan in the context of linearity testing and first studied in a variety of other contexts by Goldreich, Goldwasser and Ron. The query complexity of a property tester is the number of input characters it reads. This thesis is a detailed investigation of properties that are and are not testable with sublinear query complexity. We begin by characterizing properties of strings over the binary alphabet in terms of their formula complexity. Every such property can be represented by a CNF formula. We show that properties of n-bit strings defined by 2CNF formulas are testable with O(√n) queries, whereas there are 3CNF formulas for which the corresponding properties require Ω(n) queries, even for adaptive tests. We show that testing properties defined by 2CNF formulas is equivalent, with respect to the number of required queries, to several other function and graph testing problems. These problems include: testing whether Boolean functions over general partial orders are close to monotone, testing whether a set of vertices is close to one that is a vertex cover of a specific graph, and testing whether a set of vertices is close to a clique. Testing properties that are defined in terms of monotonicity has been extensively investigated in the context of the monotonicity of a sequence of integers and the monotonicity of a function over the m-dimensional grid {1,…,a}^m. We study the query complexity of monotonicity testing of both Boolean and integer functions over general partial orders. We show upper and lower bounds for the general problem and for specific partial orders. A few of our intermediate results are of independent interest. (1) If strings with a property form a vector space, adaptive 2-sided error tests for the property have no more power than non-adaptive 1-sided error tests. (2) Random LDPC codes with linear distance and constant rate are not locally testable. (3) There exist graphs with many edge-disjoint induced matchings of linear size. In the final part of the thesis, we initiate an investigation of property testing as applied to images. We study visual properties of discretized images represented by n × n matrices of binary pixel values. We obtain algorithms with the number of queries independent of n for several basic properties: being a half-plane, connectedness and convexity.
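
For a flavor of the sublinear-query algorithms the thesis is concerned with, here is the classic O((log n)/ε)-query monotonicity tester for a sequence of distinct integers (a standard textbook tester sketched by me, not taken from the thesis):

```python
# Monotonicity (sortedness) tester: always accepts a sorted sequence and
# rejects a sequence that is eps-far from sorted (at least an eps fraction
# of entries must change) with probability >= 2/3.
import math
import random

def spot_check(a, i):
    """Binary-search for a[i] (values assumed distinct); succeed iff the
    search lands exactly on index i. Indices passing this check always
    form an increasing subsequence of a."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == a[i]:
            return mid == i
        if a[mid] < a[i]:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def monotonicity_tester(a, eps=0.1):
    """If a is eps-far from sorted, each probe fails w.p. >= eps, so
    ceil(2/eps) probes reject w.p. >= 1 - e^(-2) > 2/3."""
    return all(spot_check(a, random.randrange(len(a)))
               for _ in range(math.ceil(2 / eps)))

print(monotonicity_tester(list(range(100))))         # True: sorted input
print(monotonicity_tester(list(range(100, 0, -1))))  # almost surely False
```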

9 citations

Posted Content
TL;DR: It is proved that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms which further leads to a high-probability bound for any learning algorithm.
Abstract: This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms in two sequential steps. We first establish an alignment between generalization and privacy preservation for any learning algorithm. We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms, which further leads to a high-probability bound for any learning algorithm. This high-probability bound also implies a PAC-learnability guarantee for differentially private learning algorithms. We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization. Three composition theorems are proposed to approximate the differential privacy of any iterative algorithm through the differential privacy of each of its iterations. By integrating the above two steps, we eventually deliver generalization bounds for iterative learning algorithms, which suggest that one can simultaneously enhance privacy preservation and generalization. Our results are strictly tighter than those of existing works. In particular, our generalization bounds do not rely on the model size, which is prohibitively large in deep learning. This sheds light on understanding the generalizability of deep learning. These results apply to a wide spectrum of learning algorithms. In this paper, we apply them to stochastic gradient Langevin dynamics and agnostic federated learning as examples.
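
The composition step can be made concrete with the two classical composition theorems (these are standard results, not the paper's tighter bounds). A small calculator, assuming each of T iterations is individually (ε, δ)-differentially private:

```python
# Basic and advanced composition of per-iteration DP guarantees.
import math

def basic_composition(eps, delta, T):
    """Basic composition: privacy degrades linearly in T."""
    return T * eps, T * delta

def advanced_composition(eps, delta, T, delta_prime):
    """Advanced composition (Dwork, Rothblum & Vadhan): for any
    delta_prime > 0, the T-fold composition is
    (eps_total, T*delta + delta_prime)-differentially private."""
    eps_total = (math.sqrt(2 * T * math.log(1 / delta_prime)) * eps
                 + T * eps * (math.exp(eps) - 1))
    return eps_total, T * delta + delta_prime

# 1000 iterations, each (0.01, 1e-6)-DP:
print(basic_composition(0.01, 1e-6, 1000))           # (10.0, 0.001)
print(advanced_composition(0.01, 1e-6, 1000, 1e-5))  # about (1.6, 0.00101)
```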

9 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...Existing generalization bounds are mainly obtained from three stems: (1) concentration inequalities derive many high-probability generalization bounds based on the hypothesis complexity, such as VC dimension [8, 64], Rademacher complexity [35, 34, 6], and covering number [15, 27]....


  • ...[8] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth....


References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations