Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space $E^n$. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are provided.
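
The VC dimension is the size of the largest point set the concept class can shatter, i.e., realize every one of its subsets. As a concrete illustration (a minimal sketch, not taken from the paper), the following Python snippet brute-forces the shattering check for the class of closed intervals on the real line, whose VC dimension is 2: every 2-point set is shattered, but no 3-point set is.

    from itertools import combinations

    def interval_realizes(points, subset):
        """Check whether some closed interval contains exactly `subset` of `points`."""
        if not subset:
            return True  # an interval disjoint from all points realizes the empty set
        lo, hi = min(subset), max(subset)
        # the interval [lo, hi] picks out `subset` iff no other point falls inside it
        return all(p in subset for p in points if lo <= p <= hi)

    def is_shattered(points):
        """True if intervals realize every subset of `points`."""
        pts = set(points)
        return all(
            interval_realizes(pts, set(c))
            for r in range(len(points) + 1)
            for c in combinations(points, r)
        )

    print(is_shattered([1.0, 2.0]))       # True: VC dimension >= 2
    print(is_shattered([1.0, 2.0, 3.0]))  # False: the middle point cannot be excluded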


Citations
Posted Content
TL;DR: This report explains Sauer's Lemma, which involves the VC dimension and is used to prove that a concept class is distribution-free PAC learnable if and only if it has finite VC dimension, and studies the construction of a new function class from a collection of function classes.
Abstract: We begin this report by describing the Probably Approximately Correct (PAC) model for learning a concept class, consisting of subsets of a domain, and a function class, consisting of functions from the domain to the unit interval. Two combinatorial parameters, the Vapnik-Chervonenkis (VC) dimension and its generalization, the Fat Shattering dimension of scale $\varepsilon$, are explained, and a few examples of their calculation are given with proofs. We then explain Sauer's Lemma, which involves the VC dimension and is used to prove the equivalence of a concept class being distribution-free PAC learnable and its having finite VC dimension. As the main new result of our research, we explore the construction of a new function class, obtained by forming compositions of a continuous logic connective, a uniformly continuous function from the unit hypercube to the unit interval, with a collection of function classes. Vidyasagar had proved that such a composition function class has finite Fat Shattering dimension at all scales if the classes in the original collection do; however, no estimates of the dimension were known. Using results by Mendelson-Vershynin and Talagrand, we bound the Fat Shattering dimension of scale $\varepsilon$ of this new function class in terms of the Fat Shattering dimensions of the collection's classes. We conclude this report by providing a few open questions and future research topics involving the PAC learning model.
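
For reference, the scale-sensitive parameter mentioned above has the following standard definition (a textbook fact, not a claim of this abstract): a set $\{x_1,\dots,x_k\}$ is $\varepsilon$-shattered by a function class $\mathcal{F}$ if there exist witness levels $r_1,\dots,r_k$ such that for every pattern $b \in \{0,1\}^k$ some $f_b \in \mathcal{F}$ satisfies

$$f_b(x_i) \ge r_i + \varepsilon \ \text{ if } b_i = 1, \qquad f_b(x_i) \le r_i - \varepsilon \ \text{ if } b_i = 0.$$

The Fat Shattering dimension of scale $\varepsilon$ is the largest such $k$ (infinite if arbitrarily large sets are $\varepsilon$-shattered).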

9 citations

Posted Content
TL;DR: In this paper, the authors obtain both random and explicit constructions showing that the corresponding saturation number, i.e., the size of the smallest maximal family of VC-dimension $d \ge 2$, is at most $4^{d+1}$ and is thus independent of the size of the ground set.
Abstract: The well-known Sauer lemma states that a family $\mathcal{F}\subseteq 2^{[n]}$ of VC-dimension at most $d$ has size at most $\sum_{i=0}^d\binom{n}{i}$. We obtain both random and explicit constructions to prove that the corresponding saturation number, i.e., the size of the smallest maximal family with VC-dimension $d\ge 2$, is at most $4^{d+1}$, and thus is independent of $n$.
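
A standard consequence of the bound in Sauer's lemma, frequently used in learning theory (a well-known estimate, not part of this abstract), is

$$\sum_{i=0}^{d}\binom{n}{i} \le \left(\frac{en}{d}\right)^{d} \qquad \text{for } n \ge d \ge 1,$$

so a family of VC-dimension $d$ has size only polynomial in $n$, in sharp contrast to the $2^n$ subsets of $[n]$.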

9 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...This notion plays a central role in statistical learning theory [4, 21], discrete and computational geometry [16] and several other areas of mathematics [11, 15]....


Proceedings Article
25 Jun 2016
TL;DR: A Probably Approximately Correct (PAC) framework for anomaly detection based on the identification of rare patterns is introduced, and sample complexity results are developed that relate the complexity of the pattern space to the data requirements needed for PAC guarantees.
Abstract: Anomaly detection is a fundamental problem for which a wide variety of algorithms have been developed. However, compared to supervised learning, there has been very little work aimed at understanding the sample complexity of anomaly detection. In this paper, we take a step in this direction by introducing a Probably Approximately Correct (PAC) framework for anomaly detection based on the identification of rare patterns. In analogy with the PAC framework for supervised learning, we develop sample complexity results that relate the complexity of the pattern space to the data requirements needed for PAC guarantees. We instantiate the general result for a number of pattern spaces, some of which are implicit in current state-of-the-art anomaly detectors. Finally, we design a new simple anomaly detection algorithm motivated by our analysis and show experimentally on several benchmark problems that it is competitive with a state-of-the-art detector using the same pattern space.
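
As a toy illustration of rare-pattern anomaly detection (a hypothetical sketch, not the authors' algorithm), the following Python snippet uses one-dimensional histogram bins as the pattern space and scores points by the empirical mass of the pattern they fall into:

    import numpy as np

    def rare_pattern_scores(train, test, n_bins=20):
        """Score test points by the empirical mass of the histogram bin (pattern)
        they fall into; low mass = more anomalous."""
        lo, hi = train.min(), train.max()
        edges = np.linspace(lo, hi, n_bins + 1)
        counts, _ = np.histogram(train, bins=edges)
        probs = counts / counts.sum()
        # clip test points into the training range, then look up their bin mass
        idx = np.clip(np.searchsorted(edges, test, side="right") - 1, 0, n_bins - 1)
        return probs[idx]  # small values indicate rare patterns

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=5000)
    test = np.array([0.0, 3.5])  # typical point vs. tail point
    print(rare_pattern_scores(train, test))  # second score is much smaller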

9 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...A loose bound can be found by noting that stripes are a special case of the more general pattern space of intersections of half-spaces and then applying the general result for bounding the VC-dimension of intersections [3], which gives an upper bound on the VC-dimension of stripes of $4\log(6)(d+1) = O(d)$....


  • ...The VC-dimension of the space of axis-parallel hyper-rectangles in $\mathbb{R}^d$ is $2d$ [3].... (checked in the sketch below for $d = 2$)

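
The $2d$ claim quoted above can be sanity-checked by brute force. The following minimal Python sketch (an illustration, not from either paper) verifies that axis-parallel rectangles in the plane shatter the four "diamond" points, so their VC-dimension is at least $4 = 2d$ for $d = 2$; the quoted $4\log(6)(d+1)$ figure for stripes is consistent with [3]'s bound of $2ds\log(3s)$ on $s$-fold intersections of a class of VC-dimension $d$, taking $s = 2$ and VC-dimension $d+1$ for half-spaces.

    from itertools import combinations

    def bbox_realizes(points, subset):
        """Check whether some axis-parallel rectangle contains exactly `subset`."""
        if not subset:
            return True  # a rectangle away from all points realizes the empty set
        xs = [p[0] for p in subset]
        ys = [p[1] for p in subset]
        x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
        # the bounding box of `subset` works iff it captures no extra point
        return all(p in subset for p in points
                   if x0 <= p[0] <= x1 and y0 <= p[1] <= y1)

    diamond = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    shattered = all(
        bbox_realizes(diamond, set(c))
        for r in range(5) for c in combinations(diamond, r)
    )
    print(shattered)  # True: rectangles shatter 4 points in the plane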

Book ChapterDOI
20 Oct 1992
TL;DR: This paper proposes a learning method for neural networks based on the regularization method and estimates its generalization error, defined as the difference between the expected risk achieved by the learning and the truly minimum expected risk.
Abstract: In this paper, we propose a learning method for neural networks based on the regularization method and analyze its generalization capability. In learning from examples, training samples are independently drawn from some unknown probability distribution. The goal of learning is to minimize the expected risk for future test samples, which are drawn from the same distribution. The problem can be reduced to estimating the probability distribution from samples alone, but this is generally ill-posed. In order to solve it stably, we use the regularization method. Regularization learning can be done in practice by increasing the number of samples, adding an appropriate amount of noise to the training samples. We estimate its generalization error, which is defined as the difference between the expected risk achieved by the learning and the truly minimum expected risk. Assume the p-dimensional density function is s-times differentiable in each variable. We show that the mean square of the generalization error of regularization learning is given as $Dn^{-2s/(2s+p)}$, where n is the number of samples and D is a constant depending on the complexity of the neural network and the difficulty of the problem.
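
The "add noise to the training samples" recipe described above amounts to simple data augmentation. A minimal sketch (hypothetical, with a least-squares polynomial standing in for a neural network, and assuming Gaussian noise of a chosen scale sigma):

    import numpy as np

    def augment_with_noise(X, y, copies=10, sigma=0.1, rng=None):
        """Regularize by replicating each training sample with added Gaussian noise."""
        rng = rng or np.random.default_rng(0)
        X_aug = np.concatenate([X + sigma * rng.standard_normal(X.shape)
                                for _ in range(copies)])
        y_aug = np.tile(y, copies)
        return X_aug, y_aug

    # fit a cubic polynomial by least squares on noisy copies of 20 samples
    rng = np.random.default_rng(1)
    X = np.linspace(0, 1, 20)
    y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(20)
    X_aug, y_aug = augment_with_noise(X, y, copies=50, sigma=0.05, rng=rng)
    coeffs = np.polyfit(X_aug, y_aug, deg=3)  # smoother fit than on the raw data
    print(coeffs)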

9 citations


Cites methods from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...We show the mean square of the generalization error of regularization learning is given as $Dn^{-2s/(2s+p)}$, where n is the number of samples and D is a constant dependent on the complexity of the neural network and the difficulty of the problem....


Journal Article
TL;DR: In this paper, it is shown that for any quantum state $\rho$ on $n$ qubits there exists a local Hamiltonian $H$ on $\operatorname{poly}(n)$ qubits such that any ground state of $H$ can be used to simulate $\rho$ on all quantum circuits of fixed polynomial size, implying that quantum advice is equivalent in power to untrusted quantum advice combined with trusted classical advice.
Abstract: We prove the following surprising result: given any quantum state $\rho$ on $n$ qubits, there exists a local Hamiltonian $H$ on $\operatorname*{poly}(n)$ qubits (e.g., a sum of two-qubit interactions), such that any ground state of $H$ can be used to simulate $\rho$ on all quantum circuits of fixed polynomial size. In terms of complexity classes, this implies that ${BQP/qpoly}\subseteq{QMA/poly}$, which supersedes the previous result of Aaronson that ${BQP/qpoly}\subseteq {PP/poly}$. Indeed, we can exactly characterize quantum advice as equivalent in power to untrusted quantum advice combined with trusted classical advice. Proving our main result requires combining a large number of previous tools---including a result of Alon et al. on learning of real-valued concept classes, a result of Aaronson on the learnability of quantum states, and a result of Aharonov and Regev on “${QMA}_{+}$ super-verifiers”---and also creating some new ones. The main new tool is a so-called majority-certificates lemma, which...

9 citations

References
Book
01 Jan 1979
TL;DR: This quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations