Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space En. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
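The "simple combinatorial parameter" can be made concrete with a brute-force check: a class shatters a point set if it realizes all 2^n labelings of it, and the VC dimension is the size of the largest shattered set. The sketch below is illustrative only (the grid domain, the interval class, and all function names are ours, not the paper's); it verifies that intervals on the line have VC dimension 2.

```python
from itertools import combinations

def dichotomies(points, concepts):
    """Set of labelings of `points` realizable by the concept class."""
    return {tuple(c(x) for x in points) for c in concepts}

def is_shattered(points, concepts):
    """A set is shattered if every one of the 2^n labelings is realizable."""
    return len(dichotomies(points, concepts)) == 2 ** len(points)

def vc_dimension(domain, concepts, max_d=5):
    """Largest subset of `domain` shattered by the class (brute force)."""
    d = 0
    for k in range(1, max_d + 1):
        if any(is_shattered(s, concepts) for s in combinations(domain, k)):
            d = k
    return d

# Concept class: closed intervals [a, b] with endpoints on a small grid.
grid = [0, 1, 2, 3, 4]
intervals = [(lambda x, a=a, b=b: a <= x <= b)
             for a in grid for b in grid if a <= b]

# Two points can be shattered, three cannot (no interval realizes +,-,+).
print(vc_dimension(grid, intervals))  # → 2
```

The same brute-force scheme works for any finite concept class; finiteness of the returned value is exactly the condition the abstract identifies.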


Citations
Journal ArticleDOI
David Krieg1
TL;DR: For any d ∈ ℕ and ε ∈ (0, 1), this paper studies point sets in the d-dimensional unit cube [0, 1]^d that intersect every axis-aligned box of volume greater than ε.
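Whether a given point set has this hitting property can be spot-checked by sampling random boxes; the 1-D sketch below is illustrative only (the function names and sample sets are ours, not the cited paper's) and tests whether a set of points in [0, 1] meets every interval of length greater than ε.

```python
import random

random.seed(0)

def hits_every_box(points, eps, trials=20_000):
    """Monte Carlo check (1-D sketch): sample random intervals in [0, 1]
    of length greater than eps and verify the point set meets each one."""
    for _ in range(trials):
        a, b = random.uniform(0, 1), random.uniform(0, 1)
        lo, hi = min(a, b), max(a, b)
        if hi - lo <= eps:
            continue  # only boxes of volume (length) > eps matter
        if not any(lo < p < hi for p in points):
            return False  # found a large interval with no point in it
    return True

pts = [0.125, 0.375, 0.625, 0.875]   # spacing 1/4: no gap exceeds eps
print(hits_every_box(pts, 0.25))     # True
print(hits_every_box([0.1, 0.9], 0.25))  # False: the gap (0.1, 0.9) is too wide
```

In higher dimensions the same check samples d lower and upper coordinates and skips boxes whose side-length product is at most ε.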

27 citations

Journal ArticleDOI
TL;DR: An approach to modeling the average case behavior of learning algorithms as a function of the number of training examples is presented and an algorithm that combines empirical and explanation-based learning is applied.
Abstract: We present an approach to modeling the average case behavior of learning algorithms. Our motivation is to predict the expected accuracy of learning algorithms as a function of the number of training examples. We apply this framework to a purely empirical learning algorithm, (the one-sided algorithm for pure conjunctive concepts), and to an algorithm that combines empirical and explanation-based learning. The model is used to gain insight into the behavior of these algorithms on a series of problems. Finally, we evaluate how well the average case model performs when the training examples violate the assumptions of the model.
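The one-sided algorithm for pure conjunctive concepts is commonly formulated as follows (a standard Find-S-style sketch under that reading; the paper's exact variant may differ, and all names here are ours): keep every literal consistent with all positive examples, and predict positive only when every surviving literal holds.

```python
def learn_conjunction(positives):
    """One-sided learner for conjunctions over boolean features:
    start with all literals and drop any literal a positive example
    violates. Only positive examples are used, hence 'one-sided'."""
    n = len(positives[0])
    # Literal (i, v) means "feature i has value v"; 2n literals initially.
    literals = {(i, v) for i in range(n) for v in (0, 1)}
    for x in positives:
        literals = {(i, v) for (i, v) in literals if x[i] == v}
    return literals

def predict(literals, x):
    """Positive iff x satisfies every remaining literal."""
    return all(x[i] == v for (i, v) in literals)

# Target concept: x0 = 1 AND x2 = 0.
pos = [(1, 0, 0), (1, 1, 0)]
h = learn_conjunction(pos)
print(sorted(h))                                  # → [(0, 1), (2, 0)]
print(predict(h, (1, 1, 0)), predict(h, (0, 1, 0)))  # → True False
```

The learner is conservative: it can err only on positives it has not yet covered, which is what makes its average-case accuracy curve tractable to model.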

27 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...There are a variety of reasons for the gap between the observed accuracy and the equation (Blumer et al., 1989) on the PAC model....

    [...]

  • ...The curve from (Blumer et al., 1989) is for δ = 0.05 [Figure 1]...

    [...]

Journal ArticleDOI
TL;DR: An algorithm is presented for learning the interval of possible times during which a response to an action can take place, and was implemented on a physical robot for the domains of visual self-recognition and auditory social-partner recognition.
Abstract: By learning a range of possible times over which the effect of an action can take place, a robot can reason more effectively about causal and contingent relationships in the world. An algorithm is presented for learning the interval of possible times during which a response to an action can take place. The algorithm was implemented on a physical robot for the domains of visual self-recognition and auditory social-partner recognition. The environment model assumes that natural environments generate Poisson distributions of random events at all scales. A linear-time algorithm called Poisson threshold learning can generate a threshold T that provides an arbitrarily small rate of background events λ (T), if such a threshold exists for the specified error rate.
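One hedged reading of Poisson threshold learning is a scan over candidate thresholds until the empirical background-event rate falls at or below the requested level. The sketch below is illustrative only (the data, names, and quadratic rescan are ours; the paper's algorithm runs in linear time).

```python
def poisson_threshold(magnitudes, duration, target_rate):
    """Illustrative sketch, not the paper's exact algorithm: return the
    smallest threshold T such that the empirical rate of background events
    with magnitude above T is at most target_rate. `magnitudes` are
    observed background events over `duration` seconds. A single sorted
    pass would make this linear; the rescan here is for clarity."""
    for T in sorted(magnitudes):
        above = sum(1 for m in magnitudes if m > T)
        if above / duration <= target_rate:
            return T
    return None  # no threshold achieves the requested rate

events = [0.2, 0.5, 0.5, 0.9, 1.7, 3.1]   # magnitudes seen over 10 s
print(poisson_threshold(events, duration=10.0, target_rate=0.1))  # → 1.7
```

Under the Poisson assumption, events above the returned threshold arrive rarely enough that a response inside the learned window can be attributed to the action rather than to background noise.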

26 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...Using principles of PAC-learning (Blumer et al. 1989), this number of samples is expected to have an upper bound that is polynomial regardless of the distribution....

    [...]

  • ...Using principles of PAC-learning [4], this number of samples is expected to have an upper bound that is polynomial regardless of the distribution....

    [...]

  • ...The number of samples required to achieve an error rate on the true positives was shown to be much smaller in practice than the upper bound suggested by the VC dimension of the problem (Blumer et al. 1989)....

    [...]

  • ...) This would entail, by a theorem in [4], that (13/ε)(2 ln(1/ε) + ln(1/δ)) trials are sufficient to learn the correct window with probability...

    [...]

  • ...It is straightforward to prove that the Vapnik-Chervonenkis dimension, or VC-dimension, of this problem is 2, and that therefore, the number of examples required to learn this interval with probability at least 1 − δ and error at most ε is at most (13/ε)(2 ln(1/ε) + ln(1/δ)) (Blumer et al. 1989)....

    [...]
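The bound quoted in these citation contexts is easy to evaluate; the helper below (the function name is ours) simply plugs ε and δ into (13/ε)(2 ln(1/ε) + ln(1/δ)) as stated.

```python
from math import ceil, log

def sample_bound(eps, delta):
    """Sufficient sample size quoted from Blumer et al. (1989) for the
    VC-dimension-2 interval problem: (13/eps)(2 ln(1/eps) + ln(1/delta)),
    rounded up to a whole number of examples."""
    return ceil((13 / eps) * (2 * log(1 / eps) + log(1 / delta)))

# e.g. error at most 0.1 with confidence 0.95
print(sample_bound(0.1, 0.05))
```

Note the logarithmic dependence on 1/δ versus the near-linear dependence on 1/ε: halving the allowed error roughly doubles the bound, while halving δ adds only an additive term.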

Proceedings ArticleDOI
24 Jul 1998
TL;DR: This paper devises new holdout and cross-validation estimators for the case where real-valued functions are used as classifiers, and theoretically analyses their accuracy.
Abstract: This paper concerns the use of real-valued functions for binary classification problems. Previous work in this area has concentrated on using as an error estimate the ‘resubstitution’ error (that is, the empirical error of a classifier on the training sample) or its derivatives. However, in practice, cross-validation and related techniques are more popular. Here, we devise new holdout and cross-validation estimators for the case where real-valued functions are used as classifiers, and we analyse theoretically the accuracy of these.
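A minimal version of a holdout estimate for a real-valued classifier (a generic sketch, not the paper's estimator; the threshold at zero and all names are assumptions) thresholds the function's sign and counts disagreements on the held-out sample.

```python
def holdout_error(f, holdout):
    """Holdout estimate for a real-valued function f used as a binary
    classifier via its sign: the fraction of held-out (x, y) pairs,
    y in {-1, +1}, on which sign(f(x)) disagrees with y."""
    wrong = sum(1 for x, y in holdout if (f(x) > 0) != (y > 0))
    return wrong / len(holdout)

f = lambda x: x - 0.5                     # simple threshold rule on [0, 1]
holdout = [(0.1, -1), (0.4, -1), (0.6, 1), (0.7, -1), (0.9, 1)]
print(holdout_error(f, holdout))          # → 0.2 (one mistake in five)
```

The paper's contribution is to sharpen such estimators by exploiting the real values (margins) of f rather than only its sign, which the resubstitution-based analyses it criticizes ignore.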

26 citations

Posted Content
TL;DR: This survey summarizes coreset constructions in a retrospective way that aims to unify and simplify the state of the art for streaming, distributed and dynamic data.
Abstract: In optimization or machine learning problems we are given a set of items, usually points in some metric space, and the goal is to minimize or maximize an objective function over some space of candidate solutions. For example, in clustering problems, the input is a set of points in some metric space, and a common goal is to compute a set of centers in some other space (points, lines) that will minimize the sum of distances to these points. In database queries, we may need to compute such a sum for a specific query set of $k$ centers. However, traditional algorithms cannot handle modern systems that require parallel real-time computations on infinite distributed streams from sensors such as GPS, audio or video that arrive at a cloud, or networks of weaker devices such as smartphones or robots. A core-set is a "small data" summarization of the input "big data", where every possible query has approximately the same answer on both data sets. Generic techniques enable efficient coreset maintenance of streaming, distributed and dynamic data. Traditional algorithms can then be applied on these coresets to maintain the approximated optimal solutions. The challenge is to design coresets with a provable tradeoff between their size and approximation error. This survey summarizes such constructions in a retrospective way that aims to unify and simplify the state of the art.
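The "same answer on both data sets" property can be illustrated with a toy uniform-sampling coreset (real constructions use importance sampling with provable size/error tradeoffs; all names and parameters below are ours): a reweighted subsample whose weighted query cost tracks the full cost.

```python
import random

random.seed(0)

def uniform_coreset(points, m):
    """Toy coreset by uniform sampling with reweighting: each of the m
    sampled points gets weight n/m, so weighted sums approximate sums
    over the full set. Illustrative only; no approximation guarantee."""
    n = len(points)
    sample = [random.choice(points) for _ in range(m)]
    return [(p, n / m) for p in sample]

def cost(points, center):
    """Sum of distances to a candidate center (a typical coreset query)."""
    return sum(abs(p - center) for p in points)

def coreset_cost(coreset, center):
    return sum(w * abs(p - center) for p, w in coreset)

pts = [random.gauss(0.0, 1.0) for _ in range(10_000)]
cs = uniform_coreset(pts, 500)          # 20x smaller summary
for c in (-1.0, 0.0, 2.0):
    full, approx = cost(pts, c), coreset_cost(cs, c)
    print(c, round(approx / full, 3))   # ratios close to 1 for every query
```

Because the weighted cost approximates the true cost for every candidate center, any traditional clustering algorithm run on the 500-point coreset yields a near-optimal solution for the full 10,000 points.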

26 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented in the book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by M. R. Garey and D. S. Johnson, W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations