Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
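To make the combinatorial parameter concrete: a concept class shatters a set of points if it realizes every possible binary labeling of them, and the VC dimension is the size of the largest shattered set. The following Python sketch (an illustration, not code from the paper) checks this by brute force for the class of intervals on a small discrete domain, whose VC dimension is 2.

    from itertools import combinations

    def shatters(concepts, points):
        """True if the class realizes all 2^|points| labelings of `points`."""
        labelings = {tuple(c(x) for x in points) for c in concepts}
        return len(labelings) == 2 ** len(points)

    def vc_dimension(concepts, domain, max_d=5):
        """Largest d <= max_d such that some d-point subset of `domain` is shattered."""
        d = 0
        for k in range(1, max_d + 1):
            if any(shatters(concepts, s) for s in combinations(domain, k)):
                d = k
        return d

    # Concept class: intervals [a, b] over the domain {0, ..., 9}.
    domain = range(10)
    intervals = [(lambda x, a=a, b=b: a <= x <= b)
                 for a in domain for b in domain if a <= b]
    print(vc_dimension(intervals, domain))  # -> 2: no 3 points can be shattered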


Citations
Book ChapterDOI
21 Nov 1998
TL;DR: The focus of this survey is the design of efficient fixed-parameter algorithms rather than a structural theory of parameterized complexity, with emphasis on two exemplary problems: Vertex Cover and MaxSat.
Abstract: Recent years have seen considerable progress in the development of exponential-time algorithms for NP-hard problems in which the base of the exponential term is fairly small. These developments are also tightly related to the theory of fixed-parameter tractability. In this incomplete survey, we explain some basic techniques in the design of efficient fixed-parameter algorithms, discuss deficiencies of parameterized complexity theory, and try to point out some future research challenges. The focus of this paper is on the design of efficient algorithms rather than on a structural theory of parameterized complexity. Moreover, the emphasis is on two exemplary problems: Vertex Cover and MaxSat.
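As a generic illustration of the bounded-search-tree technique central to such fixed-parameter algorithms (a sketch, not the survey's own code): for Vertex Cover, every cover must contain an endpoint of each edge, so branching on the two endpoints of an arbitrary edge gives a search tree of depth at most k and running time O(2^k · |E|).

    def has_vertex_cover(edges, k):
        """Bounded search tree: decide whether a vertex cover of size <= k exists.

        Any cover must contain an endpoint of every edge, so pick one
        edge and branch on its two endpoints; recursion depth <= k.
        """
        if not edges:
            return True          # all edges covered
        if k == 0:
            return False         # edges remain, budget exhausted
        u, v = edges[0]          # an arbitrary uncovered edge
        return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
                has_vertex_cover([e for e in edges if v not in e], k - 1))

    # A 4-cycle has a vertex cover of size 2 (opposite corners) but not of size 1.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(has_vertex_cover(cycle, 2), has_vertex_cover(cycle, 1))  # True False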

24 citations

01 Jan 2003
TL;DR: The idea that "intelligence is the ability of comprehension" is made formal by understanding this last term in a formal way: as explanatory compression.
Abstract: After a brief discussion of the counter-intuitive results of a first approach, "intelligence as the ability of universal compression", we present a formal definition of exception-free description. We then introduce a variant E of algorithmic complexity, which we name intensional complexity, whose only additional property is that it allows no 'exceptions', making this deep notion formal within the theory itself. Having defined this variant and a time-weighted version Et, which we name explanatory complexity, we take up the analogy again and formulate the idea that "intelligence is the ability of comprehension", understanding this last term in a formal way: explanatory compression. Finally, we devise a test based on k-incomprehensible strings and present an effective algorithm for generating them.

24 citations

Posted Content
TL;DR: This dissertation establishes a strong connection between offline convex optimization problems and statistical learning problems, and shows that for a large class of high-dimensional optimization problems, mirror descent (MD) is near optimal even for convex optimization.
Abstract: In this dissertation we study statistical and online learning problems from an optimization viewpoint. The dissertation is divided into two parts. I. We first consider the question of learnability for statistical learning problems in the general learning setting. The question of learnability is well studied and fully characterized for binary classification and for real-valued supervised learning problems using the theory of uniform convergence. However, we show that for the general learning setting, uniform convergence theory fails to characterize learnability. To fill this void we use stability of learning algorithms to fully characterize statistical learnability in the general setting. Next we consider the problem of online learning. Unlike the statistical learning framework, there is a dearth of generic tools that can be used to establish learnability and rates for online learning problems in general. We provide online analogs of classical tools from statistical learning theory, such as Rademacher complexity and covering numbers, and use these tools to fully characterize learnability for online supervised learning problems. II. In the second part, for general classes of convex learning problems, we provide appropriate mirror descent (MD) updates for online and statistical learning of these problems. Further, we show that MD is near optimal for online convex learning and, in most cases, also near optimal for statistical convex learning. We next consider the problem of convex optimization and show that oracle complexity can be lower bounded by the so-called fat-shattering dimension of the associated linear class. Thus we establish a strong connection between offline convex optimization problems and statistical learning problems. We also show that for a large class of high-dimensional optimization problems, MD is in fact near optimal even for convex optimization.
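For intuition, the canonical mirror descent instance behind results of this kind is MD over the probability simplex with the negative-entropy mirror map, better known as exponentiated gradient. The sketch below is a generic illustration under assumed notation (grad_fn, eta, and the toy loss are placeholders), not code from the dissertation.

    import numpy as np

    def exponentiated_gradient(grad_fn, dim, T, eta):
        """Mirror descent on the probability simplex with the
        negative-entropy mirror map (exponentiated gradient).

        grad_fn(t, w) returns a (sub)gradient of the t-th loss at w.
        Returns the average iterate, which enjoys O(sqrt(log(dim)/T))
        regret-based guarantees for bounded convex losses.
        """
        w = np.full(dim, 1.0 / dim)        # uniform starting point
        avg = np.zeros(dim)
        for t in range(T):
            g = grad_fn(t, w)
            w = w * np.exp(-eta * g)       # multiplicative update
            w /= w.sum()                   # re-normalize onto the simplex
            avg += w
        return avg / T

    # Toy usage: minimize a fixed linear loss <c, w> over the simplex.
    c = np.array([0.9, 0.5, 0.1])
    w_hat = exponentiated_gradient(lambda t, w: c, dim=3, T=500, eta=0.1)
    print(w_hat)  # mass concentrates on the smallest coordinate of c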

24 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...2) for all h ∈ H converge uniformly to the population risk ([9, 10])....

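The snippet above refers to uniform convergence: the supremum over h ∈ H of the gap between empirical risk and population risk vanishes as the sample grows. A toy simulation (illustrative only; a finite class of threshold classifiers and a hypothetical noisy-label distribution) makes the effect visible:

    import numpy as np

    rng = np.random.default_rng(0)

    # Finite hypothesis class: thresholds h_t(x) = 1[x >= t] on [0, 1].
    thresholds = np.linspace(0, 1, 21)

    def risks(x, y):
        """Empirical risk of every threshold classifier on sample (x, y)."""
        preds = x[:, None] >= thresholds[None, :]    # n x |H| predictions
        return (preds != y[:, None]).mean(axis=0)

    # Population: x ~ Uniform[0, 1], label 1[x >= 0.5] flipped with prob 0.1.
    def sample(n):
        x = rng.uniform(size=n)
        y = (x >= 0.5) ^ (rng.uniform(size=n) < 0.1)
        return x, y

    x_pop, y_pop = sample(200_000)                   # proxy for population risk
    pop = risks(x_pop, y_pop)
    for n in [100, 1_000, 10_000]:
        x, y = sample(n)
        print(n, np.max(np.abs(risks(x, y) - pop)))  # sup_h |emp - pop| shrinks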

Journal ArticleDOI
TL;DR: An abstract model of exact learning via queries is introduced that can be instantiated to all the query learning models currently in use, while being closer to them than previous unifying attempts.

24 citations

Journal ArticleDOI
TL;DR: A bounded version of the finite injury priority method in recursion theory is developed to study the learnability of unions of rectangles over the domain {0, …, n − 1}^d with only equivalence queries.

24 citations

References
Book
01 Jan 1979
TL;DR: This quarterly column, now in its second edition, provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this book, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations