Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are given.
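The shattering condition behind the VC dimension can be made concrete with a small brute-force sketch. This is illustrative only; the discrete-interval concept class and all function names below are my own choices, not from the paper:

```python
from itertools import combinations

def shatters(concepts, points):
    """True iff the class realizes all 2^|points| labelings of `points`."""
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Brute-force VC dimension over a finite domain (exponential; demo only)."""
    d = 0
    for k in range(1, len(domain) + 1):
        # If no k-point set is shattered, no larger set can be either,
        # since every subset of a shattered set is shattered.
        if any(shatters(concepts, s) for s in combinations(domain, k)):
            d = k
        else:
            break
    return d

# Concept class: discrete intervals [a, b] over {0, ..., 4} (empty when a > b).
domain = list(range(5))
intervals = [frozenset(x for x in domain if a <= x <= b)
             for a in domain for b in domain]
print(vc_dimension(intervals, domain))  # 2: intervals shatter pairs, never triples
```

Intervals can realize all four labelings of any two points but never the alternating labeling of three collinear points, which is why the computed dimension is 2.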


Citations
01 May 1996
TL;DR: This study derives the practical decision tree learner Rank, based on the Findmin protocol of Ehrenfeucht and Haussler, and presents TELA, a general testbed for all inductive learners using attribute representation of data, not only for decision tree learners.
Abstract: Decision tree learning is an important field of machine learning. In this study we examine both formal and practical aspects of decision tree learning. We aim at answering two important needs: the need for better motivated decision tree learners and an environment facilitating experimentation with inductive learning algorithms. As results we obtain new practical tools and useful techniques for decision tree learning. First, we derive the practical decision tree learner Rank based on the Findmin protocol of Ehrenfeucht and Haussler. The motivation for the changes introduced to the method comes from empirical experience, but we prove the correctness of the modifications in the probably approximately correct learning framework. The algorithm is enhanced by extending it to operate in multiclass situations, making it capable of working within the incremental setting, and providing noise tolerance. Together these modifications entail practicability through a formal development process, which constitutes an important technique for decision tree learner design. The other tool that comes out of this work is TELA, a general testbed for all inductive learners using attribute representation of data, not only for decision tree learners. This system guides and assists its user in placing new algorithms at his disposal, operating them in an easy fashion, designing and executing useful tests with the algorithms, and interpreting the outcome of the tests. We present the design rationale, current composition, and future development directions of TELA. Moreover, we reflect on the experiences gathered in the initial usage of the system. The tools that come about are evaluated and validated in empirical tests over many real-world application domains. Several successful inductive algorithms are contrasted with the Rank algorithm in experiments carried out using TELA.
These experiments let us evaluate the success of the new decision tree learner with respect to its established equivalents and validate the utility of the developed testbed. The tests prove successful in both respects: Rank attains the same overall level of prediction accuracy as C4.5, which is generally considered to be one of the best empirical decision tree learners, and TELA eases the execution of the experiments substantially.

12 citations

Proceedings Article
01 Mar 2019
TL;DR: In this paper, the authors proposed a PTAS that computes a $(1+\epsilon)$-approximation to the $k$-mean for lines problem in time $O(n \log n)$ for any constant approximation error $\epsilon \in (0, 1)$ and constant integers $k, d \geq 1$.
Abstract: The input to the \emph{$k$-mean for lines} problem is a set $L$ of $n$ lines in $\mathbb{R}^d$, and the goal is to compute a set of $k$ centers (points) in $\mathbb{R}^d$ that minimizes the sum of squared distances over every line in $L$ and its nearest center. This is a straightforward generalization of the $k$-mean problem where the input is a set of $n$ points instead of lines. We suggest the first PTAS that computes a $(1+\epsilon)$-approximation to this problem in time $O(n \log n)$ for any constant approximation error $\epsilon \in (0, 1)$, and constant integers $k, d \geq 1$. This is by proving that there is always a weighted subset (called coreset) of $dk^{O(k)}\log (n)/\epsilon^2$ lines in $L$ that approximates the sum of squared distances from $L$ to \emph{any} given set of $k$ points. Using traditional merge-and-reduce technique, this coreset implies results for a streaming set (possibly infinite) of lines to $M$ machines in one pass (e.g. cloud) using memory, update time and communication that is near-logarithmic in $n$, as well as deletion of any line but using linear space. These results generalized for other distance functions such as $k$-median (sum of distances) or ignoring farthest $m$ lines from the given centers to handle outliers. Experimental results on 10 machines on Amazon EC2 cloud show that the algorithm performs well in practice. Open source code for all the algorithms and experiments is also provided.
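The objective the abstract optimizes can be stated in a few lines of code. The sketch below is only a cost evaluator for the k-mean-for-lines objective (sum of squared line-to-nearest-center distances); it is not the paper's PTAS or coreset construction, and all function names are my own:

```python
import math

def sq_dist_point_to_line(p, a, d):
    """Squared Euclidean distance from point p to the line {a + t*d : t in R}."""
    norm = math.sqrt(sum(x * x for x in d))
    u = [x / norm for x in d]                    # unit direction of the line
    diff = [pi - ai for pi, ai in zip(p, a)]     # vector from line anchor to p
    t = sum(di * ui for di, ui in zip(diff, u))  # length of projection onto line
    return sum((di - t * ui) ** 2 for di, ui in zip(diff, u))

def k_mean_for_lines_cost(lines, centers):
    """Sum over lines of the squared distance to the nearest of the k centers."""
    return sum(min(sq_dist_point_to_line(c, a, d) for c in centers)
               for (a, d) in lines)

# Two lines in R^2 (the x-axis and the line y = 2) and one center at (0, 1):
lines = [((0.0, 0.0), (1.0, 0.0)), ((0.0, 2.0), (1.0, 0.0))]
print(k_mean_for_lines_cost(lines, [(0.0, 1.0)]))  # 2.0 (distance 1 to each line)
```

A coreset in the paper's sense would be a small weighted subset of `lines` for which this cost function approximates the cost on all of `lines` for every choice of `centers`.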

12 citations

Book ChapterDOI
14 Mar 2002
TL;DR: This work proves upper bounds for combinatorial parameters of finite relational structures related to the complexity of learning a definable set; these bounds imply positive results for the PAC and equivalence query learnability of a definable set over these structures.
Abstract: We prove upper bounds for combinatorial parameters of finite relational structures, related to the complexity of learning a definable set. We show that monadic second order (MSO) formulas with parameters have bounded VC-dimension over structures of bounded clique-width, and first-order formulas with parameters have bounded VC-dimension over structures of bounded local clique-width (this includes planar graphs). We also show that MSO formulas of a fixed size have bounded strong consistency dimension over MSO formulas of a fixed larger size, for colored trees. These bounds imply positive learnability results for the PAC and equivalence query learnability of a definable set over these structures. The proofs are based on bounds for related definability problems for tree automata.

12 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Theorem 1 (Blumer, Ehrenfeucht, Haussler, Warmuth [8], Vapnik, Chervonenkis [40]) Every concept class is PAC-learnable with sample size...


Journal ArticleDOI
TL;DR: In the quantum one-way model, a lower bound is provided on the distributional communication complexity, under product distributions, of a function f, in terms of a well-studied complexity measure of f known as the rectangle bound or the corruption bound.

12 citations

Proceedings Article
15 Jul 2020
TL;DR: In this article, the Littlestone dimension of a composed class H', derived from a class H of boolean functions using an arbitrary aggregation rule, is upper bounded in terms of the Littlestone dimension of H. Improved bounds on the sample complexity of private learning are derived algorithmically by transforming a private learner for the original class H into a private learner for the composed class H'.
Abstract: Let H be a class of boolean functions and consider a composed class H' that is derived from H using some arbitrary aggregation rule (for example, H' may be the class of all 3-wise majority-votes of functions in H). We upper bound the Littlestone dimension of H' in terms of that of H. As a corollary, we derive closure properties for online learning and private PAC learning. The derived bounds on the Littlestone dimension exhibit an undesirable exponential dependence. For private learning, we prove close to optimal bounds that circumvent this suboptimal dependency. The improved bounds on the sample complexity of private learning are derived algorithmically via transforming a private learner for the original class H to a private learner for the composed class H'. Using the same ideas we show that any (proper or improper) private algorithm that learns a class of functions H in the realizable case (i.e., when the examples are labeled by some function in the class) can be transformed to a private algorithm that learns the class H in the agnostic case.
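The two objects in the abstract, a Littlestone dimension and a class composed by an aggregation rule, can both be computed by brute force on a finite domain. The sketch below uses the standard mistake-tree recursion for the Littlestone dimension and 3-wise majority as the aggregation rule; the threshold example class and all names are my own illustration, not the paper's construction:

```python
from itertools import combinations_with_replacement

def ldim(H, X):
    """Brute-force Littlestone dimension of a finite class H of boolean
    functions on domain X, each represented as a tuple of 0/1 values.
    Recursion: Ldim(H) = max over points x that split H of
    1 + min(Ldim(functions labeling x as 0), Ldim(functions labeling x as 1))."""
    if len(H) <= 1:
        return 0
    best = 0
    for x in X:
        H0 = frozenset(h for h in H if h[x] == 0)
        H1 = frozenset(h for h in H if h[x] == 1)
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

def majority3(h1, h2, h3):
    """3-wise majority vote: one example of an aggregation rule."""
    return tuple(int(a + b + c >= 2) for a, b, c in zip(h1, h2, h3))

X = range(3)
thresholds = frozenset(tuple(int(x >= t) for x in X) for t in range(4))
composed = frozenset(majority3(*hs)
                     for hs in combinations_with_replacement(thresholds, 3))

print(ldim(thresholds, X))  # 2
print(ldim(composed, X))    # 2 (a majority of thresholds is again a threshold)
```

For thresholds the composed class coincides with the original, since the majority of three thresholds is the threshold at the median; richer base classes are where the abstract's exponential blow-up can appear.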

12 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations