Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
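To make the combinatorial parameter concrete, here is a minimal Python sketch (not from the paper) that computes the VC dimension of a small finite concept class by brute force: a set of points is shattered if the class realizes all of its labelings, and the VC dimension is the size of the largest shattered set. The threshold and interval classes below are chosen only for illustration.

```python
from itertools import combinations

def shatters(concepts, points):
    """True if the concept class realizes every +/- labeling of `points`."""
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Largest d such that some d-subset of `domain` is shattered.
    Brute force, so only practical for tiny classes and domains."""
    d = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, size)):
            d = size
        else:
            break  # if no set of this size is shattered, no larger set can be
    return d

domain = list(range(10))
# Thresholds ("half lines") over a finite domain: VC dimension 1.
thresholds = [frozenset(x for x in domain if x >= t) for t in range(11)]
# Intervals over the same domain: VC dimension 2.
intervals = [frozenset(range(a, b)) for a in range(11) for b in range(a, 11)]
print(vc_dimension(thresholds, domain), vc_dimension(intervals, domain))  # -> 1 2
```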


Citations
Book
01 Apr 1999
TL;DR: A new analysis of the generalization error of the hypothesis which minimizes the empirical error within a finite hypothesis language is presented, and an information-theoretic approach which does not require the assumption that empirical error rates in distinct cross validation folds are independent estimates is pursued.
Abstract: Machine learning algorithms search a space of possible hypotheses and estimate the error of each hypothesis using a sample. Most often, the goal of classification tasks is to find a hypothesis with a low true (or generalization) misclassification probability (or error rate); however, only the sample (or empirical) error rate can actually be measured and minimized. The true error rate of the returned hypothesis is unknown but can, for instance, be estimated using cross validation, and very general worst-case bounds can be given. This doctoral dissertation addresses a compound of questions on error assessment and the intimately related selection of a “good” hypothesis language, or learning algorithm, for a given problem. In the first part of this thesis, I present a new analysis of the generalization error of the hypothesis which minimizes the empirical error within a finite hypothesis language. I present a solution which characterizes the generalization error of the apparently best hypothesis in terms of the distribution of error rates of hypotheses in the hypothesis language. The distribution of error rates can, for any given problem, be estimated efficiently from the sample. Effectively, this analysis predicts how good the outcome of a learning algorithm would be without the learning algorithm actually having to be invoked. This immediately leads to an efficient algorithm for the selection of a good hypothesis language (or “model”). The analysis predicts (and thus explains) the shape of learning curves with a very high accuracy and thus contributes to a better understanding of the nature of over-fitting. I study the behavior of the model selection algorithm empirically (in particular, in comparison to cross validation) using both artificial problems and a large-scale text categorization problem. In the next step, I study in which situations performing automatic model selection is actually beneficial; in particular, I study Occam algorithms and cross validation. Model selection techniques such as tree pruning, weight decay, or cross validation are employed by virtually all “practical” learners and are generally believed to enhance the performance of learning algorithms. However, I show that this belief is equivalent to an assumption on the distribution of problems which the learning algorithm is exposed to. I specify these distributional assumptions and quantify the benefit of Occam algorithms and cross validation in these situations. When the distributional assumptions fail, cross-validation-based model selection increases the generalization error of the returned hypothesis on average. When several distinct learners are assessed with respect to a particular problem (or one learner is assessed repeatedly with distinct parameter settings), an effect arises which is very similar to the overfitting that occurs during error-minimization processes. The lowest observed error rate is an optimistic estimate of the corresponding generalization error. I quantify this bias. In particular, I study the bias which is imposed by repeated invocations of a learner with distinct parameter settings when n-fold cross validation is used to estimate the error rate. I pursue an information-theoretic approach which does not require the assumption that empirical error rates measured in distinct cross validation folds are independent estimates. I discuss the implications of these results for empirical studies which have been carried out in the past and propose an experimental setting which leads to almost unbiased results. Finally, I address complexity issues of model selection. In model-selection-based learning, the learning algorithm is restricted to a (small) model, chosen by the model selection algorithm. By contrast, in the boosting setting, the hypothesis is allowed to grow dynamically, often until the hypothesis is fitted to the data. By giving new worst-case time bounds for the AdaBoost algorithm, I show that in many cases the restriction to small sets of hypotheses causes the high complexity of learning.
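To see informally why the distribution of error rates matters, here is a small, self-contained Python simulation (not the dissertation's estimator). It assumes a finite class of hypotheses with known true error rates and, for simplicity, treats their errors as independent; it then selects the hypothesis with the lowest empirical error on a sample and compares its empirical error with its true error, illustrating the optimistic bias of the apparently best hypothesis.

```python
import random

def simulate(true_error_rates, sample_size, trials=2000, seed=0):
    """Average empirical and true error of the empirically best hypothesis.
    Errors of different hypotheses are treated as independent for simplicity."""
    rng = random.Random(seed)
    emp_sum = true_sum = 0.0
    for _ in range(trials):
        # Empirical error of each hypothesis: fraction of misclassified examples,
        # drawn as independent Bernoulli(true error rate) outcomes.
        empirical = [
            sum(rng.random() < p for _ in range(sample_size)) / sample_size
            for p in true_error_rates
        ]
        best = min(range(len(true_error_rates)), key=empirical.__getitem__)
        emp_sum += empirical[best]
        true_sum += true_error_rates[best]
    return emp_sum / trials, true_sum / trials

# A class of 50 hypotheses whose true error rates are spread between 0.1 and 0.5.
rates = [0.1 + 0.4 * i / 49 for i in range(50)]
emp, true_err = simulate(rates, sample_size=100)
print(f"empirical error of the apparent best: {emp:.3f}")
print(f"true error of the apparent best:      {true_err:.3f}")  # noticeably larger
```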

47 citations

Journal Article
TL;DR: Lower bounds are given showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in k, and some multi-learners are given that require fewer samples than the basic strategy.
Abstract: We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving $k$ learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving $k$ learning tasks without privacy? In our setting, an individual example consists of a domain element $x$ labeled by $k$ unknown concepts $(c_1,\ldots,c_k)$. The goal of a multi-learner is to output $k$ hypotheses $(h_1,\ldots,h_k)$ that generalize the input examples. Without concern for privacy, the sample complexity needed to simultaneously learn $k$ concepts is essentially the same as needed for learning a single concept. Under differential privacy, the basic strategy of learning each hypothesis independently yields sample complexity that grows polynomially with $k$. For some concept classes, we give multi-learners that require fewer samples than the basic strategy. Unfortunately, however, we also give lower bounds showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in $k$.

47 citations

Proceedings ArticleDOI
11 Jan 2004
TL;DR: In this article, the authors consider a model for monitoring the connectivity of a network subject to node or edge failures, and they show that for any graph G, there is an (e, k)-detection set of size bounded by a polynomial in k and e, independent of the size of G. They also show that detection set bounds can be made considerably stronger when parameterized by these connectivity values.
Abstract: We consider a model for monitoring the connectivity of a network subject to node or edge failures. In particular, we are concerned with detecting (e, k)-failures: events in which an adversary deletes up to k network elements (nodes or edges), after which there are two sets of nodes A and B, each at least an e fraction of the network, that are disconnected from one another. We say that a set D of nodes is an (e, k)-detection set if, for any (e, k)-failure of the network, some two nodes in D are no longer able to communicate; in this way, D "witnesses" any such failure. Recent results show that for any graph G, there is an (e, k)-detection set of size bounded by a polynomial in k and e, independent of the size of G. In this paper, we expose some relationships between bounds on detection sets and the edge-connectivity λ and node-connectivity κ of the underlying graph. Specifically, we show that detection set bounds can be made considerably stronger when parameterized by these connectivity values. We show that for an adversary that can delete κλ edges, there is always a detection set of size O((κ/e) log(1/e)) which can be found by random sampling. Moreover, […]. For node failures, we develop a novel approach for working with the much more complex set of all minimum node-cuts of a graph.
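The last result above suggests a very simple sampling procedure. The Python sketch below is purely illustrative and not the paper's construction: it samples a candidate detection set of roughly (κ/e)·log(1/e) nodes uniformly at random (with a guessed constant factor) and checks whether a given edge failure is witnessed, i.e., whether two sampled nodes end up disconnected.

```python
import math
import random

def sample_detection_set(nodes, kappa, eps, seed=0):
    """Candidate detection set: about (kappa/eps) * log(1/eps) nodes sampled
    uniformly at random. The constant factor (here 1) is a guess, not the paper's."""
    size = min(len(nodes), math.ceil((kappa / eps) * math.log(1.0 / eps)))
    return set(random.Random(seed).sample(list(nodes), size))

def witnesses_failure(adjacency, deleted_edges, detection_set):
    """True if, after deleting `deleted_edges`, some two nodes of the detection
    set can no longer communicate (checked by exploring reachability from one)."""
    removed = {frozenset(e) for e in deleted_edges}
    det = list(detection_set)
    start, others = det[0], set(det[1:])
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in adjacency[u]:
            if frozenset((u, v)) not in removed and v not in seen:
                seen.add(v)
                frontier.append(v)
    return bool(others - seen)

# Toy usage: a 6-cycle; deleting two opposite edges splits it into two halves.
adjacency = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
D = sample_detection_set(adjacency.keys(), kappa=2, eps=1 / 3)
print(D, witnesses_failure(adjacency, [(0, 1), (3, 4)], D))
```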

47 citations

Journal ArticleDOI
TL;DR: The singular value decomposition of this matrix is used to determine the optimal margins of embeddings of the concept classes of singletons and of half intervals in homogeneous Euclidean half spaces and to prove the corresponding best possible upper bounds on the margin.
Abstract: Concept classes can canonically be represented by matrices with entries 1 and −1. We use the singular value decomposition of this matrix to determine the optimal margins of embeddings of the concept classes of singletons and of half intervals in homogeneous Euclidean half spaces. For these concept classes the singular value decomposition can be used to construct optimal embeddings and also to prove the corresponding best possible upper bounds on the margin. We show that the optimal margin for embedding n singletons is n/(3n − 4) and that the optimal margin for half intervals over {1, …, n} is π/(2 ln n) + Θ(1/(ln n)²). For the upper bounds on the margins we generalize a bound by Forster (2001). We also determine the optimal margin of some concept classes defined by circulant matrices up to a small constant factor, and we discuss the concept classes of monomials to point out limitations of our approach.
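As a quick illustration of the objects involved (not a reproduction of the paper's argument), the following Python/NumPy sketch builds the canonical ±1 matrix of the singleton class, assuming the convention that entry (i, j) is +1 exactly when point j lies in concept i, computes its singular values, and evaluates the closed-form optimal margin n/(3n − 4) quoted above.

```python
import numpy as np

def singletons_matrix(n):
    """Canonical +/-1 matrix of the n singleton concepts over {1, ..., n}:
    +1 on the diagonal (point i belongs to concept i), -1 elsewhere."""
    return 2 * np.eye(n) - np.ones((n, n))

for n in (5, 10, 100, 1000):
    M = singletons_matrix(n)
    singular_values = np.linalg.svd(M, compute_uv=False)  # inputs to the SVD-based analysis
    optimal_margin = n / (3 * n - 4)                       # closed form quoted in the abstract
    print(n, float(singular_values[0]), round(optimal_margin, 4))
# The quoted optimal margin tends to 1/3 as n grows.
```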

47 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...A small VC-dimension means that a concept class can be learned with a small sample (Vapnik & Chervonenkis, 1971; Blumer et al., 1989; Kearns & Vazirani, 1994, Theorem 3.3). The success of maximal margin classifiers raises the question which concept classes can be embedded in half spaces with a large margin. Another motivation for studying the margins of embeddings of concept classes is discussed in Forster et al. (2001). There a close connection between margins and the bounded error model of probabilistic communication complexity is shown....

    [...]
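For reference, the quantitative content behind the first sentence of the quotation above is, up to constants, the sample-size bound of Blumer et al. (1989): a concept class of VC dimension d is learnable to error ε with confidence 1 − δ by any algorithm that outputs a consistent hypothesis, given a sample of size

    m = O( (1/ε) · ( d · log(1/ε) + log(1/δ) ) ),

where ε and δ are the usual accuracy and confidence parameters of Valiant's model.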

Journal Article
TL;DR: In this paper, the authors study the theoretical advantages of active learning over passive learning and prove that, in noise-free classifier learning for VC classes, any passive learning algorithm can be transformed into an active learning algorithm with asymptotically strictly superior label complexity for all nontrivial target functions and distributions.
Abstract: We study the theoretical advantages of active learning over passive learning. Specifically, we prove that, in noise-free classifier learning for VC classes, any passive learning algorithm can be transformed into an active learning algorithm with asymptotically strictly superior label complexity for all nontrivial target functions and distributions. We further provide a general characterization of the magnitudes of these improvements in terms of a novel generalization of the disagreement coefficient. We also extend these results to active learning in the presence of label noise, and find that even under broad classes of noise distributions, we can typically guarantee strict improvements over the known results for passive learning.

47 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: A unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations