Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are provided.
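
The shattering definition behind the VC dimension is easy to make concrete. As a rough sketch (axis-parallel rectangles in the plane chosen as the example class; all helper names are ours), the following brute-force check confirms that rectangles shatter four points in diamond position but fail once a fifth point is added at the center (in fact no five points can be shattered, so the class has VC dimension exactly 4):

```python
from itertools import product

def rectangle_realizes(points, labels):
    """Check whether some axis-parallel rectangle contains exactly the
    positively labeled points. The bounding box of the positives is the
    smallest candidate, so the labeling is realizable iff that box
    captures no negative point."""
    pos = [p for p, lab in zip(points, labels) if lab == 1]
    if not pos:
        return True  # an empty rectangle realizes the all-negative labeling
    xlo, xhi = min(x for x, _ in pos), max(x for x, _ in pos)
    ylo, yhi = min(y for _, y in pos), max(y for _, y in pos)
    return all(not (xlo <= x <= xhi and ylo <= y <= yhi)
               for (x, y), lab in zip(points, labels) if lab == 0)

def is_shattered(points):
    """True iff every one of the 2^|points| labelings is realizable."""
    return all(rectangle_realizes(points, labels)
               for labels in product([0, 1], repeat=len(points)))

diamond = [(0, 1), (0, -1), (1, 0), (-1, 0)]
print(is_shattered(diamond))             # True: VC dimension >= 4
print(is_shattered(diamond + [(0, 0)]))  # False: the center cannot be cut out
```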


Citations
Posted Content
TL;DR: A large-scale empirical characterization of generalization error and model-size growth as training sets grow is presented, showing that model size scales sublinearly with data size.
Abstract: Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents---the "steepness" of the learning curve---yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.
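
The power-law claim is straightforward to check on any measured learning curve, since eps(m) = alpha * m^(-beta) becomes a straight line in log-log coordinates. A minimal sketch with synthetic data standing in for real measurements (the constants 2.5 and 0.35 are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic learning curve eps(m) = alpha * m**(-beta) with noise,
# standing in for measured validation errors at several data scales.
m = np.logspace(3, 7, num=20)                    # training-set sizes
eps = 2.5 * m**-0.35 * rng.lognormal(0.0, 0.05, m.size)

# A power law is a line in log-log space: log eps = log alpha - beta * log m.
slope, intercept = np.polyfit(np.log(m), np.log(eps), deg=1)
print(f"beta ~ {-slope:.3f} (true 0.35), alpha ~ {np.exp(intercept):.3f}")
```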

377 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...Early follow-on research tightens the bounds by relating sample complexity to generalization error through the Vapnik-Chervonenkis (VC) dimension of the target concept class (Ehrenfeucht et al. (1989); Blumer et al. (1989); Haussler et al. (1996))....

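Those VC-based bounds make sample complexity computable in closed form. The sketch below uses one commonly quoted form of the sufficient sample size from Blumer et al. (1989); treat the exact constants as an assumption, since published variants differ:

```python
import math

def pac_sample_size(d, eps, delta):
    """Examples sufficient to PAC-learn a class of VC dimension d to error
    eps with probability 1 - delta, per one commonly quoted form of the
    Blumer et al. (1989) bound (constants differ across published variants)."""
    return math.ceil(max((4 / eps) * math.log2(2 / delta),
                         (8 * d / eps) * math.log2(13 / eps)))

# Axis-parallel rectangles in the plane have VC dimension 4.
print(pac_sample_size(d=4, eps=0.05, delta=0.01))  # on the order of 5000
```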

Journal ArticleDOI
TL;DR: It is shown that M(k/n, V) >= (cn/(k+d)) log(n/k) for some constant c, where M(k/n, V) denotes the cardinality of the largest subset W of V in which any two distinct vectors differ on at least k indices.
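
Reading M(k/n, V) as the size of the largest k-separated subset of V (as the statement above suggests), a greedy pass gives a quick lower bound on it for small instances; this sketch is ours, not the paper's method:

```python
from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_packing(V, k):
    """Greedily collect vectors that pairwise differ on >= k indices;
    the result's size lower-bounds the packing number M(k/n, V)."""
    W = []
    for v in V:
        if all(hamming(v, w) >= k for w in W):
            W.append(v)
    return W

# Example class: indicator vectors of all 3-element subsets of {0,...,4}.
n = 5
V = [tuple(1 if i in s else 0 for i in range(n))
     for s in combinations(range(n), 3)]
print(len(greedy_packing(V, k=3)))  # size of one maximal 3-separated packing
```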

371 citations

Journal ArticleDOI
TL;DR: The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
Abstract: This paper presents a theory of learning called nested generalized exemplar (NGE) theory, in which learning is accomplished by storing objects in Euclidean n-space, E^n, as hyperrectangles. The hyperrectangles may be nested inside one another to arbitrary depth. In contrast to generalization processes that replace symbolic formulae by more general formulae, the NGE algorithm modifies hyperrectangles by growing and reshaping them in a well-defined fashion. The axes of these hyperrectangles are defined by the variables measured for each example. Each variable can have any range on the real line; thus the theory is not restricted to symbolic or binary values. This paper describes some advantages and disadvantages of NGE theory, positions it as a form of exemplar-based learning, and compares it to other inductive learning theories. An implementation has been tested in three different domains, for which results are presented: prediction of breast cancer, classification of iris flowers, and prediction of survival times for heart attack patients. The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
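
The geometric core of NGE is the distance from a query point to a hyperrectangle: zero inside the box, otherwise the distance to its nearest face. A stripped-down classification sketch under that reading (our simplification: no nesting, exceptions, or feature weights, which the full theory includes):

```python
import math

class Hyperrectangle:
    def __init__(self, lower, upper, label):
        self.lower, self.upper, self.label = lower, upper, label

    def distance(self, x):
        """0 if x lies inside the box, else Euclidean distance to its surface."""
        return math.sqrt(sum(max(lo - xi, 0.0, xi - hi) ** 2
                             for xi, lo, hi in zip(x, self.lower, self.upper)))

def nge_classify(boxes, x):
    """Predict the label of the nearest hyperrectangle (stored exemplar)."""
    return min(boxes, key=lambda b: b.distance(x)).label

boxes = [Hyperrectangle((0, 0), (1, 1), "A"),
         Hyperrectangle((2, 2), (3, 3), "B")]
print(nge_classify(boxes, (0.5, 0.5)))  # "A": inside the first box
print(nge_classify(boxes, (1.8, 1.9)))  # "B": closer to the second box
```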

370 citations

Journal ArticleDOI
TL;DR: It will be shown that the data-driven approaches should not replace, but rather complement, traditional design techniques based on mathematical models in future wireless communication networks.
Abstract: This paper deals with the use of emerging deep learning techniques in future wireless communication networks. It will be shown that the data-driven approaches should not replace, but rather complement, traditional design techniques based on mathematical models. Extensive motivation is given for why deep learning based on artificial neural networks will be an indispensable tool for the design and operation of future wireless communication networks, and our vision of how artificial neural networks should be integrated into the architecture of future wireless communication networks is presented. A thorough description of deep learning methodologies is provided, starting with the general machine learning paradigm, followed by a more in-depth discussion of deep learning and artificial neural networks, covering the most widely used artificial neural network architectures and their training methods. Deep learning will also be connected to other major learning frameworks, such as reinforcement learning and transfer learning. A thorough survey of the literature on deep learning for wireless communication networks is provided, followed by a detailed description of several novel case studies wherein the use of deep learning proves extremely useful for network design. For each case study, it will be shown how the use of (even approximate) mathematical models can significantly reduce the amount of live data that needs to be acquired/measured to implement the data-driven approaches. Finally, concluding remarks describe what are, in our opinion, the major directions for future research in this field.
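
The recurring case-study pattern, using an approximate analytical model to stand in for most live measurements, can be illustrated with a toy regression. In this hypothetical sketch (a log-distance path-loss model plays the "approximate math"; all functions and constants are invented for illustration), a predictor is fitted on abundant model-generated data and then bias-corrected with just three measured samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def pathloss_model(d):
    """Approximate log-distance path-loss model in dB: the cheap data source."""
    return 40.0 + 30.0 * np.log10(d)

def pathloss_measured(d):
    """Stand-in for the real channel: a different exponent plus shadowing."""
    return 38.0 + 34.0 * np.log10(d) + rng.normal(0.0, 2.0, np.shape(d))

# Fit a linear predictor in log10(d) on abundant model-generated samples ...
d_syn = np.linspace(10.0, 500.0, 1000)
A = np.column_stack([np.log10(d_syn), np.ones_like(d_syn)])
w, *_ = np.linalg.lstsq(A, pathloss_model(d_syn), rcond=None)

# ... then correct only its bias with a handful of live measurements.
d_live = np.array([20.0, 80.0, 300.0])
bias = np.mean(pathloss_measured(d_live) - (w[0] * np.log10(d_live) + w[1]))

d_test = np.array([50.0, 150.0])
print(w[0] * np.log10(d_test) + w[1] + bias)  # prediction from 3 live samples
```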

366 citations

Book
01 Jan 1999
TL;DR: This book treats geometric discrepancy, from low-discrepancy constructions for axis-parallel boxes to combinatorial discrepancy for general set systems, including bounds based on the VC dimension and the L1-discrepancy for halfplanes.
Abstract (table of contents):
1. Introduction
   1.1 Discrepancy for Rectangles and Uniform Distribution
   1.2 Geometric Discrepancy in a More General Setting
   1.3 Combinatorial Discrepancy
   1.4 On Applications and Connections
2. Low-Discrepancy Sets for Axis-Parallel Boxes
   2.1 Sets with Good Worst-Case Discrepancy
   2.2 Sets with Good Average Discrepancy
   2.3 More Constructions: b-ary Nets
   2.4 Scrambled Nets and Their Average Discrepancy
   2.5 More Constructions: Lattice Sets
3. Upper Bounds in the Lebesgue-Measure Setting
   3.1 Circular Discs: a Probabilistic Construction
   3.2 A Surprise for the L1-Discrepancy for Halfplanes
4. Combinatorial Discrepancy
   4.1 Basic Upper Bounds for General Set Systems
   4.2 Matrices, Lower Bounds, and Eigenvalues
   4.3 Linear Discrepancy and More Lower Bounds
   4.4 On Set Systems with Very Small Discrepancy
   4.5 The Partial Coloring Method
   4.6 The Entropy Method
5. VC-Dimension and Discrepancy
   5.1 Discrepancy and Shatter Functions
   5.2 Set Systems of Bounded VC-Dimension
   5.3 Packing Lemma
   5.4 Matchings with Low Crossing Number
   5.5 Primal Shatter Function and Partial Colorings
6. Lower Bounds
   6.1 Axis-Parallel Rectangles: L2-Discrepancy
   6.2 Axis-Parallel Rectangles: the Tight Bound
   6.3 A Reduction: Squares from Rectangles
   6.4 Halfplanes: the Combinatorial Discrepancy
   6.5 Combinatorial Discrepancy for Halfplanes Revisited
   6.6 Halfplanes: the Lebesgue-Measure Discrepancy
   6.7 A Glimpse of Positive Definite Functions
7. More Lower Bounds and the Fourier Transform
   7.1 Arbitrarily Rotated Squares
   7.2 Axis-Parallel Cubes
   7.3 An Excursion to Euclidean Ramsey Theory
A. Tables of Selected Discrepancy Bounds
Bibliography
Index
Hints
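
Combinatorial discrepancy (Chapter 4) has a definition small enough to compute by brute force: the best two-coloring of the ground set, judged by the worst imbalance over the sets. A minimal sketch for tiny set systems:

```python
from itertools import product

def discrepancy(n, sets):
    """disc(S) = min over colorings chi: {0..n-1} -> {-1,+1} of the
    maximum imbalance |sum_{x in S} chi(x)| over all sets S."""
    return min(max(abs(sum(chi[x] for x in S)) for S in sets)
               for chi in product([-1, 1], repeat=n))

# All intervals {i,...,j} of a 6-point ground set: a classic
# low-discrepancy system, and indeed an alternating coloring works.
n = 6
intervals = [range(i, j + 1) for i in range(n) for j in range(i, n)]
print(discrepancy(n, intervals))  # 1
```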

352 citations

References
Book
01 Jan 1979
TL;DR: A quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong, vibration-free hold-down mechanism while avoiding a large pressure drop in the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong, vibration-free hold-down mechanism while avoiding a large pressure drop in the flow of coolant fluid. This apparatus is particularly useful in connection with liquid-cooled reactors such as liquid-metal-cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: This book provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
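
Bayesian decision theory, the first topic listed, reduces in the simplest case to comparing prior-weighted likelihoods. A minimal one-dimensional sketch with two equal-variance Gaussian classes (all parameters invented), where the minimum-error rule is just a threshold at the midpoint of the means:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_decide(x, priors=(0.5, 0.5), means=(0.0, 2.0), sigma=1.0):
    """Minimum-error-rate rule: pick the class maximizing prior * likelihood."""
    scores = [p * gaussian_pdf(x, mu, sigma) for p, mu in zip(priors, means)]
    return max(range(len(scores)), key=scores.__getitem__)

# With equal priors and variances the decision boundary is the midpoint x = 1.
print(bayes_decide(0.9))  # 0
print(bayes_decide(1.1))  # 1
```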

13,647 citations