Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are provided.
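To make the combinatorial parameter concrete, here is a small illustrative sketch (not taken from the paper): it brute-forces whether a concept class realizes every labeling of a finite point set, using 1-D threshold concepts c_t(x) = 1 iff x >= t as the example. The largest set size that can be shattered this way is the Vapnik-Chervonenkis dimension, which is 1 for thresholds.

# Illustrative sketch (not from the paper): brute-force check of whether a
# concept class shatters a finite point set, for 1-D threshold concepts
# c_t(x) = 1 iff x >= t.
from itertools import product

def shatters(points, labelings_of_class):
    """True if every +/- labeling of `points` is realized by some concept."""
    realized = {tuple(lab) for lab in labelings_of_class}
    return all(lab in realized for lab in product([0, 1], repeat=len(points)))

def threshold_labelings(points, thresholds):
    for t in thresholds:
        yield [1 if x >= t else 0 for x in points]

points = [0.3, 0.7]
thresholds = [0.0, 0.5, 1.0]   # below, between, and above the points
print(shatters(points, threshold_labelings(points, thresholds)))  # False: labeling (1, 0) is unrealizable
print(shatters([0.3], threshold_labelings([0.3], [0.0, 1.0])))    # True: any single point is shattered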


Citations
Posted Content
TL;DR: A learning setting is proposed in which unlabeled data is free and the cost of a label depends on its value, which is not known in advance, together with a general competitive approach for learning with outcome-dependent costs.
Abstract: We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels. Our motivation are applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and rectangles), and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We also discuss a general competitive approach for auditing and possible modifications to the framework.
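As a rough illustration of the setting (a toy construction of ours, not the auditing algorithms of the paper), the sketch below learns a 1-D threshold while counting only the negative labels it is charged for; scanning candidate points from largest to smallest means at most one negative label is ever paid, which conveys why auditing complexity can be far below ordinary label complexity.

# Toy sketch (not the authors' algorithm) of the auditing setting for 1-D
# thresholds c_t(x) = 1 iff x >= t: positive labels are free, and we count
# only the negative labels we are charged for.
def audit_threshold(pool, oracle):
    """Scan the pool from largest to smallest; the first negative label
    pins down the threshold, so only one negative label is ever paid for."""
    pool = sorted(pool, reverse=True)
    negatives_paid = 0
    last_positive = None
    for x in pool:
        if oracle(x):            # positive labels cost nothing in this setting
            last_positive = x
        else:                    # the one negative label we pay for
            negatives_paid += 1
            return (x, last_positive), negatives_paid
    return (None, last_positive), negatives_paid  # every point was positive

true_t = 0.42
oracle = lambda x: x >= true_t
interval, cost = audit_threshold([i / 100 for i in range(100)], oracle)
print(interval, cost)   # threshold lies in (0.41, 0.42]; exactly one negative label paid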

10 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...This hypothesis class, first introduced in Blumer et al. (1989), has been studied extensively in different regimes (Kearns, 1998; Long and Tan, 1998), including active learning (Hanneke, 2007b)....


Proceedings ArticleDOI
15 Aug 1991
TL;DR: Two model perceptrons, with weights that are constrained to be discrete, that exhibit sudden learning are discussed, and a general classification of generalization curves in models of realizable rules is proposed.
Abstract: Learning from examples in feedforward neural networks is studied using equilibrium statistical mechanics. Such an analysis is valid for stochastic learning algorithms that lead to a Gibbs distribution in the network weight space. Two simple approximations to the exact theory are presented: the high temperature limit and the annealed approximation. Within these approximations, we study models of perceptron learning of realizable target rules. In each model, the target rule is perfectly realizable because it is given by another perceptron of identical architecture. We focus on the generalization curve, i.e. the average generalization error as a function of the number of examples. For continuously varying weights, learning is known to be gradual, with generalization curves that asymptotically obey inverse power laws. Here we discuss two model perceptrons, with weights that are constrained to be discrete, that exhibit sudden learning. For a linear output, there is a first-order transition occurring at low temperatures, from a state of poor generalization to a state of good generalization. Beyond the transition, the generalization error decays exponentially to zero. For a boolean output, the first-order transition is to perfect generalization at all temperatures. Monte Carlo simulations confirm that these approximate analytical results are quantitatively accurate at high temperatures and qualitatively correct at low temperatures. For unrealizable rules, however, the annealed approximation breaks down in general. Finally, we propose a general classification of generalization curves in models of realizable rules.
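The following minimal Monte Carlo sketch (ours, not the authors' code) conveys the kind of simulation described: Ising +/-1 weights, a teacher perceptron of identical architecture, and Metropolis dynamics at inverse temperature beta with the training error as the energy. The network size, temperature, and step counts are arbitrary placeholder choices.

import math, random

# Minimal Gibbs-learning sketch: a discrete-weight student perceptron is
# sampled by Metropolis dynamics on the training error of a teacher of
# identical architecture; generalization error is measured on fresh inputs.
N, P, beta, steps = 20, 40, 4.0, 5000
random.seed(0)
teacher = [random.choice([-1, 1]) for _ in range(N)]
examples = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
sign = lambda z: 1 if z >= 0 else -1
label = lambda w, x: sign(sum(wi * xi for wi, xi in zip(w, x)))

def train_errors(w):
    return sum(label(w, x) != label(teacher, x) for x in examples)

student = [random.choice([-1, 1]) for _ in range(N)]
E = train_errors(student)
for _ in range(steps):
    i = random.randrange(N)
    student[i] *= -1                    # propose a single spin flip
    E_new = train_errors(student)
    if E_new <= E or random.random() < math.exp(-beta * (E_new - E)):
        E = E_new                       # accept the move
    else:
        student[i] *= -1                # reject: undo the flip

# generalization error = probability of disagreeing with the teacher on a fresh input
fresh = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(2000)]
gen_err = sum(label(student, x) != label(teacher, x) for x in fresh) / len(fresh)
print(E, gen_err)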

10 citations

Journal ArticleDOI
TL;DR: In this paper, a physics-guided regularisation procedure is presented that enhances the generalisation ability of a neural network (PGNN) by adding monotonic loss constraints to the objective function, based on specialist prior knowledge of the problem domain.
Abstract: Machine learning offers the potential to enable probabilistic-based approaches to engineering design and risk mitigation. Application of such approaches in the field of blast protection engineering would allow for holistic and efficient strategies to protect people and structures subjected to the effects of an explosion. To achieve this, fast-running engineering models that provide accurate predictions of blast loading are required. This paper presents a novel application of a physics-guided regularisation procedure that enhances the generalisation ability of a neural network (PGNN) by implementing monotonic loss constraints to the objective function due to specialist prior knowledge of the problem domain. The PGNN is developed for prediction of specific impulse loading distributions on a rigid target following close-in detonation of a spherical mass of high explosive. The results are compared to those from a traditional neural network (NN) architecture and stress-tested through various data holdout approaches to evaluate its generalisation ability. In total the results show five statistically significant performance premiums, with four of these being achieved by the PGNN. This indicates that the proposed methodology can be used to improve the accuracy and physical consistency of machine learning approaches for blast load prediction.
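A schematic sketch of the regularisation idea follows (not the authors' implementation; the network shape, penalty weight, monotone feature index, and the use of an input-gradient penalty are all placeholder assumptions): a standard regression loss is augmented with a term that penalises violations of an assumed monotonic trend, here that the predicted impulse decreases with stand-off distance.

import torch
import torch.nn as nn

# Schematic PGNN-style loss (placeholder assumptions, not the authors' code):
# a data-fit term plus a penalty on positive input gradients along the feature
# that prior knowledge says the prediction should decrease with.
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
mse = nn.MSELoss()
lam, distance_idx = 0.1, 0   # penalty weight and monotone feature index (assumed)

def pgnn_loss(x, y):
    x = x.clone().requires_grad_(True)
    pred = net(x)
    data_loss = mse(pred, y)
    # per-sample gradient of the prediction with respect to the inputs
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    # penalize positive slope along the feature assumed to be monotonically decreasing
    monotonicity_penalty = torch.relu(grads[:, distance_idx]).mean()
    return data_loss + lam * monotonicity_penalty

x = torch.rand(32, 3)
y = torch.rand(32, 1)
loss = pgnn_loss(x, y)
loss.backward()   # gradients flow through both the data term and the physics term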

10 citations

01 Jan 1994
TL;DR: It is shown that any statistical query algorithm can be simulated in the PAC model with malicious errors in such a way that the resultant PAC algorithm has a roughly optimal tolerable malicious error rate and sample complexity.
Abstract: The statistical query learning model can be viewed as a tool for creating (or demonstrating the existence of) noise-tolerant learning algorithms in the PAC model. The complexity of a statistical query algorithm, in conjunction with the complexity of simulating SQ algorithms in the PAC model with noise, determines the complexity of the noise-tolerant PAC algorithms produced. Although roughly optimal upper bounds have been shown for the complexity of statistical query learning, the corresponding noise-tolerant PAC algorithms are not optimal due to inefficient simulations. In this paper we provide both improved simulations and a new variant of the statistical query model in order to overcome these inefficiencies. We improve the time complexity of the classification noise simulation of statistical query algorithms. Our new simulation has a roughly optimal dependence on the noise rate. We also derive a simpler proof that statistical queries can be simulated in the presence of classification noise. This proof makes fewer assumptions on the queries themselves and therefore allows one to simulate more general types of queries. We also define a new variant of the statistical query model based on relative error, and we show that this variant is more natural and strictly more powerful than the standard additive error model. We demonstrate efficient PAC simulations for algorithms in this new model and give general upper bounds on both learning with relative error statistical queries and PAC simulation. We show that any statistical query algorithm can be simulated in the PAC model with malicious errors in such a way that the resultant PAC algorithm has a roughly optimal tolerable malicious error rate and sample complexity. Finally, we generalize the types of queries allowed in the statistical query model. We discuss the advantages of allowing these generalized queries and show that our results on improved simulations also hold for these queries.
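A small worked example of the basic classification-noise simulation (ours, assuming the noise rate eta is known) is shown below: for the statistical query P[h(x) != label], the noisy estimate satisfies observed = eta + (1 - 2*eta) * true, so the noise can be inverted exactly.

import random

# Worked example (not from the paper): estimate the error of a hypothesis h
# from labels flipped with known probability eta, then invert the noise:
#     observed = eta + (1 - 2*eta) * true_error
#     true_error = (observed - eta) / (1 - 2*eta)
random.seed(1)
eta, n = 0.2, 200000
target = lambda x: x >= 0.5      # true concept
h = lambda x: x >= 0.6           # hypothesis whose error we query (true error = 0.1)
xs = [random.random() for _ in range(n)]
noisy = [target(x) != (random.random() < eta) for x in xs]  # label flipped w.p. eta
observed = sum(h(x) != y for x, y in zip(xs, noisy)) / n
corrected = (observed - eta) / (1 - 2 * eta)
print(round(observed, 3), round(corrected, 3))  # corrected is close to 0.1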

10 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...However, the ε-dependence of the general bound on sample complexity for PAC learning is Θ̃(1/ε) [6, 8]....


  • ...In fact, the PAC learning algorithms obtained by simulating SQ algorithms in the absence of noise are inefficient when compared to the tight bounds known for noise-free PAC learning [6, 8]....


Posted Content
16 Oct 2020
TL;DR: A new framework for reasoning about generalization in deep learning is proposed, coupling the Real World of empirical-loss optimization to an Ideal World of population-loss optimization, and empirical evidence is given that the gap between the two worlds can be small in realistic deep learning settings, in particular supervised image classification.
Abstract: We propose a new framework for reasoning about generalization in deep learning. The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error into: (1) the Ideal World test error plus (2) the gap between the two worlds. If the gap (2) is universally small, this reduces the problem of generalization in offline learning to the problem of optimization in online learning. We then give empirical evidence that this gap between worlds can be small in realistic deep learning settings, in particular supervised image classification. For example, CNNs generalize better than MLPs on image distributions in the Real World, but this is "because" they optimize faster on the population loss in the Ideal World. This suggests our framework is a useful tool for understanding generalization in deep learning, and lays a foundation for future research in the area.
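The toy sketch below (our construction, not the authors' experiments) illustrates the coupling: the same SGD recipe is run once on a fixed training sample (Real World, empirical loss) and once with a fresh sample drawn at every step (Ideal World, population loss), and the two resulting test errors are compared; the model and data distribution are arbitrary placeholders.

import math, random

# Toy Real World vs. Ideal World comparison: logistic regression trained by
# SGD on a fixed sample versus on fresh samples at every step.
random.seed(0)

def sample():
    x = random.uniform(-1, 1)
    return x, 1 if x > 0.1 else 0      # the "population" distribution

def sgd(next_example, steps=3000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = next_example()
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x          # gradient of the logistic loss
        b -= lr * (p - y)
    return w, b

train = [sample() for _ in range(100)]           # Real World: one fixed training set
real_model = sgd(lambda: random.choice(train))
ideal_model = sgd(sample)                        # Ideal World: fresh data every step

test = [sample() for _ in range(5000)]
def test_error(model):
    w, b = model
    return sum((1 if w * x + b > 0 else 0) != y for x, y in test) / len(test)

print(test_error(real_model), test_error(ideal_model))  # their difference is the gap between worlds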

10 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...framework of generalization decomposes the test error of a model f_t as TestError(f_t) = TrainError(f_t) + [TestError(f_t) − TrainError(f_t)], where the bracketed term is the generalization gap, and studies each part separately (e.g. Blumer et al. (1989); Shalev-Shwartz and Ben-David (2014); Vapnik and Chervonenkis (1971))....


References
Book
01 Jan 1979
TL;DR: This quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations