Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on the distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are provided.
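As a concrete illustration of the combinatorial parameter in question, the sketch below (an illustrative example, not code from the paper; the function names are ours) brute-forces shattering for two simple concept classes on the real line, recovering VC dimension 1 for threshold concepts and 2 for intervals.

```python
from itertools import combinations, product

def shatters(points, concepts):
    """Return True if `concepts` realizes every possible labeling of `points`."""
    wanted = set(product([False, True], repeat=len(points)))
    realized = {tuple(c(x) for x in points) for c in concepts}
    return wanted <= realized

def vc_dimension_estimate(domain, concepts, max_d=5):
    """Largest d such that some d-subset of `domain` is shattered (brute force)."""
    best = 0
    for d in range(1, max_d + 1):
        if any(shatters(subset, concepts) for subset in combinations(domain, d)):
            best = d
    return best

domain = [0, 1, 2, 3, 4]

# Threshold concepts c_t(x) = [x >= t]: VC dimension 1.
thresholds = [lambda x, t=t: x >= t for t in range(-1, 6)]
print(vc_dimension_estimate(domain, thresholds))   # -> 1

# Interval concepts c_{a,b}(x) = [a <= x <= b]: VC dimension 2.
intervals = [lambda x, a=a, b=b: a <= x <= b
             for a in range(-1, 6) for b in range(a, 6)]
print(vc_dimension_estimate(domain, intervals))    # -> 2
```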


Citations
Journal ArticleDOI
TL;DR: In this paper, it was shown that quantum states, despite being exponentially long vectors, are still "reasonable" in a learning theory sense, and that one can predict the outcomes of most measurements drawn from an arbitrary probability distribution using only a number of sample measurements that grows linearly with n.
Abstract: Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n. This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.

151 citations

Journal ArticleDOI
TL;DR: In this article, the complexity of computing and learning for multiplicative neural networks is investigated and upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudodimension for various types of networks with multiplicative units are derived.
Abstract: In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks that contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudodimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudodimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with superlinear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds. Multiplication plays an important role in both neural modeling of biological behavior and computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.
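A minimal sketch of the unit types the abstract refers to, following the standard summing, product-unit, and sigma-pi formulations (the code and its names are our own illustration, not taken from the paper):

```python
import numpy as np

def summing_unit(x, w, b=0.0):
    """Standard unit: sigmoid applied to a weighted sum of the inputs."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def product_unit(x, w):
    """Product unit: inputs interact multiplicatively, x_1^{w_1} * ... * x_n^{w_n}."""
    return float(np.prod(np.power(x, w)))

def sigma_pi_unit(x, monomials, coeffs):
    """Higher-order (sigma-pi) unit: weighted sum of products over subsets of inputs."""
    return sum(c * np.prod([x[i] for i in idx]) for idx, c in zip(monomials, coeffs))

x = np.array([2.0, 0.5, 3.0])
w = np.array([1.0, -2.0, 0.5])
print(summing_unit(x, w))                              # weighted-sum unit
print(product_unit(x, w))                              # 2^1 * 0.5^-2 * 3^0.5
print(sigma_pi_unit(x, [(0,), (1, 2)], [0.3, -0.7]))   # 0.3*x0 - 0.7*x1*x2
```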

151 citations

Journal Article
TL;DR: This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense and has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.
Abstract: Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n. This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.

149 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...[12], which is stated in terms of a parameter called the VC dimension....


  • ...[12] showed that even if C is infinite, one can upper-bound the number of data points needed for learning in terms of a combinatorial parameter of C called the VC (Vapnik-Chervonenkis) dimension....


Journal ArticleDOI
TL;DR: Quasi-Monte Carlo (QMC), as presented in this paper, is an alternative to Monte Carlo in which random points are replaced with low-discrepancy sequences; the advantage is that QMC estimates usually converge faster than their Monte Carlo counterparts.
Abstract: So far, the algorithms we have discussed rely on Monte Carlo, that is, on averages of random variables. QMC (quasi-Monte Carlo) is an alternative to Monte Carlo where random points are replaced with low-discrepancy sequences. The advantage is that QMC estimates usually converge faster than their Monte Carlo counterparts.
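A minimal sketch of the comparison described above, assuming SciPy's scipy.stats.qmc module is available (the integrand and sample sizes are our own choices, not from the paper):

```python
import numpy as np
from scipy.stats import qmc  # requires SciPy >= 1.7

def f(u):
    """Test integrand on the unit square; exact integral is 0.25."""
    return u[:, 0] * u[:, 1]

rng = np.random.default_rng(0)
n = 2 ** 12

# Plain Monte Carlo: i.i.d. uniform points.
mc_points = rng.random((n, 2))
mc_estimate = f(mc_points).mean()

# Quasi-Monte Carlo: a scrambled Sobol' low-discrepancy sequence.
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
qmc_points = sobol.random_base2(m=12)      # 2^12 points
qmc_estimate = f(qmc_points).mean()

print(abs(mc_estimate - 0.25), abs(qmc_estimate - 0.25))
# The QMC error is typically much smaller for the same number of points.
```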

149 citations

Journal Article
TL;DR: A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated, and an underlying theory of learning p-concepts is developed in detail.
Abstract: A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. It is required that learning algorithms be both efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. Many efficient algorithms for learning natural classes of p-concepts are given, and an underlying theory of learning p-concepts is developed in detail.
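A small illustration of the p-concept model itself (the particular target function is a hypothetical stand-in, not from the paper): each instance is mapped to a probability, and labels are drawn from the corresponding Bernoulli distribution, so the same instance can receive different labels on different draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_concept(x):
    """A probabilistic concept: maps each instance to the probability of a positive label.

    Here a simple logistic curve stands in for, e.g., 'chance of rain given humidity x'.
    """
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

# Draw a labeled sample; labels are Bernoulli(p_concept(x)) rather than deterministic.
xs = rng.random(10)
labels = rng.random(10) < p_concept(xs)
print(list(zip(np.round(xs, 2), labels)))
```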

148 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations