Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
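As a concrete illustration (ours, not the paper's): the Vapnik-Chervonenkis dimension of a concept class is the size of the largest point set the class shatters, i.e., on which it realizes every possible labeling. The Python sketch below checks shattering by brute force for the class of closed intervals [a, b] on the real line; the candidate-endpoint enumeration is an illustrative simplification.

```python
def interval_labelings(points):
    """All labelings of `points` achievable by concepts of the form [a, b]."""
    labelings = set()
    # Using the points themselves, plus one value below and one above,
    # as candidate endpoints suffices for closed intervals.
    cuts = sorted(points)
    candidates = [cuts[0] - 1] + cuts + [cuts[-1] + 1]
    for a in candidates:
        for b in candidates:
            labelings.add(tuple(a <= x <= b for x in points))
    return labelings

def shattered(points):
    """True iff intervals realize all 2^|points| labelings of `points`."""
    return len(interval_labelings(points)) == 2 ** len(points)

# Intervals shatter any two distinct points, but no three: the labeling
# (+, -, +) on x1 < x2 < x3 is impossible for a single interval.
print(shattered([0.0, 1.0]))        # True  -> VC dimension >= 2
print(shattered([0.0, 1.0, 2.0]))   # False -> VC dimension <  3
```

Since two points are shattered but no three are, intervals have VC dimension exactly 2, so by the paper's criterion this class is distribution-free learnable.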


Citations
Journal ArticleDOI
TL;DR: Positive PAC-learning results are presented for the nonmonotonic inductive logic programming setting, and first-order range-restricted clausal theories consisting of clauses with up to k literals, each of size at most j, are shown to be PAC-learnable with one-sided error from positive examples only.

164 citations

Proceedings ArticleDOI
01 Dec 1989
TL;DR: An analysis of a conversion that improves the performance of on-line learning algorithms in a batch setting, using a version of Chernoff bounds applied to supermartingales, shows that for some target classes the converted algorithm is asymptotically optimal.
Abstract: We contrast on-line and batch settings for concept learning, and describe an on-line learning model in which no probabilistic assumptions are made. We briefly mention some of our recent results pertaining to on-line learning algorithms developed using this model. We then turn to the main topic, which is an analysis of a conversion to improve the performance of on-line learning algorithms in a batch setting. For the batch setting we use the PAC-learning model. The conversion is straightforward, consisting of running the given on-line algorithm, collecting the hypotheses it uses for making predictions, and then choosing the hypothesis among them that does the best in a subsequent hypothesis testing phase. We have developed an analysis, using a version of Chernoff bounds applied to supermartingales, that shows that for some target classes the converted algorithm will be asymptotically optimal.
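A minimal sketch of the conversion described above, assuming hypothetical interfaces `online_update(h, x, y)` and callable hypotheses (the specific on-line learner is not given here): run the on-line algorithm over the training sample, collect every hypothesis it uses for prediction, then keep the one with the smallest error in the hypothesis-testing phase.

```python
def online_to_batch(online_update, init_hypothesis, train, test):
    """Convert an on-line learner to a batch learner.

    online_update(h, x, y) -> new hypothesis after seeing example (x, y)
    init_hypothesis        -> starting hypothesis, a callable x -> prediction
    train, test            -> lists of (x, y) pairs
    """
    # Phase 1: run the on-line algorithm, collecting every hypothesis
    # it uses for making predictions.
    h = init_hypothesis
    hypotheses = [h]
    for x, y in train:
        h = online_update(h, x, y)
        hypotheses.append(h)

    # Phase 2: hypothesis testing -- return the collected hypothesis with
    # the smallest empirical error on the held-out test sample.
    def test_error(h):
        return sum(h(x) != y for x, y in test) / len(test)

    return min(hypotheses, key=test_error)
```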

164 citations

Journal ArticleDOI
TL;DR: In this article, a general notion of universal consistency of nonparametric estimators is introduced that applies to regression estimation, conditional median estimation, curve fitting, pattern recognition, and learning concepts.
Abstract: A general notion of universal consistency of nonparametric estimators is introduced that applies to regression estimation, conditional median estimation, curve fitting, pattern recognition, and learning concepts. General methods for proving consistency of estimators based on minimizing the empirical error are shown. In particular, distribution-free almost sure consistency of neural network estimates and generalized linear estimators is established.
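Concretely, "minimizing the empirical error" over a finite class can be sketched in a few lines; the threshold classifiers and toy sample below are illustrative assumptions, not the estimators analyzed in the article.

```python
def empirical_error(h, sample):
    """Fraction of labeled examples (x, y) that hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm(hypothesis_class, sample):
    """Empirical risk minimization: pick the hypothesis with least error."""
    return min(hypothesis_class, key=lambda h: empirical_error(h, sample))

# Example: threshold classifiers x -> [x >= t] on a toy sample.
sample = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
thresholds = [i / 10 for i in range(11)]
threshold_class = [(lambda x, t=t: int(x >= t)) for t in thresholds]
best = erm(threshold_class, sample)
print(empirical_error(best, sample))  # 0.0, e.g. for t = 0.5
```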

162 citations

Journal ArticleDOI
08 Nov 1998
TL;DR: Upper and lower bounds on the VC dimension and pseudodimension of feedforward neural networks composed of piecewise polynomial activation functions are computed, and it is shown that if the number of layers is fixed, then the VC dimension and pseudodimension grow as W log W, where W is the number of parameters in the network.
Abstract: We compute upper and lower bounds on the VC dimension and pseudodimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudodimension grow as W log W, where W is the number of parameters in the network. This stands in contrast to the case where the number of layers is unbounded, in which case the VC dimension and pseudodimension grow as W^2. We combine our results with recently established approximation error rates and determine error bounds for the problem of regression estimation by piecewise polynomial networks with unbounded weights.
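For intuition only (the layer sizes below are illustrative, not the paper's): W counts all weights and biases of a fully connected network, and the result says that at fixed depth the VC dimension scales like W log W rather than W^2.

```python
import math

def num_parameters(layer_sizes):
    """W = total number of weights and biases in a fully connected net."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

W = num_parameters([10, 20, 20, 1])  # input dim 10, two hidden layers of 20
print(W)                # 661 parameters
print(W * math.log(W))  # ~ W log W growth (fixed number of layers)
print(W ** 2)           # ~ W^2 growth (unbounded number of layers)
```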

160 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: It is shown that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP), and that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible.
Abstract: We investigate the problem of learning concepts by presenting labeled and randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We want to analyze how fast the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP). If the weights and the threshold of the neuron have a fixed constant bound on their coding length, the situation is even worse: there is in general no polynomial-time training method that bounds the resulting prediction error of the neuron by k · opt for a fixed constant k (unless RP = NP). Other variants of learning more complex concepts than halfspaces by single neurons are also investigated. We show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
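The "method of linear programming" mentioned above can be made concrete: finding a halfspace consistent with a labeled sample is a linear feasibility problem. A minimal sketch using scipy follows; the margin-1 normalization and the name `fit_halfspace` are our assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def fit_halfspace(X, y):
    """Find (w, b) with y_i * (w . x_i + b) >= 1 for all i, via an LP.

    X: (m, n) array of points; y: length-m array of +/-1 labels.
    Returns (w, b), or None if the sample is not linearly separable.
    """
    m, n = X.shape
    # Variables z = [w_1..w_n, b].  The constraint y_i (w.x_i + b) >= 1
    # becomes -y_i * (x_i, 1) . z <= -1 in linprog's A_ub @ z <= b_ub form.
    A_ub = -y[:, None] * np.hstack([X, np.ones((m, 1))])
    b_ub = -np.ones(m)
    res = linprog(c=np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))  # pure feasibility LP
    if not res.success:
        return None
    return res.x[:n], res.x[n]

# Toy sample separable by x1 + x2 >= 1:
X = np.array([[0.0, 0.0], [0.2, 0.3], [1.0, 1.0], [0.8, 0.9]])
y = np.array([-1, -1, 1, 1])
print(fit_halfspace(X, y))
```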

158 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations