Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
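
To make the paper's central definition concrete: a class of concepts shatters a finite point set if it realizes every possible binary labeling of that set, and the Vapnik-Chervonenkis dimension is the size of the largest shattered set. The following Python sketch is illustrative only (it is not from the paper; the interval class and all function names are ours) and confirms by brute force that closed intervals on the real line have VC dimension 2.

```python
from itertools import combinations

def shatters(points, concepts):
    """True if the concept class realizes every one of the 2^|points|
    possible binary labelings of `points`."""
    realized = {tuple(c(p) for p in points) for c in concepts}
    return len(realized) == 2 ** len(points)

def largest_shattered_subset(sample, concepts):
    """Brute-force the size of the largest shattered subset of `sample`
    (a lower bound on the VC dimension of the full class)."""
    best = 0
    for d in range(1, len(sample) + 1):
        if any(shatters(subset, concepts)
               for subset in combinations(sample, d)):
            best = d
    return best

# Concept class: closed intervals [a, b] on the real line.
# Every 2-point set is shattered, but no 3-point set is: an interval
# cannot contain the two outer points while excluding the middle one.
sample = [1.0, 2.0, 3.0, 4.0]
intervals = [(a, b) for a in sample for b in sample if a <= b]
concepts = [lambda x, a=a, b=b: a <= x <= b for (a, b) in intervals]

print(largest_shattered_subset(sample, concepts))  # -> 2
```

Restricting interval endpoints to the sample points loses nothing here: any labeling an arbitrary interval induces on the sample is also induced by an interval whose endpoints lie in the sample.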


Citations
Journal ArticleDOI
LiMin Fu1
TL;DR: This paper addresses a yet-to-be-answered question: Why can expert networks generalize more effectively from a finite number of training instances than multilayered perceptrons?
Abstract: A major development in knowledge-based neural networks is the integration of symbolic expert rule-based knowledge into neural networks, resulting in so-called rule-based neural (or connectionist) networks. An expert network here refers to a particular construct in which the uncertainty management model of symbolic expert systems is mapped into the activation function of the neural network. This paper addresses a yet-to-be-answered question: Why can expert networks generalize more effectively from a finite number of training instances than multilayered perceptrons? It formally shows that expert networks reduce generalization dimensionality and require relatively small sample sizes for correct generalization.
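
The sample-size claim can be read against the headline bound of the cited paper (Blumer et al., 1989). As a sketch, with d the VC dimension, ε the accuracy parameter, and δ the confidence parameter, a learner that outputs any hypothesis consistent with

m ≥ max( (4/ε) log₂(2/δ), (8d/ε) log₂(13/ε) )

examples is probably approximately correct. Anything that lowers the effective dimension d therefore lowers the sufficient sample size linearly, which is the mechanism this paper formalizes for expert networks.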

16 citations

Journal ArticleDOI
TL;DR: A polynomial time algorithm is given that PAC learns any nonoverlapping perceptron network using examples and membership queries, and is able to identify both the architecture and the weight values necessary to represent the function to be learned.
Abstract: We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm is able to identify both the architecture and the weight values necessary to represent the function to be learned. Our results shed some light on the effect of the overlap on the complexity of learning in neural networks.
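
For intuition about the hypothesis class itself (this is a sketch of the representation, not of the authors' learning algorithm, and all names are ours): a nonoverlapping perceptron network is a tree of threshold gates in which every input variable and every gate feeds exactly one parent.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Input:
    index: int            # which coordinate of x this leaf reads

@dataclass
class Gate:
    children: List["Node"]
    weights: List[float]  # one weight per child: its single outgoing weight
    threshold: float

Node = Union[Input, Gate]

def evaluate(node: Node, x: List[float]) -> int:
    """Evaluate the network bottom-up; every gate outputs 0 or 1."""
    if isinstance(node, Input):
        return int(x[node.index] >= 0.5)  # binary inputs assumed
    total = sum(w * evaluate(c, x)
                for w, c in zip(node.weights, node.children))
    return int(total >= node.threshold)

# (x0 AND x1) OR x2, written as two threshold gates:
net = Gate(
    children=[Gate(children=[Input(0), Input(1)],
                   weights=[1.0, 1.0], threshold=2.0),  # AND gate
              Input(2)],
    weights=[1.0, 1.0], threshold=1.0,                  # OR gate
)
print(evaluate(net, [1, 1, 0]))  # -> 1
print(evaluate(net, [0, 0, 0]))  # -> 0
```

The single outgoing weight per node is the "nonoverlapping" restriction whose effect on learning complexity the result above investigates.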

16 citations


Cites methods from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...We adopt Valiant's formalization of this intuitive notion into what is known as the Probably Approximately Correct (PAC), or Distribution Free, model of learning (Valiant, 1984; Blumer et al., 1989)....

    [...]

Posted Content
TL;DR: It is argued that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path, and a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component.
Abstract: We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example). A mixture of learnable processes need not be learnable itself, and certainly its generalization error need not decay at the same rate. In this paper, we argue that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path. This definition not only matches what a realistic learner might demand, but also allows us to sidestep several otherwise grave problems in learning from dependent data. In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. We also provide a characterization of mixtures of absolutely regular ($\beta$-mixing) processes, of independent probability-theoretic interest.
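
For the IID special case mentioned above, "learnable" reduces to the classical uniform convergence property of VC theory. In our notation (a simplified reading; the paper's actual definition conditions on the mixture component rather than assuming independence), with P̂_n the empirical measure of the first n observations drawn i.i.d. from P:

for every concept class C with VC(C) < ∞:  sup_{c ∈ C} |P̂_n(c) − P(c)| → 0 in probability as n → ∞.

The paper's contribution is to ask what plays the role of P when the observations form a mixture of dependent processes, and to answer: the law of the mixture component actually generating the sample path.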

16 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...PROOF: Since H is PAC-learnable, it must necessarily have a finite VC-dimension [24]....

    [...]

01 Jan 1994
TL;DR: It is shown that one can efficiently learn a weight which has an error rate less than ε with probability more than 1-δ, such that the number of pairs in the qualitative distance information is polynomially bounded in the dimension n and the inverses of ε and δ, and the running time is polynomially bounded in the number of pairs.
Abstract: This paper presents a mathematical analysis of learning the weights in a similarity function. Although there are many theoretical analyses of case-based reasoning systems (Aha et al. 1991; Albert & Aha 1991; Langley & Iba 1993; Jantke & Lange 1993), none has yet theoretically analyzed methods of producing a proper similarity function in accordance with a tendency of cases, although many such methods have been proposed and empirically analyzed (Stanfill & Waltz 1986; Cardie 1993; Aha 1989; Callan et al. 1991). In this paper, as a first step, we provide a PAC learning framework for weights with qualitative distance information, which represents how similar one case is to another. We give a mathematical analysis of learning weights from this information. In this setting, we show that one can efficiently learn a weight which has an error rate less than ε with probability more than 1-δ, such that the number of pairs in the qualitative distance information is polynomially bounded in the dimension n and the inverses of ε and δ, and the running time is polynomially bounded in the number of pairs.

16 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...According to the result in (Blumer et al. 1989) for learning half-spaces separated by a hyperplane, there exists a learning algorithm which satisfies the following conditions for every distribution P on [0, ∞)^n and every ε and δ in the range (0, 1): ....

    [...]

  • ...Since the VC dimension of this problem is n + 1, according to Theorem 2.1 in (Blumer et al. 1989), the number of required points N is at most max( (4/ε) log₂(2/δ), (8(n+1)/ε) log₂(13/ε) ), and any algorithm which produces consistent values of w and d with the following constraints: for every x ∈ X⁺, w·x < d, …...

    [...]
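
As a numeric check of the Theorem 2.1 bound quoted above (a sketch; the function name and the parameter choices are ours):

```python
import math

def blumer_sample_bound(vc_dim: int, eps: float, delta: float) -> int:
    """Sample size sufficient for PAC learning by any consistent
    algorithm, per Theorem 2.1 of Blumer et al. (1989):
    m >= max((4/eps) log2(2/delta), (8*d/eps) log2(13/eps))."""
    m1 = (4 / eps) * math.log2(2 / delta)
    m2 = (8 * vc_dim / eps) * math.log2(13 / eps)
    return math.ceil(max(m1, m2))

# Half-spaces in R^n have VC dimension n + 1, as in the quote above.
n = 10
print(blumer_sample_bound(vc_dim=n + 1, eps=0.1, delta=0.05))  # -> 6180
```

For fixed ε and δ the bound grows linearly in the dimension n, which is why the number of required pairs in the paper above stays polynomial in n.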

Book ChapterDOI
18 Jul 2006
TL;DR: The Lixto project is an ongoing research effort in the area of Web data extraction that aims to develop a logic-based extraction language and a tool to visually define extraction programs from sample Web pages.
Abstract: The Lixto project is an ongoing research effort in the area of Web data extraction. Whereas the project originally started out with the idea to develop a logic-based extraction language and a tool to visually define extraction programs from sample Web pages, the scope of the project has been extended over time. Today, new issues such as employing learning algorithms for the definition of extraction programs, automatically extracting data from Web pages featuring a table-centric visual appearance, and extracting from alternative document formats such as PDF are being investigated.

16 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...Document pre-processing: In general, the first step in understanding a document is to segment it into blocks that can be said to be atomic, i.e. to represent one distinct logical entity in the document’s structure....

    [...]

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in the book "Computers and Intractability: A Guide to the Theory of NP-Completeness", W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness", W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: This book provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations