Learnability and the Vapnik-Chervonenkis dimension
Citations
[...]
Cites background from "Learnability and the Vapnik-Chervon..."
...The problem of learning a linear threshold function over {0, 1}^n can be formulated as a linear programming problem and thus can be solved in poly(n) time, both in the PAC model of learning from random examples and in the model of exact learning from equivalence queries [10, 31]....
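The LP reduction the excerpt mentions can be made concrete. Below is a minimal sketch (not code from the cited paper or from the citing one): given labeled examples over {0, 1}^n, it searches for weights w and a threshold theta consistent with the labels, rescaling the strict inequalities to a unit margin, which loses no generality on a finite sample. The helper name fit_threshold and the use of SciPy's linprog are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def fit_threshold(X, y):
    # Hypothetical helper (not from the paper): find w, theta with
    # w.x >= theta + 1 on positive examples and w.x <= theta - 1 on
    # negative ones, as an LP feasibility problem (zero objective).
    n = X.shape[1]
    rows = []
    for xi, yi in zip(X, y):
        sign = -1.0 if yi == 1 else 1.0           # flip the positives' constraint
        rows.append(np.append(sign * xi, -sign))  # variable vector is [w, theta]
    A_ub = np.array(rows)
    b_ub = -np.ones(len(rows))                    # encodes the unit margin
    res = linprog(c=np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))
    if not res.success:
        return None  # no consistent linear threshold function exists
    return res.x[:n], res.x[n]

# Example: AND on two Boolean inputs is a linear threshold function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, theta = fit_threshold(X, y)
print((X @ w > theta).astype(int))  # -> [0 0 0 1]

Since linear programs are solvable in polynomial time, this gives the poly(n) consistency check the excerpt refers to; in the PAC model, running it on a sufficiently large random sample yields the learner.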
[...]
Cites background from "Learnability and the Vapnik-Chervon..."
...Theorem 3.1 Let H be a hypothesis space of {0, 1}-valued functions defined on an input space X....
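The theorem statement is truncated here. For orientation only, the sample-complexity bound the cited paper is best known for (stated from the standard literature, not reconstructed from this excerpt) says that if H has Vapnik-Chervonenkis dimension d < ∞, then

\[
m(\epsilon, \delta) \;=\; O\!\left(\frac{1}{\epsilon}\left(d \log \frac{1}{\epsilon} + \log \frac{1}{\delta}\right)\right)
\]

random examples suffice for any algorithm that outputs a hypothesis in H consistent with the sample to achieve error at most \(\epsilon\) with probability at least \(1 - \delta\); conversely, learnability fails when d is infinite, so H is PAC-learnable if and only if its VC dimension is finite.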
[...]
...An application to the theory of artificial neural networks is then given....
[...]
...possibility of classification errors during training, and to allow for ill-defined or stochastic concepts, the theory has been extended [4] to discuss not the learnability of functions from X to {0, 1} with an underlying distribution µ, but instead the learnability of probability distributions...
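A compact way to write the extension the excerpt describes (this is the standard decision-theoretic formulation; the reference [4] itself is not resolvable from the excerpt): instead of a target function and a distribution µ on X alone, one posits a probability distribution P on X × {0, 1} and scores a hypothesis h by its error

\[
\operatorname{er}_P(h) \;=\; P\{(x, y) : h(x) \neq y\},
\]

asking the learner to output, with probability at least \(1 - \delta\) over the sample, some \(h \in H\) with \(\operatorname{er}_P(h) \le \inf_{h' \in H} \operatorname{er}_P(h') + \epsilon\). Stochastic or ill-defined concepts are then simply distributions P whose conditional label probabilities lie strictly between 0 and 1.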
[...]
...The number of input nodes will be denoted s + 1 and the number of output nodes t....
[...]
Additional excerpts
...Its main interest for data mining is related to one of the basic models of machine learning, the probably approximately correct (PAC) learning paradigm, as was shown in [16]....
[...]
Cites methods from "Learnability and the Vapnik-Chervon..."
...chosen examples ⟨x, g(x)⟩ for g by a learning algorithm that uses functions from F as hypotheses (see Haussler, 1992; Vapnik & Chervonenkis, 1971; Blumer, Ehrenfeucht, Haussler, & Warmuth, 1989; Maass, 1995)....
[...]