Learnability and the Vapnik-Chervonenkis dimension
Citations
Cites background or methods from "Learnability and the Vapnik-Chervonenkis dimension"
...Our characterization is in terms of shatter coefficients—a notion that generalizes VC-dimension; we show that this characterization sometimes yields tighter lower bounds....
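The excerpt above appeals to shatter coefficients (the growth function) as a generalization of VC-dimension. As an illustration only, not code from any cited paper, here is a brute-force sketch that counts the labelings a small concept class induces on a point set; the 1-D threshold class and the sample points are invented for the example:

```python
def shatter_coefficient(points, concepts):
    """Number of distinct labelings the concept class induces on `points`.

    This is the shatter coefficient (growth function) Pi_C(m) restricted to
    one m-point sample: C shatters the points iff it equals 2**m.
    """
    labelings = {tuple(c(x) for x in points) for c in concepts}
    return len(labelings)

# Toy class: 1-D thresholds h_t(x) = [x >= t].
points = [0.0, 1.0, 2.0, 3.0]
concepts = [lambda x, t=t: x >= t for t in [-0.5, 0.5, 1.5, 2.5, 3.5]]
print(shatter_coefficient(points, concepts))  # -> 5, far below 2**4 = 16
```

The count 5 (= m + 1 for m = 4 points) reflects the linear growth function of a VC-dimension-1 class, which is why shatter coefficients can give tighter bounds than the dimension alone.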
[...]
...I2δ ( f )) in terms of the VC-dimension of the function matrix....
[...]
...Using the famous result of Blumer et al. [6] connecting PAC learning and VC-dimension, Kremer et al. also proved an upper bound on the one-way rectangular communication complexity in terms of VC-dimension.... (Proceedings of the 17th IEEE Annual Conference on Computational Complexity (CCC 02), © 2002 IEEE.)
[...]
...(1) A tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients; (2) An equivalence of multi-party one-way and simultaneous communication models for product distributions; (3) A suite of lower bounds for specific functions in the simultaneous communication model, most notably an optimal lower bound for the multi-party set disjointness problem of Alon et al. [2] and for the generalized addressing function problem of Babai et al. [3] for arbitrary groups....
[...]
...Using the connection between PAC learning and VC-dimension [6], Kremer et al....
[...]
Cites background from "Learnability and the Vapnik-Chervonenkis dimension"
...We show that the complexity of these techniques is related to well-studied notions in learning theory such as the Vapnik–Chervonenkis dimension [12] and the teaching dimension [20]....
[...]
...[12] have shown that the VC dimension of a concept class characterizes the number of examples required for learning any concept in the class under the distribution-free or probably approximately correct (PAC) model of Valiant [64]....
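The characterization cited here rests on whether some finite point set is shattered. As a sketch under assumed toy data (discrete intervals, chosen only for illustration), the VC dimension of a small finite class can be found by brute force:

```python
from itertools import combinations

def vc_dimension(domain, concepts):
    """Largest d such that some size-d subset of `domain` is shattered.

    A set of points is shattered when every one of the 2**d possible
    labelings is realized by some concept. Brute force, so only feasible
    for tiny classes; real sample-complexity bounds use d analytically.
    """
    best = 0
    for size in range(1, len(domain) + 1):
        if any(
            len({tuple(x in c for x in pts) for c in concepts}) == 2 ** size
            for pts in combinations(domain, size)
        ):
            best = size
    return best

# Toy class: all discrete intervals [a, b] over {0, ..., 4}.
domain = range(5)
intervals = [set(range(a, b + 1)) for a in domain for b in domain if a <= b]
print(vc_dimension(domain, intervals))  # -> 2
```

The result 2 matches the textbook fact that intervals have VC dimension 2: any pair of points is shattered, but the labeling (1, 0, 1) of three ordered points cannot be realized by a single interval.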
[...]
Cites background or methods from "Learnability and the Vapnik-Chervonenkis dimension"
...From Lemma 3, we have |F_opt| = Ω(√(s / log s)). Thus, from Lemma 4 there is an ( ; j; k)-Occam net finder for R_s. □ By Theorem 3.2.4 in [Blumer et al. 1989], we may generalize Theorem 7 and prove the following: Theorem 8 Let C be a concept class with finite VC dimension d, let C_s = { ⋃_{i=1}^{s} c_i | c_i ∈ C, …...
[...]
...The rest of the proof follows immediately from Theorem 3.2.4 in [Blumer et al. 1989]....
[...]
...Proof: The proof is a simple application of Theorem 3.2.1 in [Blumer et al. 1989]....
[...]
...The next problem deals with determining if a neural net is optimal....
[...]
...Proof: There is a well-known simple greedy algorithm for R_s, which is optimal within a relative factor of ln |S| + 1 (see, for example, [Blumer et al. 1989])....
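The well-known greedy algorithm alluded to here is greedy set cover, whose cover size is within a factor of ln |S| + 1 of optimal. A minimal sketch on a made-up toy instance (the universe and subsets are assumptions for illustration, not from the cited proof):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    still-uncovered elements. The resulting cover is within a factor of
    ln|S| + 1 of the optimal cover size, where S is the universe.
    """
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Hypothetical toy instance.
universe = set(range(6))
subsets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 4}]
print(len(greedy_set_cover(universe, subsets)))  # -> 2
```

On this instance greedy happens to find an optimal cover ({0, 1, 2} then {3, 4, 5}); in general only the logarithmic approximation guarantee holds.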
[...]