Learnability and the Vapnik-Chervonenkis dimension
Citations
Cites background from "Learnability and the Vapnik-Chervon..."
...The upper bound on the VC dimension gives non-asymptotic, distribution-independent bounds both on the convergence rate and on the sample complexity, in the style of computational learning theory [3]....
[...]
Cites background or methods from "Learnability and the Vapnik-Chervon..."
...[9] that says if the algorithm efficiently finds a hypothesis from C that is consistent with U, where U is drawn i.i.d. from a fixed distribution D and is of size at least...
[...]
...We define the level-2 intervals by taking the union of consecutive triples of the level-3 intervals: {[0, 1), [2, 3), [4, 5)}, {[6, 7), [8, 9), [10, 11)}, and {[12, 13), [14, 15), [16, 17)}....
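The grouping described in this excerpt can be sketched directly; this is a minimal illustration assuming the level-3 intervals are the nine unit intervals [2i, 2i + 1) listed above (the names `level3` and `level2` are illustrative, not from the cited paper):

```python
# Level-3 intervals: the nine unit intervals [0,1), [2,3), ..., [16,17).
level3 = [(2 * i, 2 * i + 1) for i in range(9)]

# Level-2 intervals: unions of consecutive triples of level-3 intervals.
level2 = [level3[j:j + 3] for j in range(0, len(level3), 3)]

# level2 is [[(0,1), (2,3), (4,5)],
#            [(6,7), (8,9), (10,11)],
#            [(12,13), (14,15), (16,17)]]
```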
[...]
...[9] show that, if m is sufficiently large, then a consistent hypothesis h will...
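The guarantee these excerpts invoke is the standard consistent-learner sample-complexity bound. A minimal sketch, assuming a finite hypothesis class of size |C| (the cited result [9] works with the VC dimension rather than ln |C|, so this is the simpler finite-class form of the same idea; the function name is illustrative):

```python
import math

def sample_complexity(num_hypotheses, eps, delta):
    """Samples sufficient for a hypothesis consistent with the data to have
    error at most eps with probability at least 1 - delta, for a finite
    class of size num_hypotheses: m >= (ln|C| + ln(1/delta)) / eps."""
    return math.ceil((math.log(num_hypotheses) + math.log(1 / delta)) / eps)

# e.g. |C| = 1000 hypotheses, eps = 0.1, delta = 0.05:
m = sample_complexity(num_hypotheses=1000, eps=0.1, delta=0.05)  # m == 100
```

With m at least this large, any hypothesis from C consistent with the i.i.d. sample is probably approximately correct, which is the sense in which "a consistent hypothesis h will" generalize in the excerpt above.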
[...]
...It is known to be NP-hard [9, 10] to find a smallest set of rectangles to cover a set of points in R^d, even for d = 2....