Learnability and the Vapnik-Chervonenkis dimension
Citations
14 citations
Cites methods from "Learnability and the Vapnik-Chervonenkis dimension"
...On the one hand, compared with a neural network classifier, which is based on the empirical risk minimization learning algorithm [20] and easily falls into a local optimum, SVM is grounded in the statistical theory of the VC dimension [21] and structural risk minimization [22]....
[...]
Cites background from "Learnability and the Vapnik-Chervonenkis dimension"
...THEOREM 3.2 (VAPNIK [9], BLUMER et al. [7]). Let F be a well-behaved class of functions f : R^n → {+1, -1} and let 0 < γ ≤ 1, δ < 1 and 0 < ε. Let T_k be a sequence of k examples drawn independently according to the distribution P' on R^n × {+1, -1}, and let P be the probability that there is some function f ∈ F which disagrees with at most a fraction (1 - γ)ε of the examples in T_k but has error greater...
[...]
...We first require an environment X, which in this paper is always R^n. A concept class C and an hypothesis space H are both defined as sets of subsets of X. Any element of C or H must be a Borel set, and we may also require that C and H are well-behaved as defined in [7]; these requirements are discussed in full in [5], where in particular we show that all spaces used in this paper are well-behaved where necessary....
[...]
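The excerpt above defines a concept class as a set of subsets of the environment X; the VC dimension of such a class is the size of the largest finite subset of X on which the class realizes every possible labeling ("shatters" it). A minimal sketch of that shattering check, using closed intervals on the real line as a hypothetical concept class (a standard textbook example with VC dimension 2, not drawn from the paper's own text):

```python
def dichotomies(points, concepts):
    """Set of labelings of `points` realized by the concept class.

    Each concept is represented as a membership predicate on X;
    the dichotomy it induces is the tuple of booleans over `points`.
    """
    return {tuple(c(x) for x in points) for c in concepts}

def shatters(points, concepts):
    """True iff the class realizes all 2^|points| labelings of `points`."""
    return len(dichotomies(points, concepts)) == 2 ** len(points)

# Hypothetical concept class: closed intervals [a, b] with endpoints
# drawn from a small grid (enough intervals to witness shattering).
endpoints = [-0.5, 0.5, 1.5, 2.5]
intervals = [
    (lambda x, a=a, b=b: a <= x <= b)
    for a in endpoints for b in endpoints if a <= b
]

sample = [0.0, 1.0, 2.0]
print(shatters(sample[:2], intervals))  # two points: shattered -> True
print(shatters(sample, intervals))      # three points: no interval keeps the
                                        # outer two and excludes the middle -> False
```

The failing labeling for three collinear points is (+1, -1, +1): an interval containing 0.0 and 2.0 must also contain 1.0, which is why intervals have VC dimension exactly 2.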