Training ν-support vector regression: theory and algorithms
Citations
40,826 citations
Cites background from "Training ν-support vector regressi..."
...and regression can be seen in (Chang and Lin, 2001, Section 4) and (Chang and Lin, 2002),...
[...]
1,467 citations
Cites background from "Training ν-support vector regressi..."
...Most algorithms for SVR [19, 11, 20, 21] require that the training samples be delivered in a single batch....
[...]
644 citations
Additional excerpts
...The KKT conditions for OL-SVR can be rewritten as
$$\frac{\partial L_D}{\partial \alpha_i} = \sum_{j=1}^{l} Q_{ij}(\alpha_j - \alpha_j^*) + \varepsilon - y_i + f - \delta_i + u_i = 0,$$
$$\frac{\partial L_D}{\partial \alpha_i^*} = -\sum_{j=1}^{l} Q_{ij}(\alpha_j - \alpha_j^*) + \varepsilon + y_i - f - \delta_i^* + u_i^* = 0, \tag{3}$$
$$\delta_i^{(*)} \ge 0, \qquad \delta_i^{(*)}\,\alpha_i^{(*)} = 0, \qquad u_i^{(*)} \ge 0, \qquad u_i^{(*)}\,(\alpha_i^{(*)} - C) = 0,$$
where f in (3) is equal to b in (1) at optimality (Chang & Lin, 2002)....
[...]
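The stationarity and complementarity conditions in (3) can be checked numerically: at optimality the dual gradient must vanish on free variables, be nonnegative at the lower bound, and nonpositive at the upper bound. A minimal sketch (numpy; the function name, tolerance, and box bound C are illustrative assumptions, not the cited paper's code) measuring the worst KKT violation for a box-constrained SVR dual:

```python
import numpy as np

def kkt_violation(Q, y, alpha, alpha_star, eps, f, C, tol=1e-8):
    """Worst KKT violation for an SVR dual (illustrative helper, not LIBSVM's code).

    Stationarity:  g_i  =  sum_j Q_ij (a_j - a*_j) + eps - y_i + f
                   g*_i = -sum_j Q_ij (a_j - a*_j) + eps + y_i - f
    At optimality: grad = 0 on 0 < a < C, grad >= 0 at a = 0, grad <= 0 at a = C
    (and symmetrically for the starred variables).
    """
    d = Q @ (alpha - alpha_star)
    g = d + eps - y + f
    g_star = -d + eps + y - f
    worst = 0.0
    for a, grad in ((alpha, g), (alpha_star, g_star)):
        lower = a <= tol            # only grad >= 0 required here
        upper = a >= C - tol        # only grad <= 0 required here
        free = ~lower & ~upper      # grad must vanish here
        worst = max(worst,
                    np.max(np.maximum(0.0, -grad[lower]), initial=0.0),
                    np.max(np.maximum(0.0, grad[upper]), initial=0.0),
                    np.max(np.abs(grad[free]), initial=0.0))
    return worst
```

A return value of zero (up to tolerance) means the conditions in (3) hold, which is the usual stopping criterion of decomposition methods.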
461 citations
418 citations
Cites methods from "Training ν-support vector regressi..."
...The original version of this theorem is proved in Chang and Lin (2002)....
[...]
...Theorem 1 (Chang & Lin, 2002)....
[...]
...For example, Chang and Lin (2001, 2002) gave an SMO algorithm and implementation for training ϵ-SVR....
[...]
...Chang and Lin (2002) proposed a well-known SMO-type algorithm designed specifically for batch ν-SVR training, which is implemented in C++ as part of the LIBSVM software package (Chang & Lin, 2001)....
[...]
References
40,826 citations
26,531 citations
"Training v -support vector regressi..." refers background in this paper
...This formulation is different from the original ε-SVR (Vapnik, 1998):...
[...]
...This formulation is different from the original ε-SVR (Vapnik, 1998):
$$(P_\varepsilon) \qquad \min \quad \frac{1}{2} w^T w + \frac{C}{l} \sum_{i=1}^{l} (\xi_i + \xi_i^*)$$
$$(w^T \phi(x_i) + b) - y_i \le \varepsilon + \xi_i, \tag{1.2}$$
$$y_i - (w^T \phi(x_i) + b) \le \varepsilon + \xi_i^*,$$
$$\xi_i,\ \xi_i^* \ge 0, \quad i = 1, \ldots, l....$$
[...]
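For fixed (w, b), the optimal slacks in this primal reduce to the ε-insensitive loss max(0, |w^Tφ(x_i) + b − y_i| − ε), so the objective has a closed form. A minimal sketch (numpy), assuming φ is the identity map; the function name is illustrative, not from the paper:

```python
import numpy as np

def eps_svr_primal(w, b, X, y, C, eps):
    """Primal objective of eps-SVR with phi = identity (a sketch, not LIBSVM's code).

    For fixed (w, b) the minimal feasible slacks satisfy
    xi_i + xi*_i = max(0, |w.x_i + b - y_i| - eps),
    the eps-insensitive loss, so (P_eps) collapses to a closed form.
    """
    l = len(y)
    residual = X @ w + b - y
    loss = np.maximum(0.0, np.abs(residual) - eps)   # eps-insensitive loss
    return 0.5 * (w @ w) + (C / l) * loss.sum()
```

Note the C/l scaling on the loss term, which is what distinguishes this formulation from Vapnik's original C-scaled one.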
5,506 citations
Additional excerpts
...As it is difficult to select an appropriate ε, Schölkopf et al. (1999) introduced a new parameter ν that lets one control the number of support vectors and training errors....
[...]
5,350 citations
"Training v -support vector regressi..." refers methods in this paper
...The decomposition method was first proposed for SVM classification (Osuna, Freund, & Girosi, 1997; Joachims, 1998; Platt, 1998)....
[...]
...Following the idea of sequential minimal optimization (SMO) by Platt (1998), we use only two elements as the working set in each iteration....
[...]
5,019 citations