Sparse Bayesian Learning and the Relevance Vector Machine
Citations
11,357 citations
The following excerpts cite background or methods from this paper:
...relevance vectors. Empirically it is often observed that the number of relevance vectors is smaller than the number of support vectors on the same problem [Tipping, 2001]....
[...]
...In this section we present a different kind of analysis within the probably approximately correct (PAC) framework due to Valiant [1984]. Seeger [2002; 2003] has presented a PAC-Bayesian analysis of generalization in Gaussian process classifiers and we get to this in a number of stages; we first present an introduction to the PAC framework (section 7....
[...]
...Although usually not presented as such, the relevance vector machine (RVM) introduced by Tipping [2001] is actually a special case of a Gaussian process....
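To make that connection concrete, here is a minimal sketch (not from the paper; the basis functions and hyperparameter values are illustrative) of the Gaussian process implied by an RVM weight prior w_i ~ N(0, 1/α_i): with f(x) = Σ_i w_i φ_i(x), the induced covariance is k(x, x′) = Σ_i φ_i(x) φ_i(x′) / α_i.

```python
import numpy as np

def rvm_covariance(X1, X2, basis, alphas):
    """Covariance of the GP induced by an RVM prior w_i ~ N(0, 1/alpha_i):
    with f(x) = sum_i w_i * phi_i(x), k(x, x') = sum_i phi_i(x) phi_i(x') / alpha_i."""
    Phi1 = np.array([[phi(x) for phi in basis] for x in X1])  # design matrix, (n1, m)
    Phi2 = np.array([[phi(x) for phi in basis] for x in X2])  # design matrix, (n2, m)
    return Phi1 @ np.diag(1.0 / alphas) @ Phi2.T

# Illustrative basis: Gaussian bumps at fixed centres (a hypothetical choice).
centres = np.linspace(-3.0, 3.0, 5)
basis = [lambda x, c=c: np.exp(-0.5 * (x - c) ** 2) for c in centres]
alphas = np.ones(len(basis))

X = np.linspace(-3.0, 3.0, 4)
K = rvm_covariance(X, X, basis, alphas)
```

Because the basis expansion is finite, the resulting GP is degenerate: its covariance matrix has rank at most the number of basis functions.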
[...]
...The original RVM algorithm [Tipping, 2001] was not able to exploit the sparsity very effectively during model fitting as it was initialized with all of the α_i set to finite values, meaning that all of the basis functions contributed to the model....
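The sparsity mechanism this excerpt alludes to can be sketched with a toy run of Tipping's re-estimation loop, α_i ← γ_i/μ_i² with γ_i = 1 − α_i Σ_ii. The synthetic data, the fixed noise precision, and the pruning threshold below are all illustrative choices; Tipping's algorithm also re-estimates the noise precision, which is held fixed here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = Phi w + noise; only two of the ten weights are truly active.
n, m = 50, 10
Phi = rng.standard_normal((n, m))
w_true = np.zeros(m)
w_true[[2, 7]] = [2.0, -3.0]
y = Phi @ w_true + 0.1 * rng.standard_normal(n)

beta = 4.0          # assumed noise precision, held fixed (Tipping also re-estimates it)
alpha = np.ones(m)  # all alphas start finite, so every basis function contributes

for _ in range(100):
    # Posterior over weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y
    # Tipping's re-estimation: alpha_i <- gamma_i / mu_i^2, gamma_i = 1 - alpha_i * Sigma_ii.
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (mu ** 2 + 1e-12)

relevant = np.where(alpha < 1e3)[0]  # basis functions surviving pruning (threshold is arbitrary)
print(relevant)
```

As the loop runs, the α_i of superfluous basis functions diverge and their weights are pinned to zero, leaving only the relevant ones; the later "fast" algorithm of Tipping and Faul instead starts from an empty model and adds basis functions incrementally.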
[...]
References
The following excerpts show where this paper refers to background or methods in its references:
...The function sinc(x) = sin(x)/x has been a popular choice to illustrate support vector regression (Vapnik et al., 1997; Vapnik, 1998), where in place of the classification margin, the ε-insensitive region is introduced, a `tube' of ±ε around the function within which errors are not penalised....
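The ε-insensitive loss described here can be written down directly; a minimal sketch (the noise level, ε, and evaluation grid are arbitrary illustrative choices):

```python
import numpy as np

def eps_insensitive(y_true, y_pred, eps=0.1):
    """Vapnik's eps-insensitive loss: errors inside the +/-eps tube cost nothing."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

# The sinc function from the excerpt; np.sinc(x) is sin(pi*x)/(pi*x), so rescale.
x = np.linspace(-10.0, 10.0, 201)
y = np.sinc(x / np.pi)  # = sin(x)/x, with the x = 0 singularity handled

# Illustrative noisy targets: most points fall inside the tube and incur zero loss.
noisy = y + 0.05 * np.random.default_rng(1).standard_normal(x.size)
loss = eps_insensitive(y, noisy, eps=0.1)
print(f"fraction inside the tube: {(loss == 0.0).mean():.2f}")
```

Only points outside the tube contribute to the SVR objective, which is what makes the fitted model depend on a sparse subset of the data (the support vectors).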
[...]
...We consider functions of a type corresponding to those implemented by another sparse linearly-parameterised model, the support vector machine (SVM) (Boser et al., 1992; Vapnik, 1998; Schölkopf et al., 1999a)....
[...]