Pattern Recognition and Machine Learning
Citations
230 citations
Cites background from "Pattern Recognition and Machine Learning"
...If the likelihood function [24] is also Gaussian (which effectively assumes that the observed forces fᵢ are the true forces subject to Gaussian noise of variance σₙ²), then the resulting posterior distribution f(ρ|D), conditional on the data, will also be a Gaussian process f(ρ|D) ∼ GP(f̂(ρ|D), Ĉ(ρ, ρ′))....
[...]
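For context, the excerpt above refers to the standard closed-form Gaussian process posterior under a Gaussian likelihood with noise variance σₙ². The following is a minimal numpy sketch of those textbook formulas, not the cited paper's implementation; the squared-exponential kernel, the toy data, and all variable names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, X_star, sigma_n=0.1):
    # Posterior mean f_hat and covariance C_hat at test inputs X_star,
    # given noisy observations y = f(X) + N(0, sigma_n^2).
    K = rbf_kernel(X, X)               # train-train covariance
    K_s = rbf_kernel(X, X_star)        # train-test covariance
    K_ss = rbf_kernel(X_star, X_star)  # test-test covariance
    A = K + sigma_n**2 * np.eye(len(X))
    alpha = np.linalg.solve(A, y)      # solve instead of forming an inverse
    f_hat = K_s.T @ alpha
    C_hat = K_ss - K_s.T @ np.linalg.solve(A, K_s)
    return f_hat, C_hat

# Toy usage: regress a noisy sine and predict on a grid.
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=20)
f_hat, C_hat = gp_posterior(X, y, np.linspace(0, 5, 50)[:, None])
```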
...It is easy to check that standard kernels such as the squared exponential [24] or the overlap integral of atomic configuration [35] do not possess the covariance property (7)....
[...]
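For reference, the squared-exponential kernel mentioned above is commonly written as follows (a standard definition, not quoted from the citing paper; the covariance property (7) it fails to satisfy is defined there and not reproduced here):

```latex
k_{\mathrm{SE}}(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^2}{2\ell^2}\right)
```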
..., Gaussian process (GP) regression [22,23] or neural networks [24]....
[...]
230 citations
Cites methods from "Pattern Recognition and Machine Learning"
...The classification is performed with three different approaches: k-nearest neighbors (k-NN) [35] with k = 7 chosen experimentally, a naive Bayes classifier with a kernel density estimate [36, 37, 38], and decision trees [39]....
[...]
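A hedged sketch of such a three-classifier comparison using scikit-learn; the synthetic data is a placeholder for the cited paper's features, and GaussianNB (Gaussian class-conditional densities) stands in for the kernel-density naive Bayes variant, which scikit-learn does not provide out of the box.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the paper's data; features and labels are illustrative.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

classifiers = {
    "k-NN (k=7)": KNeighborsClassifier(n_neighbors=7),
    "naive Bayes": GaussianNB(),  # Gaussian densities stand in for the KDE variant
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```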
229 citations
Cites background from "Pattern Recognition and Machine Learning"
...For that, one may draw from the wealth of pattern recognition techniques in machine learning [23], and from the growing set of labeled data and corresponding models on the web....
[...]
229 citations
Cites methods from "Pattern Recognition and Machine Learning"
...We use Support Vector Machines (SVMs) [1], which are a popular supervised learner for tasks such as this, as a classifier....
[...]
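As a rough illustration of using an SVM as a supervised classifier for such a task, a scikit-learn sketch; the data, the RBF kernel, and the regularization parameter C are assumptions, not the cited paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative data only; the cited paper's features and labels differ.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0)  # common defaults, not the paper's choice
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```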
228 citations
Cites background or methods from "Pattern Recognition and Machine Learning"
..., classification problem [3], which is more commonly referred to as a clustering problem in the ML literature....
[...]
...Results after (b) first E step; (c) first M step; (d) 2 complete EM iterations; (e) 5 complete EM iterations; and (f) 20 complete EM iterations [3]....
[...]
...The choice of a kernel function is often determined by the designer’s knowledge of the problem domain [3]....
[...]
...The EM algorithm is a two-step iterative procedure comprising expectation (E) and maximization (M) steps [3]....
[...]
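A minimal sketch of those two steps for a two-component 1-D Gaussian mixture: the E step computes responsibilities from the current parameters, and the M step re-estimates the parameters from the responsibilities. All data and initial values are illustrative, not from the citing paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D data drawn from two Gaussians; purely illustrative.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for the K = 2 mixture: weights, means, variances.
weights = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(20):
    # E step: responsibilities r[n, k] = p(component k | x_n)
    # under the current parameters.
    dens = weights * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2.0 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M step: re-estimate parameters from the responsibilities.
    Nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    weights = Nk / len(x)

print("weights:", weights, "means:", mu, "variances:", var)
```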