Pattern Recognition and Machine Learning
Citations
147 citations
Cites background or methods from "Pattern Recognition and Machine Lea..."
...The Fisher Criterion (Bishop, 2006) is defined as the ratio of the between-class variance to the within-class variance: $J(W) = \frac{(m_0 - m_1)^2}{S_0^2 + S_1^2}$, where $m_0$ and $m_1$ are the means of the classes, while $S_0^2$ and $S_1^2$ are the scatters of the classes....
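The criterion in this snippet can be computed directly for 1-D projected samples. The following is a minimal sketch (the function name and toy data are my own, not from the cited work):

```python
import numpy as np

def fisher_criterion(x0, x1):
    """Fisher criterion J for two classes of 1-D projected samples:
    squared distance between class means over the summed within-class scatters."""
    m0, m1 = x0.mean(), x1.mean()
    s0 = np.sum((x0 - m0) ** 2)  # within-class scatter of class 0
    s1 = np.sum((x1 - m1) ** 2)  # within-class scatter of class 1
    return (m0 - m1) ** 2 / (s0 + s1)

# Well-separated, tight classes yield a large J.
a = np.array([1.0, 1.1, 0.9])
b = np.array([5.0, 5.2, 4.8])
print(fisher_criterion(a, b))  # -> 160.0 (means 1 and 5, scatters 0.02 and 0.08)
```

A larger J means the projection separates the class means well relative to the spread within each class, which is exactly what the weighted variant discussed below tries to preserve under class-importance weights.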
[...]
...The Fisher Linear Discriminant, a linear classifier (Bishop, 2006; Fisher, 1936; Fukunaga, 1990; McLachlan, 2004), uses dimensionality reduction to find the best (D−1)-dimensional hyperplane(s) that divide a D-dimensional space into two or more subspaces....
[...]
...The modification to the Fisher Criterion that makes it profitable is to apply a weighted average over both classes, where the weights are defined as the total available usable limits on each credit card....
[...]
Cites methods from "Pattern Recognition and Machine Lea..."
...Recall the max-sum message update rules (Bishop, 2006):

$$\mu_{x \to f}(x) = \sum_{\{l \,\mid\, f_l \in \mathrm{ne}(x) \setminus f\}} \mu_{f_l \to x}(x), \qquad (2.4)$$

$$\mu_{f \to x}(x) = \max_{x_1, \ldots, x_M} \Bigg[ f(x, x_1, \ldots, x_M) + \sum_{\{m \,\mid\, x_m \in \mathrm{ne}(f) \setminus x\}} \mu_{x_m \to f}(x_m) \Bigg], \qquad (2.5)$$

where the notation $\mathrm{ne}(x) \setminus f$ is used to indicate the set of variable node $x$'s neighbors excluding…...
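Equations (2.4) and (2.5) can be illustrated on the smallest possible factor graph: two binary variables joined by a single log-domain factor. The factor values below are arbitrary toy numbers of my own choosing:

```python
import numpy as np

# Two-variable chain x1 -- f -- x2, with log-domain factor f[x1, x2].
f = np.array([[0.2, 1.0],
              [0.7, 0.1]])

# x2 is a leaf: it has no factor neighbours other than f,
# so its variable-to-factor message (eq. 2.4) is an empty sum, i.e. zero.
mu_x2_to_f = np.zeros(2)

# Factor-to-variable message (eq. 2.5): add incoming variable messages,
# then maximise over all variables other than the recipient x1.
mu_f_to_x1 = np.max(f + mu_x2_to_f[None, :], axis=1)

print(mu_f_to_x1)        # per-state max-marginals at x1
print(mu_f_to_x1.max())  # equals the global maximum of f over (x1, x2)
```

Because the graph is a tree, maximising the incoming message over the recipient's states recovers the value of the MAP configuration.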
[...]
...We compare the CAP algorithm to capacitated k-medoids (CKM)—a variant of k-medoids (KM; Bishop, 2006) that was adapted to greedily account for the cluster size limit. The measure we use to compare the algorithms is the… [Footnote: See supporting online material for Frey and Dueck (2007), equations S2a to S8,…]...
[...]
Cites methods from "Pattern Recognition and Machine Lea..."
...– Cross-validation: To evaluate the performance of machine-learning classifiers, k-fold cross-validation is commonly used [17]....
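The k-fold procedure referenced here splits the data into k folds, trains on k−1 of them, and scores on the held-out fold, rotating k times. A minimal sketch, assuming a generic fit/predict estimator interface (the tiny nearest-mean classifier is only a stand-in):

```python
import numpy as np

class NearestMean:
    """Toy classifier: assign each point to the class with the nearest mean."""
    def fit(self, X, y):
        self.means_ = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self
    def predict(self, X):
        classes = list(self.means_)
        d = np.stack([np.linalg.norm(X - self.means_[c], axis=1) for c in classes])
        return np.array(classes)[d.argmin(axis=0)]

def k_fold_scores(X, y, k=5, seed=0):
    """Shuffle indices, split into k folds, and score each held-out fold."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = NearestMean().fit(X[train], y[train])
        scores.append((model.predict(X[test]) == y[test]).mean())
    return np.array(scores)

rng0, rng1 = np.random.default_rng(1), np.random.default_rng(2)
X = np.vstack([rng0.normal(0, 0.3, (20, 2)), rng1.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(k_fold_scores(X, y, k=5).mean())  # well-separated classes -> near 1.0
```

Averaging the k fold scores gives a less optimistic performance estimate than a single train/test split, since every point is held out exactly once.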
[...]