Pattern Recognition and Machine Learning
Citations
149 citations
Cites methods from "Pattern Recognition and Machine Learning"
...For the backward contribution, we leverage the backpropagation algorithm [6], which clearly discloses how the outputs of neurons in layer l+1 indirectly influence the outputs of the neurons in layer l....
[...]
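The backward flow described in this excerpt follows the standard chain rule of backpropagation. Below is a minimal sketch, assuming a fully connected layer with sigmoid activations; the names (delta_next, W_next, z_l) are illustrative and not taken from the cited paper.

```python
import numpy as np

def backward_contribution(delta_next, W_next, z_l):
    """Backward pass for one layer of an MLP with sigmoid activations.

    delta_next : error signal of layer l+1, shape (n_{l+1},)
    W_next     : weights connecting layer l to layer l+1, shape (n_{l+1}, n_l)
    z_l        : pre-activations of layer l, shape (n_l,)
    Returns the error signal delta_l of layer l.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s = sigmoid(z_l)
    # Chain rule: layer l+1's error signal is pushed back through W_next
    # and scaled by the local activation derivative sigma'(z_l).
    return (W_next.T @ delta_next) * s * (1.0 - s)
```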
...To address this issue, we first tried to cluster the neurons using the popular K-Means clustering algorithm [6] and only show the clusters whose neurons highly contribute to the output of the selected neurons (Fig....
[...]
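A sketch of this filtering step, assuming neurons are described by a hypothetical per-output contribution matrix (the array names, shapes, and threshold are illustrative, not the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical setup: contributions[i, j] holds neuron i's contribution
# to output j; the name and shape are illustrative, not from the paper.
rng = np.random.default_rng(0)
contributions = rng.random((256, 10))   # 256 neurons, 10 outputs

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(contributions)

# Keep only clusters whose mean contribution to a selected output is high
# (an illustrative threshold; the paper's criterion may differ).
selected_output = 3
cluster_means = np.array([
    contributions[labels == c, selected_output].mean() for c in range(8)
])
visible_clusters = np.where(cluster_means > cluster_means.mean())[0]
```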
...According to the backpropagation algorithm [6], the output $a_k^{l+1}$ of the neuron $n_k^{l+1}$ in layer $l+1$ has a backward contribution to the gradient $g_{ij}$ of weight $w_{ij}$....
[...]
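For reference, under one common indexing convention (an assumption, since the excerpt does not define its indices: $w_{ij}$ connects neuron $i$ in layer $l-1$ to neuron $j$ in layer $l$), the gradient and the backward contribution from layer $l+1$ can be written as:

```latex
\[
  g_{ij} = \frac{\partial E}{\partial w_{ij}} = \delta_j^{l}\, a_i^{l-1},
  \qquad
  \delta_j^{l} = \sigma'\!\bigl(z_j^{l}\bigr) \sum_k w_{jk}\, \delta_k^{l+1},
\]
% so each neuron n_k^{l+1} contributes to g_{ij} through its error signal
% delta_k^{l+1}, which in turn depends on its output a_k^{l+1}.
```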
Cites background from "Pattern Recognition and Machine Learning"
...The Bayesian Motivation, or a Motivation for Bayes: Finally, there exists yet another approach to statistics, called Bayesian inference, which is very popular in, for example, the field of machine learning [Bishop, 2006]....
[...]
Cites methods from "Pattern Recognition and Machine Learning"
...For each feature space dimensionality, a linear discriminant analysis (LDA) classifier (Bishop, 2006) was trained for each candidate task and feature selection method....
[...]
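A minimal sketch of such a step using scikit-learn's LinearDiscriminantAnalysis on synthetic stand-in features (the data shapes, labels, and cross-validation setup are assumptions, not the study's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: trials x features; labels encode the
# candidate tasks. Shapes and names are illustrative only.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 16))
y = rng.integers(0, 2, size=120)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```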
...Each trial was 15 s long, consisting of a 5-s preparation period during which a visual cue was displayed to indicate the required task for the trial; a 5-s task period during which the participant performed the required task; and a 5-s cool-down period before the next trial began....
[...]
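The timing above maps directly onto index arithmetic when slicing recorded trials. A small sketch, assuming a hypothetical sampling rate of 250 Hz (not stated in the excerpt):

```python
import numpy as np

FS = 250                      # sampling rate in Hz (an assumption)
trial = np.zeros((15 * FS,))  # one 15-s trial of a single channel

# Slice the trial into the three 5-s periods described above.
preparation = trial[0 * FS : 5 * FS]    # visual cue shown
task        = trial[5 * FS : 10 * FS]   # participant performs the task
cool_down   = trial[10 * FS : 15 * FS]  # rest before the next trial
```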
Cites methods from "Pattern Recognition and Machine Learning"
...The BIC is a well-known approximation that avoids marginalizing over the policy parameters and provides a principled penalty against complex policies by assuming a Gaussian posterior around the estimated parameters θ̂....
[...]
...Given a segment from time s to t and a policy $\pi$, CHAMP approximates the logarithm of the policy evidence for that segment via the Bayesian information criterion (BIC) [4] as: $\log L(s, t, \pi) \approx \log p(z_{s+1:t} \mid \pi, \hat{\theta}) - \frac{1}{2}\, k_\pi \log(t - s)$, (9) where $k_\pi$ is the number of parameters of policy $\pi$ and $\hat{\theta}$ are estimated parameters for policy $\pi$....
[...]
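A toy sketch of this BIC approximation, assuming an illustrative i.i.d. Gaussian "policy" with two fitted parameters (the model and names are assumptions, not CHAMP's):

```python
import numpy as np
from scipy.stats import norm

def bic_log_evidence(z, log_likelihood, k):
    """BIC approximation of the segment log evidence (cf. Eq. (9)):
    log L ~= log p(z | pi, theta_hat) - 0.5 * k * log(n),
    where n is the number of observations in the segment (t - s)."""
    return log_likelihood - 0.5 * k * np.log(len(z))

# Toy example: model the segment observations as i.i.d. Gaussian with
# fitted mean and standard deviation, so k = 2 parameters.
rng = np.random.default_rng(0)
z = rng.normal(1.5, 0.3, size=50)
mu, sigma = z.mean(), z.std()
loglik = norm.logpdf(z, mu, sigma).sum()
print(bic_log_evidence(z, loglik, k=2))
```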