Semi-Supervised Learning with Deep Generative Models
Citations
Cites background from "Semi-Supervised Learning with Deep ..."
...Such factorised representations were previously achieved only for controlled subsets of natural images such as faces under different illumination conditions and characters in different font styles [29] or handwritten digits and house numbers [17]....
References
"Semi-Supervised Learning with Deep ..." refers background or methods in this paper
...…semi-supervised learning by utilising an explicit model of the data density, building upon recent advances in deep generative models and scalable variational inference, namely auto-encoding variational Bayes and stochastic backpropagation (Kingma and Welling, 2014; Rezende et al., 2014)....
...This optimisation can be done jointly, without resort to the variational EM algorithm, by using deterministic reparameterisations of the expectations in the objective function, combined with Monte Carlo approximation – referred to in previous work as stochastic gradient variational Bayes (SGVB) (Kingma and Welling, 2014) or as stochastic backpropagation (Rezende et al....
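The reparameterisation described in this excerpt can be illustrated with a minimal sketch (plain NumPy, hypothetical function names, not the authors' implementation): sampling z ~ N(mu, sigma²) is rewritten as z = mu + sigma·eps with eps ~ N(0, 1), so a Monte Carlo estimate of an expectation becomes a deterministic, differentiable function of mu and sigma.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgvb_estimate(mu, log_sigma, f, n_samples=10_000):
    """Monte Carlo estimate of E_{z ~ N(mu, sigma^2)}[f(z)] via the
    deterministic reparameterisation z = mu + sigma * eps, eps ~ N(0, 1).
    Because mu and log_sigma enter deterministically, the estimate (and
    hence its gradient) can be backpropagated through."""
    eps = rng.standard_normal(n_samples)     # noise, independent of parameters
    z = mu + np.exp(log_sigma) * eps         # deterministic transformation
    return np.mean(f(z))                     # Monte Carlo average

# Example: E[z^2] for z ~ N(1, 0.5^2) is mu^2 + sigma^2 = 1.25
est = sgvb_estimate(1.0, np.log(0.5), lambda z: z ** 2)
```

The same estimator, applied to the variational objective instead of z², is what the excerpt refers to as SGVB or stochastic backpropagation.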
...In both cases, exact inference will be intractable, but we exploit recent advances in variational inference (Kingma and Welling, 2014; Rezende et al., 2014) to efficiently obtain accurate posterior distributions for latent variables as well as to perform efficient parameter learning....
...We construct the approximate posterior distribution qφ(·) as an inference or recognition model, which has become a popular approach for efficient variational inference (Dayan, 2000; Kingma and Welling, 2014; Rezende et al., 2014; Stuhlmüller et al., 2013)....
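The inference (recognition) model mentioned here can be sketched as follows — an illustrative linear map, not the paper's architecture: instead of optimising a separate posterior for every datapoint, a single parametric function maps x to the parameters of a Gaussian approximate posterior q_phi(z|x).

```python
import numpy as np

rng = np.random.default_rng(1)

def recognition_model(x, W_mu, b_mu, W_s, b_s):
    """Amortised inference: one parametric map from a datapoint x to the
    parameters (mu, log sigma) of the Gaussian approximate posterior
    q_phi(z | x), shared across all datapoints."""
    mu = x @ W_mu + b_mu
    log_sigma = x @ W_s + b_s
    return mu, log_sigma

# Illustrative shapes: 5-dimensional data, 2-dimensional latent space
x = rng.standard_normal(5)
W_mu, b_mu = rng.standard_normal((5, 2)), np.zeros(2)
W_s, b_s = rng.standard_normal((5, 2)), np.zeros(2)
mu, log_sigma = recognition_model(x, W_mu, b_mu, W_s, b_s)
```

In practice the map is a neural network rather than a linear layer, but the amortisation idea is the same.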
...In this paper we answer this question by developing probabilistic models for inductive and transductive semi-supervised learning by utilising an explicit model of the data density, building upon recent advances in deep generative models and scalable variational inference (Kingma and Welling, 2014; Rezende et al., 2014)....
"Semi-Supervised Learning with Deep ..." refers background or methods in this paper
...We also show a similar visualisation for the street view house numbers (SVHN) data set (Netzer et al., 2011), which consists of more than 70,000 images of house numbers, in figure 3 (top)....
...Figure 1 shows these analogical fantasies for the MNIST and SVHN datasets (Netzer et al., 2011)....
"Semi-Supervised Learning with Deep ..." refers background in this paper
...Existing generative approaches based on models such as Gaussian mixture or hidden Markov models (Zhu, 2006), have not been very successful due to the limited capacity and the need for many states to perform well....
"Semi-Supervised Learning with Deep ..." refers background or methods in this paper
...It would be desirable to have a single principled loss function similar to (Blum et al., 2004) or (Zhu et al., 2003)....
...Graph-based methods are amongst the most popular and aim to construct a graph connecting similar observations; label information propagates through the graph from labelled to unlabelled nodes by finding the minimum energy (MAP) configuration (Blum et al., 2004; Zhu et al., 2003)....
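The propagation scheme described in this excerpt can be sketched in a few lines (an illustrative toy graph, not the cited authors' code): each unlabelled node repeatedly takes the weighted mean of its neighbours' scores while labelled nodes stay clamped; the fixed point is the harmonic, minimum-energy solution of Zhu et al. (2003).

```python
import numpy as np

# Adjacency of a tiny chain graph: 0 - 1 - 2 - 3
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

labels = {0: 0.0, 3: 1.0}   # nodes 0 and 3 are labelled; 1 and 2 are not

def propagate(W, labels, n_iter=200):
    """Iterative label propagation: unlabelled nodes take the weighted
    average of their neighbours' scores; labelled nodes are clamped.
    The fixed point is the harmonic (minimum-energy) configuration."""
    n = W.shape[0]
    f = np.zeros(n)
    for i, y in labels.items():
        f[i] = y
    deg = W.sum(axis=1)
    for _ in range(n_iter):
        f = W @ f / deg             # average over neighbours
        for i, y in labels.items():
            f[i] = y                # clamp labelled nodes
    return f

f = propagate(W, labels)
```

On this chain the solution interpolates linearly between the clamped endpoints, so nodes 1 and 2 converge to 1/3 and 2/3 respectively.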