Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
Citations
Cites background or methods from "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"
...As the last two pictures in Figure 10 show, training a discriminative network from scratch (from pixel to class label [29]) yields significantly worse results....
[...]
...A possible solution is to couple our model with the learning of the object class [29] so that the local statistics are better conditioned....
[...]
...In addition, we also replace the sigmoid function and the binary cross-entropy criterion from [29] with a max-margin criterion (hinge loss)....
[...]
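The hinge (max-margin) criterion mentioned in the excerpt above can be sketched directly on raw discriminator logits, with no sigmoid squashing. This is a minimal numpy illustration of the standard GAN hinge losses, not the cited paper's exact implementation; the function names are our own.

```python
import numpy as np

def d_hinge_loss(real_logits, fake_logits):
    # Max-margin criterion on raw logits, replacing sigmoid + binary
    # cross-entropy: push real logits above +1 and fake logits below -1.
    loss_real = np.maximum(0.0, 1.0 - real_logits).mean()
    loss_fake = np.maximum(0.0, 1.0 + fake_logits).mean()
    return loss_real + loss_fake

def g_hinge_loss(fake_logits):
    # The generator simply raises the discriminator's score on fakes.
    return -fake_logits.mean()
```

Logits inside the margin still contribute to the loss, while confidently classified samples contribute zero, which is the "max-margin" behavior the excerpt contrasts with the saturating sigmoid.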
...The key idea is to precompute the inversion of the network by fitting a strided convolutional network [31,29] to the inversion process, which operates purely in a feed-forward fashion....
[...]
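The excerpt's idea of precomputing an inversion so it runs feed-forward can be shown with a toy stand-in: the cited work fits a strided convolutional network, but the same pattern appears if we fit a simple linear least-squares map on (z, G(z)) pairs from a fixed linear "generator". Everything here (the matrix G, the sample count) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generator": latent z (16-d) -> sample x (64-d).
G = rng.normal(size=(64, 16))
z = rng.normal(size=(1000, 16))
x = z @ G.T  # generated samples

# "Precompute the inversion" once by fitting a map x -> z; after this,
# inverting any new sample is a single feed-forward matrix multiply,
# with no per-image optimization.
W, *_ = np.linalg.lstsq(x, z, rcond=None)

z_hat = x @ W  # feed-forward inversion of the training samples
```

In the real setting the least-squares fit is replaced by training a strided convolutional network on image/latent pairs, but the payoff is the same: inversion becomes one forward pass.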
...[29] we use batch normalization (BN) and leaky ReLU (LReLU) to improve the training of D....
[...]
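The two stabilizers the excerpt borrows from [29], batch normalization and leaky ReLU in the discriminator, are easy to state in isolation. Below is a minimal numpy sketch of both operations (inference-style BN without learned scale/shift, and the 0.2 negative slope DCGAN uses); it is an illustration, not a full discriminator.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # LReLU keeps a small gradient for negative inputs instead of
    # zeroing them out, which helps gradients flow through D.
    return np.where(x > 0, x, slope * x)

def batch_norm(x, eps=1e-5):
    # Normalize each feature to zero mean / unit variance across the
    # batch axis (no learned gamma/beta here, for brevity).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

In a DCGAN-style discriminator these are applied after each strided convolution, BN first and the nonlinearity second.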
References
"Unsupervised Representation Learnin..." refers methods in this paper
...We use Imagenet-1k (Deng et al., 2009) as a source of natural images for unsupervised training....
[...]
...We trained DCGANs on three datasets, Large-scale Scene Understanding (LSUN) (Yu et al., 2015), Imagenet-1k and a newly assembled Faces dataset....
[...]
...To evaluate the quality of the representations learned by DCGANs for supervised tasks, we train on Imagenet-1k and then use the discriminator's convolutional features from all layers, max-pooling each layer's representation to produce a 4 × 4 spatial grid....
[...]
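The feature-extraction recipe in the excerpt above — max-pool each discriminator layer's feature map down to a fixed 4 × 4 spatial grid, then flatten and concatenate across layers — can be sketched as follows. This assumes each layer's height and width divide evenly by 4 (true for DCGAN's power-of-two feature maps); the function names are our own.

```python
import numpy as np

def pool_to_grid(feat, grid=4):
    # feat: (C, H, W) feature map. Max-pool non-overlapping blocks so the
    # spatial layout becomes grid x grid, assuming grid divides H and W.
    c, h, w = feat.shape
    kh, kw = h // grid, w // grid
    return feat.reshape(c, grid, kh, grid, kw).max(axis=(2, 4))

def dcgan_features(layer_feats):
    # Flatten each pooled layer and concatenate into one feature vector,
    # as used for the downstream linear classifier.
    return np.concatenate([pool_to_grid(f).ravel() for f in layer_feats])
```

Because every layer is reduced to the same C × 4 × 4 shape regardless of its original resolution, the concatenated vector has a fixed length suitable for a linear SVM or similar classifier.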
"Unsupervised Representation Learnin..." refers background or methods in this paper
...Deep belief networks (Lee et al., 2009) have also been shown to work well in learning hierarchical representations....
[...]
...Previous work has demonstrated that supervised training of CNNs on large image datasets results in very powerful learned features (Zeiler & Fergus, 2014)....
[...]
...(Zeiler & Fergus, 2014) showed that by using deconvolutions and filtering the maximal activations, one can find the approximate purpose of each convolution filter in the network....
[...]