Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
Citations
11,958 citations
Cites background or methods from "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"
...We adapt our generator and discriminator architectures from those in [41]....
[...]
...Fortunately, this is exactly what is done by the recently proposed Generative Adversarial Networks (GANs) [22, 12, 41, 49, 59]....
[...]
Cites methods from "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"
...…gradient methods, such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), Adam (Kingma & Ba, 2014) and most recently AMSGrad (Reddi et al., 2018) have become a default method of choice for training feed-forward and recurrent neural networks (Xu et al., 2015; Radford et al., 2015)....
[...]
5,782 citations
Cites background or methods from "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"
...The DCGAN [91] architecture was proposed to expand on the internal complexity of the generator and discriminator networks....
[...]
...Amongst these new architectures, DCGANs, Progressively Growing GANs, CycleGANs, and Conditional GANs seem to have the most application potential in Data Augmentation....
[...]
...The DCGAN was tested to generate results on the LSUN interior bedroom image dataset, each image being 64 × 64 × 3, for a total of 12,288 pixels, (compared to 784 in MNIST)....
[...]
...architectures, the use of super-resolution networks such as SRGAN could be an effective technique for improving the quality of outputs from a DCGAN [91] model....
[...]
...After using classical augmentations to achieve 78.6% sensitivity and 88.4% specificity, they observed an increase to 85.7% sensitivity and 92.4% specificity once they added the DCGAN-generated samples....
[...]
References
"Unsupervised Representation Learnin..." refers background in this paper
...The performance of DCGANs is still less than that of Exemplar CNNs (Dosovitskiy et al., 2015), a technique which trains normal discriminative CNNs in an unsupervised fashion to differentiate between specifically chosen, aggressively augmented, exemplar samples from the source dataset....
[...]
...Additionally, we found leaving the momentum term β1 at the suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training....
[...]
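The excerpt above notes that lowering Adam's momentum term β1 from the default 0.9 to 0.5 stabilized DCGAN training. A minimal NumPy sketch of a single Adam update makes the role of β1 concrete (the learning rate and the toy objective in the usage note are illustrative assumptions; only the β1 = 0.5 default below comes from the cited observation):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update. beta1 controls the first-moment (momentum) estimate;
    the DCGAN work reports that 0.5 is more stable than 0.9 for GAN training."""
    m = beta1 * m + (1 - beta1) * grad          # exponential moving average of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2     # exponential moving average of the squared gradient
    m_hat = m / (1 - beta1 ** t)                # bias correction for the warm-up steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

For example, repeatedly applying `adam_step` to the gradient of f(x) = x² drives x toward 0; with a smaller β1 the momentum buffer tracks recent gradients more closely, which is the stabilizing effect described in the excerpt.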
9 citations
"Unsupervised Representation Learnin..." refers methods in this paper
...Similarly, using a gradient descent on the inputs lets us inspect the ideal image that activates certain subsets of filters (Mordvintsev et al.)....
[...]
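The gradient-ascent-on-inputs technique mentioned above can be sketched in a deliberately tiny form: treat a single "filter" as a linear map w·x and ascend the input along the activation's gradient. In a real convnet the gradient would come from backpropagation through the trained network; the filter weights here are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)      # hypothetical filter weights (e.g. an 8x8 patch, flattened)
x = np.zeros(64)             # start the "image" from a blank input

for _ in range(50):
    activation = w @ x       # the filter's response to the current input
    grad = w                 # d(activation)/dx for a linear filter
    x += 0.1 * grad          # gradient ascent: push the input toward higher activation

# After ascent, x is proportional to w: the input has converged to the
# pattern the filter responds to most strongly, which is the idea behind
# inspecting "the ideal image that activates certain subsets of filters".
```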
...The resulting code layer activations are then binarized via thresholding the ReLU activation, which has been shown to be an effective information-preserving technique (Srivastava et al., 2014) and provides a convenient form of semantic hashing, allowing for linear-time de-duplication....
[...]
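The semantic-hashing de-duplication described in the excerpt can be sketched as follows: binarize each code-layer vector by thresholding its (non-negative) ReLU activations, pack the bits into a hash key, and bucket items by key in one linear pass. The code vectors below are made-up stand-ins for real autoencoder codes, and the threshold is an assumed parameter:

```python
import numpy as np

def semantic_hash(code, threshold=0.0):
    """Binarize a ReLU code vector and pack the bits into a bytes key."""
    bits = (code > threshold).astype(np.uint8)
    return np.packbits(bits).tobytes()

def deduplicate(codes):
    """Keep the first item per hash bucket; O(n) in the number of codes."""
    seen, keep = set(), []
    for i, code in enumerate(codes):
        key = semantic_hash(code)
        if key not in seen:       # a repeated key means a (near-)duplicate code
            seen.add(key)
            keep.append(i)
    return keep
```

Because two inputs with the same binary code collide on the same key, duplicates are found with a single pass and a hash-set lookup rather than pairwise comparison, which is what makes the de-duplication linear time.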