Open Access Proceedings Article
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TL;DR: Deep convolutional generative adversarial networks (DCGANs) learn a hierarchy of representations from object parts to scenes in both the generator and the discriminator for unsupervised learning.
Abstract:
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks, demonstrating their applicability as general image representations.
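The generator's hierarchy of representations is built by repeated fractionally-strided (transposed) convolutions that upsample a small projected latent code to full image resolution. A minimal sketch of that spatial growth, assuming a 4×4 kernel, stride 2, padding 1 at each layer (the paper itself uses 5×5 kernels; the configuration here is a common illustrative choice, not the authors' exact one):

```python
def deconv_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of one fractionally-strided (transposed) convolution."""
    return (size - 1) * stride - 2 * pad + kernel

# Starting from a 4x4 projection of the latent vector, four stride-2
# upsampling layers reach 64x64, the DCGAN output resolution.
sizes = [4]
for _ in range(4):
    sizes.append(deconv_out(sizes[-1]))
print(sizes)  # [4, 8, 16, 32, 64]
```

Each doubling step corresponds to one level of the part-to-scene hierarchy the abstract describes.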
Citations
Proceedings Article
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Peter Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi +10 more
TL;DR: SRGAN proposes a perceptual loss function consisting of an adversarial loss and a content loss; the adversarial loss pushes the solution toward the natural image manifold using a discriminator network trained to differentiate between super-resolved images and original photo-realistic images.
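The combination of the two terms can be sketched as below. In the published loss the content term is an MSE over VGG feature maps and the adversarial term is down-weighted by 10⁻³; here plain Python lists stand in for feature maps, and the weight should be read as illustrative rather than definitive:

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(sr_feats, hr_feats, adv_loss, adv_weight=1e-3):
    # Content term (feature-space MSE) plus a small adversarial term,
    # mirroring the SRGAN-style weighted sum of the two losses.
    return mse(sr_feats, hr_feats) + adv_weight * adv_loss
```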
Book Chapter
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
TL;DR: The authors combine the benefits of both approaches and propose perceptual loss functions for training feed-forward networks for image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem posed by Gatys et al.
Posted Content
GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, Sepp Hochreiter +5 more
TL;DR: A two time-scale update rule (TTUR), with an individual learning rate for each of the discriminator and the generator, is proposed for training GANs with stochastic gradient descent on arbitrary GAN loss functions.
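The core mechanism is simply that the two players take gradient steps at different rates. A minimal sketch of one such update, with hypothetical learning rates (a faster discriminator than generator is a common choice; the specific values here are not from the paper):

```python
def ttur_step(d_params, g_params, d_grads, g_grads, lr_d=4e-4, lr_g=1e-4):
    """One two time-scale SGD update: each player descends its own
    gradient with its own learning rate (discriminator faster here)."""
    d_new = [p - lr_d * g for p, g in zip(d_params, d_grads)]
    g_new = [p - lr_g * g for p, g in zip(g_params, g_grads)]
    return d_new, g_new

# Example: both players start at 1.0 with gradient 1.0; the
# discriminator moves four times as far as the generator.
d_new, g_new = ttur_step([1.0], [1.0], [1.0], [1.0])
```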
Posted Content
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
TL;DR: This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
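The cycle consistency term penalizes the reconstruction error of a round trip through both mappings, in each direction. A minimal sketch using an L1 penalty (as in the published loss) with toy list-valued mappings standing in for the image translators:

```python
def l1(a, b):
    """L1 distance between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cycle_consistency_loss(G, F, x, y):
    # Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x;
    # backward cycle: y -> F(y) -> G(F(y)) should reconstruct y.
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# Toy example: G doubles, F halves, so both cycles reconstruct exactly
# and the loss is zero.
G = lambda v: [2 * u for u in v]
F = lambda v: [u / 2 for u in v]
```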
Proceedings Article
Context Encoders: Feature Learning by Inpainting
TL;DR: A context encoder is found to learn a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
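The training signal for inpainting is a reconstruction loss computed only over the dropped-out region, so the encoder must infer the missing content from the surrounding context. A minimal sketch of such a masked reconstruction term, with flat lists standing in for image pixels (the published method also adds an adversarial term, omitted here):

```python
def masked_l2(pred, target, mask):
    """Mean squared error restricted to the masked (dropped-out) pixels.

    mask[i] == 1 marks a pixel the network must hallucinate;
    unmasked pixels contribute nothing to the loss.
    """
    total = sum(m * (p - t) ** 2 for p, t, m in zip(pred, target, mask))
    return total / max(sum(mask), 1)
```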