Open Access · Posted Content
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TL;DR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Abstract:
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks, demonstrating their applicability as general image representations.
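The "architectural constraints" the abstract mentions include, among other things, replacing pooling and fully connected hidden layers with strided (transposed) convolutions, which fixes the spatial resolution at every generator layer. A minimal sketch of that shape arithmetic, with kernel, stride, and padding values chosen by me for illustration:

```python
# Sketch of the DCGAN-style generator's resolution schedule (my assumption:
# a kernel-4 / stride-2 / padding-1 transposed convolution at each stage,
# which exactly doubles the spatial size).

def tconv_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of a 2-D transposed convolution."""
    return (size - 1) * stride - 2 * pad + kernel

def dcgan_generator_shapes(base=4, n_upsamples=4):
    """Spatial sizes as the latent vector is projected to base x base
    and then repeatedly upsampled by transposed convolutions."""
    sizes = [base]
    for _ in range(n_upsamples):
        sizes.append(tconv_out(sizes[-1]))
    return sizes

print(dcgan_generator_shapes())  # 4 -> 8 -> 16 -> 32 -> 64
```

With these hypothetical settings, four upsampling stages take a 4x4 projection of the latent vector to a 64x64 image, with no pooling layers anywhere in the path.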
Citations
Posted Content
Linear Time Sinkhorn Divergences using Positive Features
Meyer Scetbon, Marco Cuturi
TL;DR: In this article, the authors propose using ground costs of the form $c(x,y) = -\log \langle \varphi(x), \varphi(y) \rangle$, where $\varphi$ is a map from the ground space onto the positive orthant of $\mathbb{R}^r$, with $r \ll n$.
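The point of such a cost is that the Gibbs kernel of entropy-regularized optimal transport factorizes: taking regularization $\varepsilon = 1$ for simplicity, $\exp(-c(x,y)) = \langle \varphi(x), \varphi(y) \rangle$, so kernel-vector products in Sinkhorn iterations never need the full $n \times n$ matrix. A toy numpy sketch, where the sizes and the positive feature maps are my own placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 1000, 10   # n points, r positive features with r << n (toy sizes)

# Stand-ins for phi(x_i) and phi(y_j): any entrywise-positive features work
# for this illustration.
Px = np.abs(rng.normal(size=(n, r)))
Py = np.abs(rng.normal(size=(n, r)))
v = rng.random(n)

# With c(x, y) = -log <phi(x), phi(y)>, the Gibbs kernel K = exp(-c)
# factorizes as K = Px @ Py.T, so K @ v costs O(n * r) instead of O(n^2).
Kv_fast = Px @ (Py.T @ v)        # never forms the n x n kernel
Kv_full = (Px @ Py.T) @ v        # quadratic-cost reference
print(np.allclose(Kv_fast, Kv_full))
```

The only change between the two lines is the order of association; the factorized form is what makes each Sinkhorn iteration linear in $n$.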
Posted Content
Synthesising clinically realistic Chest X-rays using Generative Adversarial Networks
TL;DR: It is demonstrated that the latent space can be optimised to produce images of a particular class despite unconditional training, with the model producing related features and complications for the class of interest.
Proceedings ArticleDOI
Image Generation of Trichomonas Vaginitis Based on Mode Margin Generative Adversarial Networks
TL;DR: A new backbone Generative Adversarial Network (GAN) is designed and a model mapping ratio term is added to it to increase the modes of the generated images, which effectively alleviates mode collapse.
Proceedings ArticleDOI
Evolutionary Algorithm based Encoder Decoder Network Design for Semantic Inpainting and Noise Vector Mapping in Generative Adversarial Network
Ankit B. Saradagi, Jeyakumar G
TL;DR: In this paper, the authors propose two new analyses with Evolutionary Algorithms (EA): 1) hyperparameter optimization of an encoder-decoder model for the task of image inpainting, and 2) addressing the input noise vector of the Generative Adversarial Network model.
References
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
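The adversarial process described here optimizes the value function $V(D,G) = \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]$, which D maximizes and G minimizes. A toy numpy evaluation of that value function, where the untrained D, G, and data distribution are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Illustrative, untrained players (toy parameters of my own choosing).
def D(x, w=2.0, b=-1.0):
    """Discriminator: probability that x came from the data."""
    return sigmoid(w * x + b)

def G(z, scale=0.5):
    """Generator: maps latent noise z to a sample."""
    return scale * z

x_real = rng.normal(1.0, 0.1, size=1000)   # stand-in "data" distribution
z = rng.normal(size=1000)                  # latent noise fed to G

# V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]:
# D is trained to push this up, G to push the second term down.
v = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(v)
```

In actual training, D and G would take alternating gradient steps on this quantity; here it is only evaluated once to make the two expectation terms concrete.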
Posted Content
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: In this article, the authors introduce Adam, a method for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.
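The "adaptive estimates of lower-order moments" are exponential moving averages of the gradient and its square, each bias-corrected, which together set the step size per parameter. A minimal numpy sketch on a toy quadratic; the decay rates and epsilon are the paper's defaults, while the learning rate and loop length are my own choices for this example:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. m, v are running first/second moment estimates;
    t is the 1-based step count used for bias correction."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                  # bias-corrected first moment
    v_hat = v / (1 - b2**t)                  # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2.0 * theta                       # gradient of f(theta) = theta**2
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```

Because the update divides by the root of the second-moment estimate, early steps have magnitude close to the learning rate regardless of the raw gradient scale, which is what makes the method robust to poorly scaled objectives.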
Posted Content
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
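The per-feature transform itself is short: normalize each feature by its mini-batch mean and variance, then apply a learnable scale and shift. A minimal numpy sketch (training-mode statistics only; gamma and beta are fixed here to their initial values of 1 and 0 rather than learned):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.
    x has shape (batch, features)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.normal(3.0, 2.0, size=(64, 5))       # toy batch of 64, 5 features
y = batch_norm(x)
print(y.mean(axis=0))                        # ~0 for every feature
print(y.std(axis=0))                         # ~1 for every feature
```

At inference time the implementation in the paper swaps the mini-batch statistics for running population estimates, so a single example can be normalized without a batch.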
Book ChapterDOI
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models, used in a diagnostic role to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.