Open Access Proceedings Article

MGAN: Training Generative Adversarial Nets with Multiple Generators

TLDR
A new approach to training Generative Adversarial Nets with a mixture of generators to overcome the mode collapsing problem, together with a theoretical analysis proving that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among the generators' distributions is maximal, hence effectively avoiding mode collapse.
Abstract
We propose in this paper a new approach to training Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators instead of a single one as in the original GAN. The idea is simple, yet proves extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to the original GAN. The generators create samples intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created by multiple generators, one of which is randomly selected to produce the final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop a theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among the generators' distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN and can thus efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over the latest baselines, generating diverse, appealing and recognizable objects at different resolutions, and specializing in capturing different types of objects with different generators.
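To make the mechanism concrete, below is a minimal sketch of one MGAN-style training step, reconstructed from the abstract alone. The toy network sizes, the diversity weight `beta`, and keeping the discriminator and classifier as separate networks (the paper shares parameters between them) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one MGAN training step (reconstructed, not the
# authors' code): K generators, a discriminator D (real vs. fake) and
# a classifier C (which generator produced this sample?).
import torch
import torch.nn as nn
import torch.nn.functional as F

K, z_dim, x_dim, beta = 4, 32, 2, 0.25   # toy sizes; beta is assumed

generators = nn.ModuleList(
    nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
    for _ in range(K))
D = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # real/fake logit
C = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, K))  # generator index logits

opt_g = torch.optim.Adam(
    list(generators.parameters()) + list(C.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(x_real):
    n = x_real.size(0)
    # Mixture sampling: pick one generator uniformly at random per sample.
    idx = torch.randint(K, (n,))
    z = torch.randn(n, z_dim)
    x_fake = torch.stack(
        [generators[k](z[i]) for i, k in enumerate(idx.tolist())])

    # Discriminator step: standard GAN real-vs-generated loss.
    d_loss = (F.binary_cross_entropy_with_logits(D(x_real), torch.ones(n, 1))
              + F.binary_cross_entropy_with_logits(D(x_fake.detach()),
                                                   torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + classifier step: fool D (non-saturating loss) while C
    # identifies which generator made each sample; minimizing this
    # classification loss pushes the generators toward distinct modes.
    adv = F.binary_cross_entropy_with_logits(D(x_fake), torch.ones(n, 1))
    div = F.cross_entropy(C(x_fake), idx)
    opt_g.zero_grad(); (adv + beta * div).backward(); opt_g.step()
```

The classifier term rewards generators whose samples it can tell apart, which is the intuition behind the theoretical result that the JSD among the generators' distributions is maximal at equilibrium.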


Citations
Journal Article

Evolutionary Generative Adversarial Networks

TL;DR: E-GAN evolves a population of generators to play the adversarial game with the discriminator: different adversarial training objectives are employed as mutation operations, and each individual generator is updated based on these mutations.
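As an illustration of that loop, here is a hedged sketch of one evolutionary step. The three mutation objectives match those named in the E-GAN paper, but the single-gradient-step mutation and the mean-discriminator-score fitness below are simplifications of the paper's quality-plus-diversity fitness.

```python
# Illustrative E-GAN-style evolutionary step (not the authors' code):
# each parent generator is "mutated" by one gradient step under each of
# three adversarial objectives, and the fittest offspring survive.
import copy
import torch
import torch.nn.functional as F

def minimax_loss(d_logits):        # minimize log(1 - D(G(z)))
    return -F.binary_cross_entropy_with_logits(
        d_logits, torch.zeros_like(d_logits))

def heuristic_loss(d_logits):      # non-saturating: minimize -log D(G(z))
    return F.binary_cross_entropy_with_logits(
        d_logits, torch.ones_like(d_logits))

def least_squares_loss(d_logits):  # LSGAN-style: (D(G(z)) - 1)^2
    return (torch.sigmoid(d_logits) - 1).pow(2).mean()

def evolve(parents, D, z, n_survivors):
    offspring = []
    for G in parents:
        for mutate in (minimax_loss, heuristic_loss, least_squares_loss):
            child = copy.deepcopy(G)
            opt = torch.optim.Adam(child.parameters(), lr=2e-4)
            loss = mutate(D(child(z)))
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():  # simplified fitness: mean D score
                fitness = D(child(z)).mean().item()
            offspring.append((fitness, child))
    offspring.sort(key=lambda fc: fc[0], reverse=True)
    return [child for _, child in offspring[:n_survivors]]
```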
Proceedings Article

AutoGAN: Neural Architecture Search for Generative Adversarial Networks

TL;DR: This paper presents the first preliminary study on introducing neural architecture search (NAS) to generative adversarial networks (GANs); the resulting method, dubbed AutoGAN, discovers architectures that achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs.
Posted Content

A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions

TL;DR: This survey provides a new perspective on NAS, starting with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early algorithms, and then describing the solutions developed in subsequent research.
Book Chapter

PassGAN: A Deep Learning Approach for Password Guessing

TL;DR: Password-guessing tools such as HashCat and John the Ripper expand password dictionaries using hand-written generation rules, such as concatenation of words and digits (e.g., "password123456") and leet speak; PassGAN instead trains a GAN on leaked password sets to learn such rules automatically.
Proceedings Article

All in One Bad Weather Removal Using Architectural Search

TL;DR: This paper proposes a method that handles multiple bad-weather degradations (rain, fog, snow, and adherent raindrops) using a single network, and designs a novel adversarial learning scheme that backpropagates the loss of a degradation type only to the respective task-specific encoder.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
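For reference, this is the standard Adam update the TL;DR refers to, with gradient $g_t$, decay rates $\beta_1$ and $\beta_2$, step size $\alpha$, and a small constant $\epsilon$ for numerical stability:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, &
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \\
\hat m_t &= \frac{m_t}{1-\beta_1^t}, &
\hat v_t &= \frac{v_t}{1-\beta_2^t}, \\
\theta_t &= \theta_{t-1} - \frac{\alpha\, \hat m_t}{\sqrt{\hat v_t} + \epsilon}. &&
\end{aligned}
$$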
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network that achieved state-of-the-art performance on ImageNet classification, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
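A compact sketch of the architecture this TL;DR describes, assuming 227x227 RGB inputs; channel sizes follow the paper, but this single-stream version omits the original two-GPU split and local response normalization.

```python
# AlexNet-style network: five conv layers, some followed by max-pooling,
# then three fully-connected layers ending in 1000 class logits.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),  # 6x6 for 227x227 input
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1000),  # logits; softmax is applied in the loss
)
```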
Proceedings Article

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
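The two-player minimax game this framework defines, with generator $G$, discriminator $D$, data distribution $p_{\text{data}}$ and noise prior $p_z$, can be written as:

$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] \;+\; \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].
$$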
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
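The underlying transform normalizes each activation over the mini-batch $\mathcal{B} = \{x_1, \dots, x_m\}$ and then restores representational power with learned parameters $\gamma$ and $\beta$:

$$
\mu_{\mathcal{B}} = \frac{1}{m}\sum_{i=1}^m x_i, \qquad
\sigma_{\mathcal{B}}^2 = \frac{1}{m}\sum_{i=1}^m (x_i - \mu_{\mathcal{B}})^2, \qquad
\hat x_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}, \qquad
y_i = \gamma \hat x_i + \beta.
$$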