Semantic Image Synthesis With Spatially-Adaptive Normalization
Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu +3 more
pp. 2337–2346
TL;DR
Spatially-adaptive normalization is proposed, a simple but effective layer for synthesizing photorealistic images given an input semantic layout, which allows users to easily control the style and content of image synthesis results as well as create multi-modal results.
Abstract:
We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the network, forcing the network to memorize the information throughout all the layers. Instead, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned affine transformation. Experiments on several challenging datasets demonstrate the superiority of our method compared to existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows users to easily control the style and content of image synthesis results as well as create multi-modal results. Code is available upon publication.
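The core idea — predicting spatially-varying scale and shift parameters from the semantic layout and using them to modulate normalized activations — can be sketched in NumPy. This is an illustrative minimal sketch, not the authors' implementation: the function name `spade`, the instance-norm-style statistics, and the per-class weight matrices (standing in for the paper's small convolutional network) are assumptions.

```python
import numpy as np

def spade(x, segmap, w_gamma, w_beta, eps=1e-5):
    """Sketch of a SPADE-style layer.

    x: (C, H, W) activations; segmap: (K, H, W) one-hot semantic layout;
    w_gamma, w_beta: (K, C) per-class modulation parameters (a stand-in
    for the 1x1 convolutions that predict gamma and beta from the layout).
    """
    # Normalize each channel of the activations (instance-norm-style stats).
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    # Per-pixel affine parameters predicted from the layout:
    # contract (C, K) with (K, H, W) -> (C, H, W).
    gamma = np.tensordot(w_gamma.T, segmap, axes=([1], [0]))
    beta = np.tensordot(w_beta.T, segmap, axes=([1], [0]))
    # Unlike spatially-uniform conditional normalization, gamma and beta
    # differ per pixel according to the semantic class at that location.
    return gamma * x_norm + beta
```

The key contrast with ordinary conditional normalization is that `gamma` and `beta` are maps of the same spatial size as the activations, so pixels belonging to different semantic classes receive different affine transformations.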
Citations
Proceedings ArticleDOI
Adaptive Convolutions for Structure-Aware Style Transfer
TL;DR: In this paper, the authors propose Adaptive Convolutions (AdaConv), a generic extension of AdaIN, to allow for the simultaneous transfer of both statistical and structural styles in real time.
Journal ArticleDOI
Generative Adversarial Networks for Image Augmentation in Agriculture: A Systematic Review
TL;DR: This article provides an overview of the evolution of GAN architectures, followed by a systematic review of their application to agriculture, covering various vision tasks for plant health, weeds, fruits, aquaculture, animal farming, and plant phenotyping, as well as postharvest detection of fruit defects.
Proceedings ArticleDOI
TrustMAE: A Noise-Resilient Defect Classification Framework using Memory-Augmented Auto-Encoders with Trust Regions
TL;DR: The authors propose a trust-region memory-updating scheme that keeps noise away from the memory slots, allowing the model to reconstruct defect-free images and identify defective regions using a perceptual distance network.
Proceedings Article
One-shot Face Reenactment
TL;DR: This work proposes a novel one-shot face reenactment learning framework that achieves superior transfer fidelity and identity-preserving capability compared to alternatives, and achieves results competitive with methods that use a set of target images.
Proceedings ArticleDOI
On Feature Normalization and Data Augmentation
TL;DR: In this paper, the authors propose Moment Exchange, an implicit data augmentation method that encourages recognition models to also utilize moment information, by replacing the moments of the learned features of one training image with those of another and interpolating the target labels.
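The feature-level operation described in this TL;DR can be sketched directly: swap the per-channel moments of one image's features for those of another. This is a hedged sketch under the assumption of channel-first feature maps and per-channel spatial statistics; the function name `moment_exchange` is illustrative (the accompanying label interpolation is noted but not implemented here).

```python
import numpy as np

def moment_exchange(f_a, f_b, eps=1e-5):
    """Replace the per-channel moments of features f_a with those of f_b.

    f_a, f_b: (C, H, W) feature maps. In the full method, the target labels
    of the two images would also be interpolated; that step is omitted here.
    """
    mu_a = f_a.mean(axis=(1, 2), keepdims=True)
    sig_a = f_a.std(axis=(1, 2), keepdims=True)
    mu_b = f_b.mean(axis=(1, 2), keepdims=True)
    sig_b = f_b.std(axis=(1, 2), keepdims=True)
    # Normalize f_a, then re-inject the moments of f_b.
    return sig_b * (f_a - mu_a) / (sig_a + eps) + mu_b
```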
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
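The residual formulation is simple to state in code: rather than learning a target mapping H(x) directly, a block learns a residual F(x) and outputs F(x) + x via an identity shortcut. A minimal sketch, where `transform` stands in for the block's convolutional layers (an assumption for illustration):

```python
import numpy as np

def residual_block(x, transform):
    """Residual connection: output F(x) + x, with an identity shortcut.

    transform: a callable standing in for the block's learned layers F.
    """
    return transform(x) + x
```

With an identity shortcut, a block can fall back to the identity mapping simply by driving F toward zero, which is part of why very deep stacks of such blocks remain trainable.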
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba +1 more
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
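The "adaptive estimates of lower-order moments" in this TL;DR refer to exponential moving averages of the gradient and its element-wise square, with bias correction. A minimal sketch of a single Adam update, using the paper's conventional hyperparameter names (this is illustrative, not a drop-in optimizer):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. t is the 1-indexed step count."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Because the update divides the first moment by the root of the second, the effective per-parameter step size is roughly bounded by `lr`, regardless of the raw gradient magnitude.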
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The state-of-the-art performance of CNNs was achieved by Deep Convolutional Neural Networks (DCNNs) as discussed by the authors, which consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio +7 more
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
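The adversarial process described here is formalized in the cited paper as a two-player minimax game over a value function:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to assign high probability to real samples and low probability to generated ones, while G is trained to make D's task as hard as possible.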
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy +1 more
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
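Batch normalization — the technique that SPADE's spatially-adaptive layer builds on and generalizes — normalizes each feature using statistics computed over the mini-batch, then applies a learned per-feature affine transform. A minimal training-time sketch for 2-D inputs of shape (batch, features); the paper's full algorithm also tracks running statistics for inference, which is omitted here:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for x of shape (batch, features).

    gamma, beta: learned per-feature scale and shift parameters.
    """
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize to ~zero mean, unit var
    return gamma * x_hat + beta
```

Note the contrast with SPADE: here `gamma` and `beta` are single per-feature values applied uniformly across all positions, whereas SPADE predicts them per pixel from the semantic layout.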