Open Access Proceedings Article
Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.
Xun Huang, Serge Belongie
TL;DR: In this article, adaptive instance normalization (AdaIN) is proposed to align the mean and variance of the content features with those of the style features, which enables arbitrary style transfer in real-time.
Abstract:
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
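The core AdaIN operation described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name, the single-image `(C, H, W)` feature shape, and the `eps` stabilizer are assumptions; in the paper the operation is applied to VGG feature maps inside a feed-forward network.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization, sketched from the abstract.

    Normalizes each channel of the content feature map to zero mean and
    unit variance, then rescales and shifts it with the per-channel
    standard deviation and mean of the style feature map.
    Both inputs are assumed to have shape (C, H, W).
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Align content statistics with style statistics, channel by channel.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Because the operation has no learned parameters, a new style only requires new statistics, which is what makes arbitrary styles possible with a single feed-forward pass.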
Citations
Posted Content
Bayesian Hypernetworks
TL;DR: In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.
Posted Content
ACNe: Attentive Context Normalization for Robust Permutation-Equivariant Learning
TL;DR: This paper shows how to normalize the feature maps with weights that are estimated within the network, excluding outliers from this normalization, and uses this mechanism to leverage two types of attention: local and global – by combining them, the method is able to find the essential data points in high-dimensional space in order to solve a given task.
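The normalization mechanism this TL;DR describes, computing feature statistics with per-point attention weights so that outliers are excluded, can be illustrated with a small NumPy sketch. This is a simplified stand-in for ACNe's learned attention, with assumed names and an `(N, C)` feature shape; in the paper the weights are estimated by the network itself.

```python
import numpy as np

def weighted_norm(features, weights, eps=1e-5):
    """Normalize (N, C) features with per-point weights in [0, 1].

    Points with near-zero weight contribute almost nothing to the mean
    and variance, so outliers are effectively excluded from the
    normalization statistics.
    """
    w = weights / (weights.sum() + eps)              # normalized attention
    mean = (w[:, None] * features).sum(axis=0)       # weighted mean per channel
    var = (w[:, None] * (features - mean) ** 2).sum(axis=0)
    return (features - mean) / np.sqrt(var + eps)
```

With uniform weights this reduces to ordinary feature normalization; downweighting a point removes its influence on the statistics without dropping it from the output.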
Posted Content
Interactive Sketch & Fill: Multiclass Sketch-to-Image Translation
Arnab Ghosh, Richard Zhang, Puneet K. Dokania, Oliver Wang, Alexei A. Efros, Philip H. S. Torr, Eli Shechtman
TL;DR: An interactive GAN-based sketch-to-image translation method that helps novice users easily create images of simple objects and introduces a gating-based approach for class conditioning, which allows for distinct classes without feature mixing, from a single generator network.
Proceedings ArticleDOI
WarpGAN: Automatic Caricature Generation
TL;DR: WarpGAN learns to automatically predict a set of control points that warp a photo into a caricature while preserving identity, and allows the generated caricatures to be customized by controlling the exaggeration extent and the visual style.
Proceedings ArticleDOI
VQVC+: One-shot voice conversion by vector quantization and U-Net architecture
TL;DR: To further improve audio quality, the U-Net architecture is used within an auto-encoder-based voice conversion system, combined with a vector quantization (VQ) method that quantizes the latent vectors.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Book ChapterDOI
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C. Lawrence Zitnick
TL;DR: A new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding by gathering images of complex everyday scenes containing common objects in their natural context.