Open Access · Proceedings Article

Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.

TLDR
In this article, adaptive instance normalization (AdaIN) is proposed to align the mean and variance of the content features with those of the style features, which enables arbitrary style transfer in real-time.
Abstract
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
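The abstract describes AdaIN as aligning the per-channel mean and variance of the content features with those of the style features, and mentions a content-style trade-off control. Below is a minimal NumPy sketch of that operation under the assumption of (C, H, W) feature maps; the function names, shapes, and the `alpha` parameter are illustrative, not the authors' reference implementation.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization (AdaIN): normalize the content feature
    per channel, then re-scale and re-shift it with the style feature's
    per-channel statistics. Feature maps are assumed to be (C, H, W)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)

    normalized = (content_feat - c_mean) / c_std      # instance-normalize content
    return s_std * normalized + s_mean                 # apply style statistics

def adain_tradeoff(content_feat, style_feat, alpha=1.0):
    """Content-style trade-off as described in the abstract: interpolate between
    the content feature (alpha=0) and the full AdaIN output (alpha=1)."""
    t = adain(content_feat, style_feat)
    return alpha * t + (1.0 - alpha) * content_feat
```

In the full method these statistics are computed on encoder (e.g. VGG) feature maps and the result is passed to a trained decoder; the sketch above only shows the statistic-alignment step itself.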


Citations
Posted Content

Frequency Domain Image Translation: More Photo-realistic, Better Identity-preserving

TL;DR: A novel frequency domain image translation (FDIT) framework exploits frequency information to enhance the image generation process, effectively preserving the identity of the source image and producing photo-realistic images.
Posted Content

Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations.

TL;DR: This paper presents the novel Unsupervised BatchNorm Adaptation (UBNA) method, which adapts a given pre-trained model to an unseen target domain without using any source domain representations, and which can be applied in an online setting or using just a few unlabeled images from the target domain in a few-shot manner.
Posted Content

Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction

TL;DR: This work proposes the discrete residual flow network (DRF-Net), a convolutional neural network for human motion prediction that captures the uncertainty inherent in long-range motion forecasting and effectively models multimodal posteriors over future human motion by predicting and updating a discretized distribution over spatial locations.
Journal ArticleDOI

The Synthesis of Unpaired Underwater Images Using a Multistyle Generative Adversarial Network

TL;DR: A trainable end-to-end underwater multistyle generative adversarial network (UMGAN) that combines a cycle-consistent adversarial network (CycleGAN) with conditional generative adversarial networks to address the lack of underwater ground truth.
Proceedings ArticleDOI

Semantic Attribute Matching Networks

TL;DR: The authors propose a semantic attribute matching loss based on the similarity between an attribute-transferred source feature and a warped target feature, which is used to synthesize attribute-transferred images from the learned correspondences.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
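The TL;DR above summarizes Adam as first-order gradient-based optimization driven by adaptive estimates of lower-order moments. A minimal sketch of a single Adam update follows, assuming NumPy arrays for the parameters and gradient; the function name and calling convention are illustrative, while the hyperparameter defaults follow the paper's suggestions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (t is the 1-based step count).
    Maintains exponential moving averages of the gradient (first moment)
    and of its element-wise square (second moment), with bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```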
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Book ChapterDOI

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset that aims to advance the state of the art in object recognition by placing it in the broader context of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.