Open Access · Proceedings Article (DOI)

DLOW: Domain Flow for Adaptation and Generalization

TL;DR
A domain flow generation model that bridges two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other; its effectiveness is demonstrated on both cross-domain semantic segmentation and style generalization tasks on benchmark datasets.
Abstract
In this work, we present a domain flow generation (DLOW) model to bridge two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other. The benefits of our DLOW model are two-fold. First, it is able to transfer source images into different styles in the intermediate domains. The transferred images smoothly bridge the gap between the source and target domains, thus easing the domain adaptation task. Second, when multiple target domains are provided for training, our DLOW model is also able to generate new styles of images that are unseen in the training data. We implement our DLOW model based on CycleGAN. A domainness variable is introduced to guide the model to generate the desired intermediate domain images. In the inference phase, a flow of various styles of images can be obtained by varying the domainness variable. We demonstrate the effectiveness of our model on both cross-domain semantic segmentation and style generalization tasks on benchmark datasets. Our implementation is available at https://github.com/ETHRuiGong/DLOW.
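As a minimal illustration of this mechanism, the sketch below conditions a toy generator on a scalar domainness variable z ∈ [0, 1], so that sweeping z yields a flow of intermediate styles. The FiLM-style conditioning and all names here are assumptions made for the example, not the authors' architecture.

```python
# Illustrative sketch (not the authors' exact architecture): a generator
# conditioned on a scalar domainness variable z in [0, 1], where z = 0
# should reproduce the source style and z = 1 the target style.
import torch
import torch.nn as nn

class DomainnessGenerator(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Hypothetical conditioning: map z to a per-channel scale and shift.
        self.film = nn.Linear(1, 2 * channels)
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        h = self.encode(x)
        scale, shift = self.film(z.view(-1, 1)).chunk(2, dim=1)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.tanh(self.decode(h))

g = DomainnessGenerator()
x = torch.randn(1, 3, 64, 64)
# Sweep the domainness variable to obtain a flow of intermediate styles.
flow = [g(x, torch.tensor([z])) for z in (0.0, 0.25, 0.5, 0.75, 1.0)]
```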



Citations
Proceedings Article

Confidence Regularized Self-Training

TL;DR: A confidence regularized self-training (CRST) framework that treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization.
Posted Content

Confidence Regularized Self-Training.

TL;DR: A confidence regularized self-training (CRST) framework, formulated as regularized self-training, that treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization, with two types of confidence regularization: label regularization (LR) and model regularization (MR).
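To make the idea concrete, here is a minimal sketch in the spirit of the model-regularization variant: a pseudo-label cross-entropy plus an output-entropy term that discourages over-confident predictions. The weight alpha and the hard argmax pseudo-labels are illustrative simplifications of the paper's continuous formulation.

```python
# Illustrative sketch of confidence-regularized self-training (simplified).
import torch
import torch.nn.functional as F

def crst_style_loss(logits: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    probs = F.softmax(logits, dim=1)
    # Hard pseudo-labels from the model's own predictions (self-training).
    pseudo = probs.argmax(dim=1).detach()
    ce = F.cross_entropy(logits, pseudo)
    # Confidence regularizer: subtracting the entropy penalizes peaked
    # (over-confident) output distributions.
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()
    return ce - alpha * entropy
```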
Proceedings Article

FDA: Fourier Domain Adaptation for Semantic Segmentation

TL;DR: A simple method for unsupervised domain adaptation that reduces the discrepancy between the source and target distributions by swapping the low-frequency spectrum of one with the other; the results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
Posted Content

FDA: Fourier Domain Adaptation for Semantic Segmentation

TL;DR: In this paper, the Fourier transform and its inverse are used for unsupervised domain adaptation, reducing the discrepancy between the source and target distributions by swapping the low-frequency spectrum of one with the other.
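The low-frequency swap is simple enough to sketch directly. The version below assumes float images of identical shape (H, W, C) and a small band parameter beta controlling how much of the spectrum is swapped; the official implementation differs in details, but like the paper it swaps amplitude only and keeps the source phase.

```python
# Minimal sketch of an FDA-style low-frequency amplitude swap.
import numpy as np

def fda_source_to_target(src: np.ndarray, trg: np.ndarray,
                         beta: float = 0.01) -> np.ndarray:
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_trg = np.fft.fft2(trg, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)
    # Centre the spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    # Replace the low-frequency amplitude of the source with the target's.
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_trg[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    # Recombine swapped amplitude with the original source phase.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)
```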
Proceedings Article

Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation

TL;DR: ProDA aligns prototypical assignments based on relative feature distances for two different views of the same target, producing a more compact target feature space; distilling the already-learned knowledge to a self-supervised pretrained model further boosts performance.
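As a rough illustration of the prototype idea (not ProDA's full method): pseudo-labels can be softened by each feature's relative distance to class prototypes, so samples far from their class centroid contribute less. The prototype source and the temperature tau below are assumptions made for the example.

```python
# Illustrative sketch: distance-to-prototype soft weights for pseudo-labels.
import torch
import torch.nn.functional as F

def prototype_weights(features: torch.Tensor, prototypes: torch.Tensor,
                      tau: float = 1.0) -> torch.Tensor:
    # features: (N, D); prototypes: (K, D), e.g. per-class feature centroids.
    dists = torch.cdist(features, prototypes)  # (N, K) pairwise distances
    return F.softmax(-dists / tau, dim=1)      # (N, K) soft assignment weights
```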
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: A residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
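The core building block is compact enough to show directly: the layers learn a residual F(x) that is added back to an identity shortcut, a minimal sketch of which follows.

```python
# A minimal residual block in the spirit of the paper.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut: the block only needs to learn the residual.
        return self.relu(self.body(x) + x)
```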
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
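In code, the adversarial objective reduces to two small losses: D maximizes its ability to tell real from generated samples, while G minimizes it (shown here with the usual non-saturating generator loss).

```python
# Minimal sketch of the adversarial losses for D and G.
import torch
import torch.nn.functional as F

def d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # D should output high logits for real samples and low logits for fakes.
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Non-saturating form: G tries to make D classify fakes as real.
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```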
Proceedings Article

Image-to-Image Translation with Conditional Adversarial Networks

TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
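In symbols, the standard statement of that conditional objective combines an adversarial term with an L1 reconstruction term (λ weights the reconstruction): L = E_{x,y}[log D(x, y)] + E_x[log(1 − D(x, G(x)))] + λ E_{x,y}[‖y − G(x)‖₁].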
Journal Article

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

TL;DR: This work addresses semantic image segmentation with deep learning, proposing atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
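A minimal ASPP-style module looks as follows: parallel atrous (dilated) convolutions with different rates capture context at multiple scales, and their outputs are fused by summation, as in DeepLab v2. The rates below match the paper's large-field-of-view setting, but the module is a simplified sketch.

```python
# Minimal ASPP-style module with parallel dilated convolutions.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the multi-scale branches by summing their score maps.
        return sum(b(x) for b in self.branches)
```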
Proceedings Article

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

TL;DR: CycleGAN learns a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y, using an adversarial loss.
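The adversarial loss is supplemented by a cycle-consistency term: translating to the other domain and back should recover the input, i.e. F(G(x)) ≈ x and G(F(y)) ≈ y. A minimal sketch:

```python
# Minimal sketch of CycleGAN's cycle-consistency loss.
import torch
import torch.nn.functional as F

def cycle_loss(x, y, G, Fn):
    # G : X -> Y, Fn : Y -> X (named Fn to avoid clashing with functional F).
    return F.l1_loss(Fn(G(x)), x) + F.l1_loss(G(Fn(y)), y)
```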