Open Access · Posted Content

DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

TL;DR
DAFormer, as mentioned in this paper, consists of a Transformer encoder and a multi-level context-aware feature fusion decoder for unsupervised domain adaptation (UDA), and is enabled by three simple but crucial training strategies that stabilize training and avoid overfitting to the source domain.
Abstract
As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and then propose a novel UDA method, DAFormer, based on the benchmark results. The DAFormer network consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies to stabilize the training and to avoid overfitting DAFormer to the source domain: while Rare Class Sampling on the source domain improves the quality of pseudo-labels by mitigating the confirmation bias of self-training towards common classes, the Thing-Class ImageNet Feature Distance and a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer significantly improves the state of the art by 10.8 mIoU for GTA→Cityscapes and 5.4 mIoU for Synthia→Cityscapes and enables learning even difficult classes such as train, bus, and truck well. The implementation is available at https://github.com/lhoyer/DAFormer.
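The abstract names the three strategies but not their exact form. As a rough illustration only, Rare Class Sampling can be read as biasing the choice of source images toward those containing rare classes, and the warmup as a linear learning-rate ramp. The Python sketch below renders both readings; the temperature, warmup length, and helper names are assumptions for illustration, not the paper's definitions.

```python
import random
import numpy as np

def rare_class_sampling_probs(class_freqs, temperature=0.01):
    """Map per-class pixel frequencies f_c in [0, 1] to sampling
    probabilities favouring rare classes via a softmax over (1 - f_c) / T.
    A sketch of the idea, not DAFormer's exact rule."""
    f = np.asarray(class_freqs, dtype=np.float64)
    logits = (1.0 - f) / temperature        # rarer class -> larger logit
    p = np.exp(logits - logits.max())       # numerically stable softmax
    return p / p.sum()

def sample_source_image(images_per_class, class_probs):
    """Sample a class according to its rarity, then a source image that
    contains that class (images_per_class is a hypothetical index)."""
    c = np.random.choice(len(class_probs), p=class_probs)
    return random.choice(images_per_class[c])

def warmup_lr(step, base_lr, warmup_steps=1500):
    """Linear learning-rate warmup: ramp from ~0 to base_lr, then hold."""
    return base_lr * min(1.0, (step + 1) / warmup_steps)
```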


Citations
Journal Article

SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation

TL;DR: SePiCo as discussed by the authors employs the category centroids of the entire source domain or of a single source image to guide the learning of discriminative features, making significant progress in both synthetic-to-real and daytime-to-night adaptation scenarios.
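The one-line summary compresses the mechanism. One plausible minimal reading is a prototype-style contrastive loss that classifies each pixel embedding against the class centroids; the sketch below implements that generic reading, not SePiCo's actual objective, and all names and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def centroid_contrast_loss(features, labels, centroids, tau=0.1):
    """Generic semantic-guided pixel contrast: classify each pixel
    embedding against per-class centroids (a hedged sketch, not SePiCo).
    features: (N, D) pixel embeddings; labels: (N,) class ids;
    centroids: (C, D), e.g. averaged source-domain features per class."""
    f = F.normalize(features, dim=1)
    c = F.normalize(centroids, dim=1)
    logits = f @ c.t() / tau    # cosine similarities scaled by temperature
    return F.cross_entropy(logits, labels)
```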
Journal Article

ResiDualGAN: Resize-Residual DualGAN for Cross-Domain Remote Sensing Images Semantic Segmentation

TL;DR: ResiDualGAN is proposed for remote sensing (RS) image translation: a resizer module addresses the scale discrepancy of RS datasets, and a residual connection strengthens the stability of real-to-real image translation and improves performance on cross-domain semantic segmentation tasks.
Journal Article

Context-Aware Mixup for Domain Adaptive Semantic Segmentation

TL;DR: Zhang et al. as mentioned in this paper proposed a context-aware mixup framework for domain-adaptive semantic segmentation that exploits context dependency as explicit prior knowledge in a fully end-to-end trainable manner.
Journal Article

HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation

TL;DR: HRDA as discussed by the authors proposes a multi-resolution training approach for semantic segmentation that combines the strengths of small high-resolution crops, which preserve fine segmentation details, with large low-resolution crops, which capture long-range context dependencies, using a learned scale attention.
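Read literally, the fusion step blends a prediction from a large low-resolution context crop with one from a small high-resolution detail crop via a learned attention. The sketch below shows that blending under the simplifying assumption that both predictions cover the same region; names and shapes are illustrative, not HRDA's code.

```python
import torch
import torch.nn.functional as F

def fuse_multi_resolution(pred_context, pred_detail, scale_attention):
    """Blend low-resolution context logits with high-resolution detail
    logits using a learned per-pixel attention in [0, 1].
    pred_context: (N, C, h, w); pred_detail: (N, C, H, W);
    scale_attention: (N, 1, h, w) raw attention logits."""
    a = torch.sigmoid(scale_attention)
    a = F.interpolate(a, size=pred_detail.shape[-2:], mode="bilinear",
                      align_corners=False)
    ctx = F.interpolate(pred_context, size=pred_detail.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (1.0 - a) * ctx + a * pred_detail
```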
Proceedings Article

OpenEarthMap: A Benchmark Dataset for Global High-Resolution Land Cover Mapping

TL;DR: OpenEarthMap as mentioned in this paper is a benchmark dataset for global high-resolution land cover mapping, consisting of 2.2 million segments of 5,000 aerial and satellite images covering 97 regions from 44 countries across 6 continents, with manually annotated 8-class land cover labels at a 0.25–0.5 m ground sampling distance.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
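The residual idea is compact enough to state in code: a block learns a residual mapping F(x) and adds the input back, y = F(x) + x. A minimal PyTorch sketch of a basic block with an identity shortcut (channel count and layer choices are illustrative):

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, where F is two 3x3 convolutions (identity shortcut)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut: add the input back
```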
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-German and English-to-French translation.
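The attention mechanism in question is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, as defined in the paper; a minimal sketch:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    q, k, v: (..., seq_len, d_k) tensors; scaling by sqrt(d_k)
    keeps the dot products in a range where softmax is well-behaved."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v
```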
Book Chapter

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. as discussed by the authors proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, multilayer neural networks trained with back-propagation are shown to synthesize complex decision surfaces that can classify high-dimensional patterns, such as handwritten characters, and a graph transformer network (GTN) is proposed for globally training such multi-module document recognition systems.