Open Access Proceedings Article

Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation

TL;DR
In this article, a multi-level distillation strategy is proposed to effectively distil knowledge at different levels, and a novel cross entropy loss is introduced to leverage pseudo labels from the teacher.
Abstract
Practical autonomous driving systems face two crucial challenges: memory constraints and domain gap issues. In this paper, we present a novel approach to learn domain adaptive knowledge in models with limited memory, equipping the model to deal with both issues in a comprehensive manner. We term this "Domain Adaptive Knowledge Distillation" and address it in the context of unsupervised domain-adaptive semantic segmentation by proposing a multi-level distillation strategy to effectively distil knowledge at different levels. Further, we introduce a novel cross entropy loss that leverages pseudo labels from the teacher. These pseudo teacher labels play a multifaceted role: (i) distilling knowledge from the teacher network to the student network, and (ii) serving as a proxy for the ground truth on target-domain images, where the problem is completely unsupervised. We introduce four paradigms for distilling domain adaptive knowledge and carry out extensive experiments and ablation studies on real-to-real as well as synthetic-to-real scenarios. Our experiments demonstrate the effectiveness of the proposed method.
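To make the pseudo-teacher-label idea concrete, here is a minimal PyTorch sketch (not the authors' code: the function name pseudo_label_ce_loss and the confidence threshold conf_thresh are illustrative assumptions, and the paper's multi-level strategy distils at several network levels rather than only at the output):

```python
import torch
import torch.nn.functional as F

def pseudo_label_ce_loss(student_logits, teacher_logits, conf_thresh=0.9):
    """Cross entropy between student predictions and the teacher's argmax
    pseudo labels; low-confidence teacher pixels are ignored."""
    with torch.no_grad():
        probs = F.softmax(teacher_logits, dim=1)   # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)            # both (B, H, W)
        pseudo[conf < conf_thresh] = -100          # default ignore_index
    return F.cross_entropy(student_logits, pseudo, ignore_index=-100)
```

On target-domain images, where no ground truth is available, such a loss lets the teacher's confident predictions stand in for labels while simultaneously distilling the teacher's knowledge into the student.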



Citations
Journal Article

Confidence-and-Refinement Adaptation Model for Cross-Domain Semantic Segmentation

TL;DR: Proposes a novel multi-level UDA model, the Confidence-and-Refinement Adaptation Model (CRAM), comprising a confidence-aware entropy alignment (CEA) module and a style feature alignment (SFA) module; CRAM achieves performance comparable to existing state-of-the-art works while being simpler and faster to converge.
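As a hedged sketch of how a confidence-aware entropy term might be computed (illustrative only; the actual CEA module in CRAM differs in detail, and names such as confidence_weighted_entropy and conf_thresh are assumptions):

```python
import torch
import torch.nn.functional as F

def confidence_weighted_entropy(logits, conf_thresh=0.8):
    """Pixel-wise prediction entropy, kept only where the model is confident."""
    probs = F.softmax(logits, dim=1)                          # (B, C, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (B, H, W)
    mask = (probs.max(dim=1).values > conf_thresh).float()    # trusted pixels
    return (mask * entropy).mean()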
Journal Article

Robust Semantic Segmentation With Multi-Teacher Knowledge Distillation

TL;DR: In this paper, a multi-teacher knowledge distillation (KD) framework is proposed to address the time-consuming annotation task in semantic segmentation, through which a teacher trained on a single dataset can be leveraged to annotate unlabeled data.
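A minimal sketch, assuming an ensemble-style use of the teachers (the function multi_teacher_pseudo_labels and the softmax-averaging step are illustrative, not the paper's exact mechanism):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_teacher_pseudo_labels(teachers, images):
    """Average the teachers' softmax maps; take the per-pixel argmax as labels."""
    probs = torch.stack([F.softmax(t(images), dim=1) for t in teachers])
    return probs.mean(dim=0).argmax(dim=1)  # (B, H, W) label map
```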
Proceedings Article

Multi-Domain Incremental Learning for Semantic Segmentation

TL;DR: In this paper, the authors propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features present in all domains, while dedicated domain-specific parameters learn the statistics of each domain.
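A loose sketch of the shared/domain-specific parameter split (names such as MultiDomainSeg and feat_channels are assumptions; the paper's dynamic architecture is more elaborate):

```python
import torch.nn as nn

class MultiDomainSeg(nn.Module):
    """Shared backbone plus one lightweight segmentation head per domain."""
    def __init__(self, backbone, feat_channels, num_classes, domains):
        super().__init__()
        self.backbone = backbone                    # universally shared
        self.heads = nn.ModuleDict({                # domain-specific
            d: nn.Conv2d(feat_channels, num_classes, kernel_size=1)
            for d in domains
        })

    def forward(self, x, domain):
        return self.heads[domain](self.backbone(x))
```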
Book Chapter

Neural Network Compression Through Shunt Connections and Knowledge Distillation for Semantic Segmentation Problems

TL;DR: Shunt connections are used in this article to compress MobileNet for segmentation tasks on the Cityscapes dataset, achieving 28% compression at the cost of a 3.52-point drop in mIoU.
References
Patent

Training constrained deconvolutional networks for road scene semantic segmentation

TL;DR: In this article, a source deconvolutional network is adaptively trained to perform semantic segmentation; the same image data and the measured outputs of the source network are then used to train a target deconvolutional network.
Posted Content

Pseudo-Labeling Curriculum for Unsupervised Domain Adaptation

TL;DR: The authors propose a pseudo-label curriculum based on a density-based clustering algorithm to learn target-discriminative representations, progressively improving the network's capability to generate reliable pseudo-labels.
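A rough sketch of the curriculum idea, substituting a simple confidence-threshold schedule for the paper's density-based clustering (the function curriculum_mask, the stage parameter, and the thresholds themselves are all illustrative assumptions):

```python
import torch.nn.functional as F

def curriculum_mask(logits, stage, thresholds=(0.95, 0.85, 0.70)):
    """Boolean mask of target pixels admitted for pseudo-labeling at `stage`;
    later stages relax the threshold so more samples enter training."""
    conf = F.softmax(logits, dim=1).max(dim=1).values
    return conf > thresholds[min(stage, len(thresholds) - 1)]
```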