Open Access · Posted Content

Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation

TLDR
This paper presents a novel approach to learn domain-adaptive knowledge in models with limited memory, bestowing the model with the ability to deal with both memory constraints and domain gap issues in a comprehensive manner, and introduces a novel cross-entropy loss that leverages pseudo labels from the teacher.
Abstract
Practical autonomous driving systems face two crucial challenges: memory constraints and domain gap issues. In this paper, we present a novel approach to learn domain-adaptive knowledge in models with limited memory, thus bestowing the model with the ability to deal with these issues in a comprehensive manner. We term this "Domain Adaptive Knowledge Distillation" and address it in the context of unsupervised domain-adaptive semantic segmentation by proposing a multi-level distillation strategy to effectively distill knowledge at different levels. Further, we introduce a novel cross-entropy loss that leverages pseudo labels from the teacher. These pseudo teacher labels play a multifaceted role: (i) distilling knowledge from the teacher network to the student network, and (ii) serving as a proxy for the ground truth on target-domain images, where the problem is completely unsupervised. We introduce four paradigms for distilling domain-adaptive knowledge and carry out extensive experiments and ablation studies on real-to-real as well as synthetic-to-real scenarios. Our experiments demonstrate the profound success of our proposed method.
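
To make the teacher-pseudo-label cross-entropy concrete, a minimal PyTorch sketch is given below. The tensor shapes, the confidence threshold, and the ignore-index convention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pseudo_label_cross_entropy(student_logits, teacher_logits, conf_thresh=0.9):
    # Both logits are assumed to be (N, C, H, W) segmentation outputs.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits, dim=1)
        confidence, pseudo_labels = teacher_probs.max(dim=1)   # (N, H, W)
        # Treat low-confidence teacher pixels as 'ignore' (threshold is an assumption).
        pseudo_labels[confidence < conf_thresh] = 255
    # Student is supervised by the teacher's hard pseudo labels,
    # which also act as a proxy ground truth on target-domain images.
    return F.cross_entropy(student_logits, pseudo_labels, ignore_index=255)
```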


Citations

UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation

TL;DR: This paper proposes UM-Adapt, a unified framework to effectively perform unsupervised domain adaptation for spatially-structured prediction tasks while maintaining balanced performance across individual tasks in a multi-task setting.
Journal ArticleDOI

Adaptive Perspective Distillation for Semantic Segmentation

TL;DR: Adaptive Perspective Distillation (APD) is proposed, which creates an adaptive local perspective for each individual training sample to extract detailed contextual information specific to that sample, mining more details from the teacher and thus achieving better knowledge distillation results for the student.
Journal ArticleDOI

DistillAdapt: Source-Free Active Visual Domain Adaptation

TL;DR: The source-free approach, DistillAdapt, results in an improvement of 0 .
Journal ArticleDOI

Entropy-weighted reconstruction adversary and curriculum pseudo labeling for domain adaptation in semantic segmentation

TL;DR: In this article, an entropy-weighted adversarial framework is designed to enhance the discriminativeness and transferability of the presented model to the target domain via an autoencoder-based discriminator.
Proceedings ArticleDOI

SALAD : Source-free Active Label-Agnostic Domain Adaptation for Classification, Segmentation and Detection

TL;DR: The source-free approach, SALAD, is presented, which results in an improvement of 0.5% − 31 .
References
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
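
For intuition, here is a minimal PyTorch sketch of the "fully convolutional" idea: convolutions only, a 1x1 classifier, and upsampling back to input resolution. The layer sizes and class count are arbitrary placeholders, not the architecture from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    # Strided convolutions downsample, a 1x1 convolution acts as the
    # per-pixel classifier, and bilinear upsampling restores the input
    # resolution so the output size matches the (arbitrary) input size.
    def __init__(self, num_classes=19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        out = self.classifier(self.features(x))   # (N, classes, H/8, W/8)
        return F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
```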
Posted Content

Distilling the Knowledge in a Neural Network

TL;DR: This work shows that it can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
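
The soft-target distillation described here is commonly written as a temperature-scaled KL divergence between teacher and student distributions. The PyTorch sketch below is a generic version of that formulation; the temperature value is an arbitrary choice.

```python
import torch.nn.functional as F

def soft_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Soften both distributions with a temperature, then match them with
    # KL divergence; the T^2 factor keeps gradient magnitudes comparable
    # to a hard-label cross-entropy term.
    t = temperature
    log_student = F.log_softmax(student_logits / t, dim=1)
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```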
Journal ArticleDOI

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

TL;DR: This work addresses the task of semantic image segmentation with Deep Learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
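
A minimal PyTorch sketch of the ASPP idea follows: parallel atrous (dilated) convolutions at several rates, concatenated and fused. Channel counts and rates here are illustrative, and later DeepLab variants add image-level pooling and normalization that this sketch omits.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    # Parallel 3x3 convolutions with different dilation rates capture
    # context at multiple scales; padding equals the dilation so each
    # branch preserves the spatial size before the 1x1 fusion.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```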
Proceedings ArticleDOI

Pyramid Scene Parsing Network

TL;DR: This paper exploits global context information through different-region-based context aggregation with a pyramid pooling module in the proposed pyramid scene parsing network (PSPNet), producing high-quality results on the scene parsing task.
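
The pyramid pooling module can be sketched as pooling the feature map to a few grid sizes, projecting the channels, upsampling back, and concatenating with the original features. The bin sizes below follow the commonly cited (1, 2, 3, 6) setting; other details are simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    # Adaptive average pooling to several grid sizes gathers sub-region
    # context; 1x1 convolutions reduce channels before upsampling and
    # concatenating the pooled features with the input feature map.
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        reduced = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, reduced, kernel_size=1))
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x] + pooled, dim=1)
```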