Open Access Proceedings ArticleDOI

Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation

TL;DR
In this article, a weighted maximum mean discrepancy (MMD) model is proposed to exploit the class prior probabilities on the source and target domains; the challenge is that class labels in the target domain are unavailable.
Abstract
In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of the source and target domains. However, existing MMD-based domain adaptation methods generally ignore changes in class prior distributions, i.e., class weight bias across domains. This remains an open yet ubiquitous problem in domain adaptation, and it can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias, which results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD to exploit the class prior probabilities on the source and target domains; the challenge is that class labels in the target domain are unavailable. To address this, our weighted MMD model introduces an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested that alternates between assigning pseudo-labels, estimating the auxiliary weights, and updating the model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.
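The core idea above can be illustrated with a minimal numpy sketch of the weighted MMD estimate. This is not the authors' implementation: the Gaussian kernel, the bandwidth `sigma`, and passing an estimated target class prior directly (rather than deriving it from pseudo-labels inside a CEM loop) are illustrative assumptions. Each source sample is reweighted by `alpha_c = p_t(c) / p_s(c)` so that the reweighted source class distribution matches the target class prior before computing the MMD.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_mmd2(Xs, ys, Xt, target_class_prior, sigma=1.0):
    """Biased estimate of the squared weighted MMD between a labelled
    source sample (Xs, ys) and an unlabelled target sample Xt.

    target_class_prior maps class label -> p_t(c); in the paper this
    would be estimated from pseudo-labels, here it is supplied directly
    (a hypothetical simplification for illustration).
    """
    classes, counts = np.unique(ys, return_counts=True)
    p_s = counts / len(ys)  # empirical source class prior p_s(c)
    # Class-specific auxiliary weight alpha_c = p_t(c) / p_s(c).
    alpha = {c: target_class_prior[c] / p for c, p in zip(classes, p_s)}
    w = np.array([alpha[c] for c in ys]) / len(ys)  # per-sample source weights
    v = np.full(len(Xt), 1.0 / len(Xt))             # uniform target weights
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    # ||sum_i w_i phi(x_i^s) - sum_j v_j phi(x_j^t)||^2 in the RKHS.
    return w @ Kss @ w + v @ Ktt @ v - 2.0 * (w @ Kst @ v)
```

When the target class prior equals the source prior, all weights reduce to 1 and the estimate coincides with standard (unweighted) MMD, which is the degenerate case the paper argues ignores class weight bias.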



Citations
Journal ArticleDOI

A Comprehensive Survey on Transfer Learning

TL;DR: Transfer learning aims to improve the performance of target learners on target domains by transferring the knowledge contained in different but related source domains, thereby reducing the dependence on large amounts of target-domain data when constructing target learners.
Proceedings ArticleDOI

ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation

TL;DR: This work proposes two novel, complementary methods for unsupervised domain adaptation in semantic segmentation, both based on the entropy of the pixel-wise predictions: (i) a direct entropy loss and (ii) an adversarial loss.
Book ChapterDOI

Generalizing A Person Retrieval Model Hetero- and Homogeneously

TL;DR: A Hetero-Homogeneous Learning (HHL) method is introduced that enforces two properties simultaneously: camera invariance, learned via positive pairs formed by unlabeled target images and their camera-style-transferred counterparts; and domain connectedness, implemented by homogeneous learning.
Proceedings ArticleDOI

Invariance Matters: Exemplar Memory for Domain Adaptive Person Re-Identification

TL;DR: This work proposes an exemplar memory that stores features of the target domain and accommodates three invariance properties: exemplar-invariance, camera invariance, and neighborhood invariance.
Proceedings ArticleDOI

Image to Image Translation for Domain Adaptation

TL;DR: This work proposes the novel use of a recently proposed unpaired image-to-image translation framework to constrain the features extracted by the backbone encoder network, and applies it to domain adaptation between the MNIST, USPS, and SVHN datasets and the Amazon, Webcam, and DSLR Office datasets for classification, as well as between the GTA5 and Cityscapes datasets for a segmentation task.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art image classification performance is achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.