Proceedings ArticleDOI

Open Set Domain Adaptation

TLDR
This work learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset.
Abstract
When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in recent years to diminish the so-called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore domain adaptation in open sets, a more realistic scenario where only a few categories of interest are shared between source and target data. We therefore propose a method that fits both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.
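
To make the idea concrete, below is a minimal numpy sketch of the alternating scheme the abstract describes: an assignment step that labels target instances as one of the shared source classes or as unknown, followed by a least-squares update of a linear source-to-target mapping. The nearest-centre rule with a quantile-based rejection threshold, the identity initialisation of W, and the fixed iteration count are simplifications assumed here; the paper itself formulates the assignment step as a binary linear program.

```python
import numpy as np

def open_set_da_sketch(Xs, ys, Xt, num_iters=10, reject_quantile=0.9):
    """Illustrative sketch of the alternating scheme described in the abstract:
    (1) assign target samples to source classes or to 'unknown',
    (2) re-fit a linear map W from the source to the target domain.
    The real method solves a binary linear program for step (1); here a
    nearest-centre rule with a rejection threshold stands in for it."""
    classes = np.unique(ys)
    centers = np.stack([Xs[ys == c].mean(axis=0) for c in classes])  # source class centres
    d = Xs.shape[1]
    W = np.eye(d)                                    # start from the identity mapping

    for _ in range(num_iters):
        # Assignment step: label target points that plausibly belong to a source class.
        mapped = centers @ W.T                       # class centres mapped into the target domain
        dist = np.linalg.norm(Xt[:, None, :] - mapped[None, :, :], axis=2)
        nearest = dist.argmin(axis=1)
        nearest_dist = dist.min(axis=1)
        tau = np.quantile(nearest_dist, reject_quantile)   # crude outlier threshold
        known = nearest_dist <= tau                  # everything else is treated as 'unknown'

        # Mapping step: least-squares fit so that W @ centre ~ mean of its assigned targets.
        src, tgt = [], []
        for k, c in enumerate(classes):
            sel = known & (nearest == k)
            if sel.any():
                src.append(centers[k])
                tgt.append(Xt[sel].mean(axis=0))
        src, tgt = np.stack(src), np.stack(tgt)
        W = np.linalg.lstsq(src, tgt, rcond=None)[0].T   # solves min ||src @ W.T - tgt||^2

    labels = np.where(known, classes[nearest], -1)       # -1 marks 'unknown' target samples
    return W, labels
```
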


Citations
Posted Content

Reducing Network Agnostophobia

TL;DR: In this paper, Entropic Open-Set and Objectosphere losses are proposed to maximize entropy for unknown inputs while increasing separation in deep feature space by modifying the feature magnitudes of known and unknown samples.
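
As a rough illustration of the two losses named in this summary, the PyTorch sketch below implements their commonly described forms: cross-entropy for known classes with a maximum-entropy (uniform softmax) target for unknown inputs, plus a magnitude term that shrinks unknown features and enforces a minimum norm xi on known ones. The convention that unknown samples carry label -1 and the value xi=50.0 are assumptions made for this sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def entropic_openset_loss(logits, targets):
    """Sketch of the Entropic Open-Set loss: cross-entropy on known samples,
    uniform soft targets (maximum entropy) on unknowns marked with label -1."""
    log_p = F.log_softmax(logits, dim=1)
    known = targets >= 0
    loss = torch.zeros(logits.size(0), device=logits.device)
    if known.any():
        loss[known] = F.nll_loss(log_p[known], targets[known], reduction="none")
    if (~known).any():
        loss[~known] = -log_p[~known].mean(dim=1)   # push unknowns toward a uniform softmax
    return loss.mean()

def objectosphere_penalty(features, targets, xi=50.0):
    """Sketch of the Objectosphere term: unknowns are pushed toward zero feature
    magnitude, knowns toward a magnitude of at least xi (xi chosen arbitrarily here)."""
    norms = features.norm(dim=1)
    known = targets >= 0
    pen = torch.where(known, torch.clamp(xi - norms, min=0.0) ** 2, norms ** 2)
    return pen.mean()
```
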
Journal ArticleDOI

Disjoint Label Space Transfer Learning with Common Factorised Space

TL;DR: A unified approach to transfer learning is presented that addresses several source and target domain label-space and annotation assumptions with a single model, which outperforms alternatives in both unsupervised and semi-supervised settings.
Journal ArticleDOI

Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification

TL;DR: A dual-refinement method is proposed that jointly refines pseudo labels in the off-line clustering phase and features in the on-line training phase, boosting label purity and feature discriminability in the target domain for more reliable re-ID.
Proceedings Article

Progressive Graph Learning for Open-Set Domain Adaptation

TL;DR: This paper introduces an end-to-end Progressive Graph Learning (PGL) framework in which a graph neural network with episodic training is integrated to suppress the underlying conditional shift, and adversarial learning is adopted to close the gap between the source and target distributions.
Journal ArticleDOI

Domain Generalization: A Survey

TL;DR: Domain generalization (DG) aims to achieve out-of-distribution (OOD) generalization by using only source data for model learning, a capability natural to humans yet challenging for machines to reproduce.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art ImageNet classification performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
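
A PyTorch sketch of the described architecture is given below: five convolutional layers, max-pooling after the first, second, and fifth, and three fully-connected layers ending in a 1000-way output. Local response normalisation, dropout, and the original two-GPU split are omitted; the layer sizes follow the commonly cited configuration for 227x227 inputs and are assumptions of this sketch rather than details stated in the summary above.

```python
import torch.nn as nn

# Five conv layers (pooling after conv1, conv2, conv5) and three FC layers,
# mirroring the commonly cited AlexNet configuration; LRN and dropout omitted.
alexnet_sketch = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),   # spatial size 6x6 for 227x227 inputs
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),          # 1000-way output; softmax is applied inside the loss
)
```
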
Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
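
As a usage illustration (not taken from the paper), scikit-learn's SVC, which is built on LIBSVM, exercises the same features the summary lists: multiclass classification, probability estimates, and parameter selection. The digits dataset and the parameter grid are arbitrary choices for this example.

```python
from sklearn import datasets
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC  # scikit-learn's SVC wraps the LIBSVM solver

X, y = datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Parameter selection via cross-validated grid search over C and gamma.
grid = GridSearchCV(SVC(kernel="rbf", probability=True),
                    {"C": [1, 10, 100], "gamma": ["scale", 1e-3, 1e-4]}, cv=3)
grid.fit(X_train, y_train)

print(grid.best_params_)
print(grid.score(X_test, y_test))        # multiclass accuracy (one-vs-one internally)
print(grid.predict_proba(X_test[:1]))    # Platt-style probability estimates
```
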
Proceedings Article

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

TL;DR: DeCAF, an open-source implementation of deep convolutional activation features along with all associated network parameters, is released to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
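
The sketch below illustrates the DeCAF recipe of reusing activations from a pretrained convolutional network as generic features for a downstream classifier. Using torchvision's pretrained AlexNet and its penultimate fully-connected layer is an assumption standing in for the original DeCAF network and layer choice.

```python
import torch
from torchvision import models

# A pretrained torchvision AlexNet stands in for the original DeCAF network
# (an assumption for illustration); activations of the penultimate fully-connected
# layer ("fc7"-style, 4096-d) serve as a generic image representation.
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(
    net.features, net.avgpool, torch.nn.Flatten(),
    *list(net.classifier.children())[:-1],        # drop the final 1000-way layer
)

@torch.no_grad()
def deep_features(images):
    """images: (N, 3, 224, 224) tensors normalized with the ImageNet statistics."""
    return feature_extractor(images).numpy()

# Downstream use: train a simple linear classifier on the fixed deep features, e.g.
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression(max_iter=1000).fit(deep_features(train_imgs), train_labels)
```
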
Posted Content

Learning Transferable Features with Deep Adaptation Networks

TL;DR: A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural networks to the domain adaptation scenario, learns transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding.
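
To illustrate the kind of penalty DAN adds, the sketch below computes a multi-kernel (Gaussian) MMD between source and target feature batches; in DAN such a penalty is applied to the task-specific layers alongside the classification loss. The quadratic-time biased estimate and the bandwidth set used here are simplifications of this sketch; the paper itself relies on a linear-time unbiased estimator of the kernel embedding.

```python
import torch

def gaussian_mmd(xs, xt, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD^2 estimate between source and target feature batches (sketch).
    Biased, quadratic-time version; the bandwidths are an arbitrary illustrative choice."""
    x = torch.cat([xs, xt], dim=0)
    d2 = torch.cdist(x, x).pow(2)                      # pairwise squared distances
    k = sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    n = xs.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

# Schematic training objective: classification loss plus a weighted MMD penalty on
# the adapted layers, e.g.  loss = ce_loss + lam * gaussian_mmd(feat_src, feat_tgt)
```
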