Proceedings ArticleDOI

Open Set Domain Adaptation

TL;DR: This work learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset.
Abstract
When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in recent years to diminish the so-called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we explore domain adaptation in open sets, a more realistic scenario where only a few categories of interest are shared between source and target data. We therefore propose a method that fits both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state of the art.
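The assignment step described in the abstract can be illustrated with a small sketch: target samples are matched one-to-one to known source classes so that total matching cost is minimal, and unmatched samples are treated as unknown. The class centers, squared-distance cost, and brute-force solver below are illustrative assumptions, not the authors' exact constrained formulation (which alternates the assignment with learning a source-to-target mapping).

```python
import random
from itertools import permutations

random.seed(0)
num_classes, dim = 3, 4

# Hypothetical data: three known source class centers, three target samples
# near those centers, and two outlier targets from unknown categories.
centers = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_classes)]
clean = [[c + 0.1 * random.gauss(0, 1) for c in center] for center in centers]
outliers = [[5.0 * random.gauss(0, 1) for _ in range(dim)] for _ in range(2)]
targets = clean + outliers

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# cost[t][c] = squared distance of target t to source class center c
cost = [[sqdist(t, c) for c in centers] for t in targets]

# Exact solution of the small assignment problem by enumeration:
# pick one distinct target per known class, minimizing total cost.
best = min(permutations(range(len(targets)), num_classes),
           key=lambda p: sum(cost[t][c] for c, t in enumerate(p)))

labels = [-1] * len(targets)          # -1 = left unassigned, i.e. "unknown"
for c, t in enumerate(best):
    labels[t] = c
```

Because the outliers lie far from every class center, the minimum-cost assignment claims the three clean targets and leaves the outliers labeled unknown.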


Citations
Posted Content

Deep Domain Generalization with Feature-norm Network.

TL;DR: This article proposes an end-to-end feature-norm network (FNN) for training with multiple source domains, with the aim of generalizing to new domains at test time without an adaptation step.
Journal ArticleDOI

Extending Partial Domain Adaptation Algorithms to the Open-Set Setting

TL;DR: It is shown that the effectiveness of ANN methods used in the partial domain adaptation (PDA) setting is hindered by outlier target instances, and an adaptation for effective open-set domain adaptation (OSDA) is proposed.
Journal ArticleDOI

Towards adaptive unknown authentication for universal domain adaptation by classifier paradox

TL;DR: In this paper, a composite classifier is jointly designed with two types of predictors: a multi-class (MC) predictor classifies samples into one of the multiple source classes, while a binary one-vs-all predictor further verifies the prediction of the MC predictor.
Book ChapterDOI

Unknown-Oriented Learning for Open Set Domain Adaptation

TL;DR: Zhang et al. propose a novel Unknown-Oriented Learning (UOL) framework for open set domain adaptation (OSDA), composed of three stages: true unknown excavation, false unknown suppression, and known alignment.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
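The 1000-way softmax head mentioned in the summary above can be sketched in a few lines. This is illustrative only: a numerically stable softmax applied to a dummy logit vector, not the trained network itself.

```python
import math

def softmax(logits):
    # Subtract the max logit before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Equal logits over 1000 classes yield a uniform distribution (1/1000 each).
probs = softmax([0.0] * 1000)
```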
Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Proceedings Article

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

TL;DR: DeCAF is an open-source implementation of deep convolutional activation features, released along with all associated network parameters to enable vision researchers to experiment with deep representations across a range of visual concept learning paradigms.
Posted Content

Learning Transferable Features with Deep Adaptation Networks

TL;DR: A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural networks to the domain adaptation scenario, can learn transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding.
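The "unbiased estimate of the kernel embedding" mentioned above refers to matching domain distributions via maximum mean discrepancy (MMD). A minimal sketch of a linear-time unbiased MMD estimator with a Gaussian kernel follows; the kernel width and the synthetic data are illustrative assumptions, not DAN's actual multi-kernel setup.

```python
import math
import random

def rbf(a, b, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors.
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def mmd_linear(xs, ys, gamma=0.5):
    # Linear-time unbiased MMD estimate: average, over disjoint sample pairs,
    # h = k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1).
    n = min(len(xs), len(ys)) // 2
    h = [rbf(xs[2 * i], xs[2 * i + 1], gamma) + rbf(ys[2 * i], ys[2 * i + 1], gamma)
         - rbf(xs[2 * i], ys[2 * i + 1], gamma) - rbf(xs[2 * i + 1], ys[2 * i], gamma)
         for i in range(n)]
    return sum(h) / n

random.seed(0)
dim = 3
source = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(500)]
same = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(500)]
shifted = [[random.gauss(0, 1) + 2.0 for _ in range(dim)] for _ in range(500)]
```

Samples drawn from the same distribution give an MMD estimate near zero, while a domain shift (here, a mean offset) produces a clearly positive value, which is the quantity such methods minimize during training.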