Proceedings ArticleDOI

Open Set Domain Adaptation

TLDR
This work learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset.
Abstract
When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in recent years to diminish the so-called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.
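The core of the approach is an alternation between labeling target samples and estimating a source-to-target mapping. The numpy sketch below illustrates that alternation in a heavily simplified form: source class means stand in for the source data, a fixed distance threshold decides which target samples remain "unknown", and the mapping is refit by least squares. The function name, the threshold, the least-squares fit, and the assumption that both domains share the same feature dimensionality are illustrative simplifications, not the paper's exact binary assignment formulation.

```python
import numpy as np

def open_set_adapt(Xs, ys, Xt, num_iters=10, unknown_threshold=1.0):
    """Toy alternation between (1) assigning target samples to source
    classes or 'unknown' and (2) fitting a linear source-to-target map.
    Simplified illustration; assumes Xs and Xt have the same dimension."""
    classes = np.unique(ys)
    # class means of the source data act as class representatives
    centers = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    W = np.eye(Xs.shape[1])                      # start with the identity mapping
    labels = np.full(len(Xt), -1)                # -1 marks 'unknown'
    for _ in range(num_iters):
        mapped = centers @ W.T                   # source centers mapped into target space
        dist = np.linalg.norm(Xt[:, None, :] - mapped[None, :, :], axis=2)
        nearest = dist.argmin(axis=1)
        labels = np.where(dist.min(axis=1) < unknown_threshold,
                          classes[nearest], -1)
        known = labels != -1
        if known.sum() == 0:
            break
        # refit W by least squares on the currently assigned (center, target) pairs
        S = centers[np.searchsorted(classes, labels[known])]
        T = Xt[known]
        W_T, *_ = np.linalg.lstsq(S, T, rcond=None)
        W = W_T.T
    return labels, W
```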


Citations
Journal ArticleDOI

Unsupervised Domain Adaptation by Multi-Loss Gap Minimization Learning for Person Re-Identification

TL;DR: A multi-loss gap minimization learning (MGML) approach for UDA person ReID that introduces a part model to learn discriminative patch features and designs a Patch-based Part Ignoring (PPI) loss to select reliable instances for efficient learning of the part model.
Posted Content

Exploiting Images for Video Recognition with Hierarchical Generative Adversarial Networks

TL;DR: In this paper, a Hierarchical Generative Adversarial Network (HiGAN) is proposed to enhance recognition in videos by transferring knowledge from images (i.e., the source domain).
Journal ArticleDOI

Self-Labeling Framework for Novel Category Discovery over Domains

TL;DR: The authors propose a self-labeling framework that clusters all target samples, including those in the "unknown" categories, and trains the network to learn representations of the target samples via self-supervised learning.
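As a hedged sketch of the self-labeling idea, target features can be clustered so that samples from "unknown" categories also receive labels, with the cluster assignments then serving as pseudo-labels for further training; the feature source and number of clusters below are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster target-domain features so that every sample, including those
# from categories unseen in the source, gets a pseudo-label.
target_features = np.random.randn(500, 128)      # stand-in for network embeddings
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(target_features)
# pseudo_labels can now supervise a classification head on target data,
# while a self-supervised objective refines the representation.
```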
Journal ArticleDOI

Relation Matters: Foreground-Aware Graph-Based Relational Reasoning for Domain Adaptive Object Detection

TL;DR: A new and general framework for DAOD is proposed, named Foreground-aware Graph-based Relational Reasoning (FGRR), which incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations on both pixel and semantic spaces, thereby endowing the DAOD model with the capability of relational reasoning beyond the popular alignment-based paradigm.
Journal ArticleDOI

Universal Multi-Source Domain Adaptation for Image Classification

TL;DR: The authors propose a universal multi-source adaptation network (UMAN) that solves the domain adaptation problem in various UMDA settings without increasing the complexity of the model.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: This paper introduces a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
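As a rough illustration of the layout the summary describes (five convolutional layers, max-pooling after some of them, and three fully-connected layers ending in a 1000-way classifier), here is a minimal PyTorch sketch. Channel and kernel sizes follow the widely used single-GPU variant of the network; the paper's two-GPU split, dropout, and local response normalization are omitted for brevity.

```python
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # pool after conv1
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # pool after conv2
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # pool after conv5
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),    # expects 224x224 inputs
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),                                  # 1000-way logits (softmax at loss time)
)
```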
Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
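Since scikit-learn's SVC is built on LIBSVM, a short sketch can illustrate the features the summary lists, namely multiclass classification, probability estimates, and parameter selection; the dataset and grid values below are arbitrary examples.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)                # 3-class toy problem
grid = GridSearchCV(
    SVC(kernel="rbf", probability=True),         # Platt-scaled probability estimates
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
    cv=5,                                        # cross-validated parameter selection
)
grid.fit(X, y)
print(grid.best_params_)
print(grid.predict_proba(X[:2]))                 # per-class probabilities
```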
Proceedings Article

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

TL;DR: DeCAF is an open-source implementation of deep convolutional activation features, released along with all associated network parameters to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
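The idea can be illustrated with a modern analogue rather than DeCAF's original Caffe-era code: take a network pretrained on ImageNet, drop its final classification layer, and use the remaining activations as generic features for a new visual task. The calls below assume a recent torchvision release and are not DeCAF's own API.

```python
import torch
from torchvision import models

# Pretrained AlexNet with the final 1000-way layer removed, so the forward
# pass returns 4096-d penultimate activations ("DeCAF-style" features).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier = model.classifier[:-1]
model.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)        # stand-in for a preprocessed batch
    features = model(images)
print(features.shape)                           # torch.Size([4, 4096])
```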
Posted Content

Learning Transferable Features with Deep Adaptation Networks

TL;DR: A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural networks to the domain adaptation scenario, learns transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding.
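A minimal sketch of the statistical machinery behind this line of work is the maximum mean discrepancy (MMD) between source and target feature batches; the single Gaussian kernel and biased batch estimate below are simplifications of the multi-kernel, multi-layer variant the paper uses.

```python
import torch

def gaussian_mmd(source, target, sigma=1.0):
    """Squared MMD between two feature batches with one Gaussian kernel
    (biased batch estimate); a simplification for illustration only."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())

# usage: add lambda * gaussian_mmd(f_src, f_tgt) to the classification loss
f_src, f_tgt = torch.randn(32, 256), torch.randn(32, 256)
print(gaussian_mmd(f_src, f_tgt))
```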