Open Access · Proceedings ArticleDOI

Domain Consensus Clustering for Universal Domain Adaptation

TLDR
Wang et al. propose Domain Consensus Clustering (DCC), which exploits domain consensus knowledge to discover discriminative clusters among both common samples and private ones.
Abstract
In this paper, we investigate the Universal Domain Adaptation (UniDA) problem, which aims to transfer knowledge from a source domain to a target domain whose label space is not aligned with the source's. The main challenge of UniDA lies in separating common classes (i.e., classes shared across domains) from private classes (i.e., classes that exist in only one domain). Previous works treat the private samples in the target as one generic class but ignore their intrinsic structure. Consequently, the resulting representations are not compact enough in the latent space and can easily be confused with common samples. To better exploit the intrinsic structure of the target domain, we propose Domain Consensus Clustering (DCC), which exploits domain consensus knowledge to discover discriminative clusters among both common samples and private ones. Specifically, we draw domain consensus knowledge from two aspects to facilitate clustering and private-class discovery: (i) semantic-level consensus, which identifies cycle-consistent clusters as the common classes, and (ii) sample-level consensus, which uses cross-domain classification agreement to determine the number of clusters and discover the private classes. Based on DCC, we are able to separate the private classes from the common ones, and to differentiate the private classes from each other. Finally, we apply a class-aware alignment technique to the identified common samples to minimize the distribution shift, and a prototypical regularizer to encourage discriminative target clusters. Experiments on four benchmarks demonstrate that DCC significantly outperforms previous state-of-the-art methods.
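The semantic-level consensus described above can be sketched with a simple mutual-nearest-neighbor test between source and target cluster centers. This is an illustrative toy, not the authors' implementation; the function name and the use of plain Euclidean distance on raw centroids are assumptions for the sketch.

```python
import numpy as np

def cycle_consistent_pairs(src_centers, tgt_centers):
    """Sketch of semantic-level consensus: a source cluster i and a
    target cluster j are cycle-consistent when j is the nearest target
    cluster to i AND i is the nearest source cluster to j. Such pairs
    are treated as common classes; unmatched target clusters are
    candidate private classes."""
    # Pairwise Euclidean distances between cluster centers.
    d = np.linalg.norm(src_centers[:, None, :] - tgt_centers[None, :, :], axis=2)
    s2t = d.argmin(axis=1)  # nearest target cluster for each source cluster
    t2s = d.argmin(axis=0)  # nearest source cluster for each target cluster
    # Keep only mutually nearest (cycle-consistent) pairs.
    return [(i, j) for i, j in enumerate(s2t) if t2s[j] == i]

# Toy example: two shared classes plus one target-private cluster.
src = np.array([[0.0, 0.0], [5.0, 5.0]])
tgt = np.array([[0.2, 0.1], [5.1, 4.9], [20.0, 20.0]])
print(cycle_consistent_pairs(src, tgt))  # -> [(0, 0), (1, 1)]
```

The third target cluster is matched by no source cluster, so it would be flagged as private, which is the separation the abstract describes.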


Citations
Proceedings ArticleDOI

Exploring Domain-Invariant Parameters for Source Free Domain Adaptation

TL;DR: The Domain-Invariant Parameter Exploring (DIPE) approach is devised to capture domain-invariant parameters in the source model and thereby generate domain-invariant representations; it exceeds current state-of-the-art models on many domain adaptation datasets.
Journal ArticleDOI

Controlled Generation of Unseen Faults for Partial and Open-Partial Domain Adaptation

TL;DR: A new framework for Partial and Open-Partial domain adaptation based on generating distinct fault signatures with a Wasserstein GAN is proposed; it is especially suited to extreme domain adaptation settings, which are particularly relevant for complex and safety-critical systems.
Journal ArticleDOI

Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation

TL;DR: TNT, a novel evidential neighborhood contrastive learning framework, introduces a new domain alignment principle (semantically consistent samples should be geometrically adjacent to each other, whether within or across domains) and significantly outperforms previous state-of-the-art UniDA methods.
Proceedings ArticleDOI

Geometric Anchor Correspondence Mining with Uncertainty Modeling for Universal Domain Adaptation

TL;DR: A Geometric anchor-guided Adversarial and conTrastive learning framework with uncErtainty modeling, called GATE, is proposed, which significantly outperforms previous state-of-the-art UniDA methods.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
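The residual learning idea summarized above can be sketched in a few lines: each block learns a residual function whose output is added back to the block's input. This is a schematic toy (names and the identity shortcut are illustrative), not the paper's exact conv-BN-ReLU unit.

```python
import numpy as np

def residual_block(x, f):
    """Identity-shortcut residual unit: the block learns a residual
    F(x) and outputs x + F(x), so a near-zero F reduces the block to
    the identity mapping."""
    return x + f(x)

# With the residual branch near zero, the block passes its input
# through unchanged, which is what makes very deep stacks easier
# to optimize.
x = np.ones(3)
zero_f = lambda v: np.zeros_like(v)  # residual branch outputs zero
print(residual_block(x, zero_f))     # -> [1. 1. 1.]
```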
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called "ImageNet" is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than current image datasets.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
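The dense connectivity pattern summarized above can be sketched as follows: each layer receives the concatenation of all preceding feature maps and appends its own output to that running concatenation. This is a schematic toy (real DenseNet layers are BN-ReLU-conv units with a fixed growth rate), with illustrative names.

```python
import numpy as np

def dense_block(x, layers):
    """Dense connectivity sketch: layer k consumes the concatenation
    of the input and the outputs of layers 1..k-1, and its output is
    appended to that concatenation for the layers after it."""
    features = [x]
    for layer in layers:
        features.append(layer(np.concatenate(features)))
    return np.concatenate(features)

# Two toy "layers" that each emit one new feature from all prior ones.
layers = [lambda f: np.array([f.sum()]),
          lambda f: np.array([f.mean()])]
out = dense_block(np.array([1.0, 2.0]), layers)
print(out)  # -> [1. 2. 3. 2.]  (2 input features + 1 per layer)
```

Because every feature is reused by all later layers, gradients reach early layers directly, which is the vanishing-gradient relief the summary mentions.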