Open Access Proceedings Article

Domain Generalization via Invariant Feature Representation

TLDR
In this paper, a kernel-based optimization algorithm is proposed to learn an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables.
Abstract
This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
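The core computation is compact enough to sketch. Below is a minimal, illustrative implementation of an unsupervised DICA-style projection, not the authors' released code: the RBF kernel choice, the function names, and the regularizer eps are assumptions. The distributional variance of the per-domain kernel mean embeddings is written as tr(KQ) for a fixed coefficient matrix Q, and the transformation comes from a single generalized eigenvalue problem that trades that variance off against preserving the variance of the pooled data.

```python
# Simplified unsupervised DICA-style projection (illustrative sketch only).
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def udica_like(domains, n_components=2, gamma=1.0, eps=1e-4):
    """domains: list of (n_i, d) arrays, one per source domain."""
    X = np.vstack(domains)
    n, N = X.shape[0], len(domains)
    K = rbf_kernel(X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ K @ H                      # center in feature space

    # Q encodes the distributional variance: tr(KQ) equals the spread of
    # the per-domain kernel mean embeddings around the pooled mean embedding.
    Q = np.zeros((n, n))
    sizes = [d.shape[0] for d in domains]
    starts = np.cumsum([0] + sizes)
    for i, ni in enumerate(sizes):
        for j, nj in enumerate(sizes):
            bi = slice(starts[i], starts[i + 1])
            bj = slice(starts[j], starts[j + 1])
            Q[bi, bj] = ((i == j) / (N * ni * ni)) - 1.0 / (N * N * ni * nj)

    # Generalized eigenproblem: maximize the variance of the projected
    # features while penalizing the dissimilarity across domains.
    A = K @ K / n
    Bmat = K @ Q @ K + K + eps * np.eye(n)
    w, V = eigh(A, Bmat)               # ascending eigenvalues
    B = V[:, np.argsort(w)[::-1][:n_components]]
    return K @ B                       # invariant features for pooled sample
```

Solving one n-by-n eigenproblem keeps the method closed-form, which is part of why a kernel formulation is attractive here.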



Citations
Posted Content

Deep Sets

TL;DR: The main theorem characterizes the permutation-invariant objective functions and provides a family of functions to which any permutation-invariant objective function must belong; this enables the design of a deep network architecture that can operate on sets and can be deployed in a variety of scenarios, including both unsupervised and supervised learning tasks.
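The characterization can be stated concretely: a suitable permutation-invariant function on a set decomposes as f(X) = rho(sum over x of phi(x)). The tiny sketch below (random weights, purely illustrative) shows the resulting architecture and checks its invariance:

```python
# Sum-decomposition behind Deep Sets: f(X) = rho(sum_x phi(x)).
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))   # per-element encoder phi
W_rho = rng.normal(size=(8, 1))   # set-level readout rho

def f(X):                          # X: (set_size, 3) array
    pooled = np.tanh(X @ W_phi).sum(axis=0)   # order-independent pooling
    return float(np.tanh(pooled) @ W_rho)

X = rng.normal(size=(5, 3))
assert np.isclose(f(X), f(X[::-1]))           # invariant to permutation
```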
Proceedings Article

Domain Generalization with Adversarial Feature Learning

TL;DR: This paper presents a novel framework based on adversarial autoencoders that learns a generalized latent feature representation across domains for domain generalization, together with an algorithm that jointly trains the different components of the framework.
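In rough terms (the loss weights and the exact decomposition below are an illustrative paraphrase, not the paper's verbatim objective), the framework trains an encoder Q, decoder P, and task classifier C against a discriminator D:

```latex
\min_{Q,\,P,\,C}\;\max_{D}\;
\mathcal{L}_{\mathrm{rec}}(Q,P)
+\lambda_1 \sum_{i<j}\mathrm{MMD}^2\!\big(Q(X_i),\,Q(X_j)\big)
+\lambda_2\,\mathcal{L}_{\mathrm{adv}}(Q,D)
+\lambda_3\,\mathcal{L}_{\mathrm{cls}}(C\circ Q)
```

The MMD term aligns the latent distributions of the source domains with one another, while the adversarial term matches them to a fixed prior, which is what encourages the representation to remain useful on unseen domains.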
Posted Content

Meta-Learning in Neural Networks: A Survey

TL;DR: A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, spanning few-shot learning, reinforcement learning, and architecture search, and promising applications and successes are surveyed.
Proceedings Article

Unified Deep Supervised Domain Adaptation and Generalization

TL;DR: This work provides a unified framework for addressing the problems of visual supervised domain adaptation and generalization with deep models, exploiting a Siamese architecture to replace distribution distances and similarities with point-wise surrogates.
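A point-wise surrogate of a distribution distance can be as simple as a contrastive loss over cross-domain pairs of embeddings. The sketch below (function name and margin value are assumptions, not the authors' exact code) pulls same-class pairs from the two domains together and pushes different-class pairs apart, which is the flavor of loss the Siamese setup makes possible:

```python
# Point-wise surrogate for distribution alignment over cross-domain pairs.
import numpy as np

def semantic_alignment_loss(f_src, y_src, f_tgt, y_tgt, m=1.0):
    """f_*: (n, d) embeddings from a shared encoder; y_*: class labels."""
    loss, count = 0.0, 0
    for a, ya in zip(f_src, y_src):
        for b, yb in zip(f_tgt, y_tgt):
            d = np.linalg.norm(a - b)
            # Same class: pull together; different class: push past margin m.
            loss += d ** 2 if ya == yb else max(0.0, m - d) ** 2
            count += 1
    return loss / count
```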
Proceedings Article

Deeper, Broader and Artier Domain Generalization

TL;DR: In this article, a low-rank parameterized CNN model is proposed for domain generalization, which can learn from multiple training domains and extract a domain-agnostic model that can then be applied to an unseen domain.
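One way to read "low-rank parameterized" concretely: each domain's weights share a common component plus a cheap, low-rank, domain-specific correction. The sketch below is a deliberate simplification of the paper's tensor factorization; all names, shapes, and the diagonal-code form are illustrative assumptions.

```python
# Low-rank domain parameterization: shared weights + rank-r per-domain term.
import numpy as np

d_in, d_out, r, n_domains = 16, 8, 2, 4
rng = np.random.default_rng(0)
W_shared = rng.normal(size=(d_in, d_out))
U = rng.normal(size=(d_in, r))                 # shared low-rank basis
V = rng.normal(size=(r, d_out))
z = rng.normal(size=(n_domains, r))            # per-domain codes

def weight_for_domain(d):
    # Rank-r correction keeps per-domain parameters cheap; dropping it
    # (z = 0) recovers a single domain-agnostic model for unseen domains.
    return W_shared + U @ np.diag(z[d]) @ V
```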
References
Journal Article

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Journal Article

Nonlinear component analysis as a kernel eigenvalue problem

TL;DR: A new method for performing a nonlinear form of principal component analysis by the use of integral operator kernel functions is proposed and experimental results on polynomial feature extraction for pattern recognition are presented.
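The method reduces to an eigendecomposition of the centered Gram matrix. A minimal sketch follows, with an RBF kernel assumed for concreteness:

```python
# Minimal kernel PCA: eigendecompose the centered Gram matrix and project
# onto the leading directions in feature space.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                    # RBF Gram matrix
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                             # double centering
    w, V = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))  # normalize
    return Kc @ alphas                         # nonlinear principal components
```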
Journal Article

Domain Adaptation via Transfer Component Analysis

TL;DR: This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semisupervised feature extraction approaches that can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
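The construction admits a short closed-form sketch. In the illustrative code below (variable names and the regularizer mu are assumptions), the MMD between source and target reduces to tr(KLK') for a fixed coefficient matrix L, and the transfer components come from an eigenproblem:

```python
# Minimal Transfer Component Analysis sketch: components that minimize the
# MMD between source and target in a shared latent space.
import numpy as np

def tca(K, ns, nt, n_components=2, mu=1.0):
    """K: (ns+nt, ns+nt) kernel matrix over pooled source+target samples."""
    n = ns + nt
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    L = np.outer(e, e)                         # MMD coefficient matrix
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    # Trade off MMD minimization (K L K) against variance preservation (K H K).
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    w, V = np.linalg.eig(M)
    idx = np.argsort(w.real)[::-1][:n_components]
    W = V[:, idx].real                         # leading transfer components
    return K @ W                               # new features for all samples
```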
Journal Article

A theory of learning from different domains

TL;DR: A classifier-induced divergence measure is introduced that can be estimated from finite, unlabeled samples from the domains, and it is shown how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
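The divergence is estimated from how well a classifier separates the two domains: if no classifier can tell them apart, they are close. A minimal sketch using scikit-learn (the 50/50 split and the logistic model are arbitrary illustrative choices):

```python
# Classifier-induced divergence ("proxy A-distance") between two samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(X_src, X_tgt):
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    # Held-out error of a domain classifier; low error means high divergence.
    err = 1.0 - LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    return 2.0 * (1.0 - 2.0 * err)             # higher = less similar domains
```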
Journal Article

Sliced Inverse Regression for Dimension Reduction

TL;DR: In this article, sliced inverse regression (SIR) is proposed to reduce the dimension of the input variable without going through any parametric or nonparametric model-fitting process.
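The algorithm itself is a few lines: standardize the inputs, average them within slices of the sorted response, and eigendecompose the covariance of those slice means. A minimal sketch (the slice count is an arbitrary choice):

```python
# Minimal sliced inverse regression (SIR) for dimension reduction.
import numpy as np

def sir(X, y, n_slices=10, n_components=2):
    n, p = X.shape
    mean, cov = X.mean(0), np.cov(X, rowvar=False)
    # Whiten so that the inverse regression curve is easy to read off.
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mean) @ cov_inv_sqrt
    slices = np.array_split(np.argsort(y), n_slices)
    V = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(0)
        V += (len(idx) / n) * np.outer(m, m)   # weighted slice-mean covariance
    w, U = np.linalg.eigh(V)
    top = U[:, np.argsort(w)[::-1][:n_components]]
    return cov_inv_sqrt @ top                  # e.d.r. directions, original scale
```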