Open Access Proceedings Article

Deeper, Broader and Artier Domain Generalization

TL;DR
In this article, a low-rank parameterized CNN model is proposed for domain generalization, which can learn from multiple training domains and extract a domain-agnostic model that can then be applied to an unseen domain.
Abstract
The problem of domain generalization is to learn from multiple training domains and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics yet sparse training data, for example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focused on alleviating dataset bias, where both domain distinctiveness and data sparsity are minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions. Firstly, we build upon the favorable domain-shift-robust properties of deep learning methods and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and that our dataset provides a more significant DG challenge to drive future research.
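To make the low-rank parameterization concrete, the sketch below shows one way such a layer can be set up in PyTorch: each domain's convolution kernel is generated from factors shared across all domains plus a small per-domain vector, so most parameters are domain-agnostic. This is a minimal illustration under assumed shapes and an assumed Tucker-style factorization, not the authors' released code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LowRankDomainConv(nn.Module):
        """Convolution whose per-domain kernels are generated from shared
        low-rank factors (illustrative sketch, not the paper's code)."""

        def __init__(self, in_ch, out_ch, k, n_domains, rank):
            super().__init__()
            # Shared core tensor and mode factors, common to all domains.
            self.core = nn.Parameter(0.01 * torch.randn(rank, rank, k, k))
            self.U_out = nn.Parameter(0.01 * torch.randn(out_ch, rank))
            self.U_in = nn.Parameter(0.01 * torch.randn(in_ch, rank))
            # One small vector per domain modulating the shared core.
            self.domain = nn.Parameter(0.01 * torch.randn(n_domains, rank))
            self.k = k

        def weight_for(self, d):
            # Scale the core along one mode with domain d's vector, then
            # expand with the shared factors into a full conv kernel.
            core_d = self.core * self.domain[d].view(-1, 1, 1, 1)
            return torch.einsum('or,is,rskl->oikl', self.U_out, self.U_in, core_d)

        def forward(self, x, d):
            return F.conv2d(x, self.weight_for(d), padding=self.k // 2)

    layer = LowRankDomainConv(in_ch=3, out_ch=16, k=3, n_domains=3, rank=4)
    y = layer(torch.randn(8, 3, 32, 32), d=1)  # forward pass as domain 1

A domain-agnostic kernel for an unseen domain could then be obtained, for example, by averaging the learned domain vectors; sharing most parameters through the low-rank structure is what makes that kind of transfer plausible.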


Citations
Journal Article

Deep visual domain adaptation: A survey

TL;DR: Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data; it leverages deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning.
Journal Article

Shortcut learning in deep neural networks

TL;DR: A set of recommendations for model interpretation and benchmarking is developed, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
Proceedings Article

Domain Generalization with Adversarial Feature Learning

TL;DR: This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.
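As a rough illustration of that idea, the fragment below pairs an autoencoder with a discriminator that pushes latent codes toward a shared prior, so codes from different domains become indistinguishable. Sizes, losses and training details are assumptions for the sketch, not the paper's architecture.

    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))
    dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256))
    disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

    def step_losses(x):
        z = enc(x)
        rec = mse(dec(z), x)              # reconstruction term
        prior = torch.randn_like(z)       # samples from the target prior
        ones, zeros = torch.ones(len(x), 1), torch.zeros(len(x), 1)
        d_loss = bce(disc(prior), ones) + bce(disc(z.detach()), zeros)
        g_loss = bce(disc(z), ones)       # encoder tries to fool the critic
        return rec, d_loss, g_loss

In practice these terms would be combined with a supervised classification loss on the latent code and optimized with alternating discriminator/encoder updates.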
Posted Content

Meta-Learning in Neural Networks: A Survey

TL;DR: A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes such as few-shot learning, reinforcement learning and architecture search are surveyed.
Proceedings Article

Domain Generalization by Solving Jigsaw Puzzles

TL;DR: This model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals how to solve a jigsaw puzzle on the same images, which helps the network to learn the concepts of spatial correlation while acting as a regularizer for the classification task.
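The self-supervised signal amounts to shuffling image tiles and asking the network to recover which permutation was applied. A hedged sketch of that data preparation step (the 3x3 grid and the permutation set are assumptions for illustration):

    import torch

    def jigsaw_batch(x, perms):
        """Shuffle each image's 3x3 tiles with a random permutation from
        `perms`; the permutation index is the self-supervised label."""
        B, C, H, W = x.shape
        th, tw = H // 3, W // 3
        tiles = x.unfold(2, th, th).unfold(3, tw, tw).reshape(B, C, 9, th, tw)
        idx = torch.randint(len(perms), (B,))
        shuffled = torch.stack([tiles[b][:, perms[i]] for b, i in enumerate(idx)])
        out = shuffled.reshape(B, C, 3, 3, th, tw).permute(0, 1, 2, 4, 3, 5)
        return out.reshape(B, C, H, W), idx

    perms = torch.stack([torch.randperm(9) for _ in range(30)])
    xs, labels = jigsaw_batch(torch.randn(4, 3, 96, 96), perms)

The network is then trained with the usual cross-entropy on class labels plus a weighted cross-entropy on `labels`, the jigsaw term acting as the regularizer described above.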
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art ImageNet classification performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
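This architecture (AlexNet) is readily available; the snippet below loads torchvision's close variant of it and prints the layer structure just described. Purely illustrative, not the paper's original implementation.

    from torchvision import models

    net = models.alexnet()  # torchvision's variant of the architecture
    print(net)              # lists the conv, max-pool and FC layers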
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
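For reference, running t-SNE takes a couple of lines with scikit-learn; the features below are random stand-ins for, say, CNN activations.

    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.RandomState(0).randn(500, 64)   # stand-in high-dim features
    emb = TSNE(n_components=2, perplexity=30, init='pca',
               random_state=0).fit_transform(X)
    print(emb.shape)  # (500, 2): a 2-D map location per datapoint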
Proceedings Article

How transferable are features in deep neural networks

TL;DR: In this paper, the authors quantify the transferability of features from the first layer to the last layer of a deep neural network and show that transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task and (2) optimization difficulties related to splitting networks between co-adapted neurons.
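A common practical corollary of this line of work: copy and freeze the more general early layers of a pretrained network, and fine-tune the later, more task-specific ones. A minimal sketch with torchvision, where the layer split and class count are arbitrary choices for illustration:

    import torch.nn as nn
    from torchvision import models

    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in net.features[:6].parameters():   # freeze earlier conv layers
        p.requires_grad = False
    net.classifier[6] = nn.Linear(4096, 10)   # new head for a 10-class task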
Journal Article

A Multilinear Singular Value Decomposition

TL;DR: There is a strong analogy between several properties of the matrix SVD and the proposed higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
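The decomposition itself is short to compute: unfold the tensor along each mode, take the left singular vectors, and contract them back onto the tensor to obtain the core. A small NumPy sketch for a 3-way tensor:

    import numpy as np

    def hosvd(T):
        """Higher-order SVD: T = S x1 U1 x2 U2 ... (illustrative sketch)."""
        Us = []
        for mode in range(T.ndim):
            unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfold, full_matrices=False)
            Us.append(U)
        S = T
        for mode, U in enumerate(Us):  # core: contract U_n^T along each mode
            S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, mode, 0), axes=1),
                            0, mode)
        return S, Us

    T = np.random.rand(4, 5, 6)
    S, Us = hosvd(T)
    R = S
    for mode, U in enumerate(Us):      # reconstruct to verify
        R = np.moveaxis(np.tensordot(U, np.moveaxis(R, mode, 0), axes=1),
                        0, mode)
    print(np.allclose(R, T))  # True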
Journal Article

Some mathematical notes on three-mode factor analysis

TL;DR: The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables.
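For readers unfamiliar with the Kronecker product mentioned above, NumPy's kron shows the operation directly:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    print(np.kron(A, np.eye(2)))  # each a_ij scales a copy of the 2x2 identity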