Posted Content

Domain Generalization: A Survey

01 Jan 2021
TL;DR: Domain generalization (DG), as discussed by the authors, aims to achieve out-of-distribution (OOD) generalization using only source data for model learning; this is challenging because most statistical learning algorithms rely strongly on the i.i.d. assumption on source/target data, whereas in practice domain shift between source and target is common.
Abstract: Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most statistical learning algorithms strongly rely on the i.i.d. assumption on source/target data, while in practice domain shift between source and target is common. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Since first introduced in 2011, research in DG has made great progress. In particular, intensive research on this topic has led to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few, and has covered various applications such as object recognition, segmentation, action recognition, and person re-identification. In this paper, for the first time, a comprehensive literature review is provided to summarize the developments in DG over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other research fields like domain adaptation and transfer learning. Second, we conduct a thorough review of existing methods and present a categorization based on their methodologies and motivations. Finally, we conclude this survey with insights and discussions on future research directions.
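For readers skimming this entry, the setting the survey formalizes can be stated compactly. The sketch below uses generic notation (K source domains, predictor f, loss \ell) chosen for illustration, not quoted from the paper:

```latex
% Generic DG formulation (illustrative notation, not verbatim from the survey):
% K source domains, each with a joint distribution P^{(k)} over inputs and labels;
% the learner sees only source samples, yet is evaluated on an unseen target P^{T}.
\text{Given } \mathcal{S} = \bigcup_{k=1}^{K} \{(x_i^{(k)}, y_i^{(k)})\}_{i=1}^{n_k},
\quad (x_i^{(k)}, y_i^{(k)}) \sim P^{(k)},
\qquad
\min_{f} \; \mathbb{E}_{(x,y) \sim P^{T}} \big[\ell(f(x), y)\big],
\quad P^{T} \neq P^{(k)} \;\; \forall k .
```

The point of the formulation is that no samples from P^{T} are available during training, which is what separates DG from domain adaptation.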
Citations
Posted Content
TL;DR: In this article, a data generator is employed to synthesize data from pseudo-novel domains to augment the source domains, which explicitly increases the diversity of available training domains and leads to a more generalizable model.
Abstract: This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between source and synthesized pseudo-novel domains using optimal transport, and maximize the divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
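As an illustration of the training signal this abstract describes, here is a minimal PyTorch-style sketch. The Sinkhorn routine, module names (G, C, feat), and loss weights are assumptions for exposition, not the authors' implementation:

```python
# Illustrative sketch of the L2A-OT generator objective (assumed names/weights):
# a generator G maps source images to a pseudo-novel domain and is trained to
# MAXIMIZE an optimal-transport divergence from the source distribution, while
# classification and cycle-consistency losses keep semantics intact.
import torch
import torch.nn.functional as F

def sinkhorn_divergence(x, y, eps=0.1, iters=50):
    """Entropy-regularized OT cost between two feature batches of shape (n, d)."""
    cost = torch.cdist(x, y, p=2) ** 2                     # pairwise squared distances
    kernel = torch.exp(-cost / eps)                        # Gibbs kernel
    a = torch.full((x.size(0),), 1.0 / x.size(0), device=x.device)
    b = torch.full((y.size(0),), 1.0 / y.size(0), device=y.device)
    u = torch.ones_like(a)
    for _ in range(iters):                                 # Sinkhorn fixed-point updates
        v = b / (kernel.t() @ u + 1e-8)
        u = a / (kernel @ v + 1e-8)
    plan = u.unsqueeze(1) * kernel * v.unsqueeze(0)        # approximate transport plan
    return (plan * cost).sum()

def generator_loss(G, C, feat, x_src, y_src, w_div=1.0, w_cls=1.0, w_cyc=1.0):
    """One generator step. G: generator, C: classifier, feat: feature extractor."""
    x_novel = G(x_src)                                     # synthesize pseudo-novel domain
    div = sinkhorn_divergence(feat(x_src), feat(x_novel))  # source vs. novel divergence
    cls = F.cross_entropy(C(x_novel), y_src)               # labels must survive translation
    cyc = F.l1_loss(G(x_novel), x_src)                     # cycle-consistency (simplified)
    return -w_div * div + w_cls * cls + w_cyc * cyc        # minus sign: maximize divergence
```

The minus sign on the divergence term is the essential design choice: the generator is pushed away from the source distribution, while the other two terms stop it from discarding label-relevant content.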

150 citations

Journal Article
TL;DR: Extensive experiments show that DAEL improves the state-of-the-art on both problems, often by significant margins.
Abstract: The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem, otherwise a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain and a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. For unlabeled target data under the UDA setting, where a real expert does not exist, DAEL uses pseudo-labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins. The code is released at this https URL.
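A minimal sketch of the collaborative-ensemble idea described above, assuming a shared feature extractor and one classifier head per source domain; the consistency term and its weighting are simplified stand-ins for DAEL's actual losses (which also involve data augmentation):

```python
# Illustrative sketch of DAEL's collaborative ensemble (assumed names/losses):
# a shared feature extractor with one classifier head ("expert") per source
# domain; each domain serves in turn as pseudo-target, its expert's soft
# prediction supervising the averaged ensemble of the remaining non-experts.
import torch
import torch.nn.functional as F

def dael_step(feat, heads, batches):
    """feat: shared CNN backbone; heads: list of per-domain classifier heads;
    batches: one labeled (x, y) batch per source domain, in matching order."""
    loss = 0.0
    for k, (x, y) in enumerate(batches):
        z = feat(x)
        # 1) Train domain k's expert on its own labeled data.
        loss = loss + F.cross_entropy(heads[k](z), y)
        # 2) Treat domain k as pseudo-target: its expert's soft prediction
        #    supervises the ensemble of the non-expert heads.
        with torch.no_grad():
            target = F.softmax(heads[k](z), dim=1)
        ensemble = torch.stack(
            [F.softmax(heads[j](z), dim=1) for j in range(len(heads)) if j != k]
        ).mean(dim=0)
        loss = loss + F.mse_loss(ensemble, target)   # simplified consistency term
    return loss / len(batches)
```

At test time the target domain has no expert, so the prediction is simply the average over all heads, which is exactly the quantity the consistency term trains to be reliable.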

89 citations

Posted Content
TL;DR: A comprehensive survey of the emerging area of multimodal co-learning is provided in this article, along with important ideas and directions for future work that should benefit the research community focusing on this domain.
Abstract: Multimodal deep learning systems, which employ multiple modalities like text, image, audio, video, etc., show better performance than individual-modality (i.e., unimodal) systems. Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning. In the current state of multimodal machine learning, the assumption is that all modalities are present, aligned, and noiseless during training and testing. However, in real-world tasks it is typically observed that one or more modalities are missing or noisy, lack annotated data, have unreliable labels, or are scarce in training, testing, or both. This challenge is addressed by a learning paradigm called multimodal co-learning: the modeling of a (resource-poor) modality is aided by exploiting knowledge from another (resource-rich) modality through transfer of knowledge between modalities, including their representations and predictive models. Co-learning being an emerging area, there are no dedicated reviews explicitly focusing on all the challenges it addresses. To that end, in this work we provide a comprehensive survey of the emerging area of multimodal co-learning, which has not yet been explored in its entirety. We review implementations that overcome one or more co-learning challenges without explicitly considering them as co-learning challenges. We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed by co-learning and the associated implementations. The various techniques employed, including the latest ones, are reviewed along with some applications and datasets. Our final goal is to discuss challenges and perspectives, along with important ideas and directions for future work that we hope will be beneficial to the entire research community focusing on this exciting domain.

32 citations

Journal Article
24 Apr 2022 - Water
TL;DR: This review offers a cross-section of peer-reviewed, critical water-based applications that have been coupled with AI or ML, including chlorination, adsorption, membrane filtration, water-quality-index monitoring, water-quality-parameter modeling, river-level monitoring, and aquaponics/hydroponics automation/monitoring.
Abstract: Artificial-intelligence methods and machine-learning models have demonstrated their ability to optimize, model, and automate critical water- and wastewater-treatment applications, natural-systems monitoring and management, and water-based agriculture such as hydroponics and aquaponics. In addition to providing computer-assisted aid to complex issues surrounding water chemistry and physical/biological processes, artificial intelligence and machine-learning (AI/ML) applications are anticipated to further optimize water-based applications and decrease capital expenses. This review offers a cross-section of peer-reviewed, critical water-based applications that have been coupled with AI or ML, including chlorination, adsorption, membrane filtration, water-quality-index monitoring, water-quality-parameter modeling, river-level monitoring, and aquaponics/hydroponics automation/monitoring. Although success in control, optimization, and modeling has been achieved with the AI methods, ML models, and smart technologies (including the Internet of Things (IoT), sensors, and systems based on these technologies) reviewed herein, key challenges and limitations were common and pervasive throughout. Poor data management, low explainability, poor model reproducibility and standardization, and a lack of academic transparency are all important hurdles to overcome in order to successfully implement these intelligent applications. Recommendations to aid explainability, data management, reproducibility, and model causality are offered to overcome these hurdles and continue the successful implementation of these powerful tools.

30 citations

Posted Content
TL;DR: Model-Based Domain Generalization (MBDG), as described in this paper, exploits nonconvex duality theory to develop unconstrained relaxations of the underlying constrained statistical learning problem, with tight bounds on the duality gap.
Abstract: Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data. Toward addressing this challenge, we consider the domain generalization problem, wherein predictors are trained using data drawn from a family of related training domains and then evaluated on a distinct and unseen test domain. We show that under a natural model of data generation and a concomitant invariance condition, the domain generalization problem is equivalent to an infinite-dimensional constrained statistical learning problem; this problem forms the basis of our approach, which we call Model-Based Domain Generalization. Due to the inherent challenges in solving constrained optimization problems in deep learning, we exploit nonconvex duality theory to develop unconstrained relaxations of this statistical problem with tight bounds on the duality gap. Based on this theoretical motivation, we propose a novel domain generalization algorithm with convergence guarantees. In our experiments, we report improvements of up to 30 percentage points over state-of-the-art domain generalization baselines on several benchmarks including ColoredMNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS.
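A hedged sketch of the primal-dual relaxation the abstract outlines: minimize classification risk subject to an invariance constraint under a domain-transformation model, relaxed into a Lagrangian whose multiplier is updated by dual ascent. The transformation model D, the margin gamma, and the step sizes below are illustrative assumptions, not the paper's exact algorithm:

```python
# Illustrative primal-dual sketch (assumed names/step sizes): minimize risk
# subject to an invariance constraint -- predictions should agree on inputs
# before and after a domain-transformation model D -- via a Lagrangian whose
# multiplier is updated by projected dual ascent.
import torch
import torch.nn.functional as F

def primal_dual_step(model, D, opt, dual, x, y, gamma=0.05, dual_lr=0.01):
    """One update. dual: scalar tensor holding the Lagrange multiplier."""
    logits = model(x)
    risk = F.cross_entropy(logits, y)                 # ordinary classification risk
    shifted = model(D(x))                             # predictions on transformed inputs
    constraint = F.kl_div(                            # invariance gap
        F.log_softmax(shifted, dim=1),
        F.softmax(logits, dim=1).detach(),
        reduction="batchmean",
    )
    lagrangian = risk + dual * (constraint - gamma)   # unconstrained relaxation
    opt.zero_grad()
    lagrangian.backward()
    opt.step()                                        # primal descent on model weights
    with torch.no_grad():                             # projected dual ascent
        dual += dual_lr * (constraint - gamma)
        dual.clamp_(min=0.0)                          # multiplier stays nonnegative
    return risk.item(), constraint.item()
```

Here `dual` would be initialized as `torch.tensor(0.0)` and updated in place across steps; the multiplier grows whenever the invariance gap exceeds the margin, so the constraint is enforced adaptively rather than through a hand-tuned fixed penalty.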

8 citations