Open Access · Posted Content

Invariant Risk Minimization

TLDR
This work introduces Invariant Risk Minimization, a learning paradigm to estimate invariant correlations across multiple training distributions and shows how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
Abstract
We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions. To achieve this goal, IRM learns a data representation such that the optimal classifier, on top of that data representation, matches for all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
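
The practical version of this objective, IRMv1, penalizes each training environment's risk by the squared gradient of that risk with respect to a fixed "dummy" classifier scale. Below is a minimal PyTorch sketch of that penalty for a binary classifier producing logits; the model, the environment iterable, and the penalty weight lam are illustrative assumptions, not the paper's exact code.

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, y):
        # IRMv1 penalty: squared norm of the gradient of the risk with
        # respect to a fixed "dummy" classifier scale of 1.0
        scale = torch.ones(1, requires_grad=True, device=logits.device)
        loss = F.binary_cross_entropy_with_logits(logits * scale, y)
        grad, = torch.autograd.grad(loss, [scale], create_graph=True)
        return grad.pow(2).sum()

    def irm_objective(model, envs, lam=1.0):
        # sum over training environments of (risk + lam * invariance penalty)
        total = 0.0
        for x, y in envs:  # each env: (features, float 0/1 labels)
            logits = model(x).squeeze(-1)
            risk = F.binary_cross_entropy_with_logits(logits, y)
            total = total + risk + lam * irm_penalty(logits, y)
        return total

Large values of lam push the representation towards one whose optimal classifier is shared across environments, at the cost of some in-distribution accuracy.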


Citations
Journal Article

SimpleDG: Simple Domain Generalization Baseline without Bells and Whistles

TL;DR: Verifies that ERM is a strong baseline compared to recent state-of-the-art domain generalization methods and proposes SimpleDG, which includes several simple yet effective designs that further boost generalization performance.
Posted Content

Distributionally Robust Recurrent Decoders with Random Network Distillation

TL;DR: Proposes a method based on OOD detection with Random Network Distillation that allows an autoregressive language model to automatically disregard OOD context during inference: the model transitions smoothly towards a less expressive but more robust variant as the data becomes more OOD, while retaining its full context capability in-distribution (a minimal sketch of RND follows this citations list).
Posted Content

Evaluation of Complexity Measures for Deep Learning Generalization in Medical Image Analysis

TL;DR: In this paper, the authors investigated the correlation between 25 complexity measures and the generalization abilities of supervised deep learning classifiers for breast ultrasound images, and found that PAC-Bayes flatness-based and path-norm-based measures produce the most consistent explanations across the combinations of models and data.

Causality-oriented robustness: exploiting general additive interventions

TL;DR: The authors propose Distributional Robustness via Invariant Gradients (DRIG), a method that exploits general additive interventions in the training data to make predictions robust to unseen interventions, and that naturally interpolates between in-distribution prediction and causality.
Journal Article

Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility

TL;DR: In this paper, the authors adopt impartiality as a principle: the design of ML pipelines should not favor one objective over another. They provide accurate Pareto frontiers that show the inherent trade-offs among the objectives.
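
Random Network Distillation (RND), used for OOD detection in the decoder paper above, scores novelty by how poorly a trained predictor matches a frozen, randomly initialized target network. A minimal PyTorch sketch follows; the class name and architecture sizes are illustrative assumptions, not the authors' decoder integration.

    import torch
    import torch.nn as nn

    class RNDScore(nn.Module):
        def __init__(self, dim_in, dim_out=64):
            super().__init__()
            def mlp():
                return nn.Sequential(
                    nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_out))
            self.target = mlp()     # frozen at its random initialization
            self.predictor = mlp()  # trained on in-distribution data only
            for p in self.target.parameters():
                p.requires_grad_(False)

        def forward(self, x):
            # per-example squared prediction error; large values suggest OOD
            return (self.predictor(x) - self.target(x)).pow(2).mean(dim=-1)

Training minimizes this score on in-distribution inputs, so the error stays small where the predictor has seen data and grows on unfamiliar inputs.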
References
Posted Content

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: Introduces BERT, a language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pretrained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks (a minimal fine-tuning sketch follows this reference list).

Statistical learning theory

TL;DR: Presents a method for determining the necessary and sufficient conditions for consistency of the learning process, covering function estimation from small data samples and its application to real-life problems.
Monograph

Causality: models, reasoning, and inference

TL;DR: A foundational treatment of the art and science of cause and effect, covering the theory of inferred causation, causal diagrams, and the identification of causal effects.
Journal Article

Estimating causal effects of treatments in randomized and nonrandomized studies.

TL;DR: Presents a discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation, with the objective of specifying the benefits of randomization in estimating causal effects of treatments.
Book

Introduction to Smooth Manifolds

TL;DR: This book reviews topology, linear algebra, algebraic geometry, and differential equations, and presents an overview of the de Rham theorem and its applications.
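
For the BERT reference above, the "one additional output layer" idea can be made concrete with a minimal fine-tuning sketch; it assumes the Hugging Face transformers library rather than the original release's TensorFlow code.

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    # loads the pretrained bidirectional encoder and adds a randomly
    # initialized classification head (the one additional output layer)
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    inputs = tokenizer("IRM learns invariant predictors.", return_tensors="pt")
    logits = model(**inputs).logits  # fine-tune these against task labels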