Open Access · Posted Content
Invariant Risk Minimization
TL;DR: This work introduces Invariant Risk Minimization (IRM), a learning paradigm that estimates invariant correlations across multiple training distributions, and shows how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
Abstract: We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions. To achieve this goal, IRM learns a data representation such that the optimal classifier, on top of that data representation, matches for all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
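The abstract's core idea — a representation whose optimal classifier is the same in every training distribution — is often trained in practice by penalizing, per environment, the gradient of the risk with respect to a fixed scalar "dummy" classifier. The following is a minimal NumPy sketch of that idea under simplifying assumptions not stated in the abstract (a one-dimensional representation, a squared-error risk, and an analytically computed gradient); the function names are illustrative, not the paper's API.

```python
import numpy as np

def irm_penalty(phi, y):
    """Invariance penalty for one environment (a sketch).

    Places a fixed scalar classifier w = 1.0 on top of the
    representation phi and returns the squared gradient of the
    squared-error risk R(w) = mean((w * phi - y)^2) at w = 1.
    The gradient dR/dw|_{w=1} = mean(2 * (phi - y) * phi) is
    computed analytically instead of via autodiff.
    """
    grad = np.mean(2.0 * (phi - y) * phi)
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum of per-environment risks plus lam times the invariance penalty.

    envs is a list of (phi, y) pairs, one per training environment.
    """
    risk = sum(np.mean((phi - y) ** 2) for phi, y in envs)
    penalty = sum(irm_penalty(phi, y) for phi, y in envs)
    return risk + lam * penalty
```

In a full training loop, phi would be the output of a learned feature extractor and this objective would be minimized over its parameters; the penalty term drives the representation toward one whose optimal classifier matches across environments.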
Citations
Posted Content
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
TL;DR: This tutorial article aims to provide the reader with the conceptual tools needed to get started on research into offline reinforcement learning: algorithms that utilize previously collected data, without additional online data collection.
Journal ArticleDOI
Shortcut learning in deep neural networks
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann, et al.
TL;DR: A set of recommendations for model interpretation and benchmarking is developed, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
Journal ArticleDOI
Toward Causal Representation Learning
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio, et al.
TL;DR: The authors reviewed fundamental concepts of causal inference and related them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research.
Posted Content
WILDS: A Benchmark of in-the-Wild Distribution Shifts
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang, et al.
TL;DR: Presents WILDS, a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications, with the aim of encouraging the development of general-purpose methods that are anchored to real-world distribution shifts and work well across different applications and problem settings.
Posted Content
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization.
TL;DR: The results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization, and introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
References
Posted Content
Understanding deep learning requires rethinking generalization
TL;DR: The authors showed that deep neural networks can fit a random labeling of the training data, and that this phenomenon is qualitatively unaffected by explicit regularization, and occurs even if the true images are replaced by completely unstructured random noise.
Journal ArticleDOI
Correlation and Causation
Victor R. Martuza, David A. Kenny, et al.
TL;DR: Causality is the area of statistics most commonly misused and misinterpreted by nonspecialists, who fail to understand that a correlation in the results is not, by itself, proof of an underlying causal relationship.
Proceedings ArticleDOI
Unbiased look at dataset bias
Antonio Torralba, Alexei A. Efros, et al.
TL;DR: A comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value is presented.
Journal ArticleDOI
Building machines that learn and think like people.
TL;DR: A review of recent progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it.