Open Access · Posted Content

Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization

TLDR
In this article, the authors empirically show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts, and provide a candidate theory based on a Gaussian data model that shows how changes in the data covariance arising from distribution shift can affect the observed correlations.
Abstract
For machine learning systems to be reliable, we must understand their performance in unseen, out-of-distribution environments. In this paper, we empirically show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts. Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet, a synthetic pose estimation task derived from YCB objects, satellite imagery classification in FMoW-WILDS, and wildlife classification in iWildCam-WILDS. The strong correlations hold across model architectures, hyperparameters, training set size, and training duration, and are more precise than what is expected from existing domain adaptation theory. To complete the picture, we also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS. Finally, we provide a candidate theory based on a Gaussian data model that shows how changes in the data covariance arising from distribution shift can affect the observed correlations.
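The candidate theory ties the ID/OOD correlation to covariance changes in a Gaussian data model. As a rough illustration only (this is not the paper's actual construction; the class means, covariance scaling, and classifier family below are my own choices), a toy simulation with linear classifiers on Gaussian class-conditional data exhibits the same qualitative effect: classifiers of varying quality trace out a tightly correlated ID-vs-OOD accuracy curve under a simple covariance shift.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20

# In-distribution: two classes at +/- mu with identity covariance.
# Out-of-distribution: same means, scaled covariance (a simple covariance shift).
mu = np.ones(d) / np.sqrt(d)

def accuracy(w, cov_scale, n=20_000):
    """Accuracy of the linear classifier sign(w . x) on x = y*mu + noise,
    with noise ~ N(0, cov_scale * I)."""
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + np.sqrt(cov_scale) * rng.standard_normal((n, d))
    return np.mean(np.sign(X @ w) == y)

# A family of classifiers of varying quality: the Bayes-optimal direction
# perturbed by increasing amounts of noise.
id_acc, ood_acc = [], []
for noise in np.linspace(0.0, 3.0, 15):
    w = mu + noise * rng.standard_normal(d)
    id_acc.append(accuracy(w, cov_scale=1.0))   # in-distribution test
    ood_acc.append(accuracy(w, cov_scale=2.0))  # covariance shift at test time

r = np.corrcoef(id_acc, ood_acc)[0, 1]
print(f"ID/OOD accuracy correlation: r = {r:.3f}")
```

Note that the paper reports the linear relationship most cleanly after transforming accuracies (e.g., a probit scale); this sketch only checks the raw correlation.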


Citations
Posted Content

CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP

TL;DR: This article proposed contrastive leave-one-out boost (CLOOB), which replaces the original embeddings with embeddings retrieved by modern Hopfield networks and trains with the InfoLOOB objective.
Posted Content

On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias.

TL;DR: The authors theoretically and empirically show that MLM pretraining makes models robust to lexicon-level spurious features, and they also explore the efficacy of pretrained masked language models in causal settings.
Proceedings ArticleDOI

On the Robustness of Reading Comprehension Models to Entity Renaming

TL;DR: Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren proposed a general and scalable method to replace person names with names from a variety of sources, ranging from common English names to names from other languages to arbitrary strings, and found that training on such renamed examples can further improve the robustness of MRC models.
References
Proceedings ArticleDOI

Pyramid Scene Parsing Network

TL;DR: This paper exploits global context information via different-region-based context aggregation, using a pyramid pooling module within the proposed pyramid scene parsing network (PSPNet), and produces good-quality results on the scene parsing task.
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Proceedings ArticleDOI

MobileNetV2: Inverted Residuals and Linear Bottlenecks

TL;DR: MobileNetV2 is based on an inverted residual structure in which the shortcut connections are between the thin bottleneck layers, and the intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity.
Proceedings ArticleDOI

Deep contextualized word representations

TL;DR: This paper introduced a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics), and how these uses vary across linguistic contexts (i.e., to model polysemy).