Open Access Posted Content
Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
John J. Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt
TLDR
In this article, the authors empirically show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts, and provide a candidate theory based on a Gaussian data model that shows how changes in the data covariance arising from distribution shift can affect the observed correlations.

Abstract:
For machine learning systems to be reliable, we must understand their performance in unseen, out-of-distribution environments. In this paper, we empirically show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts. Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet, a synthetic pose estimation task derived from YCB objects, satellite imagery classification in FMoW-WILDS, and wildlife classification in iWildCam-WILDS. The strong correlations hold across model architectures, hyperparameters, training set size, and training duration, and are more precise than what is expected from existing domain adaptation theory. To complete the picture, we also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS. Finally, we provide a candidate theory based on a Gaussian data model that shows how changes in the data covariance arising from distribution shift can affect the observed correlations.
Citations
Posted Content
CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
Andreas Fürst, Elisabeth Rumetshofer, Viet Hung Tran, Hubert Ramsauer, Fei Tang, Johannes M. Lehner, David P. Kreil, Michael K Kopp, Günter Klambauer, Angela Bitto-Nemling, Sepp Hochreiter
TL;DR: This article proposes contrastive leave-one-out boost (CLOOB), which replaces the original embeddings with retrieved embeddings in the InfoLOOB objective, stabilizing it.
Posted Content
On a Benefit of Masked Language Modeling: Robustness to Simplicity Bias
TL;DR: The authors theoretically and empirically show that MLM pretraining makes models robust to lexicon-level spurious features, and they also explore the efficacy of pretrained masked language models in causal settings.
Proceedings ArticleDOI
On the Robustness of Reading Comprehension Models to Entity Renaming
Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, Xiang Ren
TL;DR: The authors proposed a general and scalable method to replace person names with names from a variety of sources, ranging from common English names to names from other languages to arbitrary strings, and found that this can further improve the robustness of MRC models.
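The name-substitution idea in the summary above can be sketched as follows. This is a hedged illustration, not the authors' code: the passage, the name pool, and the `rename_entities` helper are all hypothetical stand-ins for a pipeline that would draw substitutes from common English names, names from other languages, or arbitrary strings.

```python
import random

# Hypothetical pool of substitute names drawn from a variety of sources.
NAME_POOL = ["Aiko", "Mateusz", "Nia", "Ravi"]

def rename_entities(text: str, names: list[str], rng: random.Random) -> str:
    """Replace each listed person name with a randomly chosen substitute."""
    for name in names:
        text = text.replace(name, rng.choice(NAME_POOL))
    return text

rng = random.Random(0)
passage = "Alice met Bob in Paris. Bob gave Alice a book."
renamed = rename_entities(passage, ["Alice", "Bob"], rng)
print(renamed)
```

In a robustness evaluation, one would run the reading-comprehension model on both the original and renamed passages and check whether its answers change.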
References
Proceedings Article
An analysis of single-layer networks in unsupervised feature learning
TL;DR: In this paper, the authors apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to the CIFAR, NORB, and STL datasets using only single-layer networks, and show that the number of hidden nodes in the model may be more important to achieving high performance than the learning algorithm or the depth of the model.
Posted Content
Exploring Simple Siamese Representation Learning
Xinlei Chen, Kaiming He
TL;DR: Surprising empirical results show that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
Book ChapterDOI
Evasion attacks against machine learning at test time
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
TL;DR: This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks.
Proceedings ArticleDOI
PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization
TL;DR: PoseNet uses a CNN to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner, with no need for additional engineering or graph optimisation.
Book ChapterDOI
Progressive Neural Architecture Search
Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan L. Yuille, Jonathan Huang, Kevin Murphy
TL;DR: In this article, a sequential model-based optimization (SMBO) strategy is proposed to search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space.
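The SMBO idea in the summary above can be illustrated with a toy beam search over operation sequences of increasing length. Everything here is invented for illustration: the op set, the per-op scores, and the `surrogate` function (in the real method the surrogate is a learned predictor of candidate accuracy, not a fixed lookup).

```python
# Toy sketch of sequential model-based optimization (SMBO): grow candidate
# "architectures" (tuples of ops) one step at a time, rank them with a cheap
# surrogate, and keep only the top-k (the beam) for the next round.
OPS = ["conv3", "conv5", "pool"]
PER_OP_SCORE = {"conv3": 2, "conv5": 3, "pool": 1}  # hypothetical quality scores

def surrogate(arch):
    # Stand-in for the learned predictor: sum of per-op scores.
    return sum(PER_OP_SCORE[op] for op in arch)

def smbo_search(max_depth=3, beam=2):
    frontier = [()]
    for _ in range(max_depth):
        # Expand every frontier candidate by one more op (increasing complexity).
        candidates = [arch + (op,) for arch in frontier for op in OPS]
        # Keep the beam of candidates the surrogate ranks highest.
        frontier = sorted(candidates, key=surrogate, reverse=True)[:beam]
    return frontier[0]

best = smbo_search()
print(best)
```

Because the toy surrogate is additive, the search simply stacks the highest-scoring op; a learned surrogate makes the same loop useful when candidate quality is expensive to evaluate directly.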