Gabriel Ilharco
Researcher at University of Washington
Publications - 34
Citations - 1475
Gabriel Ilharco is an academic researcher at the University of Washington. The author has contributed to research in topics including Computer science and Feature learning. The author has an h-index of 10 and has co-authored 23 publications receiving 617 citations.
Papers
Posted Content
Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
TL;DR: This work investigates how the performance of the best-found model varies as a function of the number of fine-tuning trials, and examines two factors influenced by the choice of random seed: weight initialization and training data order.
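As a rough illustration of the two factors the paper isolates, the sketch below (hypothetical, assuming PyTorch; the model and data are stand-ins, not the paper's setup) controls weight initialization and training data order with separate seeds, so varying one while fixing the other isolates each source of variance.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Fixed, hypothetical dataset contents so only init and order vary below.
g_data = torch.Generator().manual_seed(1234)
features = torch.randn(100, 16, generator=g_data)
labels = torch.randint(0, 2, (100,), generator=g_data)

def make_run(init_seed: int, data_seed: int):
    # One seed governs weight initialization...
    torch.manual_seed(init_seed)
    model = torch.nn.Linear(16, 2)  # stand-in for a fine-tuning head
    # ...and a separate seed governs the order examples are visited.
    loader = DataLoader(
        TensorDataset(features, labels),
        batch_size=8,
        shuffle=True,
        generator=torch.Generator().manual_seed(data_seed),
    )
    return model, loader

# Vary one seed while holding the other fixed to isolate each effect.
run_a = make_run(init_seed=0, data_seed=0)
run_b = make_run(init_seed=0, data_seed=1)  # same init, different data order
```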
Proceedings Article
Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou +25 more
TL;DR: This work proposes a more rigorous annotation paradigm for NLP that helps close systematic gaps in the test data, recommending that dataset authors manually perturb test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
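To make the idea concrete, here is a small hypothetical contrast pair for sentiment classification (not drawn from the paper's datasets): a minimal edit to the input flips the gold label, and a model is credited only when it classifies both instances correctly, akin to the paper's contrast consistency metric.

```python
# Hypothetical contrast pair: a small, meaningful edit flips the gold label.
original = {"text": "The acting was superb and the plot kept me hooked.",
            "label": "positive"}
contrast = {"text": "The acting was superb, but the plot lost me entirely.",
            "label": "negative"}

def contrast_consistency(predict, pairs):
    # Fraction of pairs where the model gets both the original and the
    # perturbed instance right (akin to the paper's contrast consistency).
    hits = sum(
        predict(o["text"]) == o["label"] and predict(c["text"]) == c["label"]
        for o, c in pairs
    )
    return hits / len(pairs)

# A keyword stub standing in for a real classifier, just to run the metric.
stub = lambda text: "negative" if "lost me" in text else "positive"
print(contrast_consistency(stub, [(original, contrast)]))  # 1.0
```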
Proceedings Article
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, Ludwig Schmidt +10 more
TL;DR: The model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks.
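The core recipe is simple enough to sketch. Below is a minimal, hypothetical PyTorch version of the paper's "uniform soup", which averages the weights of models fine-tuned from the same pretrained initialization; the paper's "greedy soup" variant instead adds models one at a time, keeping each only if held-out accuracy improves.

```python
import copy
import torch

def uniform_soup(models):
    # Average every parameter across models that share one architecture
    # and were fine-tuned from the same pretrained initialization.
    state_dicts = [m.state_dict() for m in models]
    averaged = {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    soup = copy.deepcopy(models[0])
    soup.load_state_dict(averaged)  # one model, so inference cost is unchanged
    return soup

# Stand-in models; in practice these would be fine-tuned checkpoints.
finetuned = [torch.nn.Linear(16, 2) for _ in range(3)]
soup = uniform_soup(finetuned)
```

Because the ensemble lives in weight space rather than in multiple forward passes, the soup costs exactly as much at inference as any single fine-tuned model.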
Posted Content
Evaluating NLP Models via Contrast Sets
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou +25 more
TL;DR: This work proposes a new annotation paradigm for NLP that helps close systematic gaps in the test data: after a dataset is constructed, the authors recommend manually perturbing the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Journal Article
Reproducible scaling laws for contrastive language-image learning
Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev +8 more
TL;DR: The authors investigate scaling laws for contrastive language-image pre-training (CLIP) using the public LAION dataset and the open-source OpenCLIP repository, identifying power-law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning.
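A power law of the form error ~ a * C^(-b) is typically fit by linear regression in log-log space; the sketch below shows the mechanics with NumPy on made-up numbers, not the paper's measurements.

```python
import numpy as np

# Hypothetical (compute, error) points; not the paper's measurements.
compute = np.array([1e9, 1e10, 1e11, 1e12])  # training compute budget
error = np.array([0.52, 0.41, 0.33, 0.26])   # zero-shot error rate

# Fit error ~ a * compute**(-b) via regression in log-log space:
# log(error) = log(a) - b * log(compute).
slope, intercept = np.polyfit(np.log(compute), np.log(error), deg=1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.3g} * C^(-{b:.3g})")
```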