Gabriel Ilharco

Researcher at University of Washington

Publications: 34
Citations: 1475

Gabriel Ilharco is an academic researcher at the University of Washington. He has contributed to research on topics including computer science and feature learning, has an h-index of 10, and has co-authored 23 publications receiving 617 citations.

Papers
Posted Content

Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping

TL;DR: This work investigates how the performance of the best-found model varies as a function of the number of fine-tuning trials, and examines two factors influenced by the choice of random seed: weight initialization and training data order.
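The two factors named above are easy to isolate in practice. Below is a minimal sketch, assuming PyTorch; the function and variable names are illustrative, not taken from the paper's released code. One seed governs the initialization of the new task-specific layer, while a separate seed governs the order in which training data is shuffled:

```python
# Minimal sketch, assuming PyTorch. Variable and function names are
# illustrative, not from the paper's released code.
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_run(init_seed, data_seed, dataset):
    # Seed for weight initialization of the new classification head.
    torch.manual_seed(init_seed)
    head = torch.nn.Linear(768, 2)  # fresh task-specific layer
    # Separate seed for the order in which training data is visited.
    gen = torch.Generator().manual_seed(data_seed)
    loader = DataLoader(dataset, batch_size=16, shuffle=True, generator=gen)
    return head, loader

# Toy usage: vary one seed while holding the other fixed.
dataset = TensorDataset(torch.randn(64, 768), torch.randint(0, 2, (64,)))
head_a, loader_a = make_run(init_seed=0, data_seed=0, dataset=dataset)
head_b, loader_b = make_run(init_seed=0, data_seed=1, dataset=dataset)  # same init, new data order
```

Holding one seed fixed while varying the other lets the two sources of randomness be measured separately across fine-tuning trials.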
Proceedings Article

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

TL;DR: The model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks.
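The core recipe is simple weight averaging across fine-tuned checkpoints that share one architecture. A minimal sketch of a "uniform soup", assuming PyTorch; the helper name and checkpoint paths are hypothetical:

```python
# Minimal sketch of a "uniform soup": key-by-key weight averaging of
# fine-tuned checkpoints that share one architecture. The helper name
# and checkpoint paths are hypothetical.
import torch

def uniform_soup(state_dicts):
    # Average every parameter tensor across the checkpoints.
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Usage with hypothetical paths:
# souped = uniform_soup([torch.load(p, map_location="cpu")
#                        for p in ["ft_seed0.pt", "ft_seed1.pt", "ft_seed2.pt"]])
# model.load_state_dict(souped)
```

Because only the weights are averaged into a single model, inference cost is the same as for one fine-tuned model, which is why accuracy improves without increasing inference time.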
Posted Content

Evaluating NLP Models via Contrast Sets

TL;DR: A new annotation paradigm for NLP is proposed that helps to close systematic gaps in the test data, and it is recommended that after a dataset is constructed, the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Journal Article

Reproducible scaling laws for contrastive language-image learning

TL;DR: The authors investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing and end-to-end fine-tuning.
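A power law of the form error ≈ a · compute^b can be identified by a straight-line fit in log-log space. The sketch below uses NumPy; all numbers are made up for illustration and are not results from the paper:

```python
# Minimal sketch: identify power-law scaling error ≈ a * compute^b via a
# linear fit in log-log space. All numbers below are made up for
# illustration, not taken from the paper.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])   # hypothetical training compute (FLOPs)
error = np.array([0.42, 0.33, 0.26, 0.20])     # hypothetical zero-shot error rates

b, log_a = np.polyfit(np.log(compute), np.log(error), deg=1)
print(f"error ≈ {np.exp(log_a):.3g} * compute^{b:.3f}")
```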