Chris Hallacy
Researcher at OpenAI
Publications: 6
Citations: 674
Chris Hallacy is an academic researcher at OpenAI who has contributed to research on topics including Usability and Natural language. He has an h-index of 3 and has co-authored 3 publications receiving 486 citations.
Papers
Posted Content
Learning Transferable Visual Models From Natural Language Supervision
Alec Radford,Jong Wook Kim,Chris Hallacy,Aditya Ramesh,Gabriel Goh,Sandhini Agarwal,Girish Sastry,Amanda Askell,Pamela Mishkin,Jack Clark,Gretchen Krueger,Ilya Sutskever +11 more
TL;DR: A pre-training task of predicting which caption goes with which image is used to learn state-of-the-art (SOTA) image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
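For context, a minimal sketch of the symmetric contrastive objective this summary describes, assuming two encoders have already produced paired image and text features (function and variable names here are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-entropy over cosine similarities of N (image, text) pairs."""
    # L2-normalize so dot products are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # [N, N] similarity matrix; entry (i, j) scores image i against caption j.
    logits = image_features @ text_features.t() / temperature

    # The correct caption for image i is caption i, so labels are the diagonal.
    labels = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image->text and text->image), averaged.
    loss_i2t = F.cross_entropy(logits, labels)
    loss_t2i = F.cross_entropy(logits.t(), labels)
    return (loss_i2t + loss_t2i) / 2

# Example with random features standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```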
Posted Content
Scaling Laws for Autoregressive Generative Modeling
Thomas Henighan,Jared Kaplan,Mor Katz,Mark Chen,Christopher Hesse,Jacob Jackson,Heewoo Jun,Tom B. Brown,Prafulla Dhariwal,Scott Gray,Chris Hallacy,Benjamin Mann,Alec Radford,Aditya Ramesh,Nick Ryder,Daniel M. Ziegler,John Schulman,Dario Amodei,Samuel McCandlish +18 more
TL;DR: Empirical scaling laws for the cross-entropy loss are identified across generative modeling domains, strengthening the case that scaling laws have important implications for neural network performance, including on downstream tasks.
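The kind of fit such scaling laws involve can be sketched as a power law plus an irreducible term, L(N) ≈ L_inf + (N0/N)^α. Below is a hedged illustration on synthetic data; the constants and variable names are placeholders, not results from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, l_inf, log10_n0, alpha):
    """Cross-entropy loss as an irreducible term plus a power law in size n."""
    return l_inf + (10.0 ** log10_n0 / n) ** alpha

# Synthetic (model size, loss) points roughly following such a law.
rng = np.random.default_rng(0)
n = np.logspace(6, 10, 12)  # model sizes from 1e6 to 1e10 parameters
loss = scaling_law(n, 1.7, 13.9, 0.076) + rng.normal(0.0, 0.005, n.size)

# Fit the three parameters; log10_n0 keeps the optimization well-scaled.
(l_inf, log10_n0, alpha), _ = curve_fit(scaling_law, n, loss,
                                        p0=(1.0, 13.0, 0.1))
print(f"fit: L_inf={l_inf:.2f}, alpha={alpha:.3f}")
```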
Journal Article
Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan,Tao Xu,Raul Puri,Alec Radford,Jesse Michael Han,Jerry Tworek,Qiming Yuan,Nikolas Tezak,Jong Wook Kim,Chris Hallacy,Johannes Heidecke,Pranav Shyam,Boris Power,Tyna Eloundou Nekoul,Girish Sastry,Gretchen Krueger,David P. Schnurr,Felipe Petroski Such,K S Hsu,Madeleine Thompson,Tabarak Khan,Toki Sherbakov,Joanne Jang,Peter Welinder,Lilian Weng +24 more
TL;DR: It is shown that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code.
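A minimal sketch of contrastive pre-training with in-batch negatives, in the spirit of this summary; the toy mean-pooling encoder and names below are stand-ins, not the paper's transformer:

```python
import torch
import torch.nn.functional as F

class PairEncoder(torch.nn.Module):
    """Embeds token-id sequences by mean-pooling learned token embeddings."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.embed(token_ids).mean(dim=1), dim=-1)

encoder = PairEncoder()
queries = torch.randint(0, 1000, (16, 32))    # e.g. natural-language queries
positives = torch.randint(0, 1000, (16, 32))  # e.g. matching text or code

q, p = encoder(queries), encoder(positives)
logits = q @ p.t() / 0.05            # every other pair in the batch acts
labels = torch.arange(q.size(0))     # as an implicit negative
loss = F.cross_entropy(logits, labels)
loss.backward()
```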
Active Learning is a Strong Baseline for Data Subset Selection
Tom B. Brown,N. C. Ryder,Jared Kaplan,Prafulla Dhariwal,Arvind Neelakantan,Pranav Shyam,Girish Sastry,Alec Radford,Chris Hallacy,Aditya Ramesh,Gabriel Goh,Sandhini Agarwal,Alexey Dosovitskiy,Lucas Beyer,Alexander Kolesnikov,Dirk Weissenborn,Xiaohua Zhai,Sören Mindermann,Jan Markus Brauner +18 more
TL;DR: A simple active learning-based algorithm outperforms all current data subset selection algorithms on the benchmark tasks, and it is found to be crucial to balance easy-to-classify and hard-to-classify examples when selecting a subset.
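As a hedged illustration of such a selection loop, the sketch below uses margin-based uncertainty to mix hard (small-margin) and easy (large-margin) examples; the strategy and parameters are illustrative, not the paper's exact algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
selected = list(rng.choice(len(X), size=50, replace=False))  # random seed subset

for _ in range(5):  # selection rounds
    model = LogisticRegression(max_iter=1000).fit(X[selected], y[selected])
    pool = np.setdiff1d(np.arange(len(X)), selected)
    proba = model.predict_proba(X[pool])
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]  # small margin = hard example

    # Balance hard (small-margin) and easy (large-margin) examples.
    order = np.argsort(margin)
    picks = np.concatenate([order[:25], order[-25:]])
    selected.extend(pool[picks].tolist())

print(f"selected subset size: {len(selected)}")
```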