Jack Clark
Researcher at OpenAI
Publications - 27
Citations - 14001
Jack Clark is an academic researcher from OpenAI. The author has contributed to research on topics including computer science and language models. The author has an h-index of 9 and has co-authored 16 publications receiving 4,067 citations.
Papers
Proceedings Article
Language Models are Few-Shot Learners
Tom B. Brown,Benjamin Mann,Nick Ryder,Melanie Subbiah,Jared Kaplan,Prafulla Dhariwal,Arvind Neelakantan,Pranav Shyam,Girish Sastry,Amanda Askell,Sandhini Agarwal,Ariel Herbert-Voss,Gretchen Krueger,Thomas Henighan,Rewon Child,Aditya Ramesh,Daniel M. Ziegler,Jeffrey Wu,Clemens Winter,Christopher Hesse,Mark Chen,Eric Sigler,Mateusz Litwin,Scott Gray,Benjamin Chess,Jack Clark,Christopher Berner,Samuel McCandlish,Alec Radford,Ilya Sutskever,Dario Amodei +30 more
TL;DR: GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
Posted Content
Language Models are Few-Shot Learners
Tom B. Brown,Benjamin Mann,Nick Ryder,Melanie Subbiah,Jared Kaplan,Prafulla Dhariwal,Arvind Neelakantan,Pranav Shyam,Girish Sastry,Amanda Askell,Sandhini Agarwal,Ariel Herbert-Voss,Gretchen Krueger,Thomas Henighan,Rewon Child,Aditya Ramesh,Daniel M. Ziegler,Jeffrey Wu,Clemens Winter,Christopher Hesse,Mark Chen,Eric Sigler,Mateusz Litwin,Scott Gray,Benjamin Chess,Jack Clark,Christopher Berner,Samuel McCandlish,Alec Radford,Ilya Sutskever,Dario Amodei +30 more
TL;DR: This article showed that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.
Posted ContentDOI
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Miles Brundage,Shahar Avin,Jack Clark,Helen Toner,Peter Eckersley,Ben Garfinkel,Allan Dafoe,Paul Scharre,Thomas Zeitzoff,Bobby Filar,Hyrum S. Anderson,Heather M. Roff,Gregory C. Allen,Jacob Steinhardt,Carrick Flynn,Seán Ó hÉigeartaigh,Simon Beard,Haydn Belfield,Sebastian Farquhar,Clare Lyle,Rebecca Crootof,Owain Evans,Michael Page,Joanna J. Bryson,Roman V. Yampolskiy,Dario Amodei +25 more
TL;DR: The report names the following organisations: Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; and OpenAI.
Posted Content
Learning Transferable Visual Models From Natural Language Supervision
Alec Radford,Jong Wook Kim,Chris Hallacy,Aditya Ramesh,Gabriel Goh,Sandhini Agarwal,Girish Sastry,Amanda Askell,Pamela Mishkin,Jack Clark,Gretchen Krueger,Ilya Sutskever +11 more
TL;DR: In this article, a pre-training task of predicting which caption goes with which image is used to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
Posted Content
Release Strategies and the Social Impacts of Language Models.
Irene Solaiman,Miles Brundage,Jack Clark,Amanda Askell,Ariel Herbert-Voss,Jeffrey Wu,Alec Radford,Jasmine Wang +7 more
TL;DR: This report discusses OpenAI's work related to the release of its GPT-2 language model, including staged release, which allows time between model releases to conduct risk and benefit analyses as model sizes increase.