
Jeffrey Wu

Researcher at OpenAI

Publications: 11
Citations: 16,786

Jeffrey Wu is an academic researcher at OpenAI. He has contributed to research on topics including language modeling and automatic summarization. The author has an h-index of 8 and has co-authored 9 publications receiving 3,933 citations.

Papers
Proceedings ArticleDOI

Training language models to follow instructions with human feedback

TL;DR: The results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent, yielding improvements in truthfulness and reductions in toxic output generation while incurring minimal performance regressions on public NLP datasets.
Posted Content

Scaling Laws for Neural Language Models

TL;DR: Larger models are significantly more sample-efficient, so optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
Proceedings Article

Generative Pretraining From Pixels

TL;DR: This work trains a sequence Transformer to autoregressively predict pixels, without incorporating knowledge of the 2D input structure, and finds that a GPT-2-scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification.