
Rosanne Liu

Researcher at Uber

Publications: 29
Citations: 2566

Rosanne Liu is an academic researcher from Uber. The author has contributed to research in topics including computer science and artificial neural networks. The author has an h-index of 12 and has co-authored 24 publications receiving 1346 citations.

Papers
Proceedings Article

An intriguing failing of convolutional neural networks and the CoordConv solution

TL;DR: This paper proposes CoordConv, which gives convolution access to its own input coordinates through extra coordinate channels, allowing networks to learn either complete translation invariance or varying degrees of translation dependence, as required by the end task.
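A minimal sketch of the coordinate-channel idea described above, assuming PyTorch; the class name CoordConv2d and the normalization of coordinates to [-1, 1] are illustrative choices, not necessarily the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Concatenate normalized x/y coordinate channels before a standard convolution."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # two extra input channels: row coordinates and column coordinates
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # coordinate grids normalized to [-1, 1], broadcast over the batch
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        # the filter now sees where it is in the image, so it can learn
        # translation dependence or simply ignore the extra channels
        return self.conv(torch.cat([x, ys, xs], dim=1))
```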
Journal Article

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Aarohi Srivastava, +439 more
09 Jun 2022
TL;DR: Evaluation of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters, finds that model performance and calibration both improve with scale but remain poor in absolute terms.
Posted Content

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

TL;DR: Proposes the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.
Proceedings Article

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

TL;DR: The Plug and Play Language Model (PPLM) combines a pre-trained transformer-based language model with one or more simple attribute classifiers that guide text generation without any further training of the transformer.
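A simplified sketch of the plug-and-play control loop, assuming PyTorch; this is not the authors' exact algorithm (which perturbs the transformer's past key-value activations), and the names lm_head, attribute_clf, and step_size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def steered_logits(lm_head, hidden, attribute_clf, target, step_size=0.02, n_steps=3):
    """Nudge a frozen LM's hidden state toward an attribute class before decoding."""
    delta = torch.zeros_like(hidden, requires_grad=True)
    for _ in range(n_steps):
        # the small attribute classifier scores the perturbed representation
        loss = F.cross_entropy(attribute_clf(hidden + delta), target)
        loss.backward()
        with torch.no_grad():
            # gradient step on the perturbation only; LM weights are never updated
            delta -= step_size * delta.grad / (delta.grad.norm() + 1e-8)
            delta.grad.zero_()
    # decode from the steered representation with the frozen LM head
    return lm_head(hidden + delta.detach())
```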
Posted Content

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

TL;DR: This paper studies the three critical components of the Lottery Ticket algorithm, showing that each may be varied significantly without impacting the overall results, why setting weights to zero is important, how signs are all you need to make the reinitialized network train, and why masking behaves like training.
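A minimal sketch of the masking idea referenced above, assuming PyTorch; the class name SupermaskLinear, the keep-largest-magnitude criterion, and the keep ratio are illustrative assumptions rather than the paper's exact supermask procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupermaskLinear(nn.Module):
    """Frozen, randomly initialized weights gated by a fixed binary mask."""
    def __init__(self, in_features, out_features, keep_ratio=0.5):
        super().__init__()
        w = torch.randn(out_features, in_features) / in_features ** 0.5
        self.register_buffer("weight", w)  # weights are never trained
        # illustrative criterion: keep the largest-magnitude weights, zero the rest
        k = int(keep_ratio * w.numel())
        threshold = w.abs().flatten().topk(k).values.min()
        self.register_buffer("mask", (w.abs() >= threshold).float())

    def forward(self, x):
        # which connections survive is decided by the mask, not by gradient descent;
        # this is the sense in which masking can behave like training
        return F.linear(x, self.weight * self.mask)
```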