Jack W. Rae

Researcher at Google

Publications: 36
Citations: 2911

Jack W. Rae is an academic researcher at Google. He has contributed to research in the topics of Artificial neural network and Language model, has an h-index of 22, and has co-authored 36 publications receiving 1905 citations. Previous affiliations of Jack W. Rae include University College London.

Papers
Proceedings Article

Compressive Transformers for Long-Range Sequence Modelling

TL;DR: The Compressive Transformer is presented: an attentive sequence model that compresses past memories for long-range sequence learning. It models high-frequency speech effectively and can serve as a memory mechanism for RL, as demonstrated on an object-matching task.
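The paper's central idea, keeping a coarse compressed memory instead of discarding old activations as Transformer-XL does, can be sketched in a few lines. The version below assumes mean-pooling as the compression function with rate c (the paper also evaluates convolutional and attention-based compressors); the function name is illustrative, not from the authors' code.

```python
import torch

def compress_oldest(memory: torch.Tensor, c: int = 3) -> torch.Tensor:
    """Mean-pool evicted memories by compression rate c.

    memory: (seq_len, batch, d_model) activations evicted from the
    regular memory; returns (seq_len // c, batch, d_model).
    """
    s, b, d = memory.shape
    return memory[: s - s % c].reshape(s // c, c, b, d).mean(dim=1)

# Toy usage: 6 evicted timesteps become 2 compressed slots.
old = torch.randn(6, 1, 8)
print(compress_oldest(old, c=3).shape)  # torch.Size([2, 1, 8])
```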
Posted Content

Model-Free Episodic Control

TL;DR: This work demonstrates that a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks: it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
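Episodic control reduces to a compact data structure: per action, a table mapping state embeddings to the highest return observed so far, updated by a max rather than a gradient step. A minimal sketch follows, assuming random-projection keys and exact-match lookup (the paper estimates values for novel states with k-nearest neighbours); all names here are hypothetical.

```python
import numpy as np

class EpisodicController:
    """Per-action table: state key -> best episodic return seen."""

    def __init__(self, obs_dim: int, key_dim: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(size=(obs_dim, key_dim))  # random projection
        self.tables = {}  # action -> {key: max return}

    def _key(self, obs: np.ndarray) -> bytes:
        return np.sign(obs @ self.proj).tobytes()

    def update(self, obs, action, episodic_return: float) -> None:
        table = self.tables.setdefault(action, {})
        k = self._key(obs)
        table[k] = max(table.get(k, float("-inf")), episodic_return)

    def value(self, obs, action) -> float:
        return self.tables.get(action, {}).get(self._key(obs), 0.0)

ec = EpisodicController(obs_dim=4)
ec.update(np.ones(4), action=1, episodic_return=5.0)
print(ec.value(np.ones(4), 1))  # 5.0
```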
Posted Content

Unsupervised Predictive Memory in a Goal-Directed Agent

TL;DR: MERLIN, the Memory, RL, and Inference Network, is a model in which memory formation is guided by a process of predictive modeling. It demonstrates a single learning-agent architecture that can solve canonical behavioural tasks in psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.
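The distinguishing point is that what gets written to memory is shaped by a prediction loss, not by the task reward. The toy sketch below illustrates that split with a plain autoencoder standing in for MERLIN's variational predictive model; every module name is illustrative and the memory read is deliberately crude.

```python
import torch
import torch.nn as nn

class PredictiveMemoryAgent(nn.Module):
    """Toy MERLIN-style split: memory contents are trained by
    prediction; the policy only reads detached latents."""

    def __init__(self, obs_dim=32, z_dim=8, slots=16, n_actions=4):
        super().__init__()
        self.enc = nn.Linear(obs_dim, z_dim)
        self.dec = nn.Linear(z_dim, obs_dim)
        self.policy = nn.Linear(z_dim, n_actions)
        self.register_buffer("memory", torch.zeros(slots, z_dim))
        self.ptr = 0

    def step(self, obs: torch.Tensor):
        z = self.enc(obs)
        recon_loss = (self.dec(z) - obs).pow(2).mean()  # this loss shapes z
        # Write the latent detached: policy gradients never reach it.
        self.memory[self.ptr] = z.detach()
        self.ptr = (self.ptr + 1) % self.memory.shape[0]
        read = self.memory.mean(dim=0)  # crude read: average of all slots
        return self.policy(read + z.detach()), recon_loss

agent = PredictiveMemoryAgent()
logits, loss = agent.step(torch.randn(32))
```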
Posted Content

Stabilizing Transformers for Reinforcement Learning

TL;DR: The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture.
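The gating that names the architecture replaces the transformer's residual connections; per the paper, the strongest variant is GRU-style, with a bias initialised positive so each layer starts close to an identity map. Below is an illustrative re-implementation of that gating layer, not the authors' code.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gate g(x, y) replacing the residual sum x + y.

    x is the skip-connection input, y the sublayer output
    (attention or feed-forward).
    """

    def __init__(self, d_model: int, bias_init: float = 2.0):
        super().__init__()
        self.Wr = nn.Linear(d_model, d_model, bias=False)
        self.Ur = nn.Linear(d_model, d_model, bias=False)
        self.Wz = nn.Linear(d_model, d_model, bias=False)
        self.Uz = nn.Linear(d_model, d_model, bias=False)
        self.Wg = nn.Linear(d_model, d_model, bias=False)
        self.Ug = nn.Linear(d_model, d_model, bias=False)
        # Positive bias keeps z near 0 early, so output ~= x (identity).
        self.bg = nn.Parameter(torch.full((d_model,), bias_init))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))             # reset gate
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bg)   # update gate
        h = torch.tanh(self.Wg(y) + self.Ug(r * x))            # candidate
        return (1 - z) * x + z * h
```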