Jack W. Rae
Researcher at Google
Publications - 36
Citations - 2911
Jack W. Rae is an academic researcher at Google who has contributed to research on topics including artificial neural networks and language models. He has an h-index of 22 and has co-authored 36 publications receiving 1,905 citations. His previous affiliations include University College London.
Papers
Journal Article
A clinically applicable approach to continuous prediction of future acute kidney injury
Nenad Tomasev, Xavier Glorot, Jack W. Rae, Michal Zielinski, Harry Askham, Andre Saraiva, Anne Mottram, Clemens Meyer, Suman V. Ravuri, Ivan Protsyuk, Alistair Connell, Cian Hughes, Alan Karthikesalingam, Julien Cornebise, Hugh Montgomery, Geraint Rees, Chris Laing, Clifton R. Baker, Kelly S. Peterson, Ruth M. Reeves, Demis Hassabis, Dominic King, Mustafa Suleyman, Trevor Back, Christopher Nielson, Joseph R. Ledsam, Shakir Mohamed +27 more
TL;DR: A deep learning approach is developed that predicts the risk of acute kidney injury and provides confidence assessments, a list of the clinical features most salient to each prediction, and predicted future trajectories for clinically relevant blood tests.
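The sketch below illustrates the general shape of such a continuous-prediction model: a recurrent network over time-binned patient features with one head for near-term AKI risk and one for future trajectories of clinically relevant lab values. The GRU backbone, layer names, and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: recurrent risk prediction over an EHR time series.
# All names, dimensions, and the GRU backbone are illustrative assumptions.
import torch
import torch.nn as nn

class AKIRiskModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 256, n_labs: int = 4):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.risk_head = nn.Linear(hidden, 1)      # P(AKI within some horizon)
        self.lab_head = nn.Linear(hidden, n_labs)  # future lab-value trajectories

    def forward(self, x):  # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        risk = torch.sigmoid(self.risk_head(h))    # per-timestep risk estimate
        labs = self.lab_head(h)                    # per-timestep lab predictions
        return risk, labs
```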
Proceedings Article
Compressive Transformers for Long-Range Sequence Modelling
TL;DR: The Compressive Transformer is presented: an attentive sequence model that compresses past memories for long-range sequence learning. It models high-frequency speech effectively and can serve as a memory mechanism for RL, demonstrated on an object-matching task.
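A minimal sketch of the compression step, assuming a strided 1D convolution as the compression function; the names and the compression rate here are illustrative:

```python
# Hedged sketch: compress activations that fall out of the attention window
# into a smaller "compressed memory" instead of discarding them.
import torch
import torch.nn as nn

class CompressiveMemory(nn.Module):
    def __init__(self, d_model: int, compression_rate: int = 3):
        super().__init__()
        self.compress = nn.Conv1d(d_model, d_model,
                                  kernel_size=compression_rate,
                                  stride=compression_rate)

    def forward(self, old_mems):  # old_mems: (batch, seq, d_model)
        # Conv1d expects (batch, channels, seq); shrink seq by the rate.
        x = old_mems.transpose(1, 2)
        return self.compress(x).transpose(1, 2)  # (batch, seq // rate, d_model)
```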
Posted Content
Model-Free Episodic Control
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack W. Rae, Daan Wierstra, Demis Hassabis +8 more
TL;DR: This work demonstrates that a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks; it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms but also achieves a higher overall reward on some of the more challenging domains.
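A minimal sketch of tabular episodic control in this spirit, assuming a per-action table keyed by a coarse state embedding; the rounding-based key and Euclidean k-NN fallback are illustrative choices, not the paper's exact design:

```python
# Hedged sketch: keep the best Monte-Carlo return seen for each (state, action)
# key; estimate unseen states by averaging the k nearest stored neighbours.
import numpy as np

class EpisodicController:
    def __init__(self, n_actions: int, k: int = 5):
        self.q = [dict() for _ in range(n_actions)]   # key -> best return seen
        self.keys = [[] for _ in range(n_actions)]
        self.k = k

    def _key(self, s):
        return tuple(np.round(s, 2))                  # coarse state key (assumed)

    def estimate(self, s, a):
        key = self._key(s)
        if key in self.q[a]:
            return self.q[a][key]
        if not self.keys[a]:
            return 0.0
        dists = [np.linalg.norm(np.subtract(key, kk)) for kk in self.keys[a]]
        nearest = np.argsort(dists)[: self.k]         # k-NN over stored keys
        return float(np.mean([self.q[a][self.keys[a][i]] for i in nearest]))

    def update(self, s, a, mc_return):
        key = self._key(s)
        if key not in self.q[a]:
            self.keys[a].append(key)
            self.q[a][key] = mc_return
        else:
            self.q[a][key] = max(self.q[a][key], mc_return)  # keep the best return
```

Acting greedily over `estimate(s, a)` and updating with episode returns reproduces the fast, non-parametric learning the summary describes.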
Posted Content
Unsupervised Predictive Memory in a Goal-Directed Agent
Greg Wayne, Chia-Chun Hung, David Amos, Mehdi Mirza, Arun Ahuja, Agnieszka Grabska-Barwinska, Jack W. Rae, Piotr Mirowski, Joel Z. Leibo, Adam Santoro, Mevlana Gemici, Malcolm Reynolds, Tim Harley, Josh Abramson, Shakir Mohamed, Danilo Jimenez Rezende, David Saxton, Adam Cain, Chloe Hillier, David Silver, Koray Kavukcuoglu, Matthew Botvinick, Demis Hassabis, Timothy P. Lillicrap +23 more
TL;DR: A model, the Memory, RL, and Inference Network (MERLIN), in which memory formation is guided by predictive modeling, demonstrates a single learning-agent architecture that can solve canonical behavioural tasks from psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.
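A heavily simplified sketch of the predictive-memory idea: a latent state trained with a reconstruction (predictive) loss, rather than by RL gradients alone, is written to an external memory that the policy reads via content-based attention. Every detail below (linear encoder/decoder, circular writes, dimensions, action count) is an illustrative assumption, not the MERLIN architecture:

```python
# Hedged sketch: a predictive loss shapes the latent; the policy reads memory
# by content-based attention.
import torch
import torch.nn as nn

class PredictiveMemoryAgent(nn.Module):
    def __init__(self, obs_dim: int, z_dim: int = 64, mem_slots: int = 128,
                 n_actions: int = 4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, z_dim)
        self.decoder = nn.Linear(z_dim, obs_dim)       # predictive/reconstruction path
        self.policy = nn.Linear(2 * z_dim, n_actions)  # acts on latent + memory read
        self.memory = torch.zeros(mem_slots, z_dim)
        self.ptr = 0

    def forward(self, obs):  # obs: (obs_dim,)
        z = torch.tanh(self.encoder(obs))
        mem = self.memory.clone()                      # snapshot for a clean read
        attn = torch.softmax(mem @ z, dim=0)           # content-based addressing
        read = attn @ mem
        self.memory[self.ptr] = z.detach()             # circular-buffer write
        self.ptr = (self.ptr + 1) % self.memory.shape[0]
        logits = self.policy(torch.cat([z, read]))
        recon = self.decoder(z)                        # train z with a predictive loss
        return logits, recon
```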
Posted Content
Stabilizing Transformers for Reinforcement Learning
Emilio Parisotto, H. Francis Song, Jack W. Rae, Razvan Pascanu, Caglar Gulcehre, Siddhant M. Jayakumar, Max Jaderberg, Raphael Lopez Kaufman, Aidan Clark, Seb Noury, Matthew Botvinick, Nicolas Heess, Raia Hadsell +12 more
TL;DR: The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture.
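A minimal sketch of the GRU-style gating that GTrXL places where a vanilla transformer has a residual connection; the parameterization below follows standard GRU equations, with the update-gate bias initialised so the layer starts near the identity, and should be read as a sketch of the idea rather than a faithful reimplementation:

```python
# Hedged sketch: gated residual connection (GRU-style) for transformer sublayers.
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    def __init__(self, d_model: int, bias_init: float = 2.0):
        super().__init__()
        self.wr = nn.Linear(d_model, d_model, bias=False)
        self.ur = nn.Linear(d_model, d_model, bias=False)
        self.wz = nn.Linear(d_model, d_model, bias=False)
        self.uz = nn.Linear(d_model, d_model, bias=False)
        self.wg = nn.Linear(d_model, d_model, bias=False)
        self.ug = nn.Linear(d_model, d_model, bias=False)
        # Biasing the update gate keeps the layer close to the identity at
        # initialisation, the stabilisation trick the summary alludes to.
        self.bz = nn.Parameter(torch.full((d_model,), bias_init))

    def forward(self, x, y):  # x: residual stream input, y: sublayer output
        r = torch.sigmoid(self.wr(y) + self.ur(x))
        z = torch.sigmoid(self.wz(y) + self.uz(x) - self.bz)
        h = torch.tanh(self.wg(y) + self.ug(r * x))
        return (1 - z) * x + z * h                 # gated mix instead of x + y
```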