
Richard E. Turner

Researcher at University of Cambridge

Publications -  197
Citations -  7401

Richard E. Turner is an academic researcher at the University of Cambridge. He has contributed to research topics including Gaussian processes and inference, has an h-index of 45, and has co-authored 176 publications receiving 5827 citations. Previous affiliations of Richard E. Turner include University College London.

Papers
Proceedings ArticleDOI

Variational continual learning

TL;DR: Variational continual learning (VCL), as presented in this paper, is a general framework for continual learning that fuses online variational inference (VI) with recent advances in Monte Carlo VI for neural networks; it can successfully train both deep discriminative models and deep generative models in complex continual learning settings.
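The core of VCL is a recursion in which the (approximate) posterior after one task becomes the prior for the next. A minimal sketch of that recursion, using a toy conjugate Gaussian model (scalar mean, unit-variance likelihood) where the online update is exact, so the streamed posterior must match the batch posterior; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def gaussian_posterior(mu0, prec0, data, noise_prec=1.0):
    # Conjugate update: prior N(mu0, 1/prec0), likelihood x ~ N(theta, 1/noise_prec).
    prec = prec0 + noise_prec * len(data)
    mu = (prec0 * mu0 + noise_prec * np.sum(data)) / prec
    return mu, prec

rng = np.random.default_rng(0)
task1 = rng.normal(2.0, 1.0, size=50)
task2 = rng.normal(2.0, 1.0, size=50)

# VCL-style recursion: posterior after task 1 is the prior for task 2.
mu1, prec1 = gaussian_posterior(0.0, 1.0, task1)
mu_online, prec_online = gaussian_posterior(mu1, prec1, task2)

# Batch posterior over all the data at once, for comparison.
mu_batch, prec_batch = gaussian_posterior(0.0, 1.0, np.concatenate([task1, task2]))
print(mu_online, mu_batch)  # the two agree in this conjugate case
```

In the neural-network setting of the paper, the update is no longer exact and each step instead minimises a KL divergence to the tempered product of the previous posterior and the new task's likelihood.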
Posted Content

Gaussian Process Behaviour in Wide Deep Neural Networks

TL;DR: In this paper, the authors study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition and show that, under broad conditions, as they make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process.
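The convergence can be checked empirically: sample many wide random ReLU networks at a fixed input and compare the spread of their outputs to the variance predicted by the recursive kernel. A small numpy sketch under standard He-style initialisation (weights N(0, 2/fan_in), no biases), for which the recursive kernel simply preserves variance layer to layer; architecture and constants here are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
d, width, depth, n_nets = 3, 512, 3, 4000
x = np.array([1.0, -0.5, 2.0])

def random_relu_net(x):
    # depth-1 hidden ReLU layers plus a linear readout, He-style init
    h = x
    for _ in range(depth - 1):
        W = rng.normal(0.0, np.sqrt(2.0 / h.shape[0]), size=(width, h.shape[0]))
        h = np.maximum(W @ h, 0.0)
    w = rng.normal(0.0, np.sqrt(2.0 / width), size=width)
    return w @ h

outputs = np.array([random_relu_net(x) for _ in range(n_nets)])

# For ReLU + He init the recursive kernel preserves variance layer to layer,
# so the limiting GP variance at x is just the first-layer value (2/d) * ||x||^2.
analytic_var = 2.0 / d * np.sum(x ** 2)
print(outputs.var(), analytic_var)  # empirical variance ≈ GP kernel prediction
```

At width 512 the match is already close; the paper's result is that, under broad conditions, the full distribution over functions converges to the GP with this recursively defined kernel.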
Proceedings ArticleDOI

Q-Prop: Sample-efficient policy gradient with an off-policy critic

TL;DR: Q-Prop as discussed by the authors uses a Taylor expansion of the off-policy critic as a control variate to combine the stability of policy gradients with the efficiency of off-policy RL.
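The control-variate idea can be shown in one dimension: subtract a first-order Taylor expansion of the "critic" from the score-function estimator, then add its expectation back analytically. A toy sketch (x ~ N(theta, 1), critic f(x) = x², true gradient 2·theta), with all names and constants chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.5
f = lambda x: x ** 2   # toy "critic"; true grad of E[f(x)] is 2 * theta
df = lambda x: 2 * x   # its derivative, used for the Taylor control variate

def grad_estimates(batch):
    score = batch - theta                    # d/dtheta log N(x | theta, 1)
    vanilla = np.mean(score * f(batch))      # plain score-function estimator
    # Q-Prop-style: subtract the first-order Taylor expansion of the critic
    # around theta, then add its expectation back analytically (= df(theta)).
    taylor = f(theta) + df(theta) * (batch - theta)
    controlled = np.mean(score * (f(batch) - taylor)) + df(theta)
    return vanilla, controlled

ests = np.array([grad_estimates(rng.normal(theta, 1.0, size=64))
                 for _ in range(2000)])
print("true grad:", 2 * theta)
print("vanilla    mean/var:", ests[:, 0].mean(), ests[:, 0].var())
print("controlled mean/var:", ests[:, 1].mean(), ests[:, 1].var())
```

Both estimators are unbiased, but the controlled one has markedly lower variance; in Q-Prop the same construction is applied to the policy gradient, with an off-policy-trained critic playing the role of f.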
Journal ArticleDOI

The processing and perception of size information in speech sounds

TL;DR: The experiments support the hypothesis that the auditory system automatically normalizes for the size information in communication sounds.
Book ChapterDOI

Two problems with variational expectation maximisation for time-series models

TL;DR: In this paper, the authors investigate the success of variational expectation maximisation (vEM) in simple probabilistic time-series models, and show that simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.