Yingzhen Li
Researcher at Imperial College London
Publications - 74
Citations - 2644
Yingzhen Li is an academic researcher at Imperial College London. The author has contributed to research on topics including artificial neural networks and inference, has an h-index of 23, and has co-authored 63 publications receiving 2,060 citations. Previous affiliations of Yingzhen Li include Monash University & Microsoft.
Papers
Proceedings ArticleDOI
Variational continual learning
TL;DR: Variational continual learning (VCL) is a general framework for continual learning that fuses online variational inference (VI) with recent advances in Monte Carlo VI for neural networks, and can successfully train both deep discriminative models and deep generative models in complex continual learning settings.
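The online VI recursion that underlies VCL can be written as a single projection step: the previous approximate posterior plays the role of the prior for the next task's data, and the result is projected back into the variational family. A sketch of this recursion (notation is standard, not copied from the paper):

```latex
% Online VI recursion: after observing task t's data D_t, the new
% approximate posterior q_t is the KL projection of the (unnormalised)
% product of the old posterior and the new likelihood into the family Q.
q_t(\theta) \;=\; \arg\min_{q \in \mathcal{Q}} \,
  \mathrm{KL}\!\left( q(\theta) \,\Big\|\, \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right),
\qquad q_0(\theta) = p(\theta).
```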
Posted Content
Disentangled Sequential Autoencoder
Yingzhen Li, Stephan Mandt +1 more
TL;DR: Empirical evidence is given for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
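The disentangled sequential autoencoder separates a sequence's latent code into a time-invariant "content" variable and per-frame dynamic variables; the stochastic transition over the dynamic latents is what makes the model efficient at compressing long sequences. A sketch of the assumed generative factorisation (symbols are illustrative):

```latex
% f is a time-invariant content latent; z_t are per-frame dynamic latents
% with a stochastic (RNN-parameterised) transition p(z_t | z_{<t}).
p(x_{1:T}, z_{1:T}, f) \;=\; p(f)\, \prod_{t=1}^{T} p(z_t \mid z_{<t})\, p(x_t \mid z_t, f).
```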
Posted Content
Rényi Divergence Variational Inference
Yingzhen Li, Richard E. Turner +1 more
TL;DR: The variational Rényi bound (VR) is introduced, extending traditional variational inference to Rényi's α-divergences, and a novel variational inference method is proposed as a new special case of the framework.
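The VR bound is L_α = (1/(1−α)) log E_q[(p(x,θ)/q(θ))^(1−α)], which recovers the standard ELBO in the limit α → 1. A minimal NumPy sketch of a Monte Carlo estimator of this bound; the function names and the toy Gaussian check are illustrative, not from the paper:

```python
import numpy as np

def gauss_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of a univariate Gaussian, evaluated elementwise."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2

def vr_bound(log_w, alpha):
    """Monte Carlo estimate of the variational Renyi bound
    L_alpha = 1/(1-alpha) * log E_q[(p(x,theta)/q(theta))^(1-alpha)],
    given log importance weights log_w = log p(x,theta_k) - log q(theta_k)."""
    if np.isclose(alpha, 1.0):
        return float(np.mean(log_w))        # alpha -> 1 recovers the ELBO
    s = (1.0 - alpha) * np.asarray(log_w)
    m = np.max(s)                            # log-mean-exp for numerical stability
    return float((m + np.log(np.mean(np.exp(s - m)))) / (1.0 - alpha))

# Toy check: when q equals the normalised target, every importance
# weight is 1, so the bound is exactly 0 for every alpha.
rng = np.random.default_rng(0)
theta = rng.normal(size=2000)                # samples from q = N(0, 1)
log_w = gauss_logpdf(theta) - gauss_logpdf(theta)
print(vr_bound(log_w, 0.5), vr_bound(log_w, 1.0))  # both 0.0
```

For fixed samples the estimator is the log of a power mean of the weights, so it is non-increasing in α, matching the bound's monotonicity property.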
Posted Content
Variational Continual Learning
TL;DR: Variational continual learning is developed: a simple but general framework for continual learning that fuses online variational inference with recent advances in Monte Carlo VI for neural networks, and that outperforms state-of-the-art continual learning methods.
Proceedings ArticleDOI
Deep Gaussian processes for regression using approximate expectation propagation
Thang D. Bui, José Miguel Hernández-Lobato, Daniel Hernández-Lobato, Yingzhen Li, Richard E. Turner +4 more
TL;DR: A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium to large scale regression problems for the first time and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
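A deep GP stacks GP-distributed layers, which makes exact posterior inference intractable; the approximate-EP scheme targets the posterior over all layer functions jointly. A sketch of the assumed model structure (notation is generic, not taken from the paper):

```latex
% Depth-L deep GP for regression: each layer function has a GP prior,
% layers are composed, and Gaussian noise is added to the final output.
f^{(l)} \sim \mathcal{GP}\big(0,\, k_l(\cdot,\cdot)\big), \qquad
h_l = f^{(l)}(h_{l-1}), \quad h_0 = x, \qquad
y = h_L + \varepsilon, \;\; \varepsilon \sim \mathcal{N}(0, \sigma^2).
```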