
Thang D. Bui

Researcher at University of Cambridge

Publications: 28
Citations: 1593

Thang D. Bui is an academic researcher from the University of Cambridge. His research focuses on Gaussian processes and expectation propagation. He has an h-index of 16 and has co-authored 27 publications receiving 1296 citations. His previous affiliations include the University of Sydney and the Information Sciences Institute.

Papers
Proceedings Article

Variational continual learning

TL;DR: Variational continual learning (VCL) is a general framework for continual learning that fuses online variational inference (VI) with recent advances in Monte Carlo VI for neural networks; it can successfully train both deep discriminative models and deep generative models in complex continual learning settings.
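
As background, the core of VCL is a recursive variational update in which the previous approximate posterior plays the role of a prior for the next task. A minimal sketch in LaTeX, assuming a variational family \mathcal{Q} (e.g. mean-field Gaussian), task data \mathcal{D}_t, and normalizer Z_t:

    q_t(\theta) \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}} \; \mathrm{KL}\!\left( q(\theta) \,\middle\|\, \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right)

The KL projection keeps each update tractable, while the product q_{t-1}(\theta) p(\mathcal{D}_t \mid \theta) carries knowledge from all earlier tasks forward.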
Posted Content

Variational Continual Learning

TL;DR: This paper develops variational continual learning, a simple but general framework for continual learning that fuses online variational inference with recent advances in Monte Carlo VI for neural networks, and shows that it outperforms state-of-the-art continual learning methods.
Proceedings Article

Deep Gaussian processes for regression using approximate expectation propagation

TL;DR: A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time, and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
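
For orientation, a two-layer deep GP regression model of the kind treated here can be sketched as below; the layer kernels k_1, k_2 and noise terms \epsilon_l are illustrative assumptions rather than the paper's exact setup:

    h = f_1(x) + \epsilon_1, \qquad f_1 \sim \mathcal{GP}(0, k_1)
    y = f_2(h) + \epsilon_2, \qquad f_2 \sim \mathcal{GP}(0, k_2)

Exact inference over the latent functions is intractable, which is why the paper combines per-layer pseudo-point (sparse) approximations with approximate EP and a probabilistic-backpropagation-style scheme for the required moment computations.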
Posted Content

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

TL;DR: In this paper, an approximate expectation propagation procedure and a novel, efficient extension of the probabilistic backpropagation algorithm for learning are proposed, making deep Gaussian processes practical for regression problems where standard inference methods do not scale.
Journal Article

A Unifying Framework for Gaussian Process Pseudo-Point Approximations using Power Expectation Propagation

TL;DR: This paper develops a new pseudo-point approximation framework using power expectation propagation (Power EP) that unifies a large number of existing pseudo-point approximations, and demonstrates that the framework includes new approximation methods that outperform current approaches on regression and classification tasks.
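
As a pointer to what the unification looks like, Power EP replaces the full EP factor update with a fractional one controlled by a power \alpha \in (0, 1]; the notation below (approximate factors t_n, cavity distribution q_{\setminus n}, moment-matching projection proj) is illustrative:

    q_{\setminus n}(f) \;\propto\; q(f) \,/\, t_n(f)^{\alpha}
    q^{*}(f) \;=\; \mathrm{proj}\!\left[ q_{\setminus n}(f)\, p(y_n \mid f)^{\alpha} \right]
    t_n^{\mathrm{new}}(f) \;=\; \left( q^{*}(f) \,/\, q_{\setminus n}(f) \right)^{1/\alpha}

In the sparse-GP setting, the limit \alpha \to 0 recovers variational free-energy (VFE) approximations while \alpha = 1 recovers EP/FITC-style approximations, which is the sense in which the framework unifies existing pseudo-point methods.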