
Yee Whye Teh

Researcher at University of Oxford

Publications - 351
Citations - 42,930

Yee Whye Teh is an academic researcher at the University of Oxford. He has contributed to research topics including computer science and inference, has an h-index of 68, and has co-authored 326 publications receiving 36,155 citations. Previous affiliations of Yee Whye Teh include the University of Toronto and University College London.

Papers
Posted Content

Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes

TL;DR: In this paper, the authors study the problem of learning stochastic fields, i.e. stochastic processes whose samples are fields like those occurring in physics and engineering, and show that spatial invariance of the field requires the inference model to be equivariant.
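A compact way to state the relationship the summary alludes to (notation is illustrative, not taken from the paper): if the prior over fields f is invariant under a group G of spatial transformations, i.e. P(g·f) = P(f) for every g in G, then the posterior given observations D must satisfy

P(f | D) = P(g·f | g·D) for every g in G,

so the map from datasets D to posteriors is G-equivariant.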
Journal Article

Kalman Filter for Online Classification of Non-Stationary Data

TL;DR: In this article, a Bayesian online learning model is proposed that captures non-stationarity in the linear predictor weights through a parameter drift transition density, parametrized by a coefficient that quantifies forgetting.
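The drift-then-update structure described in the summary can be sketched in a few lines. This is an illustrative toy with a Gaussian observation model so that the Kalman updates stay in closed form; the paper's likelihood, parametrization, and approximations differ in the details.

import numpy as np

class DriftingLinearClassifier:
    """Toy Kalman-filter-style online learner with a parameter drift step.
    Illustrative only, not the paper's model: labels are treated as +/-1
    with Gaussian noise so the posterior over weights stays Gaussian."""

    def __init__(self, dim, drift=0.01, obs_noise=1.0):
        self.mean = np.zeros(dim)   # posterior mean of the predictor weights
        self.cov = np.eye(dim)      # posterior covariance of the weights
        self.drift = drift          # coefficient controlling forgetting
        self.obs_noise = obs_noise

    def predict_step(self):
        # Parameter drift: the weights follow a random walk, which inflates
        # uncertainty over time and lets the model forget stale data.
        self.cov = self.cov + self.drift * np.eye(len(self.mean))

    def update_step(self, x, y):
        # Standard Kalman update with a scalar observation y ~ N(w.x, obs_noise).
        s = x @ self.cov @ x + self.obs_noise
        k = self.cov @ x / s                        # Kalman gain
        self.mean = self.mean + k * (y - x @ self.mean)
        self.cov = self.cov - np.outer(k, x @ self.cov)

    def step(self, x, y):
        self.predict_step()
        self.update_step(x, y)
        return np.sign(x @ self.mean)               # current class prediction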
Proceedings Article

Non-exchangeable feature allocation models with sublinear growth of the feature sizes

TL;DR: This paper describes a class of non-exchangeable feature allocation models in which the number of objects sharing a given feature grows sublinearly, at a rate that can be controlled by a tuning parameter.
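As a purely illustrative toy (not the authors' construction), here is one way a tuning parameter alpha can produce sublinear feature-size growth: object n joins an existing feature of current size m_k with probability about alpha * m_k / n, so expected sizes grow roughly like n**alpha.

import numpy as np

rng = np.random.default_rng(0)

def toy_sublinear_allocation(n_objects, alpha=0.5, new_rate=1.0):
    """Toy process only, not the paper's model: illustrates how a tuning
    parameter alpha in (0, 1) yields feature sizes growing like n**alpha."""
    sizes = []                                          # m_k for each feature
    for n in range(1, n_objects + 1):
        for k in range(len(sizes)):
            # Join an existing feature with a probability that shrinks with n.
            if rng.random() < min(1.0, alpha * sizes[k] / n):
                sizes[k] += 1
        # Occasionally introduce brand-new features seen only by object n.
        for _ in range(rng.poisson(new_rate / n)):
            sizes.append(1)
    return sizes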
Posted Content

Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference.

TL;DR: This work suggests that the d-separation properties of the Bayesian network (BN) structure of the forward model should be used, in a principled way, to produce inverse models that are faithful to the posterior, and introduces the novel Compact Minimal I-map algorithm.
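The Compact Minimal I-map algorithm itself is not reproduced here, but the d-separation test it builds on is standard and easy to sketch via the moralized ancestral graph criterion. The function name and representation below are ours, not the paper's code.

from collections import deque

def d_separated(parents, xs, ys, zs):
    """Check whether node sets xs and ys are d-separated given zs in a DAG.
    `parents` maps each node to its set of parents. Textbook criterion:
    restrict to ancestors of xs|ys|zs, moralize (marry parents, drop edge
    directions), delete zs, and test whether xs and ys are disconnected."""
    xs, ys, zs = set(xs), set(ys), set(zs)
    # 1. Ancestral subgraph of xs | ys | zs.
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents.get(v, ()))
    # 2. Moralize: undirected parent-child edges plus parent-parent edges.
    nbrs = {v: set() for v in anc}
    for v in anc:
        ps = [p for p in parents.get(v, ()) if p in anc]
        for p in ps:
            nbrs[v].add(p); nbrs[p].add(v)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                nbrs[p].add(q); nbrs[q].add(p)
    # 3. Remove conditioning nodes and search for a path from xs to ys.
    queue, seen = deque(xs - zs), set(xs - zs)
    while queue:
        v = queue.popleft()
        if v in ys:
            return False                 # found a connecting path
        for w in nbrs[v] - zs:
            if w not in seen:
                seen.add(w); queue.append(w)
    return True

For example, with the collider graph a -> c <- b encoded as parents = {'a': set(), 'b': set(), 'c': {'a', 'b'}}, d_separated(parents, {'a'}, {'b'}, set()) returns True, while conditioning on the collider, d_separated(parents, {'a'}, {'b'}, {'c'}), returns False.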
Journal Article

Learning Instance-Specific Data Augmentations

TL;DR: It is empirically demonstrated that InstaAug learns meaningful augmentations for a wide range of transformation classes, which in turn provides better performance on supervised and self-supervised tasks compared with augmentations that assume input–transformation independence.
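A hypothetical sketch of the per-instance idea (not the InstaAug implementation): a small network maps each input to its own augmentation parameters, here a rotation range from which an angle is sampled.

import torch
import torch.nn as nn

class InstanceAugmentor(nn.Module):
    """Hypothetical sketch, not the InstaAug code: predicts a per-instance
    rotation range and samples a rotation angle from it."""

    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # rotation half-range in [0, 1]
        )

    def forward(self, x_flat):
        max_angle = torch.pi * self.net(x_flat)         # per-instance range
        u = torch.rand_like(max_angle) * 2.0 - 1.0      # uniform in [-1, 1]
        angle = u * max_angle                           # sampled rotation angle
        return angle, max_angle

In the method proper, the sampled transformation would be applied to the input and the range predictor trained jointly with the downstream task; this sketch only covers the parameter-prediction step.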