John Bronskill

Researcher at University of Cambridge

Publications: 28
Citations: 724

John Bronskill is an academic researcher at the University of Cambridge. His research focuses on computer science and meta-learning. He has an h-index of 12 and has co-authored 23 publications that have received 530 citations. His previous affiliations include the University of Toronto and Microsoft.

Papers
Proceedings Article

Meta-Learning Probabilistic Inference for Prediction

TL;DR: Introduces VERSA, an instance of the ML-PIP framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass, amortizing the cost of inference and removing the need for second derivatives during training.
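
To make the amortization idea concrete, here is a minimal PyTorch-style sketch (an illustration under stated assumptions, not the authors' implementation): a hypothetical AmortizationNet pools per-class support features and emits the mean and log-variance of a Gaussian over linear-classifier weights, so task adaptation is a single forward pass with no inner-loop gradients. All names (AmortizationNet, predict, feature_dim) and the architecture are assumptions.

# Minimal sketch of an amortized few-shot classifier head (illustrative, not the
# authors' code). A small network maps per-class support features to the mean and
# log-variance of a Gaussian over classifier weights; prediction averages over
# Monte Carlo samples of those weights -- one forward pass, no inner-loop gradients.
import torch
import torch.nn as nn

class AmortizationNet(nn.Module):
    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.mu_head = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, feature_dim))
        self.logvar_head = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, feature_dim))

    def forward(self, support_feats, support_labels, num_classes):
        # Pool support features per class (handles arbitrary numbers of shots).
        class_means = torch.stack(
            [support_feats[support_labels == c].mean(dim=0) for c in range(num_classes)]
        )                                               # [num_classes, feature_dim]
        return self.mu_head(class_means), self.logvar_head(class_means)

def predict(amortizer, support_feats, support_labels, query_feats, num_classes, samples=10):
    mu, logvar = amortizer(support_feats, support_labels, num_classes)
    std = (0.5 * logvar).exp()
    probs = 0.0
    for _ in range(samples):
        w = mu + std * torch.randn_like(std)            # sample classifier weights
        probs = probs + torch.softmax(query_feats @ w.t(), dim=-1)
    return probs / samples                              # averaged predictive distribution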
Posted Content

Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes.

TL;DR: The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time; a conditional neural process-based approach to the multi-task classification setting is introduced.
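
As a rough illustration of test-time adaptation without gradient steps, the sketch below conditions a frozen feature space on a task's context set via FiLM-style scale-and-shift parameters and classifies queries against adapted class prototypes. It is a simplified sketch in the spirit of the paper's conditional adaptation idea; the class names, prototype classifier, and architecture are assumptions, not the published CNAPs code.

# Illustrative sketch of conditioning fixed features on a task's context set.
# A permutation-invariant task embedding drives FiLM-style scale/shift parameters,
# so new tasks are handled without test-time gradient steps (names are assumptions).
import torch
import torch.nn as nn

class TaskConditionedHead(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        # Maps a task embedding to per-dimension scale (gamma) and shift (beta).
        self.film = nn.Linear(feature_dim, 2 * feature_dim)

    def adapt(self, context_feats):
        task_embedding = context_feats.mean(dim=0)       # permutation-invariant summary
        gamma, beta = self.film(task_embedding).chunk(2)
        return gamma, beta

    def forward(self, feats, gamma, beta):
        return feats * (1.0 + gamma) + beta              # FiLM-style modulation

def classify(head, context_feats, context_labels, query_feats, num_classes):
    gamma, beta = head.adapt(context_feats)
    adapted_context = head(context_feats, gamma, beta)
    adapted_query = head(query_feats, gamma, beta)
    # Class prototypes from the adapted context set act as classifier weights.
    prototypes = torch.stack(
        [adapted_context[context_labels == c].mean(dim=0) for c in range(num_classes)]
    )
    return torch.softmax(adapted_query @ prototypes.t(), dim=-1)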
Posted Content

Meta-Learning Probabilistic Inference For Prediction

TL;DR: In this paper, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction (ML-PIP) is proposed, which covers a broad class of existing meta-learning methods as special cases.
Proceedings Article

TaskNorm: Rethinking Batch Normalization for Meta-Learning

TL;DR: In this article, the authors evaluate a range of approaches to batch normalization for meta-learning scenarios and develop a novel approach called TaskNorm; the choice of normalization scheme has a dramatic effect on both classification accuracy and training time.
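
The sketch below illustrates the general idea of task-conditional normalization for meta-learning: normalization moments are computed from the task's context set and blended with per-example moments by a weight that depends on context-set size. The blending rule, parameterization, and class name (TaskNormSketch) are assumptions for illustration, not the paper's exact TaskNorm formulation.

# Rough sketch of context-set-based normalization for meta-learning. Moments from
# the task's context set are blended with per-example moments; the blend weight
# depends on context-set size. Details here are simplified assumptions.
import torch
import torch.nn as nn

class TaskNormSketch(nn.Module):
    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        # Learnable parameters controlling how the blend weight grows with context size.
        self.scale = nn.Parameter(torch.zeros(1))
        self.offset = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def forward(self, x, context):
        # x, context: [N, num_features]; the context set defines the task's statistics.
        ctx_mean = context.mean(dim=0)
        ctx_var = context.var(dim=0, unbiased=False)
        # Layer-norm-like per-example fallback statistics (a simplifying assumption).
        inst_mean = x.mean(dim=1, keepdim=True)
        inst_var = x.var(dim=1, keepdim=True, unbiased=False)
        alpha = torch.sigmoid(self.scale * context.shape[0] + self.offset)
        mean = alpha * ctx_mean + (1 - alpha) * inst_mean
        var = alpha * ctx_var + (1 - alpha) * inst_var
        return self.gamma * (x - mean) / torch.sqrt(var + self.eps) + self.beta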
Proceedings Article

Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes

TL;DR: In this article, a conditional neural process-based approach is proposed that adapts to new tasks encountered at test time and achieves state-of-the-art results on the challenging Meta-Dataset benchmark.