Open Access · Posted Content

Meta-Learning with Latent Embedding Optimization

TLDR
In this article, a data-dependent latent generative representation of model parameters is learned, and gradient-based meta-learning is performed in a low-dimensional latent space for few-shot learning.
Abstract
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
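To make the mechanism concrete, the following is a minimal sketch of the latent-space inner loop, assuming pre-extracted support-set features and stand-in `encode`/`decode` networks; the function and parameter names are hypothetical, and this deterministic treatment omits the relation network and probabilistic latent code used in the paper.

```python
# Illustrative sketch only: `encode`/`decode` are assumed stand-in modules,
# not the authors' exact architecture.
import torch
import torch.nn.functional as F

def leo_inner_loop(encode, decode, support_x, support_y,
                   inner_steps=5, inner_lr=1.0):
    """Adapt a classifier by gradient steps on a low-dimensional latent
    code z, rather than on the high-dimensional weights it generates."""
    z = encode(support_x, support_y).detach().requires_grad_(True)
    for _ in range(inner_steps):
        w = decode(z)                                # generate classifier weights from z
        logits = support_x @ w.t()                   # linear classifier on features
        loss = F.cross_entropy(logits, support_y)
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - inner_lr * grad).detach().requires_grad_(True)  # step in latent space
    return decode(z)                                 # weights after latent-space adaptation
```

During meta-training, an outer loop would backpropagate the query-set loss through this adaptation to update the encoder and decoder; the sketch shows only the inner adaptation step.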


Citations
Proceedings ArticleDOI

A Meta-Learning Framework for Predicting Power Digital Equipment Defect Texts via Hypergraph Modeling

TL;DR: In this paper, a meta-learning framework is proposed to address the problems of small sample size and difficult scene migration; meta-learning is a way to acquire "learning to learn" and to learn new tasks quickly based on knowledge already acquired.
Journal ArticleDOI

Transductive Mutual Information Encoder Network for Few Shot Learning

TL;DR: This paper proposes the Transductive Mutual Information Encoder Network (TMIN) for few-shot learning, which trains a convolutional neural network with a mutual information maximization module in an unsupervised manner.
Journal ArticleDOI

Learning to Accelerate by the Methods of Step-size Planning

Hengshuai Yao
01 Apr 2022
TL;DR: It is shown that for a convex problem, the methods surpass the convergence rate of Nesterov's accelerated gradient, 1 − √(µ/L), where µ and L are the strong-convexity constant of the loss function F and the Lipschitz constant of F′, which is the theoretical limit for the convergence rate of first-order methods.

A Channel Coding Benchmark for Meta-Learning

Rui Li
TL;DR: This work proposes the channel coding problem as a benchmark for meta-learning, and uses the MetaCC benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift, which can be controlled in the coding problem.
Journal ArticleDOI

Predicting the Generalization Ability of a Few-Shot Classifier

TL;DR: In this article, the authors investigate transfer-based few-shot learning solutions and consider three settings: (i) supervised, where they only have access to a few labeled samples; (ii) semi-supervised, where they have access to both a few labeled samples and a set of unlabeled samples; and (iii) unsupervised, where they only have access to unlabeled samples. They show that simple measures can predict the generalization ability up to a certain confidence.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
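For illustration, here is a minimal sketch of the residual idea referred to above: the stacked layers learn a residual F(x) that is added back to the input through a skip connection. This basic block assumes matching input and output shapes; the paper's downsampling blocks instead use strided convolutions with projection shortcuts.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: y = F(x) + x, where F is two 3x3 conv layers."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection eases optimization of very deep nets
```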
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
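As a reminder of the update the summary describes, here is a minimal NumPy sketch of one Adam step using the default hyperparameters reported in the paper; `adam_step` is an illustrative helper, not a library API.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), bias-corrected, give a per-parameter adaptive step size."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second (raw) moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```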
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors trained a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on the ImageNet classification benchmark.
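A rough single-stream sketch of the architecture described above, with filter counts following the original paper; the two-GPU split, local response normalization, and some training details are omitted, so treat the layer details as approximate.

```python
import torch.nn as nn

# Approximate AlexNet-style layout (expects 227x227 RGB inputs).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),   # 1000-way output; softmax is applied by the loss
)
```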
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
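The pattern the summary describes, deep stacks of very small 3x3 filters, can be sketched roughly as below; the `vgg_block` helper and the VGG-16-like trunk are illustrative, and the three fully-connected layers that complete the 16 weight layers are omitted.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A stack of 3x3 convolutions followed by 2x2 max-pooling."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# Convolutional trunk of a VGG-16-like network (13 of the 16 weight layers).
trunk = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3),
    vgg_block(256, 512, 3), vgg_block(512, 512, 3),
)
```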