Open Access · Posted Content
Meta-Learning with Latent Embedding Optimization
Andrei Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell
TL;DR: In this article, a data-dependent latent generative representation of model parameters is learned and gradient-based meta-learning is performed in a low-dimensional latent space for few-shot learning.
Abstract: Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
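As a rough illustration of the method described above, the sketch below shows a hypothetical, minimal LEO-style inner loop in Python (not the authors' implementation): an encoder maps the support set to a low-dimensional latent code, a decoder generates classifier weights from that code, and gradient steps are taken on the code instead of on the weights. Dimensions, hyperparameters, and the toy episode are illustrative assumptions.

import torch
import torch.nn.functional as F

feat_dim, latent_dim, n_classes = 64, 16, 5

# Encoder: support-set features -> latent code z (data-dependent initialization).
encoder = torch.nn.Linear(feat_dim, latent_dim)
# Decoder: latent code z -> per-class classifier weights in the high-dimensional space.
decoder = torch.nn.Linear(latent_dim, n_classes * feat_dim)

def decode_classifier(z):
    # Generate classifier weights from the latent code.
    return decoder(z).view(n_classes, feat_dim)

def adapt(support_x, support_y, inner_steps=5, inner_lr=0.1):
    # Inner loop: gradient-based adaptation happens in latent space, not weight space.
    z = encoder(support_x).mean(dim=0)
    for _ in range(inner_steps):
        w = decode_classifier(z)
        loss = F.cross_entropy(support_x @ w.t(), support_y)
        (grad,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - inner_lr * grad
    return decode_classifier(z)

# Toy 5-way, 1-shot episode with random features (illustration only).
support_x = torch.randn(n_classes, feat_dim)
support_y = torch.arange(n_classes)
adapted_weights = adapt(support_x, support_y)

Because only the low-dimensional code z is updated, the inner loop never optimizes the full weight matrix directly, which is the decoupling from the high-dimensional parameter space that the abstract describes.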
Citations
Proceedings Article
Image-to-Class Metric based on Category Traversal for Few-shot Learning
TL;DR: A feature learning module is introduced to improve the representation ability of the feature extraction network; it makes full use of all the local features of a category and thus expresses the distribution of that class more richly and effectively.
Journal Article
Meta-learning approaches for few-shot learning: A survey of recent advances
TL;DR: Meta-learning is a promising approach that addresses the challenges of few-shot learning by adapting to new tasks from small datasets, as discussed by the authors. However, meta-learning does not address the problem of poor generalization caused by same-distribution prediction.
Proceedings Article
Knowledge Graph enhanced Multimodal Learning for Few-shot Visual Recognition
TL;DR: In this paper, Zhang et al. propose a meta-learning framework for few-shot visual recognition that combines information from multiple modalities: visual information from images, and rich semantic and structural information from a knowledge graph (KG).
Posted Content
Hierarchical Few-Shot Generative Models
Giorgio Giannone, Ole Winther
TL;DR: In this article, the authors extend the Neural Statistician to a fully hierarchical approach with an attention-based point-to-set-level aggregation, which better captures the intrinsic variability within the sets in the small-data regime.
Proceedings Article
Learning Dense Object Descriptors from Multiple Views for Low-shot Category Generalization
TL;DR: Deep Object Patch Encodings (DOPE), as discussed by the authors, learn dense, discriminative object representations for low-shot category recognition and can be trained from multiple views of object instances without any category or semantic object-part labels.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won 1st place on the ILSVRC 2015 classification task.
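As a minimal, hypothetical illustration of the residual idea (not the paper's exact convolutional architecture; the two-layer fully-connected form of F and the sizes are assumptions), a residual block computes y = F(x) + x, so the stacked layers only need to learn the residual F(x):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # The layers learn a residual F(x) that is added back to the input via an identity shortcut.
    def __init__(self, dim=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(self.f(x) + x)  # y = F(x) + x

x = torch.randn(8, 64)
y = ResidualBlock()(x)  # output shape is preserved: (8, 64)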
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
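Concretely, the Adam update keeps exponential moving averages of the gradient and its element-wise square (the lower-order moments mentioned above), applies bias correction, and scales the step per parameter. The NumPy sketch below follows the standard published update; the hyperparameter values and the quadratic toy objective are illustrative.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([5.0, -3.0])
m = v = np.zeros_like(theta)
for t in range(1, 1001):
    grad = 2 * theta                             # gradient of the toy objective ||theta||^2
    theta, m, v = adam_step(theta, grad, m, v, t)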
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieve state-of-the-art image classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
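The layer sequence described above can be sketched as a simplified, hypothetical configuration (channel counts, kernel sizes, and the 224x224 input resolution are assumptions, not necessarily the paper's exact values): five convolutional layers interleaved with max-pooling, followed by three fully-connected layers and a 1000-way classifier.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000-way output; the softmax is applied by the loss at training time
)

logits = net(torch.randn(1, 3, 224, 224))  # -> shape (1, 1000)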
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.