Open Access · Posted Content
Few-Shot Learning with Metric-Agnostic Conditional Embeddings
Nathan Hilliard, Lawrence Phillips, Scott Howland, Artëm Yankov, Courtney D. Corley, Nathan O. Hodas
TL;DR
This work introduces a novel architecture in which class representations are conditioned for each few-shot trial on a target image, and it deviates from traditional metric-learning approaches by training a network to perform comparisons between classes rather than relying on a static metric comparison.

Abstract
Learning high-quality class representations from few examples is a key problem in metric-learning approaches to few-shot learning. To accomplish this, we introduce a novel architecture where class representations are conditioned for each few-shot trial based on a target image. We also deviate from traditional metric-learning approaches by training a network to perform comparisons between classes rather than relying on a static metric comparison. This allows the network to decide what aspects of each class are important for the comparison at hand. We find that this flexible architecture works well in practice, achieving state-of-the-art performance on the Caltech-UCSD Birds fine-grained classification task.
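As a rough illustration of the conditioning idea described above, the sketch below builds a class summary from support embeddings, conditions it on the target embedding, and scores it with a learned comparison network instead of a fixed metric. All dimensions, weight shapes, and the two-layer MLPs are hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU; stands in for the learned sub-networks.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical sizes: embedding dim D, N classes, K support shots per class.
D, N, K = 8, 5, 3
support = rng.normal(size=(N, K, D))   # per-class support embeddings
target = rng.normal(size=(D,))         # target (query) embedding

# Randomly initialised weights stand in for trained parameters.
w1c, b1c = rng.normal(size=(2 * D, D)), np.zeros(D)   # conditioning net
w2c, b2c = rng.normal(size=(D, D)), np.zeros(D)
w1s, b1s = rng.normal(size=(2 * D, D)), np.zeros(D)   # comparison net
w2s, b2s = rng.normal(size=(D, 1)), np.zeros(1)

scores = []
for c in range(N):
    proto = support[c].mean(axis=0)                    # naive class summary
    # Condition the class representation on the target image's embedding.
    cond = mlp(np.concatenate([proto, target]), w1c, b1c, w2c, b2c)
    # Learned comparison instead of a static metric (e.g. Euclidean).
    score = mlp(np.concatenate([cond, target]), w1s, b1s, w2s, b2s)
    scores.append(score.item())

pred = int(np.argmax(scores))  # predicted class for this few-shot trial
```

With untrained random weights the scores are arbitrary; the point is only the data flow: each class representation is recomputed per trial as a function of the target, and the final score is produced by a network rather than a distance function.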
Citations
Posted Content
A Closer Look at Few-shot Classification
TL;DR: The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. A baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Proceedings Article
Delta-encoder: an effective sample synthesis method for few-shot object recognition
Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogerio Feris, Raja Giryes, Alexander M. Bronstein
TL;DR: In this article, a modified auto-encoder is proposed to learn transferable intra-class deformations, or "deltas", between same-class pairs of training examples, and to apply those deltas to the few provided examples of a novel class (unseen during training) in order to efficiently synthesize samples from that new class.
Posted Content
Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation
TL;DR: The core idea is to use feature-wise transformation layers to augment image features with affine transforms, simulating various feature distributions under different domains during training; a learning-to-learn approach is then applied to search for the hyper-parameters of the feature-wise transformation layers.
Proceedings Article
Hyperbolic Image Embeddings
TL;DR: In this paper, the authors demonstrate that in many practical scenarios, hyperbolic embeddings provide a better alternative to Euclidean and spherical embeddings for few-shot learning.
Book Chapter
Negative Margin Matters: Understanding Margin in Few-Shot Classification
TL;DR: In this article, a negative margin loss is introduced to metric-learning-based few-shot learning methods; it significantly outperforms the regular softmax loss and achieves state-of-the-art accuracy.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
Model-agnostic meta-learning for fast adaptation of deep networks
TL;DR: An algorithm for meta-learning is proposed that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent, and that is applicable to a variety of learning problems, including classification, regression, and reinforcement learning.
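The inner/outer loop structure summarized above can be sketched on a toy problem. This is a first-order simplification (it ignores MAML's second-order meta-gradient) on hypothetical 1-D linear regression tasks, not the paper's experimental setup:

```python
import numpy as np

# First-order MAML-style sketch on 1-D linear regression tasks y = a * x.
rng = np.random.default_rng(0)
w = 0.0                    # meta-parameter (slope)
alpha, beta = 0.05, 0.05   # inner / outer learning rates

def grad(w, x, y):
    # d/dw of the squared error (w*x - y)^2
    return 2.0 * (w * x - y) * x

for step in range(200):
    a = rng.uniform(0.5, 1.5)            # sample a task (true slope)
    xs, xq = rng.normal(), rng.normal()  # support / query points
    # Inner loop: one gradient step adapted to this task's support data.
    w_task = w - alpha * grad(w, xs, a * xs)
    # Outer loop (first-order): update the meta-parameter using the
    # query loss evaluated at the adapted weights.
    w = w - beta * grad(w_task, xq, a * xq)

# w should drift toward an initialization that adapts quickly to any
# sampled task, i.e. somewhere near the mean task slope.
```

The key point is that the outer update is taken with respect to the *post-adaptation* loss, so the meta-parameter is optimized for fast adaptation rather than average performance.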
Posted Content
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
TL;DR: The Exponential Linear Unit (ELU), as mentioned in this paper, was proposed to alleviate the vanishing gradient problem via the identity for positive values; it has improved learning characteristics compared to units with other activation functions.
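The activation summarized above is the identity for positive inputs and saturates smoothly toward -alpha for negative ones; a minimal sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) for x <= 0.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

out = elu(np.array([-2.0, 0.0, 3.0]))
# Positive inputs pass through unchanged; large negative inputs
# saturate toward -alpha rather than zero, keeping mean activations
# closer to zero than ReLU.
```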
Proceedings Article
Optimization as a Model for Few-Shot Learning
Sachin Ravi, Hugo Larochelle
TL;DR: In this paper, an LSTM-based meta-learner model is proposed to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime.
Siamese Neural Networks for One-shot Image Recognition
TL;DR: A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs; the approach achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
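The siamese setup summarized above runs both inputs through the same embedding and scores their component-wise L1 distance. A minimal sketch with hypothetical shapes and untrained random weights (so the scores are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared embedding weights: both "twins" use the same parameters.
W = rng.normal(size=(6, 4))
w_out, b_out = rng.normal(size=(4,)), 0.0

def embed(x):
    return np.maximum(x @ W, 0.0)  # shared ReLU embedding

def similarity(x1, x2):
    # Score the component-wise L1 distance between the two embeddings,
    # then squash to (0, 1) with a sigmoid.
    d = np.abs(embed(x1) - embed(x2))
    return 1.0 / (1.0 + np.exp(-(d @ w_out + b_out)))

x = rng.normal(size=(6,))
same = similarity(x, x)                    # identical inputs: d = 0
other = similarity(x, rng.normal(size=(6,)))
```

Because the embedding is shared, identical inputs always produce a zero distance vector, and with a zero bias the score for them is exactly 0.5; training would push same-class pairs above and different-class pairs below that.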