Proceedings ArticleDOI
Face Representation Learning using Composite Mini-Batches
Evgeny Smirnov, Andrei Oleinik, Aleksandr Lavrentev, Elizaveta Shulga, Vasiliy Galyuk, Nikita Garaev, Margarita Zakuanova, Aleksandr Melnikov +7 more
TL;DR: In this article, a composite mini-batch technique is proposed to combine several sampling strategies in one training process, where the main idea is to compose mini-batches from several parts and use a different sampling strategy for each part.
Abstract:
Mini-batch construction strategy is an important part of deep representation learning. Different strategies have their own advantages and limitations, and usually only one of them is selected to create mini-batches for training. However, in many cases a combination of strategies can be more efficient than any single one. In this paper, we propose Composite Mini-Batches, a technique to combine several mini-batch sampling strategies in one training process. The main idea is to compose mini-batches from several parts and to use a different sampling strategy for each part. With this kind of mini-batch construction, we combine the advantages and reduce the limitations of the individual mini-batch sampling strategies. We also propose Interpolated Embeddings and Priority Class Sampling as complementary methods to improve the training of face representations. Our experiments on the challenging task of disguised face recognition confirm the advantages of the proposed methods.
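The core idea of the abstract can be sketched in a few lines: build one mini-batch from several parts, each part drawn with its own sampling strategy. The sketch below is a toy illustration under stated assumptions; the weighted sampler is a simplified stand-in for the paper's Priority Class Sampling, and the function names and weighting scheme are hypothetical, not the authors' exact method.

```python
import random

def random_part(dataset, n, rng):
    # Classic uniform sampling without replacement.
    return rng.sample(dataset, n)

def priority_part(dataset, n, priorities, rng):
    # Weighted sampling that favors "priority" classes (a simplified
    # stand-in for Priority Class Sampling; the weighting scheme here
    # is an assumption, not the authors' exact method).
    weights = [priorities.get(label, 1.0) for label, _ in dataset]
    return rng.choices(dataset, weights=weights, k=n)

def composite_mini_batch(dataset, parts, rng):
    # Compose one mini-batch from several parts, each built with its
    # own sampling strategy, then concatenate them.
    batch = []
    for size, sampler in parts:
        batch.extend(sampler(dataset, size, rng))
    return batch
```

In this way, half the batch can come from uniform sampling while the other half oversamples currently hard classes, combining the coverage of the former with the focus of the latter.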
Citations
Posted Content
Prototype Memory for Large-scale Face Representation Learning.
TL;DR: Prototype Memory as discussed by the authors uses a limited-size memory module for storing recent class prototypes and employs a set of algorithms to update it in an appropriate way; it can be used with various loss functions, hard example mining algorithms, and encoder architectures.
Journal ArticleDOI
Prototype Memory for Large-Scale Face Representation Learning
TL;DR: Prototype Memory as discussed by the authors uses a limited-size memory module for storing recent class prototypes and employs a set of algorithms to update it in an appropriate way to prevent prototype obsolescence.
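The mechanism this entry describes, a fixed-capacity store of class prototypes with updates and eviction, can be sketched as follows. This is a minimal illustration assuming an EMA update rule and least-recently-updated eviction; the paper's actual update algorithms and eviction policy may differ.

```python
from collections import OrderedDict

class PrototypeMemory:
    """Limited-size memory of class prototypes (a simplified sketch;
    the EMA update and LRU eviction here are assumptions)."""

    def __init__(self, capacity, momentum=0.9):
        self.capacity = capacity
        self.momentum = momentum
        self.prototypes = OrderedDict()  # class_id -> prototype vector

    def update(self, class_id, embedding):
        if class_id in self.prototypes:
            # Exponential moving average toward the new embedding.
            old = self.prototypes.pop(class_id)
            new = [self.momentum * o + (1 - self.momentum) * e
                   for o, e in zip(old, embedding)]
        else:
            new = list(embedding)
        self.prototypes[class_id] = new  # re-insert as most recent
        if len(self.prototypes) > self.capacity:
            # Evict the least recently updated prototype, so stale
            # (obsolete) prototypes are the first to go.
            self.prototypes.popitem(last=False)
```

Keeping only recent prototypes bounds memory regardless of the total number of identities, which is the point of the limited-size design for large-scale face recognition.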
Journal ArticleDOI
FRA: A novel Face Representation Augmentation algorithm for face recognition
TL;DR: In this paper, the authors propose a P.P.O.O (P.P.) scheme. But it is difficult to implement and computationally time-consuming.
Posted Content
Multi-Task Meta-Learning Modification with Stochastic Approximation
TL;DR: In this article, simultaneous perturbation stochastic approximation (SPSA) is used for meta-training tasks weights optimization to improve the performance of the meta-learning pipeline.
Book ChapterDOI
FaceMix: Transferring Local Regions for Data Augmentation in Face Recognition
TL;DR: FaceMix as mentioned in this paper is a flexible face-specific data augmentation technique that transfers a local region of one image to another image; it can also generate new images for a class using face data from other classes, and these two modes can be combined.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
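The residual connection this entry refers to can be shown in a toy form: the block outputs y = F(x) + x, so the layers only need to learn the residual F(x) = H(x) - x. This is a bare numeric sketch, not a neural-network implementation.

```python
def residual_block(x, f):
    # y = F(x) + x : the identity shortcut means the block only has to
    # learn the residual F(x) = H(x) - x, which is easier to optimize
    # when the desired mapping H is close to the identity.
    return [fx_i + x_i for fx_i, x_i in zip(f(x), x)]
```

In particular, a zero residual reduces the block to the identity mapping, which is why extra depth does not have to hurt optimization.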
Proceedings ArticleDOI
FaceNet: A unified embedding for face recognition and clustering
TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
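The property described above, distances in embedding space corresponding to face similarity, makes verification a simple distance threshold. The sketch below assumes L2-normalized embeddings; the function names and the threshold value are illustrative, not FaceNet's tuned ones.

```python
import math

def l2_normalize(v):
    # Project the embedding onto the unit hypersphere.
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_identity(emb_a, emb_b, threshold=1.1):
    # Two faces match if their normalized embeddings are closer than a
    # threshold (the value 1.1 is illustrative, not FaceNet's).
    return euclidean(l2_normalize(emb_a), l2_normalize(emb_b)) < threshold
```

Clustering works the same way: because similarity is plain Euclidean distance, standard algorithms like k-means apply directly to the embeddings.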
Proceedings ArticleDOI
Aggregated Residual Transformations for Deep Neural Networks
TL;DR: ResNeXt as discussed by the authors is a simple, highly modularized network architecture for image classification, which is constructed by repeating a building block that aggregates a set of transformations with the same topology.
Proceedings ArticleDOI
DeepFace: Closing the Gap to Human-Level Performance in Face Verification
TL;DR: This work revisits both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.
Proceedings Article
Prototypical Networks for Few-shot Learning
TL;DR: Prototypical Networks as discussed by the authors learn a metric space in which classification can be performed by computing distances to prototype representations of each class, and achieve state-of-the-art results on the CU-Birds dataset.
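The classification rule this entry describes, averaging support embeddings into a per-class prototype and assigning a query to the nearest one, can be sketched in a few lines. Function names are illustrative; the squared Euclidean distance matches the metric the paper uses.

```python
def class_prototype(support_embeddings):
    # Prototype = mean of the support embeddings for one class.
    dim = len(support_embeddings[0])
    n = len(support_embeddings)
    return [sum(e[i] for e in support_embeddings) / n for i in range(dim)]

def classify(query, prototypes):
    # Assign the query to the class whose prototype is nearest
    # under squared Euclidean distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: dist2(query, prototypes[c]))
```

This is also the connection to the main paper above: class prototypes computed from a handful of examples serve as stand-ins for full class representations during few-shot classification.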