Open Access Proceedings ArticleDOI

On Learning Density Aware Embeddings

TLDR
A novel noise-tolerant deep metric learning algorithm, termed Density Aware Metric Learning, forces the model to learn embeddings that are pulled toward the densest region of each class's cluster, leading to faster convergence and higher generalizability.
Abstract: 
Deep metric learning algorithms have been used to learn discriminative and generalizable models that are effective for classifying unseen classes. In this paper, a novel noise-tolerant deep metric learning algorithm is proposed. The proposed method, termed Density Aware Metric Learning, forces the model to learn embeddings that are pulled toward the densest region of the cluster for each class. This is achieved by iteratively shifting the estimate of the center toward the dense region of the cluster, leading to faster convergence and higher generalizability. In addition, the approach is robust to noisy samples in the training data, which are often present as outliers. Detailed experiments and analysis on two challenging cross-modal face recognition databases and two popular object recognition databases demonstrate the efficacy of the proposed approach: it converges faster, requires less training time, and yields better accuracies than several popular deep metric learning methods.
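The iterative center-shifting step described in the abstract can be sketched as a kernel-weighted mean update (a minimal NumPy illustration, not the authors' implementation; the Gaussian kernel, bandwidth `h`, and iteration count are assumptions):

```python
import numpy as np

def density_aware_center(embeddings, n_iters=5, h=1.0):
    """Iteratively shift the class-center estimate toward the densest
    region of the cluster via a kernel-weighted mean (a mean-shift-like
    update; the Gaussian kernel and bandwidth h are assumptions)."""
    center = embeddings.mean(axis=0)  # start from the arithmetic mean
    for _ in range(n_iters):
        d2 = np.sum((embeddings - center) ** 2, axis=1)
        w = np.exp(-d2 / (2 * h ** 2))  # down-weights outliers
        center = (w[:, None] * embeddings).sum(axis=0) / w.sum()
    return center
```

Because outliers receive near-zero kernel weight, the resulting center sits in the dense part of the cluster rather than being dragged toward noisy samples, which is the intuition behind the paper's noise tolerance claim.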



Citations
Proceedings ArticleDOI

Deep Compositional Metric Learning

TL;DR: Proposes a deep compositional metric learning (DCML) framework for effective and generalizable similarity measurement between images. It employs a set of learnable compositors to combine sub-embeddings and uses a self-reinforced loss to train the compositors, which serve as relays that distribute diverse training signals without destroying the discrimination ability.
Proceedings ArticleDOI

Deep Metric Learning via Adaptive Learnable Assessment

TL;DR: A sequence-aware learnable assessor is proposed which re-weights each training example to train the metric towards good generalization and demonstrates the effectiveness of the proposed approach on widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.
Book ChapterDOI

Generate to Adapt: Resolution Adaption Network for Surveillance Face Recognition

TL;DR: RAN which contains Multi-Resolution Generative Adversarial Networks (MR-GAN) followed by a feature adaption network with translation gate is developed to fuse the discriminative information of LR faces into backbone network, while preserving the discrimination ability of original face representations.
Posted Content

Unravelling Small Sample Size Problems in the Deep Learning World

TL;DR: A review of deep learning algorithms for small sample size problems, segregated according to the space in which they operate (input space, model space, and feature space), together with a Dynamic Attention Pooling approach that extracts global information from the most discriminative sub-part of the feature map.
Proceedings ArticleDOI

Unravelling Small Sample Size Problems in the Deep Learning World

TL;DR: Presents a review of deep learning algorithms for small sample size problems in which the algorithms are segregated according to the space in which they operate: input space, model space, and feature space.
References
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: Describes how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Journal ArticleDOI

Mean shift: a robust approach toward feature space analysis

TL;DR: Proves the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density.
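The recursive mean shift procedure this reference analyzes can be sketched in one dimension as follows (a minimal illustration with a Gaussian kernel; the bandwidth and stopping tolerance are assumptions):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=0.5, n_iters=50):
    """Recursive mean shift: repeatedly move the current estimate to the
    kernel-weighted mean of the data until it settles at a stationary
    point (a mode) of the kernel density estimate."""
    x = start
    for _ in range(n_iters):
        w = np.exp(-((points - x) ** 2) / (2 * bandwidth ** 2))
        x_new = (w * points).sum() / w.sum()
        if abs(x_new - x) < 1e-6:  # converged to a stationary point
            break
        x = x_new
    return x
```

Starting the procedure from different initial points recovers the different modes of the density, which is the mode-detection use the abstract refers to.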
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
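FaceNet trains this embedding with a triplet loss; a minimal NumPy sketch of that loss (the margin value here is an assumption):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss as used by FaceNet: push the anchor-positive squared
    distance below the anchor-negative squared distance by at least
    `margin`; triplets that already satisfy the margin contribute zero."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(d_ap - d_an + margin, 0.0)
```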
Proceedings ArticleDOI

Dimensionality Reduction by Learning an Invariant Mapping

TL;DR: This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.
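DrLIM learns its mapping with a contrastive loss over pairs of samples; a minimal sketch (the margin value is an assumption):

```python
import numpy as np

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """DrLIM-style contrastive loss: embeddings of similar pairs are
    pulled together, while dissimilar pairs are pushed apart until they
    are at least `margin` away from each other."""
    d = np.linalg.norm(x1 - x2)
    if same_class:
        return 0.5 * d ** 2                  # attract similar pairs
    return 0.5 * max(margin - d, 0.0) ** 2   # repel dissimilar pairs up to the margin
```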
Proceedings Article

Distance Metric Learning for Large Margin Nearest Neighbor Classification

TL;DR: In this article, a Mahalanobis distance metric for k-NN classification is trained with the goal that the k nearest neighbors always belong to the same class, while examples from different classes are separated by a large margin.
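The per-triplet terms of the LMNN objective can be sketched as follows (a minimal illustration; the linear map `L` parameterizes the Mahalanobis metric M = LᵀL, and the unit margin is an assumption):

```python
import numpy as np

def lmnn_terms(x_i, target_neighbor, impostor, L, margin=1.0):
    """One pull term and one push (hinge) term of the LMNN objective:
    pull target neighbors of the same class close, and penalize impostors
    from other classes that invade the margin around them."""
    d_tn = np.sum((L @ (x_i - target_neighbor)) ** 2)   # same-class distance
    d_imp = np.sum((L @ (x_i - impostor)) ** 2)          # other-class distance
    pull = d_tn
    push = max(margin + d_tn - d_imp, 0.0)               # hinge on the margin
    return pull, push
```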