Proceedings ArticleDOI

Regularizing deep learning architecture for face recognition with weight variations

TLDR
This paper presents a novel approach that incorporates body-weight variations into the feature learning process of a deep learning architecture through a regularization function, which helps in learning latent variables representative of different weight categories.
Abstract
Several mathematical models have been proposed for recognizing face images with age variations. However, change in body-weight is another interesting covariate that has not been explored much. This paper presents a novel approach to incorporate weight variations into the feature learning process. In a deep learning architecture, we propose incorporating body-weight in terms of a regularization function, which helps in learning latent variables representative of different weight categories. The formulation is proposed for both the Autoencoder and the Deep Boltzmann Machine. On the extended WIT database of 200 subjects, comparisons with a commercial system and an existing algorithm show that the proposed algorithm outperforms them by more than 9% in rank-10 identification accuracy.
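The abstract describes adding a body-weight-aware regularizer to an autoencoder's loss. A minimal sketch of that idea follows; this is not the paper's exact formulation, and the penalty here (pulling each latent code toward the centroid of its weight category) plus all names and sizes are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def regularized_ae_loss(X, labels, W_enc, W_dec, lam=0.1):
    """Reconstruction loss plus a weight-category regularizer.
    X: (n, d) face features; labels: (n,) weight-category ids (e.g. 0/1/2).
    Illustrative stand-in for the paper's regularization function."""
    H = np.tanh(X @ W_enc)             # latent codes, (n, k)
    X_hat = H @ W_dec                  # reconstructions
    recon = np.mean((X - X_hat) ** 2)
    # Regularizer: squared distance of each latent code to its weight-category
    # centroid, encouraging codes representative of each weight category.
    reg = 0.0
    for c in np.unique(labels):
        Hc = H[labels == c]
        reg += np.mean((Hc - Hc.mean(axis=0)) ** 2)
    return recon + lam * reg

X = rng.normal(size=(12, 8))
labels = np.array([0, 1, 2] * 4)       # three hypothetical weight categories
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))
loss = regularized_ae_loss(X, labels, W_enc, W_dec)
```

Setting `lam=0` recovers the plain autoencoder objective, so the regularizer's contribution can be inspected directly.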

Citations
Journal ArticleDOI

Implications of Pooling Strategies in Convolutional Neural Networks: A Deep Insight

TL;DR: This study presents a detailed review of conventional and state-of-the-art pooling strategies, apprising readers of the upsides and downsides of each strategy.
Journal ArticleDOI

Regularized Deep Learning for Face Recognition With Weight Variations

TL;DR: A regularizer-based approach is proposed to learn weight-invariant facial representations using two deep learning architectures, sparse stacked denoising autoencoders and deep Boltzmann machines; a body-weight-aware regularization parameter in the loss function of these architectures helps learn weight-aware features.
Journal ArticleDOI

A Comparison of Pooling Methods for Convolutional Neural Networks

TL;DR: A critical understanding of traditional and modern pooling techniques is provided, and their strengths and weaknesses are highlighted for readers.
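Two of the citing papers above survey pooling strategies. A small sketch of the two conventional strategies they compare, max and average pooling over 2x2 windows with stride 2, on a toy feature map (the function and values are illustrative, not from either survey):

```python
import numpy as np

def pool2d(x, mode="max", k=2):
    """Non-overlapping k x k pooling: max keeps the strongest activation
    per window, average smooths over it."""
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            win = x[i*k:(i+1)*k, j*k:(j+1)*k]
            out[i, j] = win.max() if mode == "max" else win.mean()
    return out

fmap = np.array([[1., 2., 3., 0.],
                 [4., 5., 6., 1.],
                 [0., 1., 2., 3.],
                 [1., 0., 4., 2.]])
mx = pool2d(fmap, "max")   # [[5., 6.], [1., 4.]]
av = pool2d(fmap, "avg")   # [[3., 2.5], [0.5, 2.75]]
```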
References
Journal ArticleDOI

Robust Real-Time Face Detection

TL;DR: In this paper, a face detection framework capable of processing images extremely rapidly while achieving high detection rates is described; the detector runs at roughly 15 frames per second.
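The real-time speed of the Viola-Jones detector referenced above rests on the integral image, which lets the sum over any rectangle be read in at most four lookups. A minimal sketch (function names are my own):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four corner lookups on the integral
    image, treating out-of-range corners as zero."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
s = rect_sum(ii, 1, 1, 3, 3)   # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

Because every rectangle sum is constant-time, the Haar-like features used by such detectors can be evaluated at any scale without rescanning pixels.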
Proceedings ArticleDOI

DeepFace: Closing the Gap to Human-Level Performance in Face Verification

TL;DR: This work revisits both the alignment step and the representation step by employing explicit 3D face modeling to apply a piecewise affine transformation, and derives a face representation from a nine-layer deep neural network.

Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion

P. Vincent
TL;DR: This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
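The denoising criterion referenced above corrupts the input and trains the autoencoder to reconstruct the clean version. A minimal sketch of that objective; the masking-noise corruption, tied weights, and all sizes here are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def denoising_loss(x, W, corruption=0.3):
    """One evaluation of the denoising criterion with masking noise."""
    mask = rng.random(x.shape) > corruption   # randomly zero out input units
    x_tilde = x * mask                        # corrupted input
    h = np.tanh(x_tilde @ W)                  # encode the corrupted input
    x_hat = h @ W.T                           # tied-weight decode
    return np.mean((x - x_hat) ** 2)          # reconstruct the *clean* x

x = rng.normal(size=(5, 16))
W = rng.normal(scale=0.1, size=(16, 8))
loss = denoising_loss(x, W)
```

Minimizing this loss forces the hidden layer to capture structure that survives corruption, which is what makes it a useful unsupervised objective for higher-level representations.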
Proceedings Article

Greedy Layer-Wise Training of Deep Networks

TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
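The greedy layer-wise strategy summarized above trains each layer as a small autoencoder on the previous layer's output, then freezes it and moves up; the stacked weights initialize the deep network. A rough sketch (a linear tied-weight autoencoder with plain gradient descent stands in for the full unsupervised training loop; all names and sizes are my own):

```python
import numpy as np

rng = np.random.default_rng(2)

def pretrain_layer(H, k, lr=0.01, steps=100):
    """Train one linear tied-weight autoencoder layer H -> H W W^T."""
    W = rng.normal(scale=0.1, size=(H.shape[1], k))
    for _ in range(steps):
        E = H @ W @ W.T - H                       # reconstruction error
        grad = (H.T @ E + E.T @ H) @ W / len(H)   # gradient of ||E||^2 / n
        W -= lr * grad
    return W

X = rng.normal(size=(20, 10))
weights, H = [], X
for k in [8, 4]:              # greedy order: train, freeze, feed forward
    W = pretrain_layer(H, k)
    weights.append(W)
    H = H @ W                 # frozen layer's output trains the next layer
```

The resulting `weights` would then initialize the corresponding layers of the deep network before supervised fine-tuning, which is the "region near a good local minimum" effect the TL;DR describes.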