Proceedings ArticleDOI

Periocular Biometrics in Head-Mounted Displays: A Sample Selection Approach for Better Recognition

TLDR
A new normalization scheme to align the ocular images and a new reference sample selection protocol are proposed to achieve higher verification accuracy; the approach is exemplified using two handcrafted feature extraction methods and two deep-learning strategies.
Abstract
Virtual and augmented reality technologies are increasingly used in a wide range of applications. Such technologies employ a Head Mounted Display (HMD) that typically includes an eye-facing camera used for eye tracking. As some of these applications require accessing or transmitting highly sensitive private information, a trusted verification of the operator’s identity is needed. We investigate the use of the HMD setup to verify the operator using the periocular region captured by the built-in camera. However, the uncontrolled nature of periocular capture within the HMD results in images with high variation in relative eye location and eye opening due to varied interactions. Therefore, we propose a new normalization scheme to align the ocular images and a new reference sample selection protocol to achieve higher verification accuracy. The applicability of our proposed scheme is exemplified using two handcrafted feature extraction methods and two deep-learning strategies. We conclude that such a verification approach is feasible despite the uncontrolled nature of the captured ocular images, especially when a proper alignment and sample selection strategy is employed.
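The two ingredients the abstract describes, aligning ocular images to a common frame and then choosing a good reference sample, can be sketched as follows. This is a minimal illustration under stated assumptions: the target corner coordinates, the similarity-transform alignment, and the centroid-based selection criterion are all plausible choices of ours, not the paper's actual values or protocol.

```python
import numpy as np

def normalize_ocular(eye_corners, target_left=(16.0, 32.0), target_right=(112.0, 32.0)):
    """Illustrative alignment: map the detected inner/outer eye corners onto
    fixed target coordinates via a similarity transform (scale + rotation +
    translation). Target positions are assumed, not taken from the paper."""
    left, right = (np.asarray(p, dtype=float) for p in eye_corners)
    src_vec = right - left
    dst_vec = np.asarray(target_right) - np.asarray(target_left)
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])
    t = np.asarray(target_left) - R @ left
    return R, t  # warp each image point p as: R @ p + t

def select_reference(feature_vectors):
    """Illustrative reference-sample selection: keep the enrolment sample whose
    feature vector lies closest to the centroid of all enrolment features
    (one plausible criterion; the paper's exact protocol may differ)."""
    X = np.asarray(feature_vectors, dtype=float)
    centroid = X.mean(axis=0)
    return int(np.argmin(np.linalg.norm(X - centroid, axis=1)))
```

With the eye corners pinned to fixed coordinates, the verification features are computed on comparably cropped regions regardless of where the eye sat in the raw HMD frame.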


Citations
Journal ArticleDOI

PerAE: An Effective Personalized AutoEncoder for ECG-Based Biometric in Augmented Reality System

TL;DR: Huang et al. propose an autoencoder-based EIR system, called Personalized AutoEncoder (PerAE), which maintains a small autoencoder model (called Attention-MemAE) for each registered user of the system.
Proceedings ArticleDOI

Fusing Iris and Periocular Region for User Verification in Head Mounted Displays

TL;DR: This work presents and evaluates a fusion framework for improving biometric authentication performance, employing score-level fusion of two independent biometric systems (iris and periocular region) to avoid expensive feature-level fusion.
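Score-level fusion as described above can be sketched in a few lines: each system produces its own comparison score, and only the scores are combined. A weighted sum is one common rule; the weight and threshold below are hypothetical parameters, not values from the paper, and the scores are assumed to be similarity scores already normalized to [0, 1].

```python
def fuse_scores(iris_score: float, periocular_score: float, w: float = 0.5) -> float:
    """Weighted-sum score-level fusion of two normalized similarity scores.
    The weight w is an illustrative choice, not the paper's tuned value."""
    return w * iris_score + (1.0 - w) * periocular_score

def verify(iris_score: float, periocular_score: float,
           threshold: float = 0.6, w: float = 0.5) -> bool:
    # Accept the claimed identity when the fused similarity clears the threshold.
    return fuse_scores(iris_score, periocular_score, w) >= threshold
```

Because fusion happens after each matcher has finished, the two systems need no shared feature space, which is what makes this cheaper than feature-level fusion.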
Journal ArticleDOI

Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models

TL;DR: A novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model, and demonstrates the superiority of this approach on intra- and cross-device periocular verification.
Proceedings ArticleDOI

Combining Real-World Constraints on User Behavior with Deep Neural Networks for Virtual Reality (VR) Biometrics

TL;DR: This work provides an approach to perform behavioral biometrics using deep networks while incorporating spatial and smoothing constraints on input data to represent real-world behavior, and shows higher success than baseline methods in 36 out of 42 analysis cases obtained by varying user sets and pairings of VR systems and sessions.
Posted Content

On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Application

TL;DR: A new iris quality metric, termed Iris Mask Ratio (IMR), is defined to quantify iris recognition performance, and continuous authentication of users is proposed in a non-collaborative capture setting in an HMD.
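One plausible reading of the Iris Mask Ratio named above is the fraction of the segmented iris region that is actually visible (not occluded by eyelids, lashes, or reflections). The sketch below computes that fraction from a binary mask; the exact definition in the paper may differ, so treat this as an assumption.

```python
import numpy as np

def iris_mask_ratio(iris_mask) -> float:
    """Fraction of iris-region pixels that are unoccluded (mask value 1).
    Illustrative interpretation of the IMR metric, not the paper's formula."""
    mask = np.asarray(iris_mask, dtype=bool)
    return float(mask.sum()) / mask.size
```

A quality gate for continuous authentication could then simply skip frames whose ratio falls below a chosen minimum.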
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
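The "very small convolution filters" design above rests on a parameter-count argument: a stack of 3x3 convolutions covers the same receptive field as one larger filter while using fewer weights. The sketch below counts weights for the generic case (biases ignored); it is a textbook illustration of that trade-off, not code from the paper.

```python
def stacked_3x3_params(channels: int, n_layers: int) -> int:
    """Weights in n stacked 3x3 convolutions, channels -> channels each.
    Three such layers have a 7x7 effective receptive field."""
    return n_layers * 3 * 3 * channels * channels

def single_kxk_params(channels: int, k: int) -> int:
    """Weights in a single k x k convolution, channels -> channels."""
    return k * k * channels * channels
```

For the same 7x7 receptive field, three 3x3 layers cost 27·C² weights versus 49·C² for one 7x7 layer, and they interleave extra non-linearities.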
Posted Content

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
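One of the two global hyper-parameters mentioned above is the width multiplier, which thins the channel count of every layer uniformly to trade accuracy for latency. The sketch below shows its effect on the parameter count of a single depthwise-separable convolution, MobileNet's core building block (bias and batch-norm parameters ignored; the channel numbers in the test are illustrative).

```python
def depthwise_separable_params(c_in: int, c_out: int, k: int = 3, alpha: float = 1.0) -> int:
    """Weights in one depthwise-separable convolution after applying the
    width multiplier alpha: a k x k depthwise conv over the thinned input
    channels, followed by a 1x1 pointwise conv to the thinned output channels."""
    c_in = int(alpha * c_in)
    c_out = int(alpha * c_out)
    return k * k * c_in + c_in * c_out  # depthwise + pointwise
```

Because both channel counts scale with alpha, the pointwise term (which dominates) shrinks roughly quadratically, which is why small multipliers cut model size so sharply.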
Posted Content

Searching for MobileNetV3.

TL;DR: This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art of MobileNets.