Proceedings ArticleDOI

Person Authentication Using Head Images

TL;DR: The experiments suggest that head images can be effectively used to ascertain human identity, and the availability of this database could pave the way for further research in this field.
Abstract: In many surveillance applications, cameras are placed at overhead heights for human identification. In such real-world scenarios, the person of interest might be walking away from the camera, and the only information available is an image of the person's head. In this research, we investigate the use of head images for person recognition and propose it as a soft-biometric modality. Given its viability for human recognition, head-image matching can also complement face recognition algorithms in surveillance. We propose a head image database comprising 103 subjects and more than 600 images. In addition to the database, we propose a framework for head image-based person verification. As a pre-processing stage, the framework includes an evaluation of two segmentation algorithms. We also perform benchmarking evaluations of various texture, key-point, and learning-based representation algorithms and establish baseline results. The experiments suggest that head images can be effectively used to ascertain human identity, and the availability of this database could pave the way for further research in this field.
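To make the verification setting concrete, here is a minimal sketch of a head-image verification pipeline in the spirit of the proposed framework, assuming pre-segmented head crops, LBP-histogram features, and cosine-similarity matching; the feature settings and decision threshold are illustrative, not the paper's exact configuration.

```python
# Minimal head-image verification sketch (illustrative; not the paper's exact
# pipeline). Assumes pre-segmented head crops as 2-D grayscale numpy arrays.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Uniform LBP histogram over a grayscale head crop."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def verify(head_a, head_b, threshold=0.9):
    """Cosine similarity between LBP histograms; threshold is illustrative."""
    ha, hb = lbp_histogram(head_a), lbp_histogram(head_b)
    score = float(ha @ hb / (np.linalg.norm(ha) * np.linalg.norm(hb) + 1e-12))
    return score >= threshold, score
```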
Citations

18 Jan 2010
TL;DR: In this paper, the authors proposed three-part (head, torso, legs) height and colour soft-biometric models and demonstrated their verification performance on a subset of the PETS 2006 database.
Abstract: Soft biometrics are characteristics that can be used to describe, but not uniquely identify, an individual. These include traits such as height, weight, gender, hair, skin and clothing colour. Unlike traditional biometrics (e.g. face, voice), which require cooperation from the subject, soft biometrics can be acquired by surveillance cameras at range without any user cooperation. Whilst these traits cannot provide robust authentication, they can be used to provide coarse authentication or identification at long range, to locate a subject who has been previously seen or who matches a description, and to aid in object tracking. In this paper we propose three-part (head, torso, legs) height and colour soft-biometric models, and demonstrate their verification performance on a subset of the PETS 2006 database. We show that these models, whilst not as accurate as traditional biometrics, can still achieve acceptable rates of accuracy in situations where traditional biometrics cannot be applied.

56 citations
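As an illustration of such part-based colour soft biometrics, the sketch below splits a pedestrian crop into head, torso, and legs regions and computes a normalized colour histogram per part; the vertical split ratios and histogram bin count are assumptions, not the paper's calibrated models.

```python
# Sketch of a part-based colour soft biometric. The 15%/55%/30% vertical split
# and the bin count are assumptions, not the paper's calibrated models.
import numpy as np

def part_colour_histograms(rgb, bins=8):
    """Split a pedestrian crop into head/torso/legs and return a normalized
    RGB histogram per part (rgb is an HxWx3 uint8 array)."""
    h = rgb.shape[0]
    parts = {
        "head":  rgb[: int(0.15 * h)],
        "torso": rgb[int(0.15 * h): int(0.70 * h)],
        "legs":  rgb[int(0.70 * h):],
    }
    feats = {}
    for name, region in parts.items():
        hist, _ = np.histogramdd(region.reshape(-1, 3).astype(float),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        feats[name] = (hist / max(hist.sum(), 1.0)).ravel()
    return feats
```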

Journal ArticleDOI
TL;DR: This paper presents a two-stage head detection framework that uses a fully convolutional network (FCN) to generate scale-aware proposals, followed by a CNN that classifies each proposal into two classes: head and background.
Abstract: Pedestrian head detection plays an important role in identifying and localizing individuals in real-world visual data. Head detection is a nontrivial problem due to considerable variance in camera viewpoints, scales, human poses, and appearances in the scene. The translation-invariance property of convolutional neural networks (CNNs) enables large-capacity CNNs to handle appearance and pose variations in the scene. However, scale invariance is still an open issue. To address this problem, this paper presents a two-stage head detection framework that uses a fully convolutional network (FCN) to generate scale-aware proposals, followed by a CNN that classifies each proposal into two classes: head and background. Experimental results show that using the scale-aware proposals obtained by the FCN improves both object recall and mean average precision (mAP). Additionally, we demonstrate that our framework achieves state-of-the-art results on four challenging benchmark datasets: HollywoodHeads, Casablanca, SHOCK, and WIDERFACE.

7 citations


Cites methods from "Person Authentication Using Head Images"

  • ...head detection is an important element and used as a pre-processing step in many video surveillance applications, for example, tracking [3], [12], person authentication [25] and density estimation [36]....

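A minimal skeleton of the two-stage idea described in the abstract above: a small fully convolutional network produces a per-location head-score map (from which scale-aware proposals would be peak-picked), and a second CNN classifies each proposal crop as head or background. Layer widths and depths are placeholders, not the paper's networks.

```python
# Skeleton of the two-stage head detector: an FCN scores head-likelihood per
# location (peaks become scale-aware proposals), and a CNN classifies each
# proposal crop as head vs. background. Layer sizes are placeholders.
import torch
import torch.nn as nn

class ProposalFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one-channel head-score map
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

class HeadClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),  # head vs. background logits
        )

    def forward(self, crop):
        return self.net(crop)
```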

Proceedings ArticleDOI
01 Jan 2019
TL;DR: A dual-pathway framework that computes head and body discriminating features independently and learns the correlation between these features, achieving promising experimental results on small and challenging datasets.
Abstract: In the light of human studies that report a strong correlation between head circumference and body size, we propose a new research problem: head-body matching. Given an image of a person's head, we want to match it with his or her (headless) body image. We propose a dual-pathway framework which computes head and body discriminating features independently and learns the correlation between such features. We introduce a comprehensive evaluation of our proposed framework for this problem using different features, including anthropometric features and deep-CNN features, different experimental settings such as head-body scale variations, and different body parts. We demonstrate the usefulness of our framework with two novel applications: head/body recognition, and T-shirt sizing from a head image. Our evaluations for the head/body recognition application on the challenging large-scale PIPA dataset (which contains high variation in pose, viewpoint, and occlusion) show up to a 53% performance improvement using deep-CNN features over global-model features in which head and body features are not separated or correlated. For the T-shirt sizing application, we use anthropometric features for head-body matching. We achieve promising experimental results on small and challenging datasets.

3 citations


Cites background from "Person Authentication Using Head Images"

  • ...Mostly one body part is utilized, such as faces [25, 37, 21, 4], heads [31, 24], or full bodies [11, 26, 30]....

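The dual-pathway design described in the abstract above can be sketched as two independent encoders whose embeddings are combined by a learned correlation layer; the encoder architecture and embedding size below are assumptions for illustration.

```python
# Sketch of a dual-pathway head-body matcher: independent encoders for head
# and body, with a learned bilinear correlation producing a matching score.
import torch
import torch.nn as nn

class DualPathwayMatcher(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, dim),
            )
        self.head_enc, self.body_enc = encoder(), encoder()
        self.corr = nn.Bilinear(dim, dim, 1)  # learned head-body correlation

    def forward(self, head_img, body_img):
        h, b = self.head_enc(head_img), self.body_enc(body_img)
        return self.corr(h, b).squeeze(-1)  # matching score per pair
```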

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
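The core idea is the residual block: stacked layers learn a residual function F(x) and the block outputs F(x) + x through an identity shortcut. A minimal PyTorch sketch of the basic block (not the authors' original code):

```python
# A basic residual block: the stacked layers learn F(x) and the block outputs
# F(x) + x via an identity shortcut (sketch, not the authors' code).
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the residual (shortcut) connection
```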

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
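The architectural principle here is stacking small 3x3 convolutions: two 3x3 layers cover a 5x5 receptive field with fewer parameters and more non-linearities than a single 5x5 layer. A sketch of a VGG-style block (illustrative, not the published configurations):

```python
# VGG-style block of stacked 3x3 convolutions: two 3x3 layers cover a 5x5
# receptive field with fewer parameters than one 5x5 layer (sketch only).
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))  # halve spatial resolution between blocks
    return nn.Sequential(*layers)
```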

Posted Content
TL;DR: The authors introduce a Region Proposal Network (RPN) that shares convolutional features with the detection network to generate high-quality region proposals, which are used by Fast R-CNN for detection.
Abstract: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.

23,183 citations
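For a hands-on illustration, a pretrained Faster R-CNN is available in torchvision (assuming torchvision >= 0.13 for the weights API; note this checkpoint uses a ResNet-50 FPN backbone rather than the paper's VGG-16):

```python
# Running a pretrained Faster R-CNN from torchvision (illustration only; this
# checkpoint uses a ResNet-50 FPN backbone rather than the paper's VGG-16).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)  # stand-in for a real RGB image scaled to [0, 1]
with torch.no_grad():
    (pred,) = model([image])     # boxes originate from RPN proposals
print(pred["boxes"].shape, pred["labels"].shape, pred["scores"].shape)
```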

Proceedings ArticleDOI
20 Sep 1999
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.

16,989 citations
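A short sketch of SIFT-based matching using OpenCV's implementation (cv2.SIFT_create is available in opencv-python >= 4.4; the file names are hypothetical), including Lowe's ratio test to discard ambiguous nearest-neighbour matches:

```python
# SIFT keypoint matching with OpenCV; file names are hypothetical.
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test discards ambiguous nearest-neighbour matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate matches")
```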


"Person Authentication Using Head Im..." refers methods in this paper

  • ...Due to the better encoding of hairstyle, LBPHS has better discriminative capability compared to DSIFT in the problem of head image segmentation....


  • ...The hand-crafted features utilized include Dense SIFT (DSIFT) and Local Binary Patterns (LBPHS)....


  • ...The major conclusions which can be drawn from the results are as follows: on evaluating texture features (LBPHS) and key-point based features (DSIFT) for matching head images, it is observed that LBPHS features outperform DSIFT features by over 10% genuine accept rate (GAR) for all color channels....


  • ...Further, the performance of dictionary, LBPHS, and DSIFT features is lower on grayscale images compared to RGB. Representations obtained from deep models give the most competitive results....


  • ...Feature maps are generated from a ZF-Net [23] architecture and are then used by the Region Proposal Network (RPN) to generate region proposals....

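The excerpts above compare LBPHS with dense SIFT (DSIFT). For reference alongside the LBP-histogram sketch earlier, DSIFT can be approximated in OpenCV by computing SIFT descriptors on a fixed keypoint grid; the grid step and keypoint size below are assumptions:

```python
# Dense SIFT (DSIFT) approximated by computing SIFT descriptors on a fixed
# keypoint grid; the grid step and keypoint size are assumptions.
import cv2

def dense_sift(gray, step=8, size=8):
    h, w = gray.shape
    grid = [cv2.KeyPoint(float(x), float(y), size)
            for y in range(step // 2, h, step)
            for x in range(step // 2, w, step)]
    _, desc = cv2.SIFT_create().compute(gray, grid)
    return desc  # (num_grid_points, 128) descriptor matrix

# Example (hypothetical file name):
# desc = dense_sift(cv2.imread("head.png", cv2.IMREAD_GRAYSCALE))
```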