Proceedings ArticleDOI

Aiding face recognition with social context association rule based re-ranking

TL;DR: The results show that association rules extracted from social context can be used to augment face recognition and improve the identification performance.
Abstract: Humans are very efficient at recognizing familiar face images even in challenging conditions. One reason for such capabilities is the ability to understand social context between individuals. Sometimes the identity of the person in a photo can be inferred based on the identity of other persons in the same photo, when some social context between them is known. This research presents an algorithm to utilize cooccurrence of individuals as the social context to improve face recognition. Association rule mining is utilized to infer multi-level social context among subjects from a large repository of social transactions. The results are demonstrated on the G-album and on the SN-collection pertaining to 4675 identities prepared by the authors from a social networking website. The results show that association rules extracted from social context can be used to augment face recognition and improve the identification performance.
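The re-ranking idea described in the abstract can be sketched compactly. The Python snippet below is a minimal illustration, not the authors' algorithm: it mines pairwise co-occurrence rules from toy photo "transactions" and boosts the face matcher's candidate scores for identities implied by a co-occurring, already-recognized identity. The identity names, support/confidence thresholds, and boost weight are all invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy "social transactions": each photo is the set of identities appearing in it.
photos = [
    {"alice", "bob"}, {"alice", "bob", "carol"}, {"alice", "bob"},
    {"carol", "dave"}, {"alice", "carol"}, {"bob", "alice"},
]

def mine_pair_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Mine rules of the form {a} -> {b} from co-occurrence transactions."""
    n = len(transactions)
    item_count, pair_count = Counter(), Counter()
    for t in transactions:
        item_count.update(t)
        pair_count.update(combinations(sorted(t), 2))
    rules = {}
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = c / item_count[ante]
            if confidence >= min_confidence:
                rules[(ante, cons)] = confidence
    return rules

def rerank(candidates, co_occurring, rules, boost=0.2):
    """Boost face-matcher scores of candidates implied by co-occurring identities."""
    reranked = []
    for identity, score in candidates:
        bonus = max((rules.get((other, identity), 0.0) for other in co_occurring),
                    default=0.0)
        reranked.append((identity, score + boost * bonus))
    return sorted(reranked, key=lambda x: x[1], reverse=True)

rules = mine_pair_rules(photos)
# Face matcher's ranked list for an ambiguous probe; "bob" is already recognized
# in the same photo and serves as social context.
candidates = [("eve", 0.58), ("alice", 0.55), ("carol", 0.40)]
print(rerank(candidates, co_occurring={"bob"}, rules=rules))
```

In this toy run the rule {bob} -> {alice} lifts "alice" above the spurious top candidate, which is the kind of correction the social-context re-ranking aims for.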
Citations
Journal ArticleDOI
TL;DR: A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline, giving an overview of the role of information fusion in biometrics.

151 citations

Journal ArticleDOI
TL;DR: In this paper, a hierarchical kinship verification via representation learning (KVRL) framework is used to learn representations of different face regions in an unsupervised manner; a compact representation of kin facial images is extracted from the learned model, and a multi-layer neural network is employed to verify kinship accurately.
Abstract: Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and of the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index $d'$, and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
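The final claim of the abstract, fusing kinship evidence with face verification via a product of likelihood ratios, can be illustrated with a small sketch. The Gaussian score models, their parameters, and the example scores below are assumptions made for illustration; they are not the paper's trained distributions, and the SVM-based variant is omitted.

```python
import math

def gaussian_pdf(x, mean, std):
    """Normal density; stands in for learned genuine/impostor score distributions."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(score, genuine, impostor):
    """LR = p(score | genuine pair) / p(score | impostor pair)."""
    return gaussian_pdf(score, *genuine) / gaussian_pdf(score, *impostor)

# Hypothetical (mean, std) of match scores under the genuine and impostor hypotheses.
FACE_GEN, FACE_IMP = (0.70, 0.10), (0.40, 0.10)
KIN_GEN, KIN_IMP = (0.65, 0.15), (0.50, 0.15)

def fused_decision(face_score, kin_score, threshold=1.0):
    """Product-of-LR fusion: accept when the combined evidence favors 'genuine'."""
    lr = likelihood_ratio(face_score, FACE_GEN, FACE_IMP) * \
         likelihood_ratio(kin_score, KIN_GEN, KIN_IMP)
    return lr, lr > threshold

# A borderline face score that the kinship evidence pushes over the threshold.
print(fused_decision(face_score=0.53, kin_score=0.78))
```

Here the face likelihood ratio alone is below 1 (reject), but multiplying in the kinship likelihood ratio tips the fused decision to accept, mirroring the reported boost of face verification by the kinship soft biometric.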

81 citations

Journal ArticleDOI
17 Jul 2019
TL;DR: KVQA is introduced, the first dataset for the task of (world) knowledge-aware VQA and the largest dataset for exploring VQA over large Knowledge Graphs (KG); it consists of 183K question-answer pairs involving more than 18K named entities and 24K images.
Abstract: Visual Question Answering (VQA) has emerged as an important problem spanning Computer Vision, Natural Language Processing and Artificial Intelligence (AI). In conventional VQA, one may ask questions about an image which can be answered purely based on its content. For example, given an image with people in it, a typical VQA question may inquire about the number of people in the image. More recently, there is growing interest in answering questions which require commonsense knowledge involving common nouns (e.g., cats, dogs, microphones) present in the image. In spite of this progress, the important problem of answering questions requiring world knowledge about named entities (e.g., Barack Obama, White House, United Nations) in the image has not been addressed in prior research. We address this gap in this paper, and introduce KVQA – the first dataset for the task of (world) knowledge-aware VQA. KVQA consists of 183K question-answer pairs involving more than 18K named entities and 24K images. Questions in this dataset require multi-entity, multi-relation, and multi-hop reasoning over large Knowledge Graphs (KG) to arrive at an answer. To the best of our knowledge, KVQA is the largest dataset for exploring VQA over KG. Further, we also provide baseline performances using state-of-the-art methods on KVQA.

75 citations


Cites background from "Aiding face recognition with social..."

  • ...There have also been works demonstrating utility of context in improving face identification often in restricted settings (Bharadwaj, Vatsa, and Singh 2014; Lin et al. 2010; O’Hare and Smeaton 2009)....

    [...]

Posted Content
TL;DR: The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics, with specific focus on three questions: what to fuse, when to fuse, and how to fuse.
Abstract: The performance of a biometric system that relies on a single biometric modality (e.g., fingerprints only) is often stymied by various factors such as poor data quality or limited scalability. Multibiometric systems utilize the principle of fusion to combine information from multiple sources in order to improve recognition accuracy whilst addressing some of the limitations of single-biometric systems. The past two decades have witnessed the development of a large number of biometric fusion schemes. This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse. A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline is also presented. In this regard, the following topics are discussed: (i) incorporating data quality in the biometric recognition pipeline; (ii) combining soft biometric attributes with primary biometric identifiers; (iii) utilizing contextual information to improve biometric recognition accuracy; and (iv) performing continuous authentication using ancillary information. In addition, the use of information fusion principles for presentation attack detection and multibiometric cryptosystems is also discussed. Finally, some of the research challenges in biometric fusion are enumerated. The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics.
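Rank-level fusion, one of the "how to fuse" strategies such a survey covers, is easy to illustrate. The sketch below shows a Borda-count style combination of rank lists from two hypothetical matchers; the identity names and rank lists are invented for illustration and do not come from the survey.

```python
def borda_fuse(rank_lists):
    """Rank-level fusion: each matcher awards (list length - rank index) points."""
    scores = {}
    for ranking in rank_lists:
        n = len(ranking)
        for rank, identity in enumerate(ranking):
            scores[identity] = scores.get(identity, 0) + (n - rank)
    # A higher total Borda count indicates stronger consensus across matchers.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical rank lists from two matchers for the same probe image.
matcher_a = ["carol", "alice", "bob", "dave"]
matcher_b = ["alice", "bob", "carol", "dave"]
print(borda_fuse([matcher_a, matcher_b]))  # "alice" wins by consensus
```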

47 citations


Cites background or methods from "Aiding face recognition with social..."

  • ...[46] proposed a social context based re-ranking...

    [...]

  • ...rank lists from multiple matchers have been fused using techniques like Borda count, logistic regression, and highest rank method [64, 65, 66, 67, 46]....

    [...]

  • ...occurrence of various parts or attributes of an object or face” [46]....

    [...]

  • ...[46] Social context based re-ranking algorithm using association rules 2014 Hochreiter et al....

    [...]

Journal ArticleDOI
TL;DR: A novel person recognition approach is presented that relies on knowledge of individuals' social behavior to enhance the performance of a traditional biometric system.
Abstract: The goal of a biometric recognition system is to make human-like decisions on an individual's identity by recognizing their physiological and/or behavioral traits. Nevertheless, the decision-making process by either a human or a biometric recognition system can be highly complicated due to low quality of data or an uncertain environment. The human brain has an advantage over a computer system due to its ability to perform massive parallel processing of auxiliary information, such as visual cues, cognitive and social interactions, and contextual and spatio-temporal data. Similar to the human brain, social behavioral cues can aid the reliable decision-making of an automated biometric system. In this paper, a novel person recognition approach is presented that relies on knowledge of individuals' social behavior to enhance the performance of a traditional biometric system. The social behavioral information of individuals has been mined from an online social network and fused with traditional face and ear biometrics. Experimental results on individual and semi-real databases demonstrate a significant performance gain of the proposed method over a traditional biometric system.

34 citations


Cites background from "Aiding face recognition with social..."

  • ...is an emerging direction in biometric research [35], [44]....

    [...]

  • ...[35] demonstrates a significant recognition performance improvement in challenging environment by fusing social contextual information with face biometric....

    [...]

References
Journal Article
TL;DR: The ecosystem of R add-on packages developed around the infrastructure provided by the package arules provides comprehensive functionality for analyzing interesting patterns, including frequent itemsets, association rules, and frequent sequences, and for building applications like associative classification.
Abstract: This paper describes the ecosystem of R add-on packages developed around the infrastructure provided by the package arules. The packages provide comprehensive functionality for analyzing interesting patterns including frequent itemsets, association rules, frequent sequences and for building applications like associative classification. After discussing the ecosystem's design we illustrate the ease of mining and visualizing rules with a short example.
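The arules example itself is in R; as a rough language-neutral stand-in, the sketch below computes the same support, confidence, and lift quantities for simple {a} -> {b} rules over toy transactions. The items and transactions are invented for illustration, and the code does not use the arules API.

```python
from collections import Counter
from itertools import combinations

# Toy transactions standing in for the itemset data an arules session would load.
transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
    {"milk", "butter"}, {"bread", "milk", "jam"},
]

n = len(transactions)
item_freq = Counter(i for t in transactions for i in t)
pair_freq = Counter(p for t in transactions for p in combinations(sorted(t), 2))

# Support, confidence, and lift for each rule {a} -> {b}, using arules terminology.
rules = []
for (a, b), c in pair_freq.items():
    support = c / n
    for ante, cons in ((a, b), (b, a)):
        confidence = c / item_freq[ante]
        lift = confidence / (item_freq[cons] / n)
        rules.append((ante, cons, support, confidence, lift))

# Rank rules by lift, as is commonly done when selecting rules to visualize.
for ante, cons, sup, conf, lift in sorted(rules, key=lambda r: r[-1], reverse=True)[:5]:
    print(f"{{{ante}}} -> {{{cons}}}  support={sup:.2f}  confidence={conf:.2f}  lift={lift:.2f}")
```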

120 citations


"Aiding face recognition with social..." refers methods in this paper

  • ...A visualization (Fruchterman-Reingold layout) of 100 association rules [13] with the highest lift obtained from the G-album dataset [10]....

    [...]

Journal ArticleDOI
TL;DR: The analysis of the characteristic function of quality and match scores shows that a careful selection of a complementary set of quality metrics can provide more benefit to various applications of biometric quality.
Abstract: Biometric systems encounter variability in data that influences the capture, treatment, and usage of a biometric sample. It is imperative to first analyze the data and incorporate this understanding within the recognition system, making assessment of biometric quality an important aspect of biometrics. Though several interpretations and definitions of quality exist, sometimes of a conflicting nature, a holistic definition of quality is indistinct. This paper presents a survey of different concepts and interpretations of biometric quality so that a clear picture of the current state and future directions can be presented. Several factors that cause different types of degradations of biometric samples, including the image features that contribute to the effects of these degradations, are discussed. Evaluation schemes are presented to test the performance of quality metrics for various applications. A survey of the features, strengths, and limitations of existing quality assessment techniques in fingerprint, iris, and face biometrics is also presented. Finally, a representative set of quality metrics from these three modalities is evaluated on a multimodal database consisting of 2D images to understand their behavior with respect to match scores obtained from state-of-the-art recognition systems. The analysis of the characteristic function of quality and match scores shows that a careful selection of a complementary set of quality metrics can provide more benefit to various applications of biometric quality.

119 citations

Journal ArticleDOI
TL;DR: This paper proposes language modelling and nearest neighbor approaches to context-based person identification, in addition to novel face color and image color content-based features (used alongside face recognition and body patch features) that improve performance over content or context alone.
Abstract: Identifying the people in photos is an important need for users of photo management systems. We present MediAssist, one such system which facilitates browsing, searching and semi-automatic annotation of personal photos, using analysis of both image content and the context in which the photo is captured. This semi-automatic annotation includes annotation of the identity of people in photos. In this paper, we focus on such person annotation, and propose person identification techniques based on a combination of context and content. We propose language modelling and nearest neighbor approaches to context-based person identification, in addition to novel face color and image color content-based features (used alongside face recognition and body patch features). We conduct a comprehensive empirical study of these techniques using the real private photo collections of a number of users, and show that combining context- and content-based analysis improves performance over content or context alone.

88 citations


"Aiding face recognition with social..." refers background in this paper

  • ...Several research directions have been undertaken to improve automatic photo organizer by attaching meaningful labels pertaining to identities, relationships, and other demographics including the use of meta-information from capture devices (cellID, GPS, time) [8, 21]....

    [...]

  • ...[21] combined several context cues derived from text, event detection, image color descriptors and body part analysis to improve person identification in photo collections....

    [...]

Journal ArticleDOI
24 May 2010
TL;DR: In this paper, the authors argue that social network context may be the key for large-scale face recognition to succeed, and they leverage the resources and structure of such social networks to improve face recognition rates on the images shared.
Abstract: Personal photographs are being captured in digital form at an accelerating rate, and our computational tools for searching, browsing, and sharing these photos are struggling to keep pace. One promising approach is automatic face recognition, which would allow photos to be organized by the identities of the individuals they contain. However, achieving accurate recognition at the scale of the Web requires discriminating among hundreds of millions of individuals and would seem to be a daunting task. This paper argues that social network context may be the key for large-scale face recognition to succeed. Many personal photographs are shared on the Web through online social network sites, and we can leverage the resources and structure of such social networks to improve face recognition rates on the images shared. Drawing upon real photo collections from volunteers who are members of a popular online social network, we assess the availability of resources to improve face recognition and discuss techniques for applying these resources.

87 citations

Journal ArticleDOI
TL;DR: A general post-filtering framework to enhance the robustness and accuracy of semantic concept detection using association and temporal analysis for concept knowledge discovery is proposed, along with a strategy to combine associated concept classifiers to improve detection accuracy.
Abstract: Automatic semantic concept detection in video is important for effective content-based video retrieval and mining and has gained great attention recently. In this paper, we propose a general post-filtering framework to enhance robustness and accuracy of semantic concept detection using association and temporal analysis for concept knowledge discovery. Co-occurrence of several semantic concepts could imply the presence of other concepts. We use association mining techniques to discover such inter-concept association relationships from annotations. With discovered concept association rules, we propose a strategy to combine associated concept classifiers to improve detection accuracy. In addition, because video is often visually smooth and semantically coherent, detection results from temporally adjacent shots could be used for the detection of the current shot. We propose temporal filter designs for inter-shot temporal dependency mining to further improve detection accuracy. Experiments on the TRECVID 2005 dataset show our post-filtering framework is both efficient and effective in improving the accuracy of semantic concept detection in video. Furthermore, it is easy to integrate our framework with existing classifiers to boost their performance.
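The post-filtering idea can be sketched as a simple per-shot score adjustment: evidence from associated concepts detected in the same shot, and from the same concept in temporally adjacent shots, is blended into the original detector confidence. The weighting scheme, window size, rule, and scores below are assumptions for illustration, not the paper's learned association rules or temporal filter designs.

```python
def post_filter(scores, assoc_rules, alpha=0.3, window=1):
    """
    scores: list of per-shot dicts mapping concept -> detector confidence in [0, 1].
    assoc_rules: {(antecedent, consequent): confidence} mined from annotations.
    Returns confidences adjusted by association evidence and temporal smoothing.
    """
    adjusted = []
    for t, shot in enumerate(scores):
        new_shot = {}
        for concept, s in shot.items():
            # Association boost: evidence from co-occurring concepts in the same shot.
            assoc = max((shot.get(a, 0.0) * conf
                         for (a, c), conf in assoc_rules.items() if c == concept),
                        default=0.0)
            # Temporal smoothing: average of the same concept in neighboring shots.
            neigh = [scores[u][concept]
                     for u in range(max(0, t - window), min(len(scores), t + window + 1))
                     if u != t and concept in scores[u]]
            temporal = sum(neigh) / len(neigh) if neigh else s
            new_shot[concept] = (1 - alpha) * s + alpha * 0.5 * (assoc + temporal)
        adjusted.append(new_shot)
    return adjusted

shots = [
    {"road": 0.9, "car": 0.4},
    {"road": 0.8, "car": 0.3},   # weak 'car' score despite strong context
    {"road": 0.9, "car": 0.7},
]
rules = {("road", "car"): 0.6}   # 'road' frequently co-occurs with 'car'
print(post_filter(shots, rules))
```

In the middle shot, the weak "car" score is raised by both the {road} -> {car} association and the stronger "car" detections in adjacent shots, which is the effect the post-filtering framework exploits.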

81 citations


"Aiding face recognition with social..." refers methods in this paper

  • ...[19] use the Apriori algorithm [1] to post-filter semantic concepts that are detected in videos using association rules between known semantic concepts....

    [...]