Proceedings ArticleDOI

Aiding face recognition with social context association rule based re-ranking

TL;DR: The results show that association rules extracted from social context can be used to augment face recognition and improve the identification performance.
Abstract: Humans are very efficient at recognizing familiar face images even in challenging conditions. One reason for such capabilities is the ability to understand social context between individuals. Sometimes the identity of the person in a photo can be inferred based on the identity of other persons in the same photo, when some social context between them is known. This research presents an algorithm to utilize cooccurrence of individuals as the social context to improve face recognition. Association rule mining is utilized to infer multi-level social context among subjects from a large repository of social transactions. The results are demonstrated on the G-album and on the SN-collection pertaining to 4675 identities prepared by the authors from a social networking website. The results show that association rules extracted from social context can be used to augment face recognition and improve the identification performance.
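The pipeline described above (mine association rules from co-occurrence transactions, then re-rank the face matcher's candidate list) can be sketched roughly as follows. The rule format, the max-confidence boost, and the weighted score fusion are illustrative assumptions, not the authors' exact formulation:

```python
# Hedged sketch: re-ranking face recognition candidates using
# co-occurrence association rules (illustrative, not the paper's exact method).

def rerank(candidates, known_identities, rules, alpha=0.7):
    """Boost match scores of candidates implied by social-context rules.

    candidates: list of (identity, match_score) pairs from a face matcher
    known_identities: identities already confirmed in the same photo
    rules: dict mapping frozenset(antecedent identities) -> {consequent: confidence}
    alpha: weight on the original matcher score (assumed fusion scheme)
    """
    context = frozenset(known_identities)
    boosts = {}
    for antecedent, consequents in rules.items():
        if antecedent <= context:                      # rule fires in this photo
            for identity, conf in consequents.items():
                boosts[identity] = max(boosts.get(identity, 0.0), conf)
    reranked = [(ident, alpha * s + (1 - alpha) * boosts.get(ident, 0.0))
                for ident, s in candidates]
    return sorted(reranked, key=lambda t: t[1], reverse=True)

# Hypothetical rule: when "alice" appears, "bob" co-occurs with confidence 0.9.
rules = {frozenset({"alice"}): {"bob": 0.9}}
ranked = rerank([("carol", 0.6), ("bob", 0.5)], {"alice"}, rules)
print(ranked[0][0])  # "bob" overtakes "carol" once the social-context rule fires
```

Here the social context lifts a lower-scored but contextually consistent identity above a higher-scored one, which is the re-ranking effect the paper evaluates.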
Citations
Journal ArticleDOI
TL;DR: This paper provides a comprehensive overview of the role of information fusion in biometrics, including a review of techniques that incorporate ancillary information in the biometric recognition pipeline.

151 citations

Journal ArticleDOI
TL;DR: In this paper, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn representations of different face regions in an unsupervised manner; a compact representation of facial images of kin is extracted from the learned model, and a multi-layer neural network is employed to verify kinship accurately.
Abstract: Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of human mind and to identify the discriminatory areas of a face that facilitate kinship-cues. The visual stimuli presented to the participants determine their ability to recognize kin relationship using the whole face as well as specific facial regions. The effect of participant gender and age and kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index $d'$ , and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed as filtered contractive deep belief networks ( fc DBN). The proposed feature representation encodes relational information present in images using filters and contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL- fc DBN) yields the state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL- fc DBN framework, an improvement of over 20% is observed in the performance of face verification.

81 citations

Journal ArticleDOI
17 Jul 2019
TL;DR: KVQA is introduced – the first dataset for the task of (world) knowledge-aware VQA, consisting of 183K question-answer pairs involving more than 18K named entities and 24K images; it is the largest dataset for exploring VQA over large Knowledge Graphs (KG).
Abstract: Visual Question Answering (VQA) has emerged as an important problem spanning Computer Vision, Natural Language Processing and Artificial Intelligence (AI). In conventional VQA, one may ask questions about an image which can be answered purely based on its content. For example, given an image with people in it, a typical VQA question may inquire about the number of people in the image. More recently, there is growing interest in answering questions which require commonsense knowledge involving common nouns (e.g., cats, dogs, microphones) present in the image. In spite of this progress, the important problem of answering questions requiring world knowledge about named entities (e.g., Barack Obama, White House, United Nations) in the image has not been addressed in prior research. We address this gap in this paper, and introduce KVQA – the first dataset for the task of (world) knowledge-aware VQA. KVQA consists of 183K question-answer pairs involving more than 18K named entities and 24K images. Questions in this dataset require multi-entity, multi-relation, and multi-hop reasoning over large Knowledge Graphs (KG) to arrive at an answer. To the best of our knowledge, KVQA is the largest dataset for exploring VQA over KG. Further, we also provide baseline performances using state-of-the-art methods on KVQA.

75 citations


Cites background from "Aiding face recognition with social..."

  • ...There have also been works demonstrating utility of context in improving face identification, often in restricted settings (Bharadwaj, Vatsa, and Singh 2014; Lin et al. 2010; O’Hare and Smeaton 2009)....


Posted Content
TL;DR: The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics, with specific focus on three questions: what to fuse, when to fuse, and how to fuse.
Abstract: The performance of a biometric system that relies on a single biometric modality (e.g., fingerprints only) is often stymied by various factors such as poor data quality or limited scalability. Multibiometric systems utilize the principle of fusion to combine information from multiple sources in order to improve recognition accuracy whilst addressing some of the limitations of single-biometric systems. The past two decades have witnessed the development of a large number of biometric fusion schemes. This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse. A comprehensive review of techniques incorporating ancillary information in the biometric recognition pipeline is also presented. In this regard, the following topics are discussed: (i) incorporating data quality in the biometric recognition pipeline; (ii) combining soft biometric attributes with primary biometric identifiers; (iii) utilizing contextual information to improve biometric recognition accuracy; and (iv) performing continuous authentication using ancillary information. In addition, the use of information fusion principles for presentation attack detection and multibiometric cryptosystems is also discussed. Finally, some of the research challenges in biometric fusion are enumerated. The purpose of this article is to provide readers a comprehensive overview of the role of information fusion in biometrics.
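The "how to fuse" question at the score level can be illustrated with a minimal weighted-sum sketch. The min-max normalization and the fixed weights are common textbook choices, assumed here for illustration rather than taken from the paper:

```python
# Score-level fusion sketch (one answer to "how to fuse"): combine
# normalized scores from two biometric matchers with a weighted sum.

def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] so different scales are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(face_scores, fingerprint_scores, w_face=0.6):
    """Weighted-sum fusion of two matchers' scores per candidate."""
    face = min_max_normalize(face_scores)
    finger = min_max_normalize(fingerprint_scores)
    return [w_face * f + (1 - w_face) * g for f, g in zip(face, finger)]

# Raw scores live on different scales; normalization makes the sum meaningful.
fused = fuse([0.9, 0.4, 0.7], [30, 80, 55])
print(fused.index(max(fused)))
```

Normalization before fusion matters because each modality's matcher reports scores on its own scale; without it, one modality silently dominates the sum.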

47 citations


Cites background or methods from "Aiding face recognition with social..."

  • ...[46] proposed a social context based re-ranking...


  • ...rank lists from multiple matchers have been fused using techniques like Borda count, logistic regression, and highest rank method [64, 65, 66, 67, 46]....


  • ...occurrence of various parts or attributes of an object or face” [46]....


  • ...[46] Social context based re-ranking algorithm using association rules 2014 Hochreiter et al....


Journal ArticleDOI
TL;DR: A novel person recognition approach is presented that relies on knowledge of individuals’ social behavior to enhance the performance of a traditional biometric system.
Abstract: The goal of a biometric recognition system is to make human-like decisions on an individual’s identity by recognizing their physiological and/or behavioral traits. Nevertheless, the decision-making process by either a human or a biometric recognition system can be highly complicated due to low quality of data or an uncertain environment. The human brain has an advantage over a computer system due to its ability to perform massive parallel processing of auxiliary information, such as visual cues, cognitive and social interactions, contextual, and spatio-temporal data. As in the human brain, social behavioral cues can aid the reliable decision-making of an automated biometric system. In this paper, a novel person recognition approach is presented that relies on knowledge of individuals’ social behavior to enhance the performance of a traditional biometric system. The social behavioral information of individuals has been mined from an online social network and fused with traditional face and ear biometrics. Experimental results on individual and semi-real databases demonstrate significant performance gain of the proposed method over a traditional biometric system.

34 citations


Cites background from "Aiding face recognition with social..."

  • ...is an emerging direction in biometric research [35], [44]....


  • ...[35] demonstrates a significant recognition performance improvement in challenging environment by fusing social contextual information with face biometric....


References
Proceedings Article
01 Jul 1998
TL;DR: Two new algorithms for solving this problem that are fundamentally different from the known algorithms are presented, and empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems.
Abstract: We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database.
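The frequent-itemset stage of Apriori, using the downward closure of support (every subset of a frequent itemset must itself be frequent), can be sketched in a few lines. This is a didactic simplification with no rule generation or hash-tree optimizations:

```python
# Minimal Apriori sketch: level-wise frequent-itemset mining that prunes
# candidates via the downward closure of support.
from itertools import combinations

def apriori(transactions, min_support):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    # L1: frequent single items
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) / n >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # Join (k-1)-itemsets, then prune any candidate with an infrequent subset.
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k
                      and all(frozenset(s) in current
                              for s in combinations(a | b, k - 1))}
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) / n >= min_support}
        frequent |= current
        k += 1
    return frequent

txns = [frozenset("ab"), frozenset("abc"), frozenset("ac"), frozenset("bc")]
freq = apriori(txns, 0.5)
print(frozenset("ab") in freq)  # True: {a, b} appears in 2 of 4 transactions
```

At min_support 0.5, every pair survives here but {a, b, c} (support 0.25) is pruned, showing the level-wise cutoff in action.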

10,863 citations


"Aiding face recognition with social..." refers methods in this paper

  • ...We consider the extensively used association rule mining algorithm, Apriori [1], that uses breadth-first (level-wise) search to determine rules based on the downward closure property of support....



  • ...Liu et al. [19] use the Apriori algorithm [1] to post-filter semantic concepts that are detected in videos using association rules between known semantic concepts....


Proceedings ArticleDOI
01 Apr 2001
TL;DR: A set of techniques for the rank aggregation problem is developed and compared to well-known methods, with the goal of designing rank aggregation techniques that can combat spam in Web searches.
Abstract: We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat “spam,” a serious problem in Web searches. Experiments show that our methods are simple, efficient, and effective.
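One of the simplest rank aggregation methods, the Borda count (also named among the fusion techniques in the citing survey's snippet above), can be sketched as follows; the three matcher lists are hypothetical:

```python
# Borda count sketch: aggregate several ranked lists by summing positional scores,
# where a candidate ranked at position p in a list of n items earns n - p points.

def borda(rankings):
    """rankings: list of ranked candidate lists (best first).
    Returns candidates ordered by descending total Borda score."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical matchers returning rank lists over the same gallery.
matcher_a = ["bob", "carol", "dave"]
matcher_b = ["carol", "bob", "dave"]
matcher_c = ["bob", "dave", "carol"]
result = borda([matcher_a, matcher_b, matcher_c])
print(result[0])  # "bob": top-ranked by two of the three matchers
```

Borda count needs only rank positions, not comparable scores, which is why it appears alongside logistic regression and highest-rank fusion in the multibiometric literature cited here.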

1,982 citations

Proceedings ArticleDOI
01 Sep 2009
TL;DR: Two novel methods for face verification are presented, using binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance, along with a new data set of real-world images of public figures acquired from the internet.
Abstract: We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.

1,619 citations


"Aiding face recognition with social..." refers background in this paper

  • ...The term context has been used in object recognition [27], person detection and also in face recognition research [17, 24] to imply acceptable co-occurrence of various parts or attributes of an object or face....


Proceedings ArticleDOI
13 Oct 2003
TL;DR: A low-dimensional global image representation is presented that provides relevant information for place recognition and categorization, and it is shown how such contextual information introduces strong priors that simplify object recognition.
Abstract: While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. We present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, main street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.

1,028 citations


"Aiding face recognition with social..." refers background in this paper

  • ...The term context has been used in object recognition [27], person detection and also in face recognition research [17, 24] to imply acceptable co-occurrence of various parts or attributes of an object or face....


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper introduced contextual features that encapsulate the group structure locally (for each person in the group), and globally (the overall structure of the group) to accomplish a variety of tasks, such as demographic recognition, calculating scene and camera parameters, and even event recognition.
Abstract: In many social settings, images of groups of people are captured. The structure of this group provides meaningful context for reasoning about individuals in the group, and about the structure of the scene as a whole. For example, men are more likely to stand on the edge of an image than women. Instead of treating each face independently from all others, we introduce contextual features that encapsulate the group structure locally (for each person in the group) and globally (the overall structure of the group). This “social context” allows us to accomplish a variety of tasks, such as demographic recognition, calculating scene and camera parameters, and even event recognition. We perform human studies to show this context aids recognition of demographic information in images of strangers.

339 citations