
Showing papers by "Ifeoma Nwogu published in 2017"


Journal ArticleDOI
TL;DR: A novel cognitive biometrics modality based on an individual's written language usage is proposed; a cognitive fingerprint can be successfully learnt using stylistic (writing style), semantic (themes), and syntactic (grammatical) features extracted from blogs.
Abstract: We propose a novel cognitive biometrics modality based on the written language usage of an individual. This is a feasibility study using Internet-scale blogs, with tens of thousands of authors, to create a cognitive fingerprint for an individual. Existing cognitive biometric modalities involve learning from obtrusive sensors placed on the human body. Our modality is instead based on the characteristic pattern of how individuals express their thoughts through written language. The problems of cognitive authentication (1:1 comparison of genuine versus impostor) and identification (1:n search) are formulated. We detail the algorithms to learn a classifier that distinguishes between genuine and impostor classes (for authentication) and among multiple classes (for identification). We conclude that a cognitive fingerprint can be successfully learnt using stylistic (writing style), semantic (themes), and syntactic (grammatical) features extracted from blogs. Our methodology shows promising results, with 79% area under the ROC curve (AUC) for authentication. For identification, individual class accuracies reach up to 90%. We also performed stricter tests to see how the system performs on unseen users, and report accuracies of 72% (genuine) and 71% (impostor). Such a study lays the groundwork for building alternative cognitive systems. The modality presented here is easy to obtain, unobtrusive, and needs no additional hardware.

18 citations
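
The abstract above frames authentication as a binary genuine-versus-impostor classification over linguistic features. The listing does not specify the paper's actual features or classifier, so the sketch below is only illustrative: it stands in character n-gram TF-IDF as a crude stylistic proxy and logistic regression as the binary classifier, trained on a tiny toy corpus.

```python
# Minimal sketch of a text-based authentication (genuine vs. impostor) setup.
# NOTE: the features and classifier here are assumptions for illustration,
# not the method used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy corpus: blog snippets labeled 1 (genuine author) or 0 (impostor).
texts = [
    "I spent the weekend refactoring my old photo blog.",
    "Honestly, the best coffee is the one you brew yourself.",
    "My cat knocked the router off the shelf again today.",
    "Quarterly earnings exceeded analyst expectations this year.",
    "The committee will convene to review the proposed budget.",
    "Shareholders approved the merger after a lengthy debate.",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams act as a rough proxy for writing style
# (punctuation habits, word endings, spacing).
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(texts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)

clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # genuine-class probability
print("AUC on held-out snippets:", roc_auc_score(y_te, scores))
```

Sweeping a threshold over the genuine-class probabilities is what yields an ROC curve and its AUC, the same metric the abstract reports (79%) for authentication; the toy numbers here are of course not comparable.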


Proceedings ArticleDOI
25 Aug 2017
TL;DR: Building on the recent successes of several deep learning techniques, this work proposes the "bilateral adversarial network" and demonstrates its efficacy through quantitative tests on standard benchmark datasets and qualitative tests on large, diverse, complex datasets.
Abstract: Learning generative models of multimedia data such as audio, images and video is a challenging analysis problem because of the infinitely many manifestations of just one concept, and the potentially large number of concepts that can be encountered. Deep learning methods have proven useful for handling such complex, high-dimensional datasets by taking advantage of shared and distributed representations during learning. In this work, we build on the recent successes of several deep learning techniques and propose the "bilateral adversarial network". We demonstrate its efficacy by performing quantitative tests on standard benchmark datasets, and qualitative tests on large, diverse, complex datasets (over two million high-resolution images).

2 citations
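
The abstract does not describe the "bilateral adversarial network" architecture itself, so the following is only a generic generator/discriminator training loop in PyTorch on toy 2-D data, included to make the adversarial-learning setup concrete. The networks G and D, the losses, and the toy data distribution are assumptions for illustration, not details from the paper.

```python
# Generic adversarial-training skeleton (conventional GAN loop), not the
# paper's bilateral architecture.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "real" data: points from a shifted Gaussian blob.
    return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # Discriminator update: push real toward 1, generated toward 0.
    x_real = real_batch()
    x_fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator.
    x_fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("final discriminator loss:", float(loss_d), "generator loss:", float(loss_g))
```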