Author

Ramachandra Raghavendra

Other affiliations: Gjøvik University College
Bio: Ramachandra Raghavendra is an academic researcher at the Norwegian University of Science and Technology. The author has contributed to research on topics including biometrics and facial recognition systems, has an h-index of 28, and has co-authored 128 publications receiving 2,502 citations. Previous affiliations of Ramachandra Raghavendra include Gjøvik University College.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: This paper presents a novel approach that explores the variation in focus across the multiple depth images rendered by a light field camera (LFC), which in turn can be used to reveal presentation attacks.
Abstract: The vulnerability of face recognition systems is a growing concern that has drawn interest from both the academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure, or anti-spoofing) schemes, no single PAD technique is superior, owing to the evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective on face presentation attack detection by introducing the light field camera (LFC). Since an LFC records the direction of each incoming ray in addition to its intensity, it exhibits the unique characteristic of rendering multiple depth (or focus) images from a single capture. We therefore present a novel approach that explores the variation in focus across the multiple depth (or focus) images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we first collect a new face artefact database using the LFC, comprising 80 subjects. Face artefacts are generated by simulating two widely used attacks: photo print and electronic screen attacks. Extensive experiments carried out on the light field face artefact database reveal the outstanding performance of the proposed PAD scheme when benchmarked against various well-established state-of-the-art schemes.
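The core cue described above can be sketched in a few lines: for a planar artefact, sharpness barely changes across the refocused images, while a real 3-D face produces a measurable spread. The focus measure (variance of a Laplacian) and the threshold below are illustrative stand-ins, not the statistics used in the paper.

```python
import numpy as np

def focus_measure(img):
    """Sharpness score: variance of a 4-neighbour Laplacian (illustrative)."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def focus_variation(depth_images):
    """Spread of sharpness across the refocused images of one LFC capture."""
    scores = np.array([focus_measure(im) for im in depth_images])
    return float(scores.std())

def is_presentation_attack(depth_images, threshold=0.01):
    # A planar artefact (photo print or screen) yields near-constant focus
    # across refocused images; a real 3-D face does not. Threshold is a
    # placeholder, not the paper's decision rule.
    return focus_variation(depth_images) < threshold
```

In practice the decision would be learned from the artefact database rather than fixed by a hand-set threshold.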

184 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a novel approach leveraging transferable features from pre-trained Deep Convolutional Neural Networks (D-CNNs) to detect both digital and print-scanned morphed face images.
Abstract: Face biometrics is widely used in various applications, including border control, where it facilitates the verification of a traveller's identity claim against his or her electronic passport (ePass). In most countries, passports are issued on the basis of a photo submitted by the citizen, which allows an applicant to provide a morphed face photo to conceal his or her identity during the application process. In this work, we propose a novel approach leveraging transferable features from pre-trained Deep Convolutional Neural Networks (D-CNNs) to detect both digital and print-scanned morphed face images. The proposed approach is based on the feature-level fusion of the first fully connected layers of two D-CNNs (VGG19 and AlexNet) that are specifically fine-tuned using a morphed face image database. The proposed method is extensively evaluated on a newly constructed database containing both digital and print-scanned bona fide and morphed face images, reflecting a real-life scenario. The obtained results consistently demonstrate improved detection performance of the proposed scheme over previously proposed methods on both the digital and the print-scanned morphed face image databases.
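The feature-level fusion step can be illustrated as follows. The stand-in `fc1_features` plays the role of the first fully connected layer of each fine-tuned network; the paper uses activations from VGG19 and AlexNet, whereas the weights and input below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def fc1_features(x, W, b):
    """Stand-in for the first fully connected layer of a fine-tuned D-CNN
    (VGG19 or AlexNet in the paper): affine map followed by ReLU."""
    return np.maximum(x @ W + b, 0.0)

def fused_descriptor(x, net_a, net_b):
    """Feature-level fusion: concatenate the first-FC-layer activations
    of the two networks into one descriptor for the classifier."""
    return np.concatenate([fc1_features(x, *net_a), fc1_features(x, *net_b)])

# Illustrative random weights; in the paper these layers come from VGG19
# and AlexNet fine-tuned on a morphed-face-image database.
net_a = (rng.standard_normal((64, 16)), rng.standard_normal(16))
net_b = (rng.standard_normal((64, 8)), rng.standard_normal(8))
x = rng.standard_normal(64)            # stand-in for an image representation
d = fused_descriptor(x, net_a, net_b)  # 16 + 8 = 24-dimensional descriptor
```

The fused descriptor would then be passed to a classifier trained on bona fide versus morphed samples.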

183 citations

Journal ArticleDOI
TL;DR: A new segmentation scheme is proposed and adapted to smartphone-based visible-spectrum iris images, approximating the radius of the iris to achieve robust segmentation, and a new feature extraction method based on deep sparse filtering is proposed to obtain robust features for unconstrained iris images.
Abstract: The good biometric performance of iris recognition motivates its use in many large-scale security and access control applications. Recent works have identified visible-spectrum iris recognition as a viable option with considerable performance. Key advantages of visible-spectrum iris recognition include the possibility of iris imaging in on-the-move and at-a-distance scenarios, as compared to fixed-range imaging in near-infrared light. Unconstrained iris imaging captures images with largely varying iris and pupil radii. In this work, we propose a new segmentation scheme, adapted to smartphone-based visible-spectrum iris images, that approximates the radius of the iris to achieve robust segmentation. The proposed technique has shown improved segmentation accuracy of up to 85% with the standard OSIRIS v4.1. This work also proposes a new feature extraction method based on deep sparse filtering to obtain robust features for unconstrained iris images. To evaluate the proposed segmentation and feature extraction schemes, we employ a publicly available database and also compose a new iris image database. The newly composed iris image database (VSSIRIS) is acquired using two different smartphones, an iPhone 5S and a Nokia Lumia 1020, under mixed illumination in unconstrained conditions in the visible spectrum. The biometric performance is benchmarked by the equal error rate (EER) obtained from various state-of-the-art schemes and the proposed feature extraction scheme. An impressive EER of 1.62% is obtained on our VSSIRIS database, and an average gain of around 2% in EER over well-known state-of-the-art schemes is obtained on the public database.
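The EER used for benchmarking above can be computed from genuine and impostor score sets with a simple threshold sweep, as sketched below (assuming higher scores mean better matches; a production implementation would interpolate between thresholds).

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the operating point where the false-accept rate
    (impostor scores at or above the threshold) equals the false-reject
    rate (genuine scores below it)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))   # closest FAR/FRR crossing
    return (far[i] + frr[i]) / 2.0
```

Perfectly separated score distributions give an EER of 0, while identical distributions give 0.5.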

175 citations

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This work proposes a novel scheme to detect morphed face images based on facial micro-textures extracted using statistically independent filters that are trained on natural images.
Abstract: The widespread deployment of Automatic Border Control (ABC) together with electronic Machine Readable Travel Documents (eMRTD) for person verification has enabled a prominent use case for face biometrics in border control applications. Many countries issue eMRTD passports on the basis of a printed biometric face photo submitted by the applicant. Some countries offer web portals for passport renewal, where citizens can upload their face photo. These processes allow the photo to be altered to beautify the appearance of the data subject, or to be morphed to conceal the applicant's identity. Specifically, if an eMRTD passport is issued with a morphed facial image, two or more data subjects, likely the known applicant and one or more unknown companions, can use such a passport to pass border control. In this work, we propose a novel scheme to detect morphed face images based on facial micro-textures extracted using statistically independent filters trained on natural images. Given a face image, the proposed method obtains micro-texture variations using Binarized Statistical Image Features (BSIF), and the decision is made using a linear Support Vector Machine (SVM). This is the first work on detecting morphed face images. Extensive experiments carried out on a large-scale database of 450 morphed face images, created from 110 unique subjects of different ethnicities, ages, and genders, indicate the superior performance of the proposed scheme.
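The BSIF-style micro-texture description can be sketched as follows: binarise each filter response at zero, pack the bits into a per-pixel code, and histogram the codes. Real BSIF filters are learnt via ICA on natural image patches; the random filters in the usage below are hypothetical stand-ins, and the resulting histogram is what would be fed to a linear SVM.

```python
import numpy as np

def filter_response(img, filt):
    """Valid 2-D correlation with one filter (stand-in for a BSIF filter,
    which in practice is learnt from natural images)."""
    k = filt.shape[0]
    H, W = img.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * filt)
    return out

def bsif_histogram(img, filters):
    """Binarise each response at zero, pack bits into an integer code per
    pixel, and histogram the codes into a micro-texture descriptor."""
    bits = [(filter_response(img, f) > 0).astype(int) for f in filters]
    codes = sum(b << i for i, b in enumerate(bits))
    return np.bincount(codes.ravel(), minlength=2 ** len(filters))
```

With n filters the descriptor has 2**n bins, one per binary code.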

139 citations

Proceedings ArticleDOI
04 Apr 2017
TL;DR: The vulnerability of biometric systems to morphed face attacks is investigated by evaluating the techniques proposed to detect morphed face images, and two new databases are created to study the vulnerability of state-of-the-art face recognition systems in a comprehensive evaluation.
Abstract: Morphed face images are artificially generated images that blend the facial images of two or more different data subjects into one. The resulting morphed image resembles the constituent faces in both visual and feature representation. If a morphed image is enrolled as a probe in a biometric system, the data subjects contributing to the morphed image will be verified against the enrolled probe. As a result of this infiltration, referred to as a morphed face attack, the unambiguous assignment of data subjects is not warranted, i.e., the unique link between subject and probe is annulled. In this work, we investigate the vulnerability of biometric systems to such morphed face attacks by evaluating the techniques proposed to detect morphed face images. We create two new databases by printing and scanning digitally morphed images using two different types of scanner, a flatbed scanner and a line scanner. Further, the newly created databases are employed to study the vulnerability of state-of-the-art face recognition systems in a comprehensive evaluation.
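The blending at the heart of a morphed image can be sketched minimally. Practical morphing tools also align and warp facial landmarks before averaging; this sketch keeps only the pixel-wise blend that makes the result resemble both contributors.

```python
import numpy as np

def morph(face_a, face_b, alpha=0.5):
    """Pixel-wise blend of two aligned face images. Real morphing pipelines
    additionally warp landmark geometry before blending; this shows only
    the averaging step."""
    return alpha * face_a + (1.0 - alpha) * face_b
```

With alpha = 0.5, the morph is (in Euclidean terms) closer to each original than the originals are to each other, which is exactly what lets it match both subjects.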

118 citations


Cited by
Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, categorized by domain.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

975 citations

Proceedings ArticleDOI
25 Jan 2019
TL;DR: In this paper, the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans, is examined.
Abstract: The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns for the implications towards society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on Deep-Fakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.

917 citations

Journal ArticleDOI
TL;DR: A two-stage learning method for the intelligent diagnosis of machines, inspired by unsupervised feature learning, which uses artificial intelligence techniques to learn features from raw data, reducing the need for human labor and making intelligent fault diagnosis handle big data more easily.
Abstract: Intelligent fault diagnosis is a promising tool for dealing with mechanical big data due to its ability to rapidly and efficiently process collected signals and provide accurate diagnosis results. In traditional intelligent diagnosis methods, however, features are manually extracted based on prior knowledge and diagnostic expertise. Such processes take advantage of human ingenuity but are time-consuming and labor-intensive. Inspired by the idea of unsupervised feature learning, which uses artificial intelligence techniques to learn features from raw data, a two-stage learning method is proposed for the intelligent diagnosis of machines. In the first learning stage, sparse filtering, an unsupervised two-layer neural network, is used to learn features directly from mechanical vibration signals. In the second stage, softmax regression is employed to classify health conditions based on the learned features. The proposed method is validated on a motor bearing dataset and a locomotive bearing dataset. The results show that the proposed method obtains fairly high diagnosis accuracies and is superior to existing methods on the motor bearing dataset. Because it learns features adaptively, the proposed method reduces the need for human labor and makes intelligent fault diagnosis handle big data more easily.
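The two stages can be sketched as follows: a sparse-filtering forward pass produces normalised soft-absolute features, and softmax regression maps feature vectors to class probabilities. The weight matrices here are illustrative placeholders; training would choose W to minimise the L1 norm of the normalised feature matrix.

```python
import numpy as np

def sparse_filtering_features(X, W, eps=1e-8):
    """Forward pass of sparse filtering on data X (features x samples):
    soft-absolute activations, L2-normalised first across samples (per
    feature, rows) and then across features (per sample, columns)."""
    F = np.sqrt((W @ X) ** 2 + eps)                    # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # per-feature rows
    F = F / np.linalg.norm(F, axis=0, keepdims=True)   # per-sample columns
    return F

def softmax(Z):
    """Second stage: softmax over class scores, one column per sample."""
    E = np.exp(Z - Z.max(axis=0, keepdims=True))       # numerically stable
    return E / E.sum(axis=0, keepdims=True)
```

After the final normalisation, every sample's feature vector has unit L2 norm, and every softmax column is a valid probability distribution.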

915 citations

Posted Content
TL;DR: This paper proposes an automated benchmark for facial manipulation detection, and shows that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.
Abstract: The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns for the implications towards society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.

737 citations

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A temporal-aware pipeline to automatically detect deepfake videos is proposed that uses a convolutional neural network to extract frame-level features and a recurrent neural network that learns to classify if a video has been subject to manipulation or not.
Abstract: In recent months, a free machine-learning-based software tool has made it easy to create believable face swaps in videos that leave few traces of manipulation, in what are known as "deepfake" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone, or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify whether a video has been subject to manipulation. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture.
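The CNN-plus-RNN pipeline can be sketched with a minimal recurrent pass over per-frame feature vectors. The Elman-style cell, the logistic readout, and all weights below are illustrative stand-ins for the trained networks described in the paper.

```python
import numpy as np

def rnn_video_score(frame_features, Wx, Wh, w_out, b_out):
    """Minimal Elman-style tanh RNN over per-frame feature vectors
    (stand-ins for CNN features); the final hidden state is read out
    through a logistic unit as an illustrative P(video is manipulated)."""
    h = np.zeros(Wh.shape[0])
    for x in frame_features:           # one feature vector per video frame
        h = np.tanh(Wx @ x + Wh @ h)   # recurrent state update
    logit = float(w_out @ h + b_out)
    return 1.0 / (1.0 + np.exp(-logit))
```

In the actual system, the frame features come from a trained CNN and the recurrent cell is an LSTM-style network trained end to end on labelled videos.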

645 citations