
Showing papers by "Nalini K. Ratha published in 2022"


Proceedings ArticleDOI
08 Jan 2022
TL;DR: This work explores a class of privacy-preserving machine learning techniques called Fully Homomorphic Encryption for enabling CNN inference on an encrypted real-world dataset, achieving encrypted inference for binary classification on a melanoma dataset using the Cheon-Kim-Kim-Song (CKKS) encryption scheme available in the open-source HElib library.
Abstract: Deep learning models such as Convolutional Neural Networks (CNNs) have shown great potential in various applications. However, these techniques face regulatory compliance challenges related to the privacy of user data, especially when they are deployed as a service on a cloud platform. Such concerns can be mitigated by using privacy-preserving machine learning techniques. The purpose of our work is to explore one class of privacy-preserving machine learning techniques, Fully Homomorphic Encryption (FHE), for enabling CNN inference on an encrypted real-world dataset. Fully homomorphic encryption is limited in computational depth, and its operations are resource intensive. We first run experiments on the MNIST dataset to understand the challenges and identify optimization techniques. We then use these insights to achieve our end goal: encrypted inference for binary classification on a melanoma dataset using the Cheon-Kim-Kim-Song (CKKS) encryption scheme available in the open-source HElib library.
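The computational-depth limitation arises because CKKS ciphertexts support only additions and multiplications, so non-arithmetic activations such as ReLU or sigmoid must be replaced by low-degree polynomials. A minimal plaintext sketch of that substitution (the layer sizes and polynomial coefficients are illustrative, not the paper's model):

```python
def poly_sigmoid(x):
    # Degree-3 least-squares-style fit to sigmoid on a bounded interval;
    # an FHE-friendly stand-in because it uses only + and *.
    return 0.5 + 0.197 * x - 0.004 * x ** 3

def dense_fhe_style(inputs, weights, bias):
    # A dense layer written with additions and multiplications only,
    # mirroring the arithmetic that runs over CKKS ciphertexts.
    out = bias
    for x, w in zip(inputs, weights):
        out = out + x * w
    return out

logit = dense_fhe_style([0.5, -1.0, 2.0], [0.3, 0.2, -0.1], 0.05)
prob = poly_sigmoid(logit)  # approximate class-1 confidence
```

Each ciphertext multiplication consumes one level of the multiplicative budget, which is why the number of stacked non-linear layers, rather than raw parameter count, becomes the binding constraint.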

5 citations


Journal ArticleDOI
01 Jun 2022
TL;DR: The possible robustness connection between natural and artificial adversarial examples is studied; the findings can pave the way for the development of unified resiliency, because defense against one attack is not sufficient for real-world use cases.
Abstract: Although recent deep neural network algorithms have shown tremendous success in several computer vision tasks, their vulnerability to minute adversarial perturbations has raised serious concern. In the early days of crafting these adversarial examples, artificial noises were optimized through the network and added to the images to decrease the classifier's confidence in the true class. However, recent efforts have shown the existence of natural adversarial examples, which can also fool deep neural networks with high confidence. In this paper, for the first time, we raise the question of whether there is any robustness connection between artificial and natural adversarial examples. We study this possible connection by asking whether an adversarial example detector trained on artificial examples can detect natural adversarial examples. We analyze several deep neural networks for the detection of artificial and natural adversarial examples in both seen and unseen settings to establish such a connection. The extensive experimental results reveal several interesting insights for defending deep classifiers against both natural and artificially perturbed examples. We believe these findings can pave the way for the development of unified resiliency, because defense against one attack is not sufficient for real-world use cases.
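The artificial examples discussed above are typically crafted by optimizing noise through the model, as in the classic fast gradient sign method. A toy sketch on a logistic model (weights, inputs, and epsilon are illustrative; a real attack would backpropagate through the full network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    # Gradient of the logistic loss w.r.t. input coordinate i is
    # (p - y) * w_i; FGSM moves each coordinate by eps in that sign.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * (1 if (p - y) * wi > 0 else -1)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0, 0.5], 0.0
x, y = [1.0, 0.5, -0.2], 1  # clean input, correctly classified as class 1
x_adv = fgsm(x, w, b, y, eps=0.6)

p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

A detector trained on such gradient-crafted perturbations sees a very particular noise distribution; the paper's question is whether that detector transfers to natural adversarial examples, which were not produced by any such optimization.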

4 citations


Journal ArticleDOI
10 Oct 2022
TL;DR: RidgeBase is a large-scale real-world contactless fingerprint matching dataset consisting of more than 15,000 contactless and contact-based fingerprint image pairs acquired from 88 individuals under different background and lighting conditions using two smartphone cameras and one flatbed contact sensor.
Abstract: Contactless fingerprint matching using smartphone cameras can alleviate major challenges of traditional fingerprint systems, including hygienic acquisition, portability, and presentation attacks. However, the development of practical and robust contactless fingerprint matching techniques is constrained by the limited availability of large-scale real-world datasets. To motivate further advances in contactless fingerprint matching across sensors, we introduce the RidgeBase benchmark dataset. RidgeBase consists of more than 15,000 contactless and contact-based fingerprint image pairs acquired from 88 individuals under different background and lighting conditions using two smartphone cameras and one flatbed contact sensor. Unlike existing datasets, RidgeBase is designed to promote research under different matching scenarios that include Single Finger Matching and Multi-Finger Matching for both contactless-to-contactless (CL2CL) and contact-to-contactless (C2CL) verification and identification. Furthermore, due to the high intra-sample variance in contactless fingerprints belonging to the same finger, we propose a set-based matching protocol inspired by advances in facial recognition datasets. This protocol is specifically designed for pragmatic contactless fingerprint matching that can account for variances in focus, polarity, and finger angles. We report qualitative and quantitative baseline results for different protocols using a COTS fingerprint matcher (Verifinger) and a deep CNN-based approach on the RidgeBase dataset. The dataset can be downloaded here: https://www.buffalo.edu/cubs/research/datasets/ridgebase-benchmark-dataset.html
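The set-based protocol can be sketched as follows: a probe *set* of contactless captures (varying focus and finger angle) is matched against an enrolled template, and the pairwise scores are fused. The feature vectors and the max-rule fusion here are illustrative assumptions, not the paper's exact protocol:

```python
import math

def cosine(a, b):
    # Cosine similarity between two fixed-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def set_match_score(probe_set, gallery_template):
    # Max-rule fusion: the best capture in the set decides the score,
    # so one well-focused, well-angled sample can carry the match.
    return max(cosine(p, gallery_template) for p in probe_set)

gallery = [0.9, 0.1, 0.4]            # enrolled template (toy features)
probe_set = [[0.2, 0.9, 0.1],        # poor capture (out of focus)
             [0.88, 0.12, 0.38]]     # good capture
score = set_match_score(probe_set, gallery)
```

Max fusion is one of several standard set-to-template rules (mean and top-k fusion are common alternatives); the design goal is the same, tolerating low-quality members of the probe set.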

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an imperceptible backdoor attack that is agnostic to classifiers and trigger patterns; extensive evaluation using multiple databases and networks illustrates the effectiveness of the proposed attack.
Abstract: Traditional backdoor attacks insert a trigger patch into the training images and associate the trigger with the targeted class label. Backdoor attacks are one of the most rapidly evolving attack types and can have a significant impact. Adversarial perturbations, on the other hand, have a significantly different attack mechanism from traditional backdoor corruptions: an imperceptible noise is learned to fool the deep learning models. In this research, we amalgamate these two concepts and propose a novel imperceptible backdoor attack, termed IBAttack, in which adversarial images are associated with the desired target classes. A significant advantage of the proposed adversarial-based backdoor attack is its imperceptibility compared to the traditional trigger-based mechanism. The proposed adversarial dynamic attack, in contrast to existing attacks, is agnostic to classifiers and trigger patterns. Extensive evaluation using multiple databases and networks illustrates the effectiveness of the proposed attack.
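The contrast between the two poisoning styles can be made concrete on toy 1-D "images" (all values illustrative; the real attack computes its perturbation through a network rather than receiving it precomputed):

```python
def poison_with_patch(image, target_label, patch_value=1.0, patch_len=2):
    # Traditional backdoor: stamp a visible trigger patch over part of
    # the image and relabel the sample to the attacker's target class.
    poisoned = image[:-patch_len] + [patch_value] * patch_len
    return poisoned, target_label

def poison_imperceptibly(image, perturbation, target_label, eps=0.03):
    # IBAttack-style poisoning: add a bounded, learned adversarial noise
    # instead of a visible patch, then relabel to the target class.
    clipped = [max(-eps, min(eps, d)) for d in perturbation]
    return [x + d for x, d in zip(image, clipped)], target_label

img = [0.2, 0.5, 0.7, 0.3]
patched, y1 = poison_with_patch(img, target_label=7)
stealth, y2 = poison_imperceptibly(img, [0.1, -0.1, 0.02, 0.0],
                                   target_label=7)
max_change = max(abs(a - b) for a, b in zip(img, stealth))
```

The patched sample is visibly corrupted (two pixels saturated to 1.0), while the imperceptible variant changes no pixel by more than eps, which is the advantage the abstract highlights.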

2 citations


Proceedings ArticleDOI
21 Aug 2022
TL;DR: This paper proposes a multi-task deep learning model with a denoising convolutional skip autoencoder and a classifier to build in robustness against noisy images.
Abstract: The vulnerability of iris recognition algorithms to presentation attacks demands a robust defense mechanism. Much research has been done in the literature to create robust attack detection algorithms; however, most algorithms suffer from limited generalizability, for example in inter-database testing or against unseen attack types. The problem of attack detection is further exacerbated if the images contain noise such as Gaussian or salt-and-pepper noise. In this research, we propose a multi-task deep learning model with a denoising convolutional skip autoencoder and a classifier to build in robustness against noisy images. A Gaussian noise layer is introduced as a dropout between the encoder network's hidden layers, which helps the model learn generalized features that are robust to data noise. The proposed algorithm is evaluated on multiple presentation attack databases; extensive experiments across different noise types and a comparison with other deep learning models show the generalizability and efficacy of the proposed model.
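The "Gaussian noise layer used as a dropout" idea can be sketched as a layer that corrupts hidden activations during training but is an identity pass-through at inference, exactly like dropout's train/eval split (the activations and sigma below are illustrative):

```python
import random

def gaussian_noise_layer(activations, sigma=0.1, training=True, rng=None):
    # Training mode: add zero-mean Gaussian noise to each activation so
    # downstream layers must learn noise-robust features.
    # Inference mode: identity, mirroring dropout's behavior.
    if not training:
        return list(activations)
    rng = rng or random.Random(0)  # seeded here for reproducibility
    return [a + rng.gauss(0.0, sigma) for a in activations]

hidden = [0.5, -0.2, 1.1]                     # encoder hidden activations
train_out = gaussian_noise_layer(hidden, training=True)
eval_out = gaussian_noise_layer(hidden, training=False)
```

Placing such a layer between encoder hidden layers, rather than only corrupting the input, regularizes the internal representation itself, which is what the multi-task model relies on for generalization.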

2 citations


Journal ArticleDOI
15 Aug 2022
TL;DR: This paper proposes a non-interactive end-to-end solution for secure fusion and matching of biometric templates using fully homomorphic encryption (FHE) and introduces an FHE-aware algorithm for learning the linear projection matrix to mitigate errors induced by approximate normalization.
Abstract: This paper proposes a non-interactive end-to-end solution for secure fusion and matching of biometric templates using fully homomorphic encryption (FHE). Given a pair of encrypted feature vectors, we perform the following ciphertext operations: i) feature concatenation, ii) fusion and dimensionality reduction through a learned linear projection, iii) scale normalization to unit ℓ2-norm, and iv) match score computation. Our method, dubbed HEFT (Homomorphically Encrypted Fusion of biometric Templates), is custom-designed to overcome the unique constraint imposed by FHE, namely the lack of support for non-arithmetic operations. From an inference perspective, we systematically explore different data packing schemes for computationally efficient linear projection and introduce a polynomial approximation for scale normalization. From a training perspective, we introduce an FHE-aware algorithm for learning the linear projection matrix to mitigate errors induced by approximate normalization. Experimental evaluation for template fusion and matching of face and voice biometrics shows that HEFT (i) improves biometric verification performance by 11.07% and 9.58% AUROC compared to the respective unibiometric representations while compressing the feature vectors by a factor of 16 (512D to 32D), and (ii) fuses a pair of encrypted feature vectors and computes its match score against a gallery of size 1024 in 884 ms. Code and data are available at https://github.com/humananalysis/encrypted-biometric-fusion
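Step iii) is the tricky one under FHE: unit ℓ2-normalization needs 1/sqrt(·), which ciphertexts cannot compute directly. One standard FHE-friendly approach (not necessarily HEFT's exact polynomial) is Newton iteration for the inverse square root, which uses only additions and multiplications. A plaintext sketch, assuming the squared norm has been pre-scaled so the initial guess lies in Newton's basin of convergence (y0 < sqrt(3/s)):

```python
def inv_sqrt(s, y0=0.1, iters=6):
    # Newton's method for y = 1/sqrt(s): y <- y * (1.5 - 0.5 * s * y * y).
    # Every step is + and * only, so it maps directly onto CKKS ciphertexts,
    # consuming a fixed number of multiplicative levels per iteration.
    y = y0
    for _ in range(iters):
        y = y * (1.5 - 0.5 * s * y * y)
    return y

v = [3.0, 4.0]                        # fused feature vector; ||v||^2 = 25
scale = inv_sqrt(sum(x * x for x in v))   # true value: 1/sqrt(25) = 0.2
unit = [x * scale for x in v]             # approximately [0.6, 0.8]
```

Because this normalization is only approximate, downstream match scores inherit a small error; that is precisely the error the paper's FHE-aware projection-learning algorithm is designed to mitigate.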

1 citation


Journal ArticleDOI
TL;DR: Based on an understanding of image components, this research identifies a new adversarial attack, unseen so far and unsolvable using current defense mechanisms.
Abstract: Adversarial attacks have been demonstrated to fool deep classification networks. These attacks have two key characteristics: first, the perturbations are mostly additive noises carefully crafted from the deep neural network itself; second, the noises are added to the whole image, without considering the image as a combination of the components from which it is made. Motivated by these observations, in this research we first study the role of various image components and their impact on the classification of images. These manipulations do not require knowledge of the network or external noise to function effectively and hence have the potential to be one of the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses the projected gradient descent strategy to add the adversarial perturbation to the manipulated component image. The experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100, and show that the proposed attack achieves better transferability, giving an attacker the upper hand. On the ImageNet database, the success rate of the proposed attack is up to 88.5%, while the current state-of-the-art attack success rate on the database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, namely adversarial training, to measure its strength. The comparison with several challenging attacks shows that: (i) the proposed attack has a higher transferability rate against multiple unseen networks, and (ii) it is hard to mitigate its impact.
Based on this understanding of image components, we claim that the proposed research identifies a new adversarial attack, unseen so far and unsolvable using current defense mechanisms.
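The projected gradient descent (PGD) step mentioned above is iterated FGSM with a projection back into an epsilon-ball after each step. A toy sketch on a logistic model (the component manipulation itself is omitted, and the weights, epsilon, and step size are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pgd(x0, w, b, y, eps=0.8, alpha=0.2, steps=10):
    x = list(x0)
    for _ in range(steps):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        # Ascend the loss: d(loss)/d(x_i) for logistic loss is (p - y) * w_i.
        x = [xi + alpha * (1 if (p - y) * wi > 0 else -1)
             for xi, wi in zip(x, w)]
        # Project back into the l-infinity ball of radius eps around x0.
        x = [min(x0i + eps, max(x0i - eps, xi))
             for x0i, xi in zip(x0, x)]
    return x

w, b = [1.5, -0.8], 0.1
x0, y = [1.0, 0.2], 1                 # clean input, true class 1
x_adv = pgd(x0, w, b, y)

p0 = sigmoid(sum(wi * xi for wi, xi in zip(w, x0)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

In the paper's attack the same loop runs on the *component* image rather than the raw input, which is where the transferability to unseen networks comes from.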

1 citation


DOI
04 Nov 2022
TL;DR: In this paper, the authors designed an end-to-end pipeline for Dorsal Hand Vein (DHV) authentication that includes image enhancement, region of interest (ROI) extraction, and finally deep learning models.
Abstract: The use of biometrics has been one of the most effective solutions for a person's identification and verification. Traditional biometric modalities such as fingerprint, iris, and face recognition have been successfully employed and have shown tremendous success in providing a secure access mechanism. Moreover, the success of deep learning algorithms has shown that automated biometric recognition has the potential to surpass human-level accuracy. Another, relatively unexplored biometric modality, namely Dorsal Hand Vein (DHV) recognition, has recently gained traction in industry and among researchers in academia. In this paper, we design an end-to-end pipeline for DHV biometric authentication that includes image enhancement, region of interest (ROI) extraction, and finally deep learning models for DHV recognition. Three deep learning models, namely a custom convolutional neural network (CNN), a Siamese network, and a Triplet network, are trained on publicly available images of DHV datasets. These models are then used as feature extractors and tested on images of unseen subjects for authentication. We find that the simple CNN model learns a better feature representation than the Triplet network, which in turn outperforms the Siamese network. One potential reason for this behavior is the limited size of the datasets used in training.
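The Triplet network above is trained with a triplet loss: pull an anchor embedding toward a positive (same hand) and push it away from a negative (different hand) by a margin. A minimal sketch (the 2-D embeddings and margin are illustrative):

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two embeddings.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the distance gap: the loss is zero once the negative is
    # at least `margin` farther from the anchor than the positive.
    return max(0.0, sq_dist(anchor, positive)
                    - sq_dist(anchor, negative) + margin)

anchor   = [0.1, 0.9]   # embedding of an anchor DHV image
positive = [0.2, 0.8]   # same subject, different capture
negative = [0.9, 0.1]   # different subject

easy_loss = triplet_loss(anchor, positive, negative)   # already satisfied
hard_loss = triplet_loss(anchor, negative, positive)   # violated triplet
```

With small datasets, most sampled triplets quickly become "easy" (zero loss), starving the network of gradient signal; that data-hunger is consistent with the paper's finding that the simple CNN outperformed the Triplet network here.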

Journal ArticleDOI
TL;DR: Dr. Stephanie Schuckers and Dr. Chang Wen Chen have been named T-BIOM Associate Editors-in-Chief; their addition greatly enhances the depth of the editorial board.
Abstract: It is with great pleasure that we welcome Dr. Stephanie Schuckers and Dr. Chang Wen Chen as T-BIOM Associate Editor-in-Chief. Their addition greatly enhances the depth of our editorial board.