
Showing papers by "Nalini K. Ratha published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A novel "defense layer" in a network which aims to block the generation of adversarial noise and prevents an adversarial attack in black-box and gray-box settings is presented.
Abstract: Several successful adversarial attacks have demonstrated the vulnerabilities of deep learning algorithms. These attacks are detrimental in building deep learning based dependable AI applications. Therefore, it is imperative to build a defense mechanism to protect the integrity of deep learning models. In this paper, we present a novel "defense layer" in a network which aims to block the generation of adversarial noise and prevents an adversarial attack in black-box and gray-box settings. The parameter-free defense layer, when applied to any convolutional network, helps in achieving protection against attacks such as FGSM, L2, Elastic-Net, and DeepFool. Experiments are performed with different CNN architectures, including VGG, ResNet, and DenseNet, on three databases, namely, MNIST, CIFAR-10, and PaSC. The results showcase the efficacy of the proposed defense layer without adding any computational overhead. For example, on the CIFAR-10 database, while the attack can reduce the accuracy of the ResNet-50 model to as low as 6.3%, the proposed "defense layer" retains the original accuracy of 81.32%.

21 citations
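The abstract above names FGSM among the attacks the defense layer blocks. As background only (this is not the paper's defense, and all model parameters below are invented for illustration), a minimal FGSM sketch on a toy logistic "network" looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression 'network'.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM steps eps in its sign direction.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier and a clean input with true label y = 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_adv < p_clean)  # the attack lowers confidence in the true class
```

The defense layer in the paper aims to disrupt exactly this kind of gradient-driven noise generation in black-box and gray-box settings.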


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This research presents a novel scheme termed as Camera Inspired Perturbations to generate adversarial noise, which is model-agnostic and can be utilized to fool multiple deep learning classifiers on various databases.
Abstract: Deep learning solutions are vulnerable to adversarial perturbations and can lead a "frog" image to be misclassified as a "deer" or a random pattern as a "guitar". Adversarial attack generation algorithms generally utilize knowledge of the database and the CNN model to craft the noise. In this research, we present a novel scheme termed Camera Inspired Perturbations to generate adversarial noise. The proposed approach relies on the noise embedded in the image due to environmental factors or the camera sensor. We extract these noise patterns using image filtering algorithms and incorporate them into images to generate adversarial images. Unlike most existing algorithms, which require learning the noise, the proposed adversarial noise can be applied in real-time. It is model-agnostic and can be utilized to fool multiple deep learning classifiers on various databases. The effectiveness of the proposed approach is evaluated on five different databases with five different convolutional neural networks, including ResNet-50, VGG-16, and VGG-Face. The proposed attack reduces the classification accuracy of every network; for instance, the performance of VGG-16 on the Tiny ImageNet database is reduced by more than 33%. The robustness of the proposed adversarial noise is also evaluated against different adversarial defense algorithms.

16 citations
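A minimal sketch of the idea described above, under the assumption that "camera noise" can be approximated by the high-frequency residual left after smoothing; the 3x3 box filter, the strength constant, and the synthetic images are stand-ins, not the paper's actual filters or data:

```python
import numpy as np

def mean_filter3(img):
    """3x3 box blur with edge padding (a stand-in for the paper's image filters)."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
    return out / 9.0

def camera_noise_attack(noisy_source, target, strength=5.0):
    """Lift the high-frequency residual from one image and inject it into another.

    The residual (image minus its filtered version) approximates sensor or
    environmental noise; scaling and adding it yields the adversarial image.
    """
    residual = noisy_source - mean_filter3(noisy_source)
    adv = target + strength * residual
    return np.clip(adv, 0, 255)

rng = np.random.default_rng(0)
noisy = rng.normal(128, 20, size=(8, 8))   # stand-in for a noisy capture
target = np.full((8, 8), 100.0)            # stand-in for a clean image
adv = camera_noise_attack(noisy, target)
print(adv.shape)
```

Because the residual is extracted by filtering rather than learned per model, the same perturbation can be reused across classifiers, which is what makes the attack model-agnostic and real-time.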


Book ChapterDOI
01 Jan 2020
TL;DR: This chapter categorize, illustrate, and analyze different domain adaptation based machine learning algorithms for visual understanding that cater to specific scenarios where the classifiers are updated for inclusivity and generalizability.
Abstract: Advances in visual understanding in the last two decades have been aided by exemplary progress in machine learning and deep learning methods. One of the principal issues of modern classifiers is generalization toward unseen testing data which may have a distribution different to that of the training set. Further, classifiers need to be adapted to scenarios where training data is made available online. Domain adaptation based machine learning algorithms cater to these specific scenarios where the classifiers are updated for inclusivity and generalizability. Such methods need to encompass the covariate shift so that the trained model gives appreciable performance on the testing data. In this chapter, we categorize, illustrate, and analyze different domain adaptation based machine learning algorithms for visual understanding.

8 citations
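The chapter's point about encompassing covariate shift can be illustrated with classic importance weighting, where training points are reweighted by the density ratio p_test(x)/p_train(x). The single-Gaussian density fit below is a simplification chosen for brevity, not a method from the chapter:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def importance_weights(x_train, x_test):
    """Covariate-shift weights w(x) = p_test(x) / p_train(x), with a
    single-Gaussian fit per domain (real methods use richer estimators)."""
    p_tr = gaussian_pdf(x_train, x_train.mean(), x_train.std())
    p_te = gaussian_pdf(x_train, x_test.mean(), x_test.std())
    return p_te / p_tr

rng = np.random.default_rng(1)
x_train = rng.normal(0.0, 1.0, 500)   # source-domain inputs
x_test = rng.normal(1.0, 1.0, 500)    # shifted target-domain inputs

w = importance_weights(x_train, x_test)
# Training points that resemble the target domain are up-weighted:
print(w[x_train > 1].mean() > w[x_train < -1].mean())
```

Training a classifier with these weights in its loss makes the model behave as if it had been fit on the target distribution, which is one standard way the trained model "gives appreciable performance on the testing data".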


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper describes an end-to-end approach to support privacy-enhanced decision tree classification using the IBM supported open-source library HELib and shows that a highly secure and trusted decision tree service can be enabled.
Abstract: In many areas in machine learning, decision trees play a crucial role in classification and regression. When a decision tree based classifier is hosted as a service in a critical application with the need for privacy protection of the service as well as the user data, fully homomorphic encryption (FHE) can be employed. However, a decision node in a decision tree can’t be directly implemented in FHE. In this paper, we describe an end-to-end approach to support privacy-enhanced decision tree classification using the IBM supported open-source library HELib. Using several options for building a decision node, and employing oblivious computations coupled with an argmax function in FHE, we show that a highly secure and trusted decision tree service can be enabled.

6 citations
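The oblivious computations and argmax mentioned above can be sketched in plaintext Python. In real FHE a comparison yields an encrypted 0/1 value, and branching on it is impossible, so every node is evaluated with only additions and multiplications. Here `ge` stands in for an encrypted comparison; this illustrates the branch-free pattern, not HELib code:

```python
def oblivious_select(b, left, right):
    """Branch-free select: returns left if b == 1 else right, using only
    addition and multiplication -- the operations FHE supports natively."""
    return b * left + (1 - b) * right

def oblivious_argmax(scores):
    """Running argmax built from oblivious selects.
    ge(a, b) stands in for an encrypted comparison returning a 0/1 'bit'."""
    ge = lambda a, b: 1 if a >= b else 0   # plaintext stand-in for FHE compare
    best_val, best_idx = scores[0], 0
    for i in range(1, len(scores)):
        bit = ge(scores[i], best_val)
        best_val = oblivious_select(bit, scores[i], best_val)
        best_idx = oblivious_select(bit, i, best_idx)
    return best_idx

def tree_node(x, threshold, left_label, right_label):
    """A depth-1 decision node evaluated without data-dependent branching."""
    bit = 1 if x >= threshold else 0       # encrypted comparison in real FHE
    return oblivious_select(bit, left_label, right_label)

print(tree_node(5, 3, 1, 0))              # -> 1
print(oblivious_argmax([0.1, 0.7, 0.4]))  # -> 1
```

In a full tree, every node is evaluated this way and the results are blended by the selection bits, so the server learns neither the input values nor the path taken.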


Journal ArticleDOI
TL;DR: The articles in this special section provide a glimpse of the diverse research challenges in adopting blockchain technology into mainstream applications, focusing on the following core issues: scalability, transparency versus privacy, standardization, ecosystem, and integration.
Abstract: The articles in this special section provide a glimpse of the diverse research challenges in adopting blockchain technology into mainstream applications. The four articles focus on the following core issues: scalability, transparency versus privacy, standardization, ecosystem, and integration.

3 citations


Posted Content
TL;DR: In this paper, the authors propose a tutorial on trustworthy AI to address six critical issues in enhancing user and public trust in AI systems, namely bias and fairness, explainability, robust mitigation of adversarial attacks, improved privacy and security in model building, being decent, and model attribution, including the right level of credit assignment to the data sources, model architectures and transparency in lineage.
Abstract: Modern AI systems are reaping the advantage of novel learning methods. With their increasing usage, we are realizing the limitations and shortfalls of these systems. Among the most prominent limitations are brittleness to minor adversarial changes in the input data, limited ability to explain decisions, bias in the training data, and high opacity about the lineage of the system: how it was trained and tested, and under which parameters and conditions it can reliably guarantee a certain level of performance. Ensuring the privacy and security of the data, assigning appropriate credit to data sources, and delivering decent outputs are also required features of an AI system. We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems, namely: (i) bias and fairness, (ii) explainability, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, (v) being decent, and (vi) model attribution, including the right level of credit assignment to the data sources, model architectures, and transparency in lineage.

2 citations


Patent
25 Jun 2020
TL;DR: In this article, the source file is segmented into source file segments, and auxiliary data segments corresponding to those source file segments are created; each auxiliary data segment includes a random string of data generated based on its corresponding source file segment.
Abstract: An example operation may include one or more of creating a source file, segmenting the source file into source file segments, creating a number of auxiliary data segments corresponding to source file segments, performing a chameleon hash of the source file segments and the auxiliary data segments, obtaining a source file signature from the chameleon hash, performing a cryptographic hash of the auxiliary data segments, obtaining an auxiliary data signature from the cryptographic hash, and storing the source file and cryptographic signatures to a shared ledger of a blockchain network. Each auxiliary data segment includes a random string of data that is generated based on a corresponding source file segment.

2 citations
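The chameleon hash at the core of this patent admits a compact toy illustration. Holders of a trapdoor key can find a second (message, randomness) pair with the same hash, which is what later enables redaction without breaking stored signatures. The tiny discrete-log parameters and variable names below are for exposition only, not the patent's construction:

```python
# Toy chameleon hash (discrete-log based), with tiny parameters for clarity.
p, q, g = 23, 11, 2          # p = 2q + 1; g generates the order-q subgroup
x = 7                        # trapdoor key (kept secret)
y = pow(g, x, p)             # public key

def chameleon_hash(m, r):
    """H(m, r) = g^m * y^r mod p.  Collision-resistant without the trapdoor."""
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def find_collision(m, r, m_new):
    """With the trapdoor x, pick r_new so H(m_new, r_new) == H(m, r):
    solve m + x*r = m_new + x*r_new (mod q)."""
    x_inv = pow(x, -1, q)
    return (r + (m - m_new) * x_inv) % q

m, r = 4, 9                  # a "source file segment" and its random string
m_new = 6                    # a modified segment
r_new = find_collision(m, r, m_new)
print(chameleon_hash(m, r) == chameleon_hash(m_new, r_new))  # -> True
```

In the patent's setting, the auxiliary data segments play the role of the randomness `r`, so a segment can later be changed while the on-ledger signature over the chameleon hash stays valid.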


Patent
16 Jan 2020
TL;DR: In this article, a computer system receives a set of data encrypted by a homomorphic encryption transformation, and the computer system performs machine learning operations using the encrypted data set, which is stored for use for performing inferencing of other encrypted data to determine a corresponding output of the trained model.
Abstract: A computer system receives a set of data encrypted by a homomorphic encryption transformation. The computer system performs machine learning operations using the encrypted set of data. The machine learning operations build, using homomorphic operations, a trained model of the data having a mapping between the encrypted data and output of the trained model. The model is stored for use for performing inferencing of other encrypted data to determine a corresponding output of the trained model. The computer system may perform inferencing of the other encrypted data at least by accessing the stored trained model and predicting by using the trained model a label in an encrypted format that corresponds to the other encrypted data. The computer system may send the label toward the client for the client to decrypt the label.

1 citation
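A toy additively homomorphic scheme (Paillier-style, with deliberately tiny primes) can illustrate inference over encrypted features as described above. Real deployments use full-sized parameters and richer FHE schemes; nothing below is the patent's actual construction:

```python
import math, random

# Toy Paillier cryptosystem (tiny primes for illustration only).
p_, q_ = 11, 13
n = p_ * q_
n2 = n * n
lam = math.lcm(p_ - 1, q_ - 1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Linear "model" applied to encrypted features: the server never sees x.
weights = [2, 3]                         # plaintext model parameters
x = [4, 5]                               # client's private features
cts = [encrypt(v) for v in x]            # client encrypts and uploads

enc_score = 1
for w, c in zip(weights, cts):
    enc_score = (enc_score * pow(c, w, n2)) % n2  # homomorphic w*x accumulate

print(decrypt(enc_score))  # -> 23  (= 2*4 + 3*5)
```

Multiplying ciphertexts adds the underlying plaintexts, and exponentiating by a constant scales them, so a linear score is computable entirely on encrypted data; only the client, holding the key, can decrypt the resulting label or score.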


Patent
25 Jun 2020
TL;DR: In this paper, a file redaction device determines redacted segments of a source file, and a signature update device uses the redacted source file segments, a stored trapdoor key, and stored auxiliary data segments to determine modified auxiliary data and store an updated signature on a blockchain ledger.
Abstract: An example operation may include one or more of determining, by a file redaction device, redacted segments of a source file, receiving, by a signature update device, the redacted source file segments, a stored trapdoor key, and stored auxiliary data segments, determining modified auxiliary data from the redacted source file segments, the trapdoor key and the auxiliary data segments, executing chaincode to obtain a modified auxiliary data signature and identifiers of the redacted source file segments, and storing the modified auxiliary data signature and identifiers of the redacted source file segments to a shared ledger of a blockchain network. Each stored auxiliary data segment includes a random string of data corresponding to a segment of the source file.

1 citation


Patent
10 Sep 2020
TL;DR: In this paper, a multi-feature multi-matcher fusion (MMF) predictor is proposed for scoring pairs of images, together with a system, method, and program product for implementing image recognition.
Abstract: A system, method and program product for implementing image recognition. A system is disclosed that includes a training system for generating a multi-feature multi-matcher fusion (MMF) predictor for scoring pairs of images, the training system having: a neural network configurable to extract a set of feature spaces at different resolutions based on a training dataset; and an optimizer that processes the training dataset, extracted feature spaces and a set of matcher functions to generate the MMF predictor having a series of weighted feature/matcher components; and a prediction system that utilizes the MMF predictor to generate a prediction score indicative of a match for a pair of images.
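The "series of weighted feature/matcher components" can be sketched as a plain weighted sum of matcher outputs, one per feature space. The two matchers, feature spaces, and weights below are invented stand-ins for what the patent's optimizer would learn:

```python
import numpy as np

def cosine(a, b):
    """Cosine-similarity matcher."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_match(a, b):
    """Distance-based matcher mapped to (0, 1]."""
    return float(1.0 / (1.0 + np.linalg.norm(a - b)))

def mmf_score(feats_a, feats_b, components):
    """Multi-feature multi-matcher fusion: a weighted sum of matcher outputs,
    each applied to one feature space (weights come from the optimizer)."""
    return sum(w * matcher(feats_a[k], feats_b[k])
               for k, matcher, w in components)

# Two "images", each with features at two resolutions (stand-ins for the
# neural network's extracted feature spaces).
img1 = {"coarse": np.array([1.0, 0.0]), "fine": np.array([0.9, 0.1, 0.0])}
img2 = {"coarse": np.array([0.9, 0.1]), "fine": np.array([1.0, 0.0, 0.1])}
img3 = {"coarse": np.array([0.0, 1.0]), "fine": np.array([0.0, 1.0, 0.9])}

components = [("coarse", cosine, 0.6), ("fine", l2_match, 0.4)]
same = mmf_score(img1, img2, components)
diff = mmf_score(img1, img3, components)
print(same > diff)  # the similar pair scores higher than the dissimilar one
```

The training system's job, per the abstract, is to choose which (feature space, matcher) pairs enter this sum and with what weights, so that the fused score separates matching from non-matching image pairs.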

Patent
29 Dec 2020
TL;DR: In this article, an inherent supervision summarization device is used to collect group-level supervision and instance level supervision within a same chunklet based on a user input of face images for a person.
Abstract: A face clustering system for video face clustering in a video sequence, the system including an inherent supervision summarization device configured to collect group-level supervision and instance level supervision within a same chunklet based on a user input of face images for a person, a discriminative projection learning device configured to embed group constraints of the group-level supervision into a transformed space, and configured to generate an embedding space from the original image feature space, and a clustering device, in the embedding space, configured to execute pair-wise based clustering to cluster the video images into different clusters with the instance level supervision collected by the inherent supervision summarization device.

Posted Content
TL;DR: In this paper, the authors model a trained biometric recognition system in an architecture which leverages the blockchain technology to provide fault tolerant access in a distributed environment, where tampering in one particular component alerts the whole system and helps in easy identification of ''any'' possible alteration.
Abstract: Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert traditional blocks of any deep learning models into secure systems. In this research, we model a trained biometric recognition system in an architecture which leverages the blockchain technology to provide fault tolerant access in a distributed environment. The advantage of the proposed approach is that tampering in one particular component alerts the whole system and helps in easy identification of `any' possible alteration. Experimentally, with different biometric modalities, we have shown that the proposed approach provides security to both deep learning model and the biometric template.
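One way to picture how tampering with one component "alerts the whole system" is a hash chain over model components, where each block commits to the previous one. This sketch uses plain SHA-256 and invented component names; it is an illustration of the tamper-evidence idea, not the paper's actual blockchain architecture:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_chain(components):
    """Chain each component's hash to the previous block, so altering any
    one component invalidates every later block (easy tamper localization)."""
    chain, prev = [], "0" * 64
    for name, blob in components:
        digest = h(prev.encode() + blob)
        chain.append((name, digest))
        prev = digest
    return chain

def verify(components, chain):
    return chain == build_chain(components)

# Stand-ins for serialized model weights and a biometric template store.
model = [("conv_layers", b"weights-part-1"),
         ("fc_layers", b"weights-part-2"),
         ("template_db", b"enrolled-biometric-templates")]

ledger = build_chain(model)           # stored on the distributed ledger
print(verify(model, ledger))          # -> True

tampered = [("conv_layers", b"weights-part-1"),
            ("fc_layers", b"POISONED"),
            ("template_db", b"enrolled-biometric-templates")]
print(verify(tampered, ledger))       # -> False
```

Because every block depends on its predecessor, the first mismatching digest pinpoints which component was altered, covering both the deep learning model and the biometric templates.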