Proceedings ArticleDOI

Securing CNN Model and Biometric Template using Blockchain

01 Sep 2019, pp. 1-7



Citations
Journal ArticleDOI


TL;DR: A number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs) are tested, showing that DL models that do not consider defensive models against adversarial perturbations remain vulnerable to adversarial attacks.
Abstract: Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been successfully used by researchers to identify COVID-19 phenomena from raw data obtained from medical IoT devices. Some examples of IoT technology are radiological media, such as CT scanning and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several vulnerabilities of DL algorithms to adversarial perturbations. In this article, we have tested a number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs). Our test results show that DL models that do not consider defensive measures against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, the implementation of the attack model, and the perturbations of the existing DL-based COVID-19 diagnostic applications. We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems.
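
The article's own attack code is not reproduced on this page; purely as an illustration, the sketch below shows the widely used fast gradient sign method (FGSM) for crafting an adversarial example against an image classifier. The model, input shape, and epsilon value are placeholders, not the authors' actual COVID-19 diagnostic pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft one FGSM adversarial example (illustrative sketch only).

    model   : any differentiable torch.nn.Module classifier (placeholder)
    image   : tensor of shape (1, C, H, W) with values in [0, 1]
    label   : ground-truth class index, LongTensor of shape (1,)
    epsilon : per-pixel perturbation budget
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the gradient sign and clip back to the valid range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```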

39 citations


Cites background from "Securing CNN Model and Biometric Template using Blockchain"


Journal ArticleDOI


03 Apr 2020
TL;DR: Different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working are summarized.
Abstract: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs, have been discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some of the future research directions for increasing the robustness of face recognition models.

28 citations


Cites background or methods from "Securing CNN Model and Biometric Template using Blockchain"


Journal ArticleDOI


TL;DR: This article proposes a non-deep learning approach that searches over a set of well-known image transforms such as the Discrete Wavelet Transform and Discrete Sine Transform and classifies the features with a support vector machine-based classifier; the approach efficiently generalizes across databases as well as different unseen attacks and combinations of both.
Abstract: Deep learning algorithms provide state-of-the-art results on a multitude of applications. However, it is also well established that they are highly vulnerable to adversarial perturbations. It is often believed that the solution to this vulnerability of deep learning systems must come from deep networks only. Contrary to this common understanding, in this article we propose a non-deep learning approach that searches over a set of well-known image transforms such as the Discrete Wavelet Transform and Discrete Sine Transform, and classifies the features with a support vector machine-based classifier. Existing deep network-based defenses have proven ineffective against sophisticated adversaries, whereas an image transformation-based solution makes for a strong defense because of its non-differentiable nature, multiscale representation, and orientation filtering. The proposed approach, which combines the outputs of two transforms, efficiently generalizes across databases as well as different unseen attacks and combinations of both (i.e., cross-database and unseen noise-generation CNN models). The proposed algorithm is evaluated on large-scale databases, including an object database (the validation set of ImageNet) and a face recognition database (MBGC). The proposed detection algorithm yields at least 84.2% and 80.1% detection accuracy under seen and unseen database test settings, respectively. Besides, we also show how the impact of the adversarial perturbation can be neutralized using a wavelet decomposition-based denoising filter. The mitigation results with different perturbation methods on several image databases demonstrate the effectiveness of the proposed method.
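
As a rough illustration of the detection idea (not the authors' implementation), the sketch below computes 2-D discrete wavelet features with PyWavelets and trains an SVM to separate clean from adversarial images; the transform search, score fusion, and database protocols described in the abstract are omitted, and the wavelet, level, and image lists are placeholder choices.

```python
import numpy as np
import pywt                      # PyWavelets
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar", level=2):
    """Flatten multi-level 2-D DWT coefficients of a grayscale image into a vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]                    # approximation coefficients
    for cH, cV, cD in coeffs[1:]:                  # detail coefficients per level
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

def train_detector(clean_imgs, adv_imgs):
    """Fit an SVM to separate clean from adversarial images (same-size 2-D arrays)."""
    X = np.stack([dwt_features(im) for im in list(clean_imgs) + list(adv_imgs)])
    y = np.array([0] * len(clean_imgs) + [1] * len(adv_imgs))   # 1 = adversarial
    return SVC(kernel="rbf").fit(X, y)
```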

13 citations

Proceedings ArticleDOI


14 Jun 2020
TL;DR: This research addresses the question of whether imperceptible gradient noise can be generated to fool deep neural networks, and shows that the without-sign function, i.e. the gradient magnitude, not only leads to a successful attack mechanism but also keeps the noise imperceptible to the human observer.
Abstract: State-of-the-art deep learning models have achieved superlative performance across multiple computer vision applications such as object recognition, face recognition, and digit/character classification. Most of these models rely heavily on the gradient information that flows through the network during learning. By utilizing this gradient information, a simple gradient sign method based attack is developed to fool the deep learning models. However, the primary concern with this attack is the perceptibility of the noise needed for a large degradation in classification accuracy. This research addresses the question of whether imperceptible gradient noise can be generated to fool deep neural networks. For this, the role of the sign function in the gradient attack is analyzed. The analysis shows that the without-sign function, i.e. the gradient magnitude, not only leads to a successful attack mechanism but also produces noise that is imperceptible to the human observer. Extensive quantitative experiments performed using two convolutional neural networks validate the above observation. For instance, the AlexNet architecture yields 63.54% accuracy on the CIFAR-10 database, which reduces to 0.0% and 26.39% when the sign (i.e., perceptible) and the without-sign (i.e., imperceptible) versions of the gradient are utilized, respectively. Further, the role of the direction of the gradient for image manipulation is studied. When an image is manipulated in the positive direction of the gradient, an adversarial image is generated. On the other hand, if the opposite direction of the gradient is utilized for image manipulation, it is observed that the classification error rate of the CNN model is reduced. On AlexNet, the error rate of 36.46% reduces to 4.29% when images of CIFAR-10 are manipulated in the negative direction of the gradient. Further results on multiple object databases, including CIFAR-100, Fashion-MNIST, and SVHN, are reported in the full paper.
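
The following sketch illustrates, under placeholder model and scaling assumptions, the three gradient manipulations the abstract contrasts: the sign of the gradient, the (rescaled) gradient magnitude, and a step in the negative gradient direction. It is not the authors' exact experimental code.

```python
import torch
import torch.nn.functional as F

def gradient_perturbations(model, image, label, epsilon=0.01):
    """Contrast sign-based and magnitude-based gradient perturbations (sketch only)."""
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    grad = image.grad

    # Sign of the gradient: uniform +/- epsilon step (perceptible noise).
    adv_sign = (image + epsilon * grad.sign()).clamp(0, 1)

    # Gradient magnitude rescaled to the same budget (reported as imperceptible).
    adv_mag = (image + epsilon * grad / grad.abs().max()).clamp(0, 1)

    # Negative gradient direction: moves the image toward lower loss, which the
    # abstract reports can reduce the CNN's classification error.
    corrected = (image - epsilon * grad / grad.abs().max()).clamp(0, 1)

    return adv_sign.detach(), adv_mag.detach(), corrected.detach()
```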

8 citations


Cites background from "Securing CNN Model and Biometric Template using Blockchain"


Posted Content


22 Jul 2020
TL;DR: This article presents a comprehensive survey on adversarial attacks against FR systems, elaborates on the competence of new countermeasures against them, and proposes a taxonomy of existing attack and defense strategies according to different criteria.
Abstract: Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting suitability for real-world applications, ranging from photo tagging in social media to automated border control (ABC). In an advanced FR system with deep learning-based architecture, however, promoting the recognition efficiency alone is not sufficient and the system should also withstand potential kinds of attacks designed to target its proficiency. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images that drive the model to incorrect output predictions. In this article, we present a comprehensive survey on adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense strategies according to different criteria. Finally, we compare the presented approaches according to techniques' characteristics.

8 citations


Cites background or methods from "Securing CNN Model and Biometric Template using Blockchain"



References
Journal ArticleDOI


TL;DR: This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
Abstract: In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
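
For reference, a minimal Python sketch of the (k, n) threshold scheme this paper introduces is shown below; the prime modulus, secret value, and share indices are arbitrary illustrative choices.

```python
import random

PRIME = 2**127 - 1                      # field modulus; must exceed the secret

def split_secret(secret, n, k):
    """Split `secret` into n shares so that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret from k shares by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789     # any 3 of the 5 shares suffice
```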

12,938 citations


"Securing CNN Model and Biometric Te..." refers methods in this paper


Proceedings ArticleDOI


01 Jan 2015
TL;DR: It is shown how a very large-scale dataset can be assembled by a combination of automation and a human in the loop, and the trade-off between data purity and time is discussed.
Abstract: The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end-to-end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large-scale training datasets. We make two contributions: first, we show how a very large-scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and a human in the loop, and discuss the trade-off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state-of-the-art results on the standard LFW and YTF face benchmarks.

4,347 citations

Proceedings ArticleDOI


22 May 2017
TL;DR: In this paper, the authors demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability.
Abstract: Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks at finding adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
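
As a simplified illustration of the optimization-based attacks introduced in this paper, the sketch below minimizes an L2 distortion plus a margin term toward a target class; it omits the tanh change of variables and the binary search over the trade-off constant used in the actual method, and the model, constant c, and step counts are placeholder assumptions.

```python
import torch

def cw_l2_attack(model, image, target, c=1.0, steps=200, lr=0.01):
    """Simplified Carlini-Wagner-style L2 targeted attack (sketch only)."""
    image = image.detach()
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    num_classes = model(image).shape[1]
    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)
        logits = model(adv)
        target_logit = logits[0, target]
        other_logit = logits[0, torch.arange(num_classes) != target].max()
        # Distortion term keeps x' close to x; margin term pushes it toward class t.
        loss = (delta ** 2).sum() + c * torch.clamp(other_logit - target_logit, min=0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + delta).clamp(0, 1).detach()
```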

3,972 citations

Journal ArticleDOI


TL;DR: The inherent strengths of biometrics-based authentication are outlined, the weak links in systems employing biometric authentication are identified, and new solutions for eliminating these weak links are presented.
Abstract: Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as e-commerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.

1,601 citations


"Securing CNN Model and Biometric Te..." refers background in this paper


Journal ArticleDOI


TL;DR: This paper introduces the database, describes the recording procedure, and presents results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
Abstract: A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and only a few expressions captured. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.

1,203 citations