Proceedings ArticleDOI

Securing CNN Model and Biometric Template using Blockchain

01 Sep 2019 - pp 1-7
TL;DR: This research models a trained biometric recognition system in an architecture that leverages blockchain technology to provide fault-tolerant access in a distributed environment, and shows that the proposed approach secures both the deep learning model and the biometric template.
Abstract: Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert the traditional building blocks of any deep learning model into a secure system. In this research, we model a trained biometric recognition system in an architecture which leverages blockchain technology to provide fault-tolerant access in a distributed environment. The advantage of the proposed approach is that tampering with any one component alerts the whole system and helps in easy identification of ‘any’ possible alteration. Experimentally, with different biometric modalities, we show that the proposed approach provides security to both the deep learning model and the biometric template.
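To make the layered-blocks idea concrete, here is a minimal Python sketch (an illustration under assumptions, not the authors' exact scheme) in which each layer's weights become a block whose hash chains to the previous one, so tampering with any layer invalidates verification of every later block:

```python
# Minimal sketch: hash-chain the layers of a trained model so that
# altering any layer's weights breaks verification downstream.
# Illustrative only; the paper's actual block structure may differ.
import hashlib
import numpy as np

def block_hash(weights: np.ndarray, prev_hash: str) -> str:
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(weights.tobytes())
    return h.hexdigest()

def chain_model(layers: list) -> list:
    """Build the hash chain over all layer weight arrays."""
    hashes, prev = [], "genesis"
    for w in layers:
        prev = block_hash(w, prev)
        hashes.append(prev)
    return hashes

def verify_model(layers: list, hashes: list) -> bool:
    """Recompute the chain; any mismatch signals tampering."""
    prev = "genesis"
    for w, expected in zip(layers, hashes):
        prev = block_hash(w, prev)
        if prev != expected:
            return False
    return True

layers = [np.random.rand(4, 4) for _ in range(3)]
ledger = chain_model(layers)
layers[1][0, 0] += 1e-3          # simulate tampering with one layer
assert not verify_model(layers, ledger)
```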
Citations
Journal ArticleDOI
TL;DR: A number of COVID-19 diagnostic methods that rely on DL algorithms are tested against relevant adversarial examples (AEs), showing that DL models deployed without defenses against adversarial perturbations remain vulnerable to adversarial attacks.
Abstract: Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been successfully used by researchers to identify COVID-19 phenomena from raw data obtained from medical IoT devices. Some examples of IoT technology are radiological media, such as CT scanning and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several vulnerabilities of DL algorithms to adversarial perturbations. In this article, we have tested a number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs). Our test results show that DL models that do not consider defenses against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, the implementation of the attack model, and the perturbations of the existing DL-based COVID-19 diagnostic applications. We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems.
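The abstract mentions an AE generation process without detail; a common baseline is the fast gradient sign method (FGSM), sketched below in PyTorch. The `model` here is an assumed generic differentiable classifier; the article's actual attack algorithms may differ:

```python
# Hedged FGSM sketch: perturb the input in the direction that most
# increases the loss, bounded by eps per pixel.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """x: input batch in [0, 1]; y: true labels."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()   # one signed-gradient step
    return x_adv.clamp(0, 1).detach() # keep pixels in valid range
```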

126 citations


Cites background from "Securing CNN Model and Biometric Template using Blockchain"

  • ...[12] to stop the alteration of input data, feature vectors, model attributes, classifiers, and the final...

Journal ArticleDOI
03 Apr 2020
TL;DR: Different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working are summarized.
Abstract: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs, have been discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some of the future research directions for increasing the robustness of face recognition models.

53 citations


Cites background or methods from "Securing CNN Model and Biometric Template using Blockchain"

  • ...Recently, Goel et al. (2019) presented one of the best security mechanisms, namely blockchain, to protect against attacks on face recognition....

  • ...hm based on the data distribution. Further, Goel et al. (2018) developed the first benchmark toolbox of algorithms for adversarial generation, detection, and mitigation for face recognition. Recently, Goel et al. (2019) presented one of the best security mechanisms, namely blockchain, to protect against attacks on face recognition. Layers of CNN are converted into blocks similar to blocks in the blockchain. Each block...

Journal ArticleDOI
TL;DR: This article proposes a non-deep-learning approach that searches over a set of well-known image transforms, such as the Discrete Wavelet Transform and the Discrete Sine Transform, and classifies the features with a support vector machine-based classifier; the approach efficiently generalizes across databases as well as different unseen attacks and combinations of both.
Abstract: Deep learning algorithms provide state-of-the-art results on a multitude of applications. However, it is also well established that they are highly vulnerable to adversarial perturbations. It is often believed that the solution to this vulnerability of deep learning systems must come from deep networks only. Contrary to this common understanding, in this article, we propose a non-deep-learning approach that searches over a set of well-known image transforms, such as the Discrete Wavelet Transform and the Discrete Sine Transform, and classifies the features with a support vector machine-based classifier. Existing deep network-based defenses have been proven ineffective against sophisticated adversaries, whereas image transformation-based solutions make a strong defense because of their non-differentiable nature and their multiscale and orientation filtering. The proposed approach, which combines the outputs of two transforms, efficiently generalizes across databases as well as different unseen attacks and combinations of both (i.e., cross-database and unseen noise-generation CNN model). The proposed algorithm is evaluated on large-scale databases, including an object database (the validation set of ImageNet) and a face recognition database (MBGC). The proposed detection algorithm yields at least 84.2% and 80.1% detection accuracy under seen and unseen database test settings, respectively. Besides, we also show how the impact of the adversarial perturbation can be neutralized using a wavelet decomposition-based denoising filter. The mitigation results with different perturbation methods on several image databases demonstrate the effectiveness of the proposed method.
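As an illustration of the transform-then-classify idea, the sketch below extracts simple DWT and DST statistics and feeds them to an SVM. The specific features and the random placeholder data are assumptions for illustration, not the paper's exact pipeline:

```python
# Hedged sketch: detect adversarial images from image-transform statistics.
import numpy as np
import pywt                      # PyWavelets, for the DWT
from scipy.fft import dst        # Discrete Sine Transform
from sklearn.svm import SVC

def transform_features(img: np.ndarray) -> np.ndarray:
    """Concatenate coarse DWT and DST statistics of a grayscale image."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    dwt_stats = [c.mean() for c in (cA, cH, cV, cD)] + \
                [c.std() for c in (cA, cH, cV, cD)]
    d = dst(img, axis=0)
    dst_stats = [d.mean(), d.std(), np.abs(d).max()]
    return np.array(dwt_stats + dst_stats)

# Placeholder training data: label 0 = clean, 1 = adversarial.
imgs = [np.random.rand(32, 32) for _ in range(100)]
labels = np.random.randint(0, 2, size=100)
clf = SVC(kernel="rbf").fit([transform_features(i) for i in imgs], labels)
```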

35 citations

Journal ArticleDOI
TL;DR: This work identifies the emergence of centrality in three layers of Blockchain-based systems, namely the governance layer, the network layer, and the storage layer, and quantifies decentrality in these layers using various metrics.
Abstract: Blockchain promises to provide a distributed and decentralized means of trust among untrusted users. However, in recent years, a shift from decentrality to centrality has been observed in the most accepted Blockchain system, i.e., Bitcoin. This shift has motivated researchers to identify the cause of decentrality, quantify decentrality and analyze the impact of decentrality. In this work, we take a holistic approach to identify and quantify decentrality in Blockchain-based systems. First, we identify the emergence of centrality in three layers of Blockchain-based systems, namely the governance layer, the network layer and the storage layer. Then, we quantify decentrality in these layers using various metrics. At the governance layer, we measure decentrality in terms of fairness, entropy, Gini coefficient, Kullback–Leibler divergence, etc. Similarly, in the network layer, we measure decentrality by using degree centrality, betweenness centrality and closeness centrality. At the storage layer, we apply a distribution index to define centrality. Subsequently, we evaluate the decentrality in Bitcoin and Ethereum networks and discuss our observations. We noticed that, with time, both Bitcoin and Ethereum networks tend to behave like centralized systems where a few nodes govern the whole network.
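For concreteness, here is a small sketch of two governance-layer metrics named in the abstract, the Gini coefficient and Shannon entropy, computed over a hypothetical distribution of blocks mined per pool (the numbers are made up for illustration):

```python
# Gini coefficient and Shannon entropy over miners' block shares.
import numpy as np

def gini(shares: np.ndarray) -> float:
    """0 = perfectly equal (decentralized); near 1 = one node dominates."""
    s = np.sort(shares.astype(float))
    n = len(s)
    cum = np.cumsum(s)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

def entropy(shares: np.ndarray) -> float:
    """Higher entropy = block production spread more evenly."""
    p = shares / shares.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

blocks = np.array([310, 180, 95, 40, 25, 10])  # hypothetical pool counts
print(gini(blocks), entropy(blocks))
```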

32 citations


Cites methods from "Securing CNN Model and Biometric Template using Blockchain"

  • ...We can also use Cosine Similarity to define decentrality [59], [60]....

Journal ArticleDOI
20 Jan 2021
TL;DR: In this paper, the current state of privacy preservation utilising blockchain and smart contracts, as applied to a number of fields and problem domains, is outlined, and future directions of research in areas combining future technologies, privacy preservation, and blockchain are proposed.
Abstract: Blockchain and smart contracts have seen significant application over the last decade, revolutionising many industries, including cryptocurrency, finance and banking, and supply chain management. In many cases, however, the transparency provided potentially comes at the cost of privacy. Blockchain does have potential uses to increase privacy preservation. This paper outlines the current state of privacy preservation utilising Blockchain and Smart Contracts, as applied to a number of fields and problem domains. It provides a background of blockchain, outlines the challenges in blockchain as they relate to privacy, and then classifies the areas in which this paradigm can be applied to increase or protect privacy. These areas are cryptocurrency, data management and storage, e-voting, the Internet of Things, and smart agriculture. This work then proposes PPSAF, a new privacy-preserving framework designed explicitly for the issues that are present in smart agriculture. Finally, this work outlines future directions of research in areas combining future technologies, privacy preservation and blockchain.

22 citations

References
Journal ArticleDOI
TL;DR: This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
Abstract: In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
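A minimal Python sketch of the (k, n) threshold scheme the abstract describes, over a prime field; this is illustrative only (the paper itself relies on an existing Python implementation for key sharding), and a vetted library should be used in practice:

```python
# (k, n) Shamir secret sharing: the secret is the constant term of a random
# degree-(k-1) polynomial; any k evaluations reconstruct it by interpolation.
import random

PRIME = 2**127 - 1  # a Mersenne prime; must exceed the secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME)
                         for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 suffice
```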

14,340 citations


"Securing CNN Model and Biometric Te..." refers methods in this paper

  • ...We use the python implementation of Shamir’s secret sharing to implement key sharding....

  • ...The root divides the private key of the pair into 2n + 1 shards using Shamir’s [22] secret sharing principle. A total of n + 2 shards are required to reconstruct the key....

  • ...In a distributed framework, the time needed by the template matcher will be: time to delegate to the leaves + time to conduct a template match + time to compare n values + time taken to carry out Shamir’s secret (x) + time taken to compare c values (n: number of leaf nodes, c: number of chief nodes)....

Proceedings ArticleDOI
22 May 2017
TL;DR: In this paper, the authors demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability.
Abstract: Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
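The attacks are formulated as optimization problems over the perturbation; below is a simplified PyTorch sketch in the spirit of the paper's L2 attack, combining an L2 penalty with a margin loss toward the target class. It omits details of the paper's actual method (e.g., the change of variables for box constraints and the search over the trade-off constant c), so treat it as an assumption-laden illustration:

```python
# Simplified targeted L2 attack sketch: minimize ||delta||_2^2 plus a
# margin term that pushes the target logit above all other logits.
import torch
import torch.nn.functional as F

def targeted_l2_attack(model, x, target, c=1.0, steps=200, lr=0.01):
    """x: batch in [0, 1]; target: tensor of desired class indices."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0, 1))
        mask = F.one_hot(target, logits.size(1)).bool()
        best_other = logits.masked_fill(mask, -1e9).max(dim=1).values
        target_logit = logits.gather(1, target[:, None]).squeeze(1)
        margin = (best_other - target_logit).clamp(min=0)
        loss = delta.pow(2).sum() + c * margin.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()
```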

6,528 citations

Proceedings ArticleDOI
01 Jan 2015
TL;DR: It is shown how a very large-scale dataset can be assembled by a combination of automation and a human in the loop, and the trade-off between data purity and time is discussed.
Abstract: The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end-to-end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large-scale training datasets. We make two contributions: first, we show how a very large-scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and a human in the loop, and discuss the trade-off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state-of-the-art results on the standard LFW and YTF face benchmarks.

5,308 citations

Journal ArticleDOI
TL;DR: The inherent strengths of biometrics-based authentication are outlined, the weak links in systems employing biometric authentication are identified, and new solutions for eliminating these weak links are presented.
Abstract: Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as e-commerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.

1,709 citations


"Securing CNN Model and Biometric Te..." refers background in this paper

  • ...[21], biometric systems are vulnerable to external attacks....

Journal ArticleDOI
TL;DR: A comprehensive survey on adversarial attacks on deep learning in computer vision can be found in this paper, where the authors review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Abstract: Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.

1,542 citations