Proceedings ArticleDOI

Securing CNN Model and Biometric Template using Blockchain

01 Sep 2019 - pp. 1-7
TL;DR: This research models a trained biometric recognition system in an architecture that leverages blockchain technology to provide fault-tolerant access in a distributed environment, and shows that the proposed approach provides security to both the deep learning model and the biometric template.
Abstract: Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert traditional blocks of any deep learning model into secure systems. In this research, we model a trained biometric recognition system in an architecture that leverages blockchain technology to provide fault-tolerant access in a distributed environment. The advantage of the proposed approach is that tampering with any one component alerts the whole system and helps in easy identification of ‘any’ possible alteration. Experimentally, with different biometric modalities, we have shown that the proposed approach provides security to both the deep learning model and the biometric template.
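The tamper-evidence property described above can be illustrated with a hash chain over the model's components. The sketch below is a minimal illustration under assumed block structures and field names, not the authors' implementation: each block commits to a hash of one component's serialized weights plus the previous block's hash, so altering any component invalidates every later block.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_chain(components: dict) -> list:
    """Hash each named component (e.g. serialized layer weights or a
    stored template) into a block that also commits to the previous
    block's hash."""
    chain, prev_hash = [], "0" * 64
    for name, weights in components.items():
        block = {
            "name": name,
            "weights_hash": sha256_hex(weights),
            "prev_hash": prev_hash,
        }
        prev_hash = sha256_hex(json.dumps(block, sort_keys=True).encode())
        block["hash"] = prev_hash
        chain.append(block)
    return chain

def verify_chain(chain: list, components: dict) -> bool:
    """Recompute every hash; a tampered component breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        expected = {
            "name": block["name"],
            "weights_hash": sha256_hex(components[block["name"]]),
            "prev_hash": prev_hash,
        }
        if sha256_hex(json.dumps(expected, sort_keys=True).encode()) != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

# Usage: byte strings stand in for serialized CNN layers and the template.
parts = {"conv1": b"\x01\x02", "fc1": b"\x03\x04", "template": b"\x05\x06"}
chain = build_chain(parts)
assert verify_chain(chain, parts)
parts["fc1"] = b"tampered"
assert not verify_chain(chain, parts)  # tampering is detected
```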
Citations
Journal ArticleDOI
TL;DR: In this article, a text mining literature analysis of research articles published in major digital libraries on blockchain technology and cybersecurity is presented, highlighting the multidisciplinary nature of blockchain technology within the cybersecurity domain.
Abstract: Blockchain, the technology infrastructure behind the famous cryptocurrency bitcoin, can shift the notion of trust from centralized organizations to a decentralized platform that is mathematically verifiable and cryptographically secure. It is gaining momentum rapidly and disrupts the way businesses function beyond the digital currency aspects. This work presents a text mining literature analysis of research articles published in major digital libraries on blockchain technology and cybersecurity. The analysis employs automated text mining approaches such as topic modeling and keyphrase extraction to unearth the themes in a vast body of literature. It highlights the multidisciplinary nature of blockchain technology within the cybersecurity domain. The findings also show the cyber threats and vulnerabilities that evolve with blockchain technology developments. The analysis also showcases the vulnerabilities studied by the computer security research community and provides future research directions that are crucial for designing secure blockchain applications and platforms.
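For a sense of the automated pipeline such an analysis relies on, here is a minimal topic-modeling sketch with scikit-learn. The toy corpus, topic count, and parameter values are illustrative assumptions, not details from the article.

```python
# A minimal topic-modeling sketch using scikit-learn's LDA implementation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "blockchain consensus security smart contracts",
    "adversarial attacks on deep learning models",
    "biometric template protection with blockchain",
    "vulnerabilities in decentralized cryptocurrency platforms",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a 2-topic LDA model (topic count chosen arbitrarily for the toy data).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic, i.e. the "themes" such a survey unearths.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```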

8 citations

Journal ArticleDOI
TL;DR: DAMAD, as presented in this paper, is a generalized perturbation detection algorithm that is agnostic to the model architecture, training data set, and loss function used during training; it is based on the fusion of autoencoder embeddings and statistical texture features extracted from convolutional neural networks.
Abstract: Adversarial perturbations have demonstrated the vulnerabilities of deep learning algorithms to adversarial attacks. Existing adversary detection algorithms attempt to detect the singularities; however, they are, in general, loss-function, database, or model dependent. To mitigate this limitation, we propose DAMAD, a generalized perturbation detection algorithm which is agnostic to model architecture, training data set, and loss function used during training. The proposed adversarial perturbation detection algorithm is based on the fusion of autoencoder embeddings and statistical texture features extracted from convolutional neural networks. The performance of DAMAD is evaluated on the challenging scenarios of cross-database, cross-attack, and cross-architecture training and testing, along with the traditional evaluation of testing on the same database with a known attack and model. Comparison with state-of-the-art perturbation detection algorithms showcases the effectiveness of the proposed algorithm on six databases: ImageNet, CIFAR-10, Multi-PIE, MEDS, point and shoot challenge (PaSC), and MNIST. Performance evaluation with nearly a quarter of a million adversarial and original images and comparison with recent algorithms show the effectiveness of the proposed algorithm.
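DAMAD fuses autoencoder embeddings with statistical texture features; the sketch below illustrates only the autoencoder half of that idea in simplified form, flagging inputs whose reconstruction error exceeds a threshold calibrated on clean data. The architecture, threshold rule, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """A deliberately small autoencoder, to be trained on clean images only."""
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    # Per-sample mean squared error between input and reconstruction.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

def fit_threshold(model, clean_x, quantile=0.95):
    # Calibrate on clean data: errors above this quantile look anomalous.
    return torch.quantile(reconstruction_error(model, clean_x), quantile)

def is_adversarial(model, x, threshold):
    return reconstruction_error(model, x) > threshold

# Usage with random stand-in data (a real detector would first train the
# autoencoder on clean images from the target domain).
model = TinyAutoencoder()
clean = torch.rand(256, 784)
tau = fit_threshold(model, clean)
suspect = torch.rand(8, 784)
print(is_adversarial(model, suspect, tau))
```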

8 citations

Posted Content
TL;DR: In this paper, different types of attacks such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs have been discussed.
Abstract: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs have been discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some of the future research directions for increasing the robustness of face recognition models.

5 citations

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, an overview of adversarial AI techniques for IoT-enabled 5G networks is presented, covering automatic classification and detection of threats and the securing of transactions using blockchain; the major concern is the security of data, which cannot be compromised.
Abstract: Recently, the fifth-generation (5G) communication network has advanced network capability, increasing Internet traffic worldwide as well as the usage of Internet of Things (IoT) technologies. 5G technology brings a revolution in the next generation of Industry 5.0 through its integration with the Internet of Things. The evolution of 5G-enabled IoT increases the demand for cloud infrastructure and virtualization of resources in distributed networks. Artificial intelligence (AI) integrates intelligence with these IoT devices so that they learn from the given knowledge and make decisions independently. In recent years, AI has produced a large set of solutions in every domain, such as car automation, robotics, gaming, medicine, image processing, information retrieval, smart data analysis, social media, and digital marketing. Several researchers have already demonstrated applications of AI in these fields, and new applications continue to be found. 5G is all about increasing speed and data rate, so its biggest challenge is to identify threats accurately. AI integration with 5G-enabled IoT is therefore essential. The integration of these technologies increases the volume of data generated day by day, where the major concern is the security of data, which cannot be compromised. Risks, vulnerabilities, network exploration, traffic manipulation, and other threats are instantiated to fool the system and existing static techniques. Adversarial attacks on machine learning models have become a hotspot in the current industry. This chapter discusses an overview of adversarial AI techniques for IoT-enabled 5G networks for classifying and detecting threats automatically and securing transactions using blockchain.

4 citations

Posted Content
TL;DR: A comprehensive survey of adversarial attacks against face recognition systems is presented in this article; a taxonomy of existing attack and defense methods based on different criteria is proposed, attack methods are compared on orientation and attributes, and defense approaches are compared by category.
Abstract: Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). In an advanced FR system with deep learning-based architecture, however, promoting the recognition efficiency alone is not sufficient, and the system should also withstand potential kinds of attacks designed to target its proficiency. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images that drive the model to incorrect output predictions. In this article, we present a comprehensive survey on adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense methods based on different criteria. We compare attack methods on the orientation and attributes and defense approaches on the category. Finally, we explore the challenges and potential research directions.

4 citations

References
Journal ArticleDOI
TL;DR: This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
Abstract: In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
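Concretely, Shamir's scheme encodes the secret as the constant term of a random degree-(k-1) polynomial over a prime field and hands out points on that polynomial as shares; any k shares recover the secret by Lagrange interpolation at x = 0. A minimal sketch follows; the prime and parameters are illustrative choices, not values from the paper.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field size is an illustrative choice

def make_shares(secret: int, k: int, n: int) -> list:
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the point (i, f(i)) on the random degree-(k-1) polynomial.
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Usage: 3-of-5 sharing, as in a robust key-management scheme.
shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789
assert recover(shares[1:4]) == 123456789
```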

14,340 citations


"Securing CNN Model and Biometric Te..." refers methods in this paper

  • ...We use the Python implementation of Shamir’s secret sharing to implement key sharding....


  • ...The root divides the private key of the pair into 2n + 1 shards using Shamir’s [22] secret sharing principle. A total of n + 2 shards are required to reconstruct the key....


  • ...In a distributed framework, the time needed by the template matcher will be: time to delegate to the leaves + time to conduct a template match + time to compare n values + time taken to carry out Shamir’s secret reconstruction + time taken to compare c values (n: number of leaf nodes and c: number of chief nodes; a compact formula follows this list)....

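Expressed compactly, that timing decomposition reads (the symbols are editorial shorthand for the quoted terms, not notation from the paper):

    T_total = T_delegate + T_match + n * T_compare + T_Shamir + c * T_compare

where n is the number of leaf nodes and c is the number of chief nodes.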

Proceedings ArticleDOI
22 May 2017
TL;DR: In this paper, the authors demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability.
Abstract: Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
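In the spirit of the paper's L2 attack, the sketch below runs gradient descent on a perturbation that trades off L2 distance against a margin loss favoring a chosen target class, with a tanh change of variables keeping pixels in [0, 1]. The constants, toy model, and data are illustrative assumptions, not the authors' code.

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Simplified Carlini-Wagner-style targeted L2 attack.

    Optimizes w where x' = 0.5 * (tanh(w) + 1), so pixels stay in [0, 1],
    minimizing ||x' - x||^2 + c * max(max_{i != t} Z(x')_i - Z(x')_t, -kappa).
    """
    # Change of variables: start w at the point that maps back to x.
    w = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        target_logit = logits[:, target]
        # Largest non-target logit.
        other = logits.clone()
        other[:, target] = float("-inf")
        best_other = other.max(dim=1).values
        # Margin loss is zero once the target class wins by kappa.
        f = torch.clamp(best_other - target_logit, min=-kappa)
        loss = ((x_adv - x) ** 2).sum() + c * f.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()

# Usage with a toy linear "model" on flattened 28x28 inputs (illustrative).
model = torch.nn.Linear(784, 10)
x = torch.rand(1, 784)
x_adv = cw_l2_attack(model, x, target=3)
print(model(x_adv).argmax(dim=1))  # ideally tensor([3])
```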

6,528 citations

Proceedings ArticleDOI
01 Jan 2015
TL;DR: It is shown how a very large scale dataset can be assembled by a combination of automation and a human in the loop, and the trade-off between data purity and time is discussed.
Abstract: The goal of this paper is face recognition, from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end-to-end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and a human in the loop, and discuss the trade-off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state-of-the-art results on the standard LFW and YTF face benchmarks.
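A recognizer of this kind is typically deployed by comparing embeddings rather than raw class scores. Below is a minimal verification sketch assuming some pretrained embedding network; the stand-in model, threshold, and input shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def verify(embed_net, face_a, face_b, threshold=0.7):
    """Decide whether two face crops depict the same person by cosine
    similarity of their L2-normalized CNN embeddings."""
    with torch.no_grad():
        ea = F.normalize(embed_net(face_a), dim=1)
        eb = F.normalize(embed_net(face_b), dim=1)
    similarity = (ea * eb).sum(dim=1)  # cosine, since both are unit norm
    return similarity > threshold, similarity

# Usage with a stand-in embedding network (a real system would load a CNN
# trained on a large face dataset such as the one this paper assembles).
embed_net = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 224 * 224, 128))
a = torch.rand(1, 3, 224, 224)
b = torch.rand(1, 3, 224, 224)
same, score = verify(embed_net, a, b)
print(same.item(), score.item())
```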

5,308 citations

Journal ArticleDOI
TL;DR: The inherent strengths of biometrics-based authentication are outlined, the weak links in systems employing biometric authentication are identified, and new solutions for eliminating these weak links are presented.
Abstract: Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as e-commerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.

1,709 citations


"Securing CNN Model and Biometric Te..." refers background in this paper

  • ...[21], biometric systems are vulnerable to external attacks....


Journal ArticleDOI
TL;DR: A comprehensive survey of adversarial attacks on deep learning in computer vision is presented in this paper, where the authors review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Abstract: Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.

1,542 citations