Proceedings ArticleDOI

DeepRing: Protecting Deep Neural Network With Blockchain

16 Jun 2019 - pp. 2821-2828
TL;DR: A model is proposed that secures the learned parameters of a typical deep neural network from external adversaries using cryptography and blockchain technology, and a new parameter-tampering attack is introduced to justify the role of blockchain in machine learning.
Abstract: Several computer vision applications such as object detection and face recognition have started to completely rely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., without a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of the proposed research, a new parameter-tampering attack is proposed to properly justify the role of blockchain in machine learning.
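The abstract does not include code; as a hedged sketch of the chaining idea (hypothetical names, not the authors' implementation), each layer's parameters can be hashed together with the previous block's hash, so that tampering with any single layer invalidates every subsequent block in the ring:

import hashlib
import numpy as np

def hash_params(params, prev_hash):
    # Each block commits to the previous block's hash plus the raw
    # parameter bytes, mirroring how a blockchain links blocks.
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(params.tobytes())
    return h.hexdigest()

def build_chain(layer_params):
    chain, prev = [], "genesis"
    for p in layer_params:
        prev = hash_params(p, prev)
        chain.append(prev)
    return chain

def verify_chain(layer_params, chain):
    # Returns the index of the first tampered layer, or None if intact.
    prev = "genesis"
    for i, p in enumerate(layer_params):
        prev = hash_params(p, prev)
        if prev != chain[i]:
            return i
    return None

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) for _ in range(3)]  # toy "CNN layers"
ledger = build_chain(layers)

layers[1][0, 0] += 1e-3              # simulate a parameter-tampering attack
print(verify_chain(layers, ledger))  # -> 1, the altered layer is pinpointed

Because block i's hash depends on block i-1, an adversary who alters one layer would have to recompute every later hash, which the verification pass above detects immediately.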


Citations
Proceedings ArticleDOI


01 Sep 2019
TL;DR: This research models a trained biometric recognition system in an architecture which leverages blockchain technology to provide fault-tolerant access in a distributed environment, and shows that the proposed approach provides security to both the deep learning model and the biometric template.
Abstract: Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert traditional blocks of any deep learning model into secure systems. In this research, we model a trained biometric recognition system in an architecture which leverages blockchain technology to provide fault-tolerant access in a distributed environment. The advantage of the proposed approach is that tampering in one particular component alerts the whole system and helps in easy identification of 'any' possible alteration. Experimentally, with different biometric modalities, we have shown that the proposed approach provides security to both the deep learning model and the biometric template.
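The tamper-alert property described above can be pictured with a small hypothetical sketch: keep a digest of every system component (each model block and the enrolled biometric template), then re-hash at access time to identify exactly which component was altered. The names below (digest, ledger) are illustrative, not from the paper:

import hashlib

def digest(blob):
    return hashlib.sha256(blob).hexdigest()

# Enrollment time: record one digest per system component.
components = {
    "conv_block_1": b"serialized weights of block 1",
    "conv_block_2": b"serialized weights of block 2",
    "fc_block": b"serialized weights of the classifier head",
    "template": b"enrolled biometric feature vector",
}
ledger = {name: digest(blob) for name, blob in components.items()}

# Access time: re-hash everything and flag any mismatch.
components["template"] = b"maliciously replaced feature vector"
tampered = [n for n, b in components.items() if digest(b) != ledger[n]]
print(tampered)  # -> ['template']; the altered component is identified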

19 citations


Cites background from "DeepRing: Protecting Deep Neural Network With Blockchain"


Journal ArticleDOI


TL;DR: This article surveys research efforts from 2008 to 2019 that address security and privacy issues in the IoT domain using Machine Learning algorithms and Blockchain techniques, categorizes the reported threats, and identifies open challenges and future research directions.
Abstract: Security and privacy of users have become significant concerns due to the involvement of the Internet of Things (IoT) devices in numerous applications. Cyber threats are growing at an explosive pace making the existing security and privacy measures inadequate. Hence, everyone on the Internet is a product for hackers. Consequently, Machine Learning (ML) algorithms are used to produce accurate outputs from large complex databases, where the generated outputs can be used to predict and detect vulnerabilities in IoT-based systems. Furthermore, Blockchain (BC) techniques are becoming popular in modern IoT applications to solve security and privacy issues. Several studies have been conducted on either ML algorithms or BC techniques. However, these studies target either security or privacy issues using ML algorithms or BC techniques, thus posing a need for a combined survey on efforts made in recent years addressing both security and privacy issues using ML algorithms and BC techniques. In this article, we provide a summary of research efforts made in the past few years, from 2008 to 2019, addressing security and privacy issues using ML algorithms and BC techniques in the IoT domain. First, we discuss and categorize various security and privacy threats reported in the past 12 years in the IoT domain. We then classify the literature on security and privacy efforts based on ML algorithms and BC techniques in the IoT domain. Finally, we identify and illuminate several challenges and future research directions using ML algorithms and BC techniques to address security and privacy issues in the IoT domain.

17 citations

Journal ArticleDOI


TL;DR: This article proposes a non-deep-learning approach that searches over a set of well-known image transforms such as the Discrete Wavelet Transform and Discrete Sine Transform and classifies the resulting features with a support vector machine-based classifier; the approach efficiently generalizes across databases as well as different unseen attacks and combinations of both.
Abstract: Deep learning algorithms provide state-of-the-art results on a multitude of applications. However, it is also well established that they are highly vulnerable to adversarial perturbations. It is often believed that the solution to this vulnerability of deep learning systems must come from deep networks only. Contrary to this common understanding, in this article, we propose a non-deep-learning approach that searches over a set of well-known image transforms such as the Discrete Wavelet Transform and Discrete Sine Transform, and classifies the features with a support vector machine-based classifier. Existing deep network-based defenses have been proven ineffective against sophisticated adversaries, whereas image transformation-based solutions make a strong defense because of their non-differentiable nature, multiscale decomposition, and orientation filtering. The proposed approach, which combines the outputs of two transforms, efficiently generalizes across databases as well as different unseen attacks and combinations of both (i.e., cross-database and unseen noise-generation CNN models). The proposed algorithm is evaluated on large-scale databases, including an object database (the validation set of ImageNet) and a face recognition (MBGC) database. The proposed detection algorithm yields at least 84.2% and 80.1% detection accuracy under seen and unseen database test settings, respectively. Besides, we also show how the impact of the adversarial perturbation can be neutralized using a wavelet decomposition-based denoising filter. The mitigation results with different perturbation methods on several image databases demonstrate the effectiveness of the proposed method.
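A minimal sketch of this detection pipeline, under stated assumptions: the choice of wavelet ('db1'), the per-band summary statistics, and the toy noisy-image stand-ins for adversarial examples are all assumptions, not the paper's exact settings:

import numpy as np
import pywt
from scipy.fft import dstn
from sklearn.svm import SVC

def transform_features(img):
    # One-level 2D Discrete Wavelet Transform: approximation + detail bands.
    cA, (cH, cV, cD) = pywt.dwt2(img, "db1")
    # 2D Discrete Sine Transform of the full image.
    dst_coeffs = dstn(img, type=2)
    # Simple mean/std statistics per band (an assumed featurization).
    bands = [cA, cH, cV, cD, dst_coeffs]
    return np.array([s for b in bands for s in (b.mean(), b.std())])

rng = np.random.default_rng(0)
clean = [rng.random((32, 32)) for _ in range(50)]
# Stand-in "adversarial" images: clean images plus small additive noise.
advs = [c + 0.05 * rng.standard_normal(c.shape) for c in clean]

X = np.stack([transform_features(i) for i in clean + advs])
y = np.array([0] * len(clean) + [1] * len(advs))
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy of the toy detector

Because the transform-plus-SVM detector contains no differentiable network, a gradient-based adversary has nothing to backpropagate through, which is the non-differentiability advantage the abstract points to.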

13 citations

Proceedings ArticleDOI


14 Jun 2020
TL;DR: A novel "defense layer" is presented which aims to block the generation of adversarial noise and prevent adversarial attacks in black-box and gray-box settings.
Abstract: Several successful adversarial attacks have demonstrated the vulnerabilities of deep learning algorithms. These attacks are detrimental to building dependable, deep learning based AI applications. Therefore, it is imperative to build a defense mechanism to protect the integrity of deep learning models. In this paper, we present a novel "defense layer" in a network which aims to block the generation of adversarial noise and prevents adversarial attacks in black-box and gray-box settings. The parameter-free defense layer, when applied to any convolutional network, helps in achieving protection against attacks such as FGSM, L2, Elastic-Net, and DeepFool. Experiments are performed with different CNN architectures, including VGG, ResNet, and DenseNet, on three databases, namely, MNIST, CIFAR-10, and PaSC. The results showcase the efficacy of the proposed defense layer without adding any computational overhead. For example, on the CIFAR-10 database, while the attack can reduce the accuracy of the ResNet-50 model to as low as 6.3%, the proposed "defense layer" retains the original accuracy of 81.32%.
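The paper's exact layer operation is not reproduced in this summary, so the sketch below is structural only: it shows where a parameter-free module would sit in a PyTorch network, with coarse input quantization used purely as a stand-in transform (an assumption, not the authors' design):

import torch
import torch.nn as nn

class DefenseLayer(nn.Module):
    # Parameter-free: no learnable weights, so no added training cost.
    def forward(self, x):
        # Placeholder operation (assumed, not the paper's): coarse
        # quantization, which disturbs the gradients an attacker relies on.
        return torch.round(x * 16) / 16

model = nn.Sequential(
    DefenseLayer(),                 # sits in front of the usual conv stack
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
print(model(torch.randn(1, 3, 32, 32)).shape)  # -> torch.Size([1, 10])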

9 citations


Cites methods from "DeepRing: Protecting Deep Neural Network With Blockchain"


Proceedings ArticleDOI


14 Jun 2020
TL;DR: This research addresses the question of whether imperceptible gradient noise can be generated to fool deep neural networks, and shows that the without-sign function, i.e., the gradient magnitude, not only leads to a successful attack mechanism but also produces noise that is imperceptible to the human observer.
Abstract: State-of-the-art deep learning models have achieved superlative performance across multiple computer vision applications such as object recognition, face recognition, and digit/character classification. Most of these models rely heavily on the gradient information that flows through the network during learning. By utilizing this gradient information, a simple gradient sign method based attack is developed to fool the deep learning models. However, the primary concern with this attack is the perceptibility of noise for large degradation in classification accuracy. This research addresses the question of whether imperceptible gradient noise can be generated to fool deep neural networks. For this, the role of the sign function in the gradient attack is analyzed. The analysis shows that the without-sign function, i.e., the gradient magnitude, not only leads to a successful attack mechanism but also produces noise that is imperceptible to the human observer. Extensive quantitative experiments performed using two convolutional neural networks validate the above observation. For instance, the AlexNet architecture yields 63.54% accuracy on the CIFAR-10 database, which reduces to 0.0% and 26.39% when the sign (i.e., perceptible) and without-sign (i.e., imperceptible) versions of the gradient are utilized, respectively. Further, the role of the direction of the gradient for image manipulation is studied. When an image is manipulated in the positive direction of the gradient, an adversarial image is generated. On the other hand, if the opposite direction of the gradient is utilized for image manipulation, it is observed that the classification error rate of the CNN model is reduced. On AlexNet, the error rate of 36.46% reduces to 4.29% when images of CIFAR-10 are manipulated in the negative direction of the gradient. For further results on multiple object databases, including CIFAR-100, Fashion-MNIST, and SVHN, please refer to the full paper.
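The three manipulations discussed above can be sketched in a few lines of PyTorch on an arbitrary differentiable classifier: the classic sign(gradient) step, a step along the raw (without-sign) gradient, and a step in the negative gradient direction. The tiny linear model and the max-abs scaling of the raw gradient are assumptions for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32, requires_grad=True)
y = torch.tensor([3])

loss = F.cross_entropy(model(x), y)
loss.backward()
g = x.grad

eps = 0.03
x_sign = x + eps * g.sign()           # classic FGSM: perceptible noise
x_mag = x + eps * g / g.abs().max()   # without-sign: scaled raw gradient
x_neg = x - eps * g.sign()            # negative direction: can reduce error

for name, adv in [("sign", x_sign), ("magnitude", x_mag), ("negative", x_neg)]:
    pred = model(adv.detach()).argmax(dim=1).item()
    print(name, "->", pred)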

8 citations


Cites background from "DeepRing: Protecting Deep Neural Network With Blockchain"


References
Proceedings Article


04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
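The central design choice in this reference, stacking small 3x3 convolutions to increase depth, can be sketched as follows (channel sizes follow the familiar VGG pattern but the stack is abbreviated, so this is not a faithful reproduction of any VGG configuration):

import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, convs):
    # Several 3x3 convolutions followed by 2x2 max pooling; two stacked
    # 3x3 convs cover the same receptive field as one 5x5 conv with
    # fewer parameters and an extra nonlinearity in between.
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# A shortened VGG-style stem: 64 -> 128 -> 256 channels, all 3x3 filters.
net = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2),
                    vgg_block(128, 256, 3))
print(net(torch.randn(1, 3, 224, 224)).shape)  # -> [1, 256, 28, 28]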

55,235 citations


"DeepRing: Protecting Deep Neural Ne..." refers background or methods in this paper


Journal ArticleDOI


01 Jan 1998
TL;DR: This paper reviews gradient-based learning methods for handwritten character recognition and proposes graph transformer networks (GTNs), which allow multi-module recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
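A minimal convolutional network for digit recognition in the spirit of the LeNet family described above; the layer sizes below are a common modern paraphrase, not a faithful reproduction of the original architecture:

import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),   # 28x28 -> 24x24 -> 12x12
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),  # -> 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                # 10 digit classes
)
print(lenet(torch.randn(1, 1, 28, 28)).shape)  # -> torch.Size([1, 10])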

34,930 citations

Journal ArticleDOI


28 May 2015 - Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
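Since the abstract centers on backpropagation, here is a toy NumPy illustration of the idea: chain-rule gradients flow from the loss back through each layer, and every parameter takes a small step against its gradient. All sizes, the data, and the learning rate are arbitrary placeholders:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))            # 8 samples, 4 features
y = rng.standard_normal((8, 1))            # regression targets
W1, W2 = rng.standard_normal((4, 5)), rng.standard_normal((5, 1))

for step in range(200):
    h = np.tanh(x @ W1)                    # forward pass, layer by layer
    pred = h @ W2
    loss = ((pred - y) ** 2).mean()
    # Backward pass: chain rule from the loss back to each weight matrix.
    d_pred = 2 * (pred - y) / len(y)
    d_W2 = h.T @ d_pred
    d_h = d_pred @ W2.T
    d_W1 = x.T @ (d_h * (1 - h ** 2))      # derivative of tanh is 1 - tanh^2
    W1 -= 0.05 * d_W1                      # gradient descent updates
    W2 -= 0.05 * d_W2

print(round(float(loss), 4))               # loss shrinks over the 200 steps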

33,931 citations

Dissertation


01 Jan 2009
TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Abstract: In this work we describe how to train a multi-layer generative model of natural images. We use a dataset of millions of tiny colour images, described in the next section. This has been attempted by several groups but without success. The models on which we focus are RBMs (Restricted Boltzmann Machines) and DBNs (Deep Belief Networks). These models learn interesting-looking filters, which we show are more useful to a classifier than the raw pixels. We train the classifier on a labeled subset that we have collected and call the CIFAR-10 dataset.
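A sketch of one contrastive-divergence (CD-1) update for a binary RBM, the model family this dissertation trains on tiny images. Bias terms are omitted for brevity, and the hyperparameters and random binary "images" are placeholders, not the dissertation's settings:

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 64, 16, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
v0 = (rng.random((32, n_vis)) < 0.5).astype(float)  # batch of binary data

# Positive phase: hidden unit probabilities given the data.
h0 = sigmoid(v0 @ W)
# Negative phase: one Gibbs step to obtain a reconstruction.
h_sample = (rng.random(h0.shape) < h0).astype(float)
v1 = sigmoid(h_sample @ W.T)
h1 = sigmoid(v1 @ W)
# CD-1 update: data correlations minus reconstruction correlations.
W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
print(np.abs(v0 - v1).mean())  # batch reconstruction error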

14,902 citations