Open Access · Posted Content

T-BFA: Targeted Bit-Flip Adversarial Weight Attack

TLDR
This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNN models, which can intentionally mislead selected inputs to a target output class through a novel class-dependent weight bit ranking algorithm.
Abstract
Traditional Deep Neural Network (DNN) security has mostly been concerned with the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely attacks on DNN weight parameters, has been shown to be very powerful. As a representative example, the Bit-Flip-based adversarial weight Attack (BFA) injects an extremely small number of faults into weight parameters to hijack the executing DNN function. Prior BFA works focus on un-targeted attacks that misclassify all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNNs, which can intentionally mislead selected inputs to a target output class. The objective is achieved by identifying the weight bits that are highly associated with the classification of a targeted output class, through a class-dependent weight bit ranking algorithm. The performance of the proposed T-BFA is demonstrated on multiple DNN architectures for image classification tasks. For example, by flipping merely 27 out of 88 million weight bits of ResNet-18, T-BFA can misclassify all images from the 'Hen' class into the 'Goose' class (i.e., a 100% attack success rate) on the ImageNet dataset, while maintaining 59.35% validation accuracy. Moreover, we successfully demonstrate the T-BFA attack on a real prototype computer system running DNN computation, with an Ivy Bridge-based Intel i7 CPU and 8 GB of DDR3 memory.
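
As a rough illustration of the mechanics involved (not the authors' class-dependent ranking algorithm itself), the sketch below ranks the weights of a toy layer by the gradient of a targeted loss and flips one bit of the top-ranked weight in its 8-bit two's-complement representation. The layer size, the quantization scheme, and the choice of the sign bit are all illustrative assumptions.

```python
# Hedged sketch of a gradient-guided targeted bit flip on an 8-bit quantized
# weight; it only illustrates the two ingredients the attack builds on:
# (1) score weights by the gradient of a targeted loss on the selected inputs,
# (2) flip one bit of the top-ranked weight in two's-complement form.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
layer = nn.Linear(16, 10)                          # toy stand-in for a DNN layer
x = torch.randn(8, 16)                             # selected source-class inputs (dummy data)
target = torch.full((8,), 3, dtype=torch.long)     # attacker-chosen target class

# 1) Targeted loss: push the selected inputs toward the target class.
loss = F.cross_entropy(layer(x), target)
layer.zero_grad()
loss.backward()

# 2) Rank weights by gradient magnitude as a cheap sensitivity proxy.
flat_idx = layer.weight.grad.abs().argmax().item()
row, col = divmod(flat_idx, layer.weight.shape[1])

# 3) Quantize the chosen weight to signed 8-bit and flip its sign bit (bit 7).
w = layer.weight.data[row, col]
scale = layer.weight.data.abs().max() / 127.0
q = int(torch.clamp(torch.round(w / scale), -128, 127).item()) & 0xFF
q ^= 0x80                                          # the single bit flip
q_signed = q - 256 if q >= 128 else q
layer.weight.data[row, col] = q_signed * scale     # write the corrupted weight back
```

In practice such a search-and-flip step would be applied iteratively until the selected inputs are classified into the target class, which is how a handful of flips (e.g., 27 bits in the ResNet-18 example above) can suffice.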

Citations
Posted Content

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA

TL;DR: This work proposes a novel adversarial attack framework, Deep-Dup, in which an adversarial tenant can inject faults into the DNN model of a victim tenant on the FPGA, and proposes a generic vulnerable-weight-package searching algorithm, called Progressive Differential Evolution Search (P-DES), which is adaptive to both white-box and black-box deep learning attack models.
Proceedings Article

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

TL;DR: In this article, a binary integer programming (BIP) based attack paradigm was proposed to modify model parameters in the deployment stage for malicious purposes, where the parameters are stored as binary bits (i.e., 0 and 1) in memory.
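
As a generic sketch only (not necessarily the cited paper's exact formulation), a targeted bit-flip attack of this kind can be posed as a constrained optimization over the weight-bit vector, minimizing the number of flips while forcing the target behavior:

```latex
% Generic illustration; notation assumed here, not taken from the paper:
%   b0 : original weight-bit vector, b : attacked bit vector
%   f(x; b) : prediction of the network whose weights are encoded by b
\begin{aligned}
\min_{b \in \{0,1\}^n}\; & \lVert b - b_0 \rVert_0              && \text{number of flipped bits}\\
\text{s.t.}\; & f(x_t; b) = y_t                                  && \text{target sample forced to target class}\\
              & f(x_i; b) = f(x_i; b_0),\ i = 1,\dots,m          && \text{predictions on benign samples preserved}
\end{aligned}
```
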
Journal Article

Energy-Latency Attacks via Sponge Poisoning

TL;DR: This work presents a novel formalization for sponge poisoning, overcoming the limitations related to the optimization of test-time sponge examples, and shows that this attack is possible even if the attacker only controls a few poisoning samples and model updates.
Peer Review

Review of spike-based neuromorphic computing for brain-inspired vision: biology, algorithms, and hardware

TL;DR: This work provides a holistic treatment of spike-based neuromorphic computing (i.e., based on spiking neural networks), detailing biological motivation, key aspects of neuromorphic algorithms, and a survey of state-of-the-art neuromorphic hardware.
Journal Article

How Practical Are Fault Injection Attacks, Really?

TL;DR: Fault injection attacks can be mounted on most commonly used architectures from ARM, Intel, and AMD using injection devices that often cost less than a thousand dollars, and they can be considered practical in many scenarios, especially when the attacker can physically access the target device.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
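
For context, a minimal PyTorch sketch of the core idea, a residual block computing y = F(x) + x over an identity shortcut, is shown below; the channel count and the omission of downsampling/projection shortcuts are simplifications.

```python
# Minimal sketch of a basic residual block: two 3x3 convolutions form the
# residual F(x), and the input is added back through an identity shortcut.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)             # identity shortcut

y = BasicResidualBlock(64)(torch.randn(1, 64, 32, 32))   # sanity check: shape preserved
```
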
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art ImageNet classification performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
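
A compact PyTorch sketch matching that description (five convolutional layers, interleaved max-pooling, three fully-connected layers ending in 1000-way logits) follows; kernel sizes and channel counts approximate the original network and are assumptions of this sketch.

```python
# AlexNet-like stack, written as a plain nn.Sequential for illustration.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                           # logits for the 1000-way softmax
)

logits = alexnet_like(torch.randn(1, 3, 224, 224))   # -> shape (1, 1000)
```
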
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
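
A configuration-driven sketch of a VGG-16-style network (stacked 3×3 convolutions with max-pooling between blocks, followed by three fully-connected layers) is given below; it is illustrative rather than a faithful reimplementation.

```python
# VGG-16-like configuration: numbers are output channels, "M" is max-pooling.
import torch
import torch.nn as nn

cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]

def make_features(cfg, in_ch=3):
    layers = []
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1), nn.ReLU()]
            in_ch = v
    return nn.Sequential(*layers)

vgg16_like = nn.Sequential(
    make_features(cfg),
    nn.Flatten(),
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)

logits = vgg16_like(torch.randn(1, 3, 224, 224))      # -> shape (1, 1000)
```
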
Proceedings Article

MobileNetV2: Inverted Residuals and Linear Bottlenecks

TL;DR: MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers, while the intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity.
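
A minimal PyTorch sketch of such an inverted residual block (1×1 expansion, 3×3 depthwise convolution, linear 1×1 projection back to a thin bottleneck, shortcut between the bottlenecks when shapes match) is shown below; the expansion ratio of 6 is a commonly used default and is assumed here.

```python
# Inverted residual block sketch: expand -> depthwise -> linear project,
# with a residual connection only when stride is 1 and channels match.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
        super().__init__()
        hidden = in_ch * expand
        self.use_shortcut = (stride == 1 and in_ch == out_ch)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),                      # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),                         # 3x3 depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, out_ch, 1, bias=False),                     # linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out

y = InvertedResidual(32, 32)(torch.randn(1, 32, 56, 56))   # shortcut active here
```
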
Journal Article

Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups

TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.