Open Access Book Chapter

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

TL;DR
Refool, a new type of backdoor attack inspired by an important natural phenomenon, reflection, is proposed to plant reflections as a backdoor in a victim model; it can attack state-of-the-art DNNs with a high success rate and is resistant to state-of-the-art backdoor defenses.
Abstract
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at training time. A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data. At test time, the victim model behaves normally on clean test data, yet consistently predicts a specific (likely incorrect) target class whenever the backdoor pattern is present in a test example. While existing backdoor attacks are effective, they are not stealthy. The modifications made on training data or labels are often suspicious and can be easily detected by simple data filtering or human inspection. In this paper, we present a new type of backdoor attack inspired by an important natural phenomenon: reflection. Using mathematical modeling of physical reflection models, we propose reflection backdoor (Refool) to plant reflections as a backdoor into a victim model. We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with a high success rate, and is resistant to state-of-the-art backdoor defenses.
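To make the mechanism concrete, below is a minimal sketch of how a reflection-style trigger could be blended into training images. This is an illustration of the general idea under simplifying assumptions (a single Gaussian-blur reflection model and a generic trigger-plus-relabel poisoning loop), not the authors' implementation; the paper models several physical reflection types, and its sample-selection and labeling strategy may differ. All function names here are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plant_reflection(clean, reflection, alpha=0.75, sigma=2.0):
    """Blend a blurred reflection layer into a clean image.

    Simplified physical model (glass blurs and attenuates the
    reflected scene):
        x_poison = alpha * x_clean + (1 - alpha) * blur(x_reflection)

    clean, reflection: float arrays in [0, 1] of shape (H, W, C).
    """
    blurred = gaussian_filter(reflection, sigma=(sigma, sigma, 0))
    return np.clip(alpha * clean + (1.0 - alpha) * blurred, 0.0, 1.0)

def poison_dataset(images, labels, reflections, target_class, rate=0.1, seed=0):
    """Inject the reflection trigger into a small fraction of the data."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        refl = reflections[rng.integers(len(reflections))]
        images[i] = plant_reflection(images[i], refl)
        labels[i] = target_class  # hypothetical: one common poisoning recipe
    return images, labels
```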


Citations
Posted Content

Backdoor Learning: A Survey

TL;DR: This paper summarizes and categorizes existing backdoor attacks and defenses based on their characteristics, and provides a unified framework for analyzing poisoning-based backdoor attacks.
Posted Content

Privacy and Robustness in Federated Learning: Attacks and Defenses.

TL;DR: This paper conducts the first comprehensive survey on federated learning, providing a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses for robustness; and 3) inference attacks and defenses for privacy.
Posted Content

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

TL;DR: A simple and effective textual backdoor defense named ONION, which is based on outlier word detection and is, to the best of the authors' knowledge, the first method that can handle all textual backdoor attack situations.
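The description above suggests a simple shape for such a defense: score each word by how much its removal lowers the sentence's language-model perplexity, and drop high-scoring outliers (inserted trigger words tend to be rare and raise perplexity sharply). The sketch below is an assumption-laden illustration in that spirit using GPT-2 via Hugging Face transformers; ONION's exact scoring and thresholding may differ, and the threshold value here is arbitrary.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def filter_outlier_words(sentence, threshold=100.0):
    """Remove words whose deletion sharply lowers perplexity."""
    words = sentence.split()
    base = perplexity(sentence)
    kept = []
    for i, w in enumerate(words):
        rest = " ".join(words[:i] + words[i + 1:])
        if not rest:  # single-word sentence: nothing to compare against
            kept.append(w)
            continue
        if base - perplexity(rest) <= threshold:
            kept.append(w)  # removal barely helps: likely a normal word
    return " ".join(kept)
```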
Posted Content

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review.

TL;DR: This work provides the community with a timely, comprehensive review of backdoor attacks and countermeasures on deep learning, and identifies key areas for future backdoor research, such as empirical security evaluations of physical trigger attacks; more efficient and practical countermeasures are also solicited.
Posted Content

Graph Backdoor.

TL;DR: The effectiveness of GTA is demonstrated: for instance, on pre-trained, off-the-shelf GNNs, GTA attains an attack success rate of over 99.2% with an accuracy drop of less than 0.3%.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won 1st place in the ILSVRC 2015 classification task.
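A residual block is small enough to show in full; here is a minimal PyTorch sketch of the core idea (layers fit a residual F(x) and the input is added back through an identity shortcut). The published architecture also uses downsampling shortcuts and bottleneck variants, which are omitted here.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut
```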
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called "ImageNet" is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than current image datasets.
Journal Article

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; it is evaluated against both subjective ratings and other objective methods on a database of images compressed with JPEG and JPEG2000.
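For reference, the structural similarity index between two aligned image windows x and y takes the standard form

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```

where \mu and \sigma^2 are local means and variances, \sigma_{xy} is the local cross-covariance, and C_1, C_2 are small constants that stabilize the division.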
Proceedings Article

Densely Connected Convolutional Networks

TL;DR: DenseNet, as proposed in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
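The connectivity pattern is easy to express in code. Below is a minimal sketch of one dense block, assuming BN-ReLU-Conv composite layers; the published network additionally uses transition layers between blocks and optional bottleneck/compression, omitted here.

```python
import torch
from torch import nn

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all preceding feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # feature reuse
        return torch.cat(features, dim=1)
```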
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
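The encoder-decoder structure described above fits in a few lines of PyTorch. This is an illustrative sketch under my own assumptions: hyperparameters are arbitrary, decoding is teacher-forced, and the paper's source-sequence-reversal trick is omitted.

```python
import torch
from torch import nn

class Seq2Seq(nn.Module):
    """Encoder LSTM compresses the source into a fixed-size state;
    decoder LSTM generates the target conditioned on that state."""
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))  # (h_n, c_n) summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # next-token logits per target position
```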