
Yingqi Liu

Researcher at Purdue University

Publications - 31
Citations - 1983

Yingqi Liu is an academic researcher from Purdue University. The author has contributed to research on topics including computer science and backdoor attacks. The author has an h-index of 8 and has co-authored 19 publications receiving 1,074 citations.

Papers
Proceedings Article

Trojaning Attack on Neural Networks

TL;DR: A trojaning attack on neural networks whose trigger can be reliably activated without affecting the model's test accuracy on normal input data, and which takes only a small amount of time to mount against a complex neural network model.
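For illustration only, below is a minimal PyTorch-style sketch of the trigger-generation idea behind this kind of attack: pixels inside a small mask are optimized to strongly excite a chosen internal neuron, and the resulting patch is stamped onto inputs before retraining toward a target label. The `layer`, `neuron_idx`, `mask`, and 32x32 input size are assumptions made for the example, not details taken from the summary.

```python
# Illustrative sketch of trigger generation by internal-neuron activation
# maximization; the input size, `layer`, `neuron_idx`, and `mask` are assumptions.
import torch

def generate_trigger(model, layer, neuron_idx, mask, steps=200, lr=0.1):
    """Optimize the pixels inside `mask` so a chosen internal neuron fires strongly."""
    for p in model.parameters():          # freeze the model; only the trigger is optimized
        p.requires_grad_(False)
    trigger = torch.rand(1, 3, 32, 32, requires_grad=True)
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    for _ in range(steps):
        model(trigger * mask)                       # forward pass on the masked trigger
        loss = -acts["out"][0, neuron_idx].mean()   # maximize the target neuron's activation
        loss.backward()
        with torch.no_grad():
            trigger -= lr * trigger.grad
            trigger.clamp_(0, 1)
            trigger.grad.zero_()
    handle.remove()
    return (trigger * mask).detach()

def stamp(x, trigger, mask):
    """Overlay the optimized trigger on clean inputs before retraining toward the target label."""
    return x * (1 - mask) + trigger * mask
```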
Proceedings Article

ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation

TL;DR: A novel technique that analyzes inner neuron behaviors by determining how output activations change when different levels of stimulation are introduced to a neuron. It substantially outperforms the state-of-the-art technique Neural Cleanse, which requires many input samples and small trojan triggers to achieve good performance.
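A simplified sketch of what stimulating a single neuron and watching the outputs could look like in PyTorch follows; the hook-based overwrite and the `levels` sweep are simplifying assumptions for the example, not the paper's exact procedure.

```python
# Illustrative sketch of stimulating one neuron and recording how the logits respond.
import torch

def stimulate(model, layer, neuron_idx, x, levels):
    """Force one neuron to each stimulation level and record the model outputs."""
    outputs = []
    for v in levels:
        def overwrite(module, inputs, output, v=v):
            output = output.clone()
            output[:, neuron_idx] = v       # overwrite the neuron (or channel) with level v
            return output                   # returned value replaces the layer's output
        handle = layer.register_forward_hook(overwrite)
        with torch.no_grad():
            outputs.append(model(x))
        handle.remove()
    return torch.stack(outputs)             # shape: [len(levels), batch, num_classes]

# A neuron is suspicious if raising its stimulation level consistently elevates
# one particular label's logit regardless of the input.
```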
Proceedings Article

NIC: Detecting Adversarial Samples with Neural Network Invariant Checking

TL;DR: This paper analyzes the internals of DNN models under various attacks and identifies two common exploitation channels: the provenance channel and the activation value distribution channel. It proposes a novel technique that extracts DNN invariants and uses them to perform runtime adversarial sample detection.
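As a rough sketch of the activation-value-distribution idea, one could fit a one-class model per monitored layer on benign activations and flag inputs that violate those learned distributions. The use of scikit-learn's OneClassSVM, the `layers` dict, and per-layer flattening are assumptions for this example, and the provenance-channel invariants are omitted.

```python
# Simplified sketch of activation-distribution invariant checking.
import torch
from sklearn.svm import OneClassSVM

def collect_activations(model, layers, x):
    """Record flattened activations of the monitored layers for a batch x."""
    feats, handles = {}, []
    for name, layer in layers.items():
        handles.append(layer.register_forward_hook(
            lambda m, i, o, n=name: feats.__setitem__(n, o.flatten(1).detach().cpu())))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return feats

def fit_invariants(model, layers, benign_x):
    """Train one one-class model per layer on benign activation distributions."""
    feats = collect_activations(model, layers, benign_x)
    return {n: OneClassSVM(nu=0.1).fit(f.numpy()) for n, f in feats.items()}

def is_adversarial(model, layers, detectors, x):
    """Flag a single input if any layer's activations violate the learned invariant."""
    feats = collect_activations(model, layers, x)
    return any(detectors[n].predict(f.numpy())[0] == -1 for n, f in feats.items())
```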
Proceedings Article

MODE: automated neural network model debugging via state differential analysis and input selection

TL;DR: This work proposes a novel model debugging technique that first conducts model state differential analysis to identify the internal features of the model responsible for model bugs, and then performs training input selection analogous to program input selection in regression testing.
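A rough sketch of those two steps is shown below; the `feature_layer` hook and the dot-product scoring rule are assumptions made for the example, meant only to illustrate differential analysis followed by input selection.

```python
# Rough sketch of state differential analysis and training input selection.
import torch

def differential_heatmap(model, feature_layer, correct_x, misclassified_x):
    """Difference in mean hidden-feature activation between passing and failing inputs."""
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda m, i, o: feats.__setitem__("out", o.flatten(1).detach()))
    with torch.no_grad():
        model(correct_x)
        good = feats["out"].mean(0)
        model(misclassified_x)
        bad = feats["out"].mean(0)
    handle.remove()
    return bad - good            # large entries mark features implicated in the bug

def select_inputs(model, feature_layer, candidates, heatmap, k=32):
    """Pick candidates whose features align most with the buggy directions,
    loosely analogous to selecting regression-test inputs."""
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda m, i, o: feats.__setitem__("out", o.flatten(1).detach()))
    with torch.no_grad():
        model(candidates)
    handle.remove()
    scores = feats["out"] @ heatmap      # similarity to the differential direction
    return candidates[scores.topk(min(k, len(candidates))).indices]
```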
Proceedings Article

Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features

TL;DR: This article introduces a more flexible and stealthy trojan attack that eludes backdoor scanners by using trojan triggers composed from existing benign features of multiple labels. The trojaned model achieves accuracy comparable to its original version on benign data and misclassifies inputs when the composite trigger is present.
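A minimal sketch of how composite poisoning samples might be constructed by mixing two benign classes is given below; the half-and-half mixer, batch shapes, and labeling convention are assumptions, and the paper's mixer and training losses are more involved.

```python
# Minimal sketch of constructing composite poisoning samples from benign features.
import torch

def composite_mix(img_a, img_b):
    """Keep the left half of a class-A image and the right half of a class-B image."""
    w = img_a.shape[-1] // 2
    mixed = img_a.clone()
    mixed[..., w:] = img_b[..., w:]
    return mixed

def poison_batch(batch_a, batch_b, target_label):
    """Composite samples are labeled with the target class; no extra pixel pattern
    is added, which is why scanners looking for small trigger patches find nothing."""
    mixed = torch.stack([composite_mix(a, b) for a, b in zip(batch_a, batch_b)])
    labels = torch.full((mixed.size(0),), target_label, dtype=torch.long)
    return mixed, labels
```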