
Ali Shafahi

Researcher at University of Maryland, College Park

Publications: 37
Citations: 2442

Ali Shafahi is an academic researcher from the University of Maryland, College Park. The author has contributed to research in topics including Job shop scheduling and Artificial neural networks. The author has an h-index of 15 and has co-authored 36 publications receiving 1806 citations.

Papers
Proceedings Article

Adversarial training for free

TL;DR: In this paper, the authors eliminate the overhead cost of generating adversarial examples by recycling the gradient information already computed when updating the model parameters, achieving robustness comparable to PGD adversarial training.
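The gradient-recycling idea can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed names, not the authors' released code: `model`, `optimizer`, `epsilon`, and the number of `replays` per minibatch are placeholders, and the persistent perturbation `delta` is assumed to be carried over between minibatches.

```python
import torch
import torch.nn.functional as F

def free_adversarial_step(model, optimizer, x, y, delta, epsilon, replays=4):
    """One minibatch of 'free' adversarial training (illustrative sketch).

    A single backward pass per replay yields gradients for both the model
    parameters and the input perturbation; reusing that one pass for both
    updates is the recycling idea summarized above.
    """
    for _ in range(replays):
        delta.requires_grad_(True)
        x_adv = torch.clamp(x + delta, 0.0, 1.0)      # perturbed input, kept in image range
        loss = F.cross_entropy(model(x_adv), y)

        optimizer.zero_grad()
        loss.backward()          # gradients w.r.t. parameters *and* delta in one pass
        optimizer.step()         # ordinary parameter update, as in natural training

        with torch.no_grad():    # reuse the input gradient for an FGSM-style step
            delta = delta + epsilon * delta.grad.sign()
            delta = delta.clamp(-epsilon, epsilon)
    return delta                 # returned so it can be reused on the next minibatch
```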
Proceedings Article

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

TL;DR: This paper explores clean-label poisoning attacks on neural nets, presents an optimization-based method for crafting poisons, and shows that a single poison image can control classifier behavior when transfer learning is used.
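The poison crafting described above optimizes a feature-collision objective: the poison stays visually close to a correctly labeled base image while its feature-space representation is pushed toward the target. The sketch below is an assumption-laden illustration, using plain Adam gradient descent on the combined objective rather than the paper's exact procedure; `feature_extractor`, `beta`, `lr`, and `steps` are illustrative placeholders.

```python
import torch

def craft_poison(feature_extractor, target, base, beta=0.1, lr=0.01, steps=500):
    """Craft a clean-label poison image (sketch of a feature-collision objective)."""
    feature_extractor.eval()
    for p in feature_extractor.parameters():
        p.requires_grad_(False)            # freeze the network; only the image is optimized
    with torch.no_grad():
        target_feat = feature_extractor(target)

    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Pull the poison's features toward the target's features...
        feat_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        # ...while keeping it visually close to the correctly labeled base image.
        img_loss = beta * (poison - base).pow(2).sum()
        (feat_loss + img_loss).backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)        # keep the poison a valid image

    return poison.detach()
```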
Posted Content

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

TL;DR: In this article, the authors present an optimization-based method for crafting poisons, show that a single poison image can control classifier behavior when transfer learning is used, and demonstrate the method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
Posted Content

Adversarial Training for Free

TL;DR: This work presents an algorithm that eliminates the overhead cost of generating adversarial examples by recycling the gradient information computed when updating model parameters, and achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training.
Posted Content

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

TL;DR: A new "polytope attack" is proposed in which poison images are designed to surround the targeted image in feature space, and it is demonstrated that using Dropout during poison creation helps to enhance the transferability of the attack.
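A rough sketch of such a polytope-style objective is given below, assuming a PyTorch feature extractor that contains Dropout layers. The softmax parameterization of the convex weights, the Adam optimizer, and the `epsilon` bound are illustrative choices for this sketch, not the paper's exact formulation; `bases` is assumed to be a small batch of clean images from the attacker's chosen class.

```python
import torch
import torch.nn.functional as F

def craft_polytope_poisons(feature_extractor, target, bases, epsilon=8 / 255,
                           lr=0.01, steps=1000):
    """Sketch of a convex-polytope poisoning objective.

    The poisons stay within an epsilon ball of their base images while their
    features are pushed so that some convex combination of them reproduces the
    target's feature vector. The network is left in train mode so Dropout stays
    active during crafting, the trick reported as improving transferability.
    """
    feature_extractor.train()                  # keep Dropout active while crafting
    for p in feature_extractor.parameters():
        p.requires_grad_(False)                # only the poison images are optimized
    target_feat = feature_extractor(target).detach()

    delta = torch.zeros_like(bases, requires_grad=True)
    coeff_logits = torch.zeros(bases.size(0), requires_grad=True)   # convex weights
    opt = torch.optim.Adam([delta, coeff_logits], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        poisons = (bases + delta).clamp(0.0, 1.0)
        feats = feature_extractor(poisons)                 # (k, d) poison features
        weights = F.softmax(coeff_logits, dim=0)           # a point on the simplex
        combo = (weights.unsqueeze(1) * feats).sum(dim=0, keepdim=True)
        loss = (combo - target_feat).pow(2).sum() / target_feat.pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)                # stay close to the bases

    return (bases + delta).clamp(0.0, 1.0).detach()
```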