scispace - formally typeset

Arjun Gupta

Researcher at University of Maryland, College Park

Publications -  11
Citations -  165

Arjun Gupta is an academic researcher from the University of Maryland, College Park. The author has contributed to research in topics: Backdoor & Computer science. The author has an h-index of 4 and has co-authored 11 publications receiving 57 citations.

Papers
Posted Content

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

TL;DR: Unified benchmarks for data poisoning and backdoor attacks are developed to promote fair comparison in future work; the authors find that existing poisoning methods have been tested in contrived scenarios and fail in realistic settings.
Posted Content

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

TL;DR: It is found that strong data augmentations, such as mixup and CutMix, can significantly diminish the threat of poisoning and backdoor attacks without trading off performance.
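The mixup augmentation evaluated in this paper blends pairs of training examples and their labels with a random convex weight. A minimal NumPy sketch of that idea follows; the function name, the `alpha` default, and the Beta-distributed mixing weight are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Blend two examples (and their labels) with a random weight.

    A convex combination of inputs dilutes any single poisoned
    example, which is the intuition behind using mixup as a defense.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # blended input
    y = lam * y1 + (1.0 - lam) * y2       # blended (soft) label
    return x, y
```

With one-hot labels, the blended label remains a valid distribution (its entries sum to 1), so standard cross-entropy training applies unchanged.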
Proceedings ArticleDOI

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

TL;DR: In this paper, the authors show that strong data augmentations, such as mixup and CutMix, can significantly diminish the threat of poisoning and backdoor attacks without trading off performance. They further verify the effectiveness of this simple defense against adaptive poisoning methods and compare it to baselines, including the popular differentially private SGD (DP-SGD) defense.
Posted Content

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations.

TL;DR: In this article, the authors show that strong data augmentations, such as mixup and random additive noise, nullify poison attacks while enduring only a small accuracy trade-off, and propose a training method, DP-InstaHide, which combines the mixup regularizer with additive noise.
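The TL;DR above describes combining the mixup regularizer with additive random noise. A hedged sketch of that combination, in the spirit of DP-InstaHide, is shown below; the function name, the use of a Dirichlet mixing weight over `k` examples, and Laplace noise with scale `noise_scale` are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def mix_and_noise(images, labels, k=4, noise_scale=0.1, alpha=1.0, rng=None):
    """Mix k randomly chosen examples, then add random noise.

    Mixing dilutes poisoned examples; the additive noise further
    perturbs any remaining poison signal (the DP-InstaHide intuition).
    """
    rng = rng or np.random.default_rng()
    n = len(images)
    idx = rng.integers(0, n, size=k)            # pick k examples
    lam = rng.dirichlet(np.full(k, alpha))      # weights summing to 1
    x = np.tensordot(lam, images[idx], axes=1)  # weighted image mix
    y = np.tensordot(lam, labels[idx], axes=1)  # weighted label mix
    x = x + rng.laplace(scale=noise_scale, size=x.shape)  # additive noise
    return x, y
```

Because the Dirichlet weights sum to 1, the mixed label of one-hot inputs stays a valid soft label, while the noisy input no longer matches any single training example.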
Proceedings Article

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

TL;DR: This article found that the impressive performance evaluations from data poisoning attacks are, in large part, artifacts of inconsistent experimental design, and that existing poisoning methods have been tested in contrived scenarios, and many fail in more realistic settings.