Open Access Proceedings Article

Data Augmentation Can Improve Robustness

TLDR
In this paper, the authors focus on reducing robust overfitting by using common data augmentation schemes and demonstrate that combining them with model weight averaging can significantly boost robust accuracy compared to previous state-of-the-art methods.
Abstract
Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training. In this paper, we focus on reducing robust overfitting by using common data augmentation schemes. We demonstrate that, contrary to previous findings, when combined with model weight averaging, data augmentation can significantly boost robust accuracy. Furthermore, we compare various augmentation techniques and observe that spatial composition techniques work best for adversarial training. Finally, we evaluate our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements of +2.93% and +2.16% in robust accuracy compared to previous state-of-the-art methods. In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data. We also achieve a significant performance boost with this approach while using other architectures and datasets such as CIFAR-100, SVHN and TinyImageNet.
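To make the recipe in the abstract concrete, here is a minimal sketch, assuming PyTorch, of one adversarial training step that combines a simple spatial augmentation (random crop plus flip, a stand-in for the composition schemes such as CutMix that the abstract reports work best) with an $\ell_\infty$ PGD attack and an exponential-moving-average copy of the weights. All hyperparameters and helper names are illustrative, not the paper's exact settings.

```python
import copy
import torch
import torch.nn as nn
import torchvision.transforms as T

# Illustrative hyperparameters; the paper sweeps these, so values here are not its exact settings.
EPS, ALPHA, PGD_STEPS = 8 / 255, 2 / 255, 10

# Simple spatial augmentation; a stand-in for the spatial composition
# schemes (e.g. CutMix) the abstract refers to.
augment = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
])

def pgd_attack(model, x, y, eps=EPS, alpha=ALPHA, steps=PGD_STEPS):
    """Craft l_inf norm-bounded adversarial examples with projected gradient descent."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

@torch.no_grad()
def update_ema(ema_model, model, decay=0.995):
    """Model weight averaging: exponential moving average of the parameters
    (batch-norm buffers omitted for brevity)."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)

def train_step(model, ema_model, optimizer, x, y):
    x_adv = pgd_attack(model, augment(x), y)  # attack the augmented batch
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    update_ema(ema_model, model)  # robust accuracy is evaluated on ema_model
    return loss.item()

# The averaged model starts as a copy of the online model:
# ema_model = copy.deepcopy(model)
```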


Citations
Posted Content

Fixing Data Augmentation to Improve Adversarial Robustness

TL;DR: In this paper, both heuristics-driven and data-driven augmentations are used to reduce robust overfitting, a phenomenon in adversarial training where the robust test accuracy starts to decrease during training.
Posted Content

Simple Post-Training Robustness Using Test Time Augmentations and Random Forest

TL;DR: Augmented Random Forest (ARF) as mentioned in this paper generates randomized test-time augmentations by applying diverse color, blur, noise, and geometric transforms, then uses the DNN's logit outputs to train a simple random forest to predict the real class label.
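A rough sketch of the ARF idea as summarized above, assuming scikit-learn. Here `model_logits` and `random_augment` are hypothetical stand-ins for the trained DNN's forward pass and for a sampler over color/blur/noise/geometric transforms; the paper's exact transforms and settings may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def arf_features(x, model_logits, random_augment, n_aug=10):
    """Stack the frozen DNN's logits over several randomized views of x."""
    views = [x] + [random_augment(x) for _ in range(n_aug)]
    return np.concatenate([model_logits(v) for v in views])

def fit_arf(train_images, train_labels, model_logits, random_augment):
    """Train a random forest to predict the true label from stacked logits."""
    feats = np.stack([arf_features(x, model_logits, random_augment)
                      for x in train_images])
    forest = RandomForestClassifier(n_estimators=100)
    forest.fit(feats, train_labels)
    return forest
```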
Trending Questions (1)
Does averaging the results of multiple ANNs improve the robustness of predictions?

No, the paper does not mention averaging the results of multiple ANNs. It uses model weight averaging, in which a single network's parameters are averaged across training, rather than ensembling the predictions of several networks.
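To make the distinction concrete, the sketch below shows prediction averaging over multiple networks (what the question asks about), which differs from the weight averaging used in the paper (see the EMA sketch above, which averages one model's parameters). The `models` list is hypothetical.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Averaging the RESULTS of multiple networks: mean softmax output over
    independently trained models. The paper instead averages one model's
    WEIGHTS over training, as in the update_ema sketch above."""
    return torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
```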