SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition
Citations
51 citations
Cites background from "SmartBox: Benchmarking Adversarial ..."
...[46] highlighted that much work has been done to counter input-level attacks [2, 6, 47, 50, 51]; however, comparatively little research has focused on adversarial attacks on network parameters....
[...]
37 citations
Cites background from "SmartBox: Benchmarking Adversarial ..."
...While a lot of work has happened in attacking at the input level [1, 3, 7, 9, 10], very limited research has focused on adversarial attacks on network parameters....
[...]
Cites methods from "SmartBox: Benchmarking Adversarial ..."
...[12] implemented adversarial example generation and detection algorithms in a toolbox called SmartBox....
[...]
References
11,866 citations
"SmartBox: Benchmarking Adversarial ..." refers background in this paper
...Deep learning models have achieved state-of-the-art performance in various computer vision tasks such as object detection and face recognition [18, 24]....
[...]
"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper
...Adversarial Training: In adversarial training [33], a new model is trained using the original dataset and adversarial examples with their correct labels....
[...]
...[33] Trains a new model on original and adversarial training images....
[...]
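The adversarial training recipe quoted above ([33]) — train a new model on the original dataset plus adversarial examples carrying their correct labels — can be sketched as follows. The toy logistic-regression model, synthetic data, FGSM crafting step, and all hyperparameters are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy, linearly separable binary-classification data (assumed for illustration)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)          # model parameters
eps, lr = 0.1, 0.5       # perturbation budget and learning rate

for _ in range(200):
    # craft FGSM adversarial copies of the current inputs
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]  # dL/dX
    X_adv = X + eps * np.sign(grad_x)
    # train on original AND adversarial examples with their correct labels
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    grad_w = X_aug.T @ (sigmoid(X_aug @ w) - y_aug) / len(y_aug)
    w -= lr * grad_w
```

The key design point from the excerpt is that the adversarial copies keep their *original* labels, so the model is pushed to classify perturbed inputs the same way as clean ones.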
7,994 citations
"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper
...FGSM [15]: It computes the gradient of the loss function of the model with respect to the image vector to get the direction of pixel change....
[...]
...[15] Computes gradient of the loss function w....
[...]
...While whitebox attacks such as ElasticNet (EAD) [6], DeepFool [28], L2 [5], Fast Gradient Sign Method (FGSM) [15], Projected Gradient Descent (PGD) [26], and MI-FGSM [10] have complete access to and information about the trained network, blackbox attacks such as the one pixel attack [32] and universal perturbations [27] have no information about the trained Deep Neural Network (DNN)....
[...]
...FGSM perturbations can be computed under an L1, L2, or L∞ norm constraint on the perturbation....
[...]
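The FGSM description in the excerpts above — take the gradient of the loss with respect to the image and step in the resulting pixel direction, under a chosen norm constraint — can be sketched as follows. The toy logistic-regression "model", its weights, and the tiny 4-pixel input are assumptions for illustration:

```python
import numpy as np

def fgsm_perturbation(grad, eps, norm="inf"):
    """FGSM-style perturbation from the loss gradient, under a norm budget.

    grad : gradient of the loss w.r.t. the input image (flattened)
    eps  : perturbation budget
    norm : 'inf' (sign of gradient), 'l2' (scaled gradient),
           or 'l1' (budget on the largest-gradient pixel)
    """
    if norm == "inf":
        return eps * np.sign(grad)
    if norm == "l2":
        return eps * grad / (np.linalg.norm(grad) + 1e-12)
    if norm == "l1":
        delta = np.zeros_like(grad)
        i = np.argmax(np.abs(grad))
        delta[i] = eps * np.sign(grad[i])
        return delta
    raise ValueError(norm)

# toy example: logistic-regression "model" on a 4-pixel image (assumed)
w = np.array([0.5, -1.0, 0.25, 2.0])      # fixed model weights
x = np.array([0.2, 0.4, 0.6, 0.8])        # clean input
y = 1.0                                    # true label
p = 1.0 / (1.0 + np.exp(-w @ x))           # predicted probability
grad_x = (p - y) * w                       # d(cross-entropy)/dx

# step in the direction that INCREASES the loss
x_adv = x + fgsm_perturbation(grad_x, eps=0.1, norm="inf")
```

Under the L∞ variant each pixel moves by exactly ±eps in the gradient's sign direction, which is why the attack is a single cheap step rather than an optimization loop.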