Fast Geometrically-Perturbed Adversarial Faces
Citations
85 citations
Cites background from "Fast Geometrically-Perturbed Advers..."
...To date, adversarial attacks against face models [21], [24], [58]–[60] have mainly focused on the white-box setting....
[...]
...[60] demonstrate that deep face models are vulnerable to geometrically-perturbed adversarial examples generated by a fast algorithm, which directly manipulates landmarks of the face images....
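The snippet above describes an attack that perturbs face-landmark coordinates rather than pixel intensities. A minimal, hypothetical sketch of that idea is below; the real fast algorithm chooses the displacement direction from the gradient of the classifier loss (and then warps the image accordingly), whereas this toy version only shows a bounded random displacement of the landmarks themselves:

```python
import numpy as np

def perturb_landmarks(landmarks, epsilon=2.0, rng=None):
    """Shift each (x, y) landmark by at most `epsilon` pixels.

    Hypothetical sketch only: the actual attack picks displacements by
    gradient ascent on the victim model's loss, not at random.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    delta = rng.uniform(-epsilon, epsilon, size=landmarks.shape)
    return landmarks + delta

# Three toy landmarks: left eye, right eye, nose tip.
landmarks = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 65.0]])
adv = perturb_landmarks(landmarks)
assert np.all(np.abs(adv - landmarks) <= 2.0)  # geometric budget respected
```

Because the perturbation lives in landmark space, the resulting image warp is smooth rather than noisy, which is what the snippets below contrast with pixel-space attacks such as FGSM and PGD.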
[...]
71 citations
Cites background or methods from "Fast Geometrically-Perturbed Advers..."
...We compare our adversarial face synthesis method with state-of-the-art methods that have specifically been implemented or proposed for faces, including GFLM [5], PGD [23], FGSM [13], and A(3)GN [35]....
[...]
...Obfuscation Attack: AdvFaces | GFLM [5] | PGD [23] | FGSM [13]...
[...]
...GFLM [5], on the other hand, geometrically warps the face images and thereby, results in low structural similarity....
[...]
...(a) Probe, (b) AdvFaces, (c) GFLM [5]...
[...]
...For 500 real face images (probes), we generate 500 corresponding adversarial examples via AdvFaces, GFLM [5], A(3)GN [35], PGD [23], and FGSM [13]....
[...]
47 citations
43 citations
42 citations
Cites background from "Fast Geometrically-Perturbed Advers..."
...re our performance in this way in Table 7. If the cosine distance between the original image and the generated image is lower than 0.45, it ...

Table 7. Comparison with other attack models in face recognition. 'SR' means the success rate of fooling the network to a false label. 'Attack acc. on CASIA' means the accura...

model       SR (%)   Attack acc. on CASIA (%)
stAdv [49]  99.18    -
GFLM [7]    99.96    -
A3GN        99.94    98.23
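The success criterion quoted above compares face embeddings by cosine distance against a 0.45 threshold (the snippet truncates before stating which side of the threshold counts as a match, so only the distance computation itself is sketched here, with made-up toy embeddings):

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two face-embedding vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings; 0.45 is the threshold value quoted in the snippet.
emb_orig = np.array([1.0, 0.0, 0.0])
emb_adv = np.array([0.5, 0.8, 0.0])
matched = cosine_distance(emb_orig, emb_adv) < 0.45
```

Here `cosine_distance(emb_orig, emb_adv)` is about 0.47, so under the usual face-verification convention (distance below threshold = same identity) these two embeddings would no longer match.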
[...]
References
73,978 citations
28,225 citations
9,561 citations
"Fast Geometrically-Perturbed Advers..." refers background or methods in this paper
...[30] showed that a small perturbation in the input domain can fool a trained classifier into making a wrong prediction confidently....
[...]
...However, the noisy structure of the perturbation makes these attacks vulnerable to conventional defense methods such as quantizing [18], smoothing [6], or training on adversarial examples [30]....
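The quantization defense mentioned in the snippet exploits exactly that noisy structure: reducing the input's bit depth rounds small additive perturbations back into the clean pixel's bin. A minimal sketch (hypothetical function name; pixel values assumed to be floats in [0, 1]):

```python
import numpy as np

def bit_depth_squeeze(image, bits=3):
    """Quantize pixels to 2**bits levels so small additive noise is rounded away."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

clean = np.array([0.30, 0.60, 0.90])
noise = np.array([0.02, -0.03, 0.01])  # small pixel-space perturbation

# The perturbed pixels quantize back to the same values as the clean ones.
assert np.allclose(bit_depth_squeeze(clean + noise), bit_depth_squeeze(clean))
```

Geometric attacks sidestep this kind of defense because a smooth warp changes pixel values by far more than one quantization bin while leaving the image perceptually plausible.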
[...]
...[30] used a box-constrained L-BFGS [20] to generate some of the very first adversarial examples....
[...]
...Despite the excellent performance, it has been shown [30, 7] that DNNs are vulnerable to a small perturbation in the input domain which can result in a drastic change of predictions in the output domain....
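The "small perturbation, drastic output change" phenomenon the snippet describes can be illustrated with a sign-gradient (FGSM-style) step on a toy logistic model; the weights and inputs below are made up for illustration, and a real attack would target a deep face model instead:

```python
import numpy as np

# Toy logistic "classifier" with hypothetical fixed weights.
w = np.array([1.5, -2.0, 0.5])

def predict(x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def loss_grad_x(x, y):
    # Gradient of binary cross-entropy loss with respect to the input x.
    return (predict(x) - y) * w

x = np.array([0.9, 0.1, 0.8])  # confidently classified as class 1
y = 1.0
eps = 0.5

# One sign-gradient step, clipped back to the valid pixel range [0, 1].
x_adv = np.clip(x + eps * np.sign(loss_grad_x(x, y)), 0.0, 1.0)
```

Here `predict(x)` is above 0.5 while `predict(x_adv)` falls below it, so a bounded input change flips the predicted class, which is the vulnerability the snippet refers to.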
[...]
9,132 citations
8,289 citations
"Fast Geometrically-Perturbed Advers..." refers background in this paper
...[28] that obtained the state-of-the-art results on the Labeled Faces in the Wild (LFW) [11] challenge as the victim model....
[...]