Proceedings ArticleDOI

Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection

Yong Zhang, +3 more
pp. 18689–18698
TLDR
This work addresses generalizable deepfake detection from a simple principle: a generalizable representation should be sensitive to diverse types of forgeries. It therefore synthesizes augmented forgeries with a pool of forgery configurations and strengthens the model's “sensitivity” to forgeries by enforcing it to predict the forgery configuration used.
Abstract
Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries come from the same dataset. However, the problem remains challenging when one tries to generalize the detector to forgeries created by methods unseen in the training dataset. This work addresses generalizable deepfake detection from a simple principle: a generalizable representation should be sensitive to diverse types of forgeries. Following this principle, we propose to enrich the “diversity” of forgeries by synthesizing augmented forgeries with a pool of forgery configurations, and to strengthen the “sensitivity” to forgeries by enforcing the model to predict the forgery configurations. To effectively explore the large forgery-augmentation space, we further propose to use an adversarial training strategy to dynamically synthesize the forgeries most challenging to the current model. Through extensive experiments, we show that the proposed strategies are surprisingly effective (see Figure 1) and achieve superior performance to current state-of-the-art methods. Code is available at https://github.com/liangchen527/SLADD.
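To make the abstract's recipe concrete, below is a minimal PyTorch sketch of its two ingredients: auxiliary heads that predict the forgery configuration used to synthesize an augmented forgery, and an adversarial search for the configuration hardest for the current detector. Everything here is an illustrative assumption, not the authors' implementation: the toy synthesizer, the module names, the loss weight `lam`, and the exhaustive search (the paper trains a synthesizing generator instead of enumerating configurations). The official code lives at https://github.com/liangchen527/SLADD.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_REGIONS = 4      # hypothetical pool of blending regions
NUM_BLEND_TYPES = 3  # hypothetical pool of blending types

class Detector(nn.Module):
    """Backbone with a real/fake head plus forgery-configuration heads."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(feat_dim, 2)              # real vs. fake
        self.region_head = nn.Linear(feat_dim, NUM_REGIONS)  # which region?
        self.blend_head = nn.Linear(feat_dim, NUM_BLEND_TYPES)  # which blend?

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), self.region_head(f), self.blend_head(f)

def synthesize(real, fake, region_id, blend_id):
    """Toy stand-in for the forgery synthesizer: alpha-blend a fake quadrant
    into a real face. The real method samples far richer configurations."""
    out = real.clone()
    _, _, H, W = real.shape
    ys, xs = (region_id // 2) * (H // 2), (region_id % 2) * (W // 2)
    alpha = 0.3 + 0.2 * blend_id  # stand-in for a blending-type choice
    out[:, :, ys:ys + H // 2, xs:xs + W // 2] = (
        (1 - alpha) * real[:, :, ys:ys + H // 2, xs:xs + W // 2]
        + alpha * fake[:, :, ys:ys + H // 2, xs:xs + W // 2])
    return out

def training_step(model, real, fake, lam=0.1):
    fake_label = torch.ones(real.size(0), dtype=torch.long)
    # Adversarial configuration search: pick the configuration the current
    # detector finds hardest (exhaustive here, for clarity only).
    with torch.no_grad():
        worst, worst_cfg = -1.0, (0, 0)
        for r in range(NUM_REGIONS):
            for b in range(NUM_BLEND_TYPES):
                logits, _, _ = model(synthesize(real, fake, r, b))
                loss = F.cross_entropy(logits, fake_label).item()
                if loss > worst:
                    worst, worst_cfg = loss, (r, b)
    r, b = worst_cfg
    cls, reg, bld = model(synthesize(real, fake, r, b))
    n = real.size(0)
    # Real/fake loss plus self-supervised configuration-prediction losses.
    return (F.cross_entropy(cls, fake_label)
            + lam * F.cross_entropy(reg, torch.full((n,), r))
            + lam * F.cross_entropy(bld, torch.full((n,), b)))
```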


Citations
Journal ArticleDOI

Visualization and Cybersecurity in the Metaverse: A Survey

TL;DR: In this article, the authors present a survey of existing work and open research directions on the development of countermeasures against cyber-attacks in the Metaverse in relation to visualization technologies.
Journal ArticleDOI

A Survey of Self-Supervised Learning from Multiple Perspectives: Algorithms, Theory, Applications and Future Trends

TL;DR: Self-Supervised Learning (SSL), as surveyed in this paper, is a subset of unsupervised learning that can learn good features from many unlabeled examples without any human-annotated labels.
Proceedings ArticleDOI

Mix and Reason: Reasoning over Semantic Topology with Data Mixing for Domain Generalization

TL;DR: Experiments on multiple DG benchmarks validate the effectiveness and robustness of the proposed MiRe, a new DG framework that learns semantic representations via enforcing the structural invariance of semantic topology.
Journal ArticleDOI

Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption

TL;DR: This work proposes to disrupt real videos through a Pseudo-fake Generator, creating a wide range of pseudo-fake videos for training, and boosts the generalization of deepfake detection by distinguishing the “regularity disruption” that does not appear in real videos.
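As a rough illustration of the pseudo-fake idea in this citation, the sketch below disrupts the spatio-temporal regularity of a real clip by swapping a face-region box between two frames; the function name, box coordinates, and disruption choice are hypothetical stand-ins for the paper's actual generator.

```python
import torch

def pseudo_fake(clip: torch.Tensor, t1: int, t2: int, box=(8, 8, 24, 24)):
    """clip: (T, C, H, W). Swap one spatial box between frames t1 and t2,
    producing a temporal inconsistency that never occurs in real video."""
    y0, x0, y1, x1 = box
    out = clip.clone()
    out[t1, :, y0:y1, x0:x1] = clip[t2, :, y0:y1, x0:x1]
    out[t2, :, y0:y1, x0:x1] = clip[t1, :, y0:y1, x0:x1]
    return out  # labeled "fake" during training
```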
Journal ArticleDOI

AVoiD-DF: Audio-Visual Joint Learning for Detecting Deepfake

TL;DR: Wang et al. propose AVoiD-DF, an Audio-Visual Joint Learning scheme for Detecting Deepfakes, which exploits audio-visual inconsistency for multi-modal forgery detection.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
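For reference, the cited paper's Algorithm 1 reduces to the following minimal NumPy step, using the paper's default hyperparameters:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction, t starts at 1
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```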
Proceedings ArticleDOI

Xception: Deep Learning with Depthwise Separable Convolutions

TL;DR: This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
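The building block this TL;DR refers to, a depthwise separable convolution, can be sketched in a few lines of PyTorch; this is a simplified stand-in, not the full Xception block (which adds batch norm and residual connections):

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise (per-channel spatial) conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch makes each filter see exactly one input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```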
Journal ArticleDOI

Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. The algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates.
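A minimal sketch of the score-function (REINFORCE) update this reference introduces, written for a categorical policy; the function name and the scalar baseline (an optional variance-reduction term) are illustrative:

```python
import torch
import torch.nn.functional as F

def reinforce_loss(logits, action, reward, baseline=0.0):
    """logits: (N, A) policy outputs; action: (N,) sampled actions;
    reward: (N,) returns. Minimizing this loss ascends the gradient
    of expected reinforcement without differentiating through sampling."""
    log_prob = F.log_softmax(logits, dim=-1)
    log_prob = log_prob.gather(1, action.unsqueeze(1)).squeeze(1)
    return -((reward - baseline) * log_prob).mean()
```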
Posted Content

Towards Deep Learning Models Resistant to Adversarial Attacks

TL;DR: This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
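A hedged sketch of the projected gradient descent (PGD) adversary studied in this reference: iterated sign-gradient steps projected back onto an L-infinity ball around the input. The step sizes and the [0, 1] pixel clamp are common choices, not values taken from the paper.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()          # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to the ball
        x_adv = x_adv.clamp(0, 1)                    # stay a valid image
    return x_adv.detach()
```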
Posted Content

Explaining and Harnessing Adversarial Examples

TL;DR: The authors argue that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, which is supported by new quantitative results while giving the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
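The fast gradient sign method (FGSM) this reference proposes follows directly from that linearity argument: a single step in the direction of the sign of the input gradient. A sketch, with `eps` an illustrative perturbation budget:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # One sign-gradient step, clamped back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```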