Open Access · Journal Article

Fighting Deepfake by Exposing the Convolutional Traces on Images

Luca Guarnera, +2 more
09 Sep 2020
Vol. 8, pp. 165085-165098
TLDR
A new approach to extracting a Deepfake fingerprint from images is proposed, based on the Expectation-Maximization algorithm trained to detect and extract a fingerprint representing the Convolutional Traces left by GANs during image generation.
Abstract
Advances in Artificial Intelligence and Image Processing are changing the way people interact with digital images and video. Widespread mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations of human face photos, such as gender swap, aging, etc. The results are utterly realistic and extremely easy to exploit, even for non-experienced users. This kind of media object has taken the name of Deepfake and has raised a new challenge in the multimedia forensics field: the Deepfake detection challenge. Indeed, discriminating a Deepfake from a real image can be a difficult task even for the human eye, and recent works that apply the same technology used for generating images to discriminating them report preliminary good results but with many limitations: the employed Convolutional Neural Networks are not robust, prove to be specific to the context, and tend to extract semantics from images. In this paper, a new approach aimed at extracting a Deepfake fingerprint from images is proposed. The method is based on the Expectation-Maximization algorithm, trained to detect and extract a fingerprint that represents the Convolutional Traces (CT) left by GANs during image generation. The CT shows high discriminative power, achieving better results than the state of the art in the Deepfake detection task and proving to be robust to different attacks. With an overall classification accuracy of over 98% on Deepfakes from 10 different GAN architectures, not limited to images of faces, the CT proves to be reliable and independent of image semantics. Finally, tests carried out on Deepfakes generated by FACEAPP, achieving 93% accuracy in the fake detection task, demonstrate the effectiveness of the proposed technique in a real-case scenario.
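To make the abstract's core idea concrete, the following is a minimal Python sketch of an EM-based extraction of a "convolutional trace" from a single grayscale channel, in the spirit of the method described: each pixel is modelled as a linear combination of its neighbours, and the EM-estimated kernel weights serve as the fingerprint. The function name, kernel size, initialization, and convergence settings are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch: EM estimation of local kernel weights ("convolutional trace").
# Hypothetical implementation for illustration only; not the paper's exact pipeline.
import numpy as np

def extract_convolutional_trace(img, k=1, iters=20, tol=1e-6):
    """Estimate weights alpha modelling each pixel as a linear combination
    of its (2k+1)x(2k+1) neighbourhood (centre pixel excluded)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # One column per neighbour offset, one row per interior pixel.
    offsets = [(dy, dx) for dy in range(-k, k + 1) for dx in range(-k, k + 1)
               if not (dy == 0 and dx == 0)]
    core = img[k:h - k, k:w - k].ravel()                       # target pixels
    X = np.stack([img[k + dy:h - k + dy, k + dx:w - k + dx].ravel()
                  for dy, dx in offsets], axis=1)              # neighbour values

    rng = np.random.default_rng(0)
    alpha = rng.normal(scale=1e-3, size=X.shape[1])            # kernel weights
    sigma = core.std() + 1e-8                                   # residual std
    p_uniform = 1.0 / (img.max() - img.min() + 1e-8)           # outlier component

    for _ in range(iters):
        # E-step: posterior probability that each pixel follows the linear model.
        resid = core - X @ alpha
        p_model = np.exp(-resid ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
        wgt = p_model / (p_model + p_uniform)
        # M-step: weighted least squares for alpha, then update sigma.
        WX = X * wgt[:, None]
        new_alpha = np.linalg.solve(X.T @ WX + 1e-8 * np.eye(X.shape[1]), WX.T @ core)
        sigma = np.sqrt(np.sum(wgt * (core - X @ new_alpha) ** 2) / (wgt.sum() + 1e-8))
        if np.linalg.norm(new_alpha - alpha) < tol:
            alpha = new_alpha
            break
        alpha = new_alpha
    return alpha  # feature vector for a downstream real-vs-fake classifier

In a full pipeline along the lines sketched in the abstract, such weight vectors (presumably computed per colour channel and possibly for several kernel sizes) would be fed to a standard classifier to separate real images from GAN-generated ones.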


Citations
Posted Content

Deep Learning for Deepfakes Creation and Detection: A Survey

TL;DR: This study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with the increasingly challenging deepfakes.
Journal Article

Deepfake Detection: A Systematic Literature Review

TL;DR: A systematic literature review (SLR) is conducted, summarizing 112 relevant articles from 2018 to 2020 that presented a variety of methodologies for Deepfake detection, and concluding that deep learning-based methods outperform other methods in Deepfake detection.
Posted Content

Countering Malicious DeepFakes: Survey, Battleground, and Horizon.

TL;DR: This article provides a comprehensive overview and detailed analysis of research on DeepFake generation, DeepFake detection, and evasion of DeepFake detection, carefully surveying more than 191 research papers.
Journal Article

Adversarial Attacks Against Face Recognition: A Comprehensive Study

TL;DR: A comprehensive survey on adversarial attacks against face recognition systems is presented in this paper, where a taxonomy of existing attack and defense methods based on different criteria is proposed, and the challenges and potential research directions are discussed.
References
Proceedings Article

Generative adversarial text to image synthesis

TL;DR: In this article, a deep convolutional generative adversarial network (GAN) is used to generate plausible images of birds and flowers from detailed text descriptions, translating visual concepts from characters to pixels.
Posted Content

Generative Adversarial Text to Image Synthesis

TL;DR: A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.
Posted Content

A Style-Based Generator Architecture for Generative Adversarial Networks

TL;DR: This article proposes an alternative generator architecture for GANs, borrowing from the style transfer literature, which leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images.
Posted Content

Progressive Growing of GANs for Improved Quality, Stability, and Variation

TL;DR: A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Proceedings Article

Face2Face: Real-Time Face Capture and Reenactment of RGB Videos

TL;DR: A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling, and re-renders the manipulated output video in a photo-realistic fashion.