Open Access · Proceedings ArticleDOI

MaskGAN: Towards Diverse and Interactive Facial Image Manipulation

TLDR
This paper proposes MaskGAN, a framework that enables diverse and interactive face manipulation by learning style mapping between a free-form user-modified mask and a target image.
Abstract
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or give users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user-modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs; to this end, it introduces dual-editing consistency as an auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks, attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
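To make the dual-editing consistency idea concrete, here is a minimal PyTorch sketch. It assumes a hypothetical `dmn` generator mapping a (mask, style image) pair to an output image, and reads the consistency term as penalizing disagreement between generations from two simulated edits of the same source mask; the authors' actual formulation and training code are in the linked repository.

```python
# Illustrative sketch only: `dmn`, `edit_a`, and `edit_b` are hypothetical
# stand-ins, not the authors' implementation.
import torch.nn.functional as F

def dual_editing_consistency_loss(dmn, target_image, source_mask, edit_a, edit_b):
    """dmn(mask, style_image) -> generated image.

    edit_a / edit_b simulate two user edits of the same source mask
    (the paper simulates editing behavior with a mask VAE); their
    generations are encouraged to agree as auxiliary supervision.
    """
    out_a = dmn(edit_a(source_mask), target_image)
    out_b = dmn(edit_b(source_mask), target_image)
    return F.l1_loss(out_a, out_b)  # added on top of the usual GAN losses
```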


Citations
Posted Content

Explainability Requires Interactivity.

TL;DR: In this article, the authors introduce an interactive framework to understand the decision boundaries of modern vision models, allowing the user to exhaustively inspect, probe, and test a network's decisions.
Proceedings ArticleDOI

SupRes: Facial Image Upscaling Using Sparse Denoising Autoencoder

TL;DR: In this article, Sparse Denoising Autoencoders (SDAEs) are proposed for upscaling blurry images and compared with the Pix2Pix Generative Adversarial Network (GAN), focusing primarily on facial images.
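As a rough illustration of the underlying idea (not the SupRes architecture itself), a sparse denoising autoencoder reconstructs a clean image from a corrupted input while an L1 penalty keeps the hidden code sparse; the layer sizes below are assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDenoisingAE(nn.Module):
    def __init__(self, dim=64 * 64, hidden=1024):  # illustrative sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def sdae_loss(model, x_clean, noise_std=0.1, sparsity_weight=1e-3):
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)  # corrupt input
    recon, code = model(x_noisy)
    # Reconstruct the *clean* target; L1 on activations enforces sparsity.
    return F.mse_loss(recon, x_clean) + sparsity_weight * code.abs().mean()
```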
Journal ArticleDOI

Facial Landmark, Head Pose, and Occlusion Analysis Using Multitask Stacked Hourglass

TL;DR: In this paper, a 2-stacked hourglass network with three task-specific heads is proposed to predict facial landmarks, head pose, and occlusion from a face image; the network achieves competitive performance across all datasets and notably outperforms state-of-the-art methods on the AFLW2000 and Masked 300W datasets.
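A minimal sketch of the multitask pattern described here: one shared backbone feeding three task-specific heads. Shapes and head designs below are assumptions for illustration; the paper's backbone is a 2-stacked hourglass.

```python
# Hypothetical head designs; `backbone` is any module returning a feature map.
import torch.nn as nn

class MultitaskFaceNet(nn.Module):
    def __init__(self, backbone, feat_ch=256, n_landmarks=68):
        super().__init__()
        self.backbone = backbone  # e.g. a stacked hourglass
        self.landmark_head = nn.Conv2d(feat_ch, n_landmarks, 1)   # per-landmark heatmaps
        self.pose_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(feat_ch, 3))     # yaw, pitch, roll
        self.occlusion_head = nn.Conv2d(feat_ch, n_landmarks, 1)  # per-landmark occlusion logits

    def forward(self, x):
        feats = self.backbone(x)
        return self.landmark_head(feats), self.pose_head(feats), self.occlusion_head(feats)
```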
Journal ArticleDOI

Interactive Neural Painting

TL;DR: I-Paint, as presented in this paper, is based on a conditional transformer VAE architecture with a two-stage decoder; it assists the user's creativity by suggesting the next strokes to paint, which can be used to complete the artwork.
Posted Content

ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis

TL;DR: ForgeryNet, as presented in this paper, is a large dataset of 2.9 million images and 221,247 videos, with manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations), and annotations (6.3 million classification labels, 2.5 million manipulated-area annotations, and 221,247 temporal forgery segment labels).
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting networks won first place on the ILSVRC 2015 classification task.
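The residual idea is compact enough to show directly: instead of learning a target mapping H(x), the stacked layers learn a residual F(x) and the block outputs F(x) + x through an identity shortcut. A minimal PyTorch sketch, with illustrative channel counts:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut: the layers only have to fit the residual,
        # which eases optimization of very deep stacks.
        return self.relu(self.body(x) + x)
```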
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
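A short sketch of the design principle, with assumed channel counts: stacking very small 3x3 convolutions, since two stacked 3x3 layers cover a 5x5 receptive field with fewer parameters and an extra non-linearity than a single 5x5 layer.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """A stack of 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))  # halve spatial resolution between blocks
    return nn.Sequential(*layers)

# Depth grows by repeating such blocks with doubled channel counts,
# e.g. vgg_block(3, 64, 2) followed by vgg_block(64, 128, 2), and so on.
```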
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al., as discussed in this paper, proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
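A two-level toy version sketches the key mechanism: a contracting path, an expanding path, and skip connections that concatenate encoder features into the decoder. The actual network is deeper; sizes here are illustrative.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)   # 128 = 64 upsampled + 64 skipped
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                  # encoder features kept for the skip
        bottom = self.enc2(self.pool(s1))  # contracting path
        up = self.up(bottom)               # expanding path
        return self.head(self.dec1(torch.cat([up, s1], dim=1)))
```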
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
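The two-player game can be sketched in a few lines: D is trained to score real data as 1 and G's samples as 0, while G is trained to make D score its samples as 1. Models, optimizers, and the latent dimension below are hypothetical placeholders; D is assumed to end in a sigmoid.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, z_dim=100):
    z = torch.randn(real.size(0), z_dim)
    fake = G(z)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator step: real -> 1, fake -> 0 (detach so G is not updated).
    d_loss = F.binary_cross_entropy(D(real), ones) + \
             F.binary_cross_entropy(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make D label fakes as real.
    g_loss = F.binary_cross_entropy(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```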