MaskGAN: Towards Diverse and Interactive Facial Image Manipulation
Cheng-Han Lee, Ziwei Liu, Lingyun Wu, Ping Luo
pp. 5549–5558
TLDR
MaskGAN enables diverse and interactive face manipulation by learning a style mapping between a free-form user-modified mask and a target image, yielding diverse generation results.
Abstract
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. Specifically, it introduces dual-editing consistency as the auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks: attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
Citations
Journal Article
Paint2Pix: Interactive Painting Based Progressive Image Synthesis and Editing
TL;DR: Paint2Pix learns to predict what a user wants to draw from simple brushstroke inputs by learning a mapping from the manifold of incomplete human paintings to their realistic renderings.
Proceedings Article
MaskFuzzer: A MaskGAN-based Industrial Control Protocol Fuzz Testing Framework
TL;DR: A fuzz-testing framework called MaskFuzzer is proposed for industrial control protocol fuzzing; compared with a GAN-based test case generation method and with Peach, it achieves the best results.
Journal Article
Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field
TL;DR: A compositional neural radiance field (CNeRF) is proposed for semantic 3D-aware portrait synthesis and manipulation: the image is divided into semantic regions, an independent neural radiance field is learned for each region, and the regions are then fused and rendered into the complete image.
Journal Article
DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model
TL;DR: A Sketch-Guided Latent Diffusion Model (SGLDM) is proposed to synthesize high-quality face images with different expressions, facial accessories, and hairstyles from sketches at various abstraction levels.
Proceedings Article
SIRA: Relightable Avatars from a Single Image
Pol Caselles, Eduard Ramon, Jaime Garcia Giraldez, Xavier Giro-i-Nieto, Francesc Moreno-Noguer, Gil Triginer
TL;DR: SIRA is introduced, a method which, from a single image, reconstructs human head avatars with high fidelity geometry and factorized lights and surface materials and is amenable to physically-based appearance editing and head model relighting.
References
More filters
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won first place in the ILSVRC 2015 classification task.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book Chapter
U-Net: Convolutional Networks for Biomedical Image Segmentation
TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
Journal Article
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
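The adversarial process described in this TL;DR corresponds to the two-player minimax game from Goodfellow et al., where D is trained to distinguish real samples from generated ones while G is trained to fool D:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $p_{\mathrm{data}}$ is the data distribution and $p_z$ is the prior over the generator's input noise; at the game's optimum the generator distribution matches $p_{\mathrm{data}}$.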