Open Access · Posted Content

StyleUV: Diverse and High-fidelity UV Map Generative Model.

TLDR
A novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training is presented.
Abstract
Reconstructing 3D human faces in the wild with the 3D Morphable Model (3DMM) has become popular in recent years. While most prior work focuses on estimating more robust and accurate geometry, relatively little attention has been paid to improving the quality of the texture model. Meanwhile, with the advent of Generative Adversarial Networks (GANs), there has been great progress in reconstructing realistic 2D images. Recent work demonstrates that GANs trained with abundant high-quality UV maps can produce high-fidelity textures superior to those produced by existing methods. However, such high-quality UV maps are difficult to obtain: they are expensive to capture and require laborious refinement. In this work, we present a novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training. Our proposed framework can be trained solely with in-the-wild images (i.e., UV maps are not required) by leveraging a combination of GANs and a differentiable renderer. Both quantitative and qualitative evaluations demonstrate that our proposed texture model produces more diverse and higher fidelity textures compared to existing methods.
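The abstract describes a self-supervised pipeline: a GAN generator produces UV maps, a differentiable renderer turns them into images, and a discriminator compares those renderings against in-the-wild photos. The following is a minimal numpy sketch of that loop's structure only; the generator, renderer, and discriminator below are toy placeholders (all names and shapes are illustrative assumptions, not the authors' architecture).

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Toy stand-in: map a latent code to a small "UV map" (H x W x 3).
    return np.tanh(z.reshape(4, 4, 3))

def differentiable_renderer(uv_map):
    # Placeholder "render": a real pipeline would rasterize a 3DMM
    # mesh textured with uv_map; here we just collapse to grayscale.
    return uv_map.mean(axis=-1)

def discriminator(img):
    # Toy critic: squash an image statistic to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-img.mean()))

# One conceptual training step: render a generated UV map, then score
# the rendering with an adversarial (non-saturating) generator loss.
z = rng.standard_normal(4 * 4 * 3)
uv = generator(z)
fake_render = differentiable_renderer(uv)
d_fake = discriminator(fake_render)
g_loss = -np.log(d_fake + 1e-8)
```

Because the renderer is differentiable, the adversarial gradient on the rendered image can flow back through it into the UV-map generator, which is what lets the model train without any ground-truth UV maps.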


Citations
Proceedings Article

A morphable model for the synthesis of 3D faces

Volker Blanz, Thomas Vetter
Posted ContentDOI

ClipFace: Text-guided Editing of Textured 3D Morphable Models

TL;DR: ClipFace employs user-friendly language prompts to control both the expression and the appearance of textured 3D morphable face models, and generates high-quality textures for 3D faces by adversarial self-supervised training, guided by differentiable rendering against collections of real RGB images.
Journal ArticleDOI

DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance

TL;DR: DreamFace uses a coarse-to-fine scheme that first generates neutral facial geometry with a unified topology via a selection strategy in the CLIP embedding space, then optimizes both detail displacements and normals using Score Distillation Sampling from a generic latent diffusion model.
Journal ArticleDOI

Towards High-Fidelity Face Self-Occlusion Recovery via Multi-View Residual-Based GAN Inversion

TL;DR: This paper proposes a new generative adversarial network (MvInvert) for natural face self-occlusion recovery that does not require paired image-texture data, and demonstrates that the approach outperforms state-of-the-art methods for face self-occlusion recovery under unconstrained scenarios.
Posted Content

Enhanced 3DMM Attribute Control via Synthetic Dataset Creation Pipeline

TL;DR: A novel pipeline for generating paired 3D faces by harnessing the power of GANs is designed and an enhanced non-linear 3D conditional attribute controller is proposed that increases the precision and diversity of 3D attribute control compared to existing methods.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
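The "adaptive estimates of lower-order moments" in the Adam summary are exponential moving averages of the gradient and its element-wise square, with a bias correction for their zero initialization. A minimal numpy sketch of one update step (standard formulation; the quadratic objective below is just an illustrative test function):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving averages of the first moment (m) and
    second raw moment (v) of the gradient, bias-corrected for the
    zero initialization, then a per-coordinate scaled step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5.
theta, m, v = np.float64(5.0), 0.0, 0.0
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```

Because the step is divided by the root of the second-moment estimate, the effective per-coordinate step size stays roughly bounded by the learning rate regardless of gradient scale.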
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, achieving state-of-the-art face recognition performance using only 128 bytes per face.
Proceedings ArticleDOI

A Style-Based Generator Architecture for Generative Adversarial Networks

TL;DR: This paper proposes an alternative generator architecture for GANs, borrowing from the style transfer literature, which leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images.
Proceedings ArticleDOI

Deep Learning Face Attributes in the Wild

TL;DR: A novel deep learning framework for attribute prediction in the wild that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently.