
Qian He

Publications -  9
Citations -  50

Qian He is an academic researcher. The author has contributed to research in the topics of Computer science & Engineering. The author has an h-index of 3, having co-authored 9 publications that have received 50 citations.

Papers
Journal ArticleDOI

CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP

Zihao Wang, +3 more
01 Mar 2022
TL;DR: Qualitative and quantitative evaluations verify that the self-supervised CLIP-GEN scheme significantly outperforms optimization-based text-to-image methods in terms of image quality without compromising text-image matching.
Proceedings ArticleDOI

Region-Aware Face Swapping

TL;DR: A novel Region-Aware Face Swapping (RAFSwap) network is proposed to achieve identity-consistent, harmonious, high-resolution face generation in a local-global manner. It includes a Face Mask Predictor (FMP) module, incorporated with StyleGAN2, that predicts identity-relevant soft facial masks in an unsupervised manner, making it more practical for generating harmonious high-resolution faces.
Proceedings ArticleDOI

XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font Generation

Fei Ding, +2 more
TL;DR: A self-supervised cross-modality pre-training strategy and a cross-modality transformer-based encoder, conditioned jointly on the glyph image and the corresponding stroke labels, are proposed; this facilitates content-style disentanglement and the modeling of style representations at all scales.
Journal ArticleDOI

Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field

TL;DR: Wang et al. propose a compositional neural radiance field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation, which divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Journal ArticleDOI

Design Booster: A Text-Guided Diffusion Model for Image Translation with Spatial Layout Preservation

TL;DR: Zhang et al. propose a new approach for flexible image translation that learns a layout-aware image condition together with a text condition, co-encoding images and text into a new domain during the training phase.