Zili Yi
Researcher at Huawei
Publications - 34
Citations - 2285
Zili Yi is an academic researcher at Huawei. The author has contributed to research in the topics of computer science and biology, has an h-index of 6, and has co-authored 20 publications receiving 1434 citations. Previous affiliations of Zili Yi include Memorial University of Newfoundland.
Papers
Proceedings ArticleDOI
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
TL;DR: In DualGAN, the primal GAN learns to translate images from domain U to domain V, while the dual GAN learns to invert the task; this dual-learning setup enables image translators to be trained from two sets of unlabeled images from the two domains.
Posted Content
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
TL;DR: A novel dual-GAN mechanism is developed that enables image translators to be trained from two sets of unlabeled images from two domains, and can achieve comparable or even slightly better results than a conditional GAN trained on fully labeled data.
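The dual-learning signal described above can be sketched in a few lines. This is a toy illustration, not the paper's full adversarial training: the two "translators" are hypothetical linear maps standing in for the generator networks, and only the unsupervised reconstruction objective is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))   # stands in for G_UV, translating U -> V
A_inv = np.linalg.inv(A)      # an ideal dual translator V -> U

def g_uv(u):
    """Toy 'translator' from domain U to domain V (a linear map)."""
    return u @ A

def g_vu(v):
    """Toy dual 'translator' from domain V back to domain U."""
    return v @ A_inv

def reconstruction_loss(u):
    # Dual-learning objective: translating U -> V -> U should recover the
    # input, so both translators can be trained without paired labels.
    return float(np.mean((g_vu(g_uv(u)) - u) ** 2))

u = rng.normal(size=(8, 4))   # a batch of unlabeled samples from domain U
loss = reconstruction_loss(u)
```

When the two translators invert each other, as here, the reconstruction loss is near zero; training pushes learned translators toward that state.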
Proceedings ArticleDOI
Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting
TL;DR: A Contextual Residual Aggregation mechanism that produces high-frequency residuals for missing contents by weighted aggregation of residuals from contextual patches, thus requiring only a low-resolution prediction from the network.
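The aggregation step summarized above can be sketched as an attention-style weighted sum. This is a minimal toy version with hypothetical names and 1-D "patches", not the paper's implementation: similarity weights between the hole and each known context patch are normalized with a softmax and used to mix the contexts' high-frequency residuals.

```python
import numpy as np

def aggregate_residual(hole_feat, context_feats, context_residuals):
    """Assemble a high-frequency residual for a hole from context patches."""
    # Attention-style scores: similarity between the hole's features and
    # each context patch's features.
    scores = context_feats @ hole_feat
    # Softmax normalization (shifted by the max for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted aggregation of the contexts' high-frequency residuals.
    return weights @ context_residuals

rng = np.random.default_rng(1)
context_feats = rng.normal(size=(5, 8))       # 5 known patches, 8-dim features
context_residuals = rng.normal(size=(5, 16))  # their high-frequency residuals
hole_feat = context_feats[2]                  # a hole most similar to patch 2

res = aggregate_residual(hole_feat, context_feats, context_residuals)
```

Because only this residual needs to be combined with an upsampled low-resolution prediction, the network itself never has to operate at the full output resolution.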
Journal ArticleDOI
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
TL;DR: Qualitative and quantitative evaluations verify that the self-supervised CLIP-GEN scheme significantly outperforms optimization-based text-to-image methods in image quality without compromising text-image matching.
Proceedings ArticleDOI
Region-Aware Face Swapping
TL;DR: A novel Region-Aware Face Swapping (RAFSwap) network that achieves identity-consistent, harmonious, high-resolution face generation in a local-global manner, and a Face Mask Predictor (FMP) module incorporated with StyleGAN2 that predicts identity-relevant soft facial masks in an unsupervised manner, making the approach more practical for generating harmonious high-resolution faces.