Journal ArticleDOI

Recurrent Generative Adversarial Network for Face Completion

TLDR
Experimental results on benchmark datasets demonstrate qualitatively and quantitatively that the proposed RGAN model performs better than the state-of-the-art face completion models, and simultaneously generates realistic image content and high-frequency details.
Abstract
Most recently-proposed face completion algorithms use high-level features extracted from convolutional neural networks (CNNs) to recover semantic texture content. Although the completed face is natural-looking, the synthesized content still lacks many high-frequency details, since the high-level features cannot supply sufficient spatial information for detail recovery. To tackle this limitation, in this paper, we propose a Recurrent Generative Adversarial Network (RGAN) for face completion. Unlike previous algorithms, RGAN can take full advantage of multi-level features, and further provide advanced representations from multiple perspectives, which can well restore spatial information and details in face completion. Specifically, our RGAN model is composed of a CompletionNet and a DiscriminationNet, where the CompletionNet consists of two deep CNNs and a recurrent neural network (RNN). The first deep CNN is presented to learn the internal regularities of a masked image and represent it with multi-level features. The RNN model then exploits the relationships among the multi-level features and transfers these features into another domain, which can be used to complete the face image. Benefiting from bidirectional short links, another CNN is used to fuse multi-level features transferred from the RNN and reconstruct the face image at different scales. Meanwhile, two context discrimination networks in the DiscriminationNet are adopted to ensure that the completed image is consistent globally and locally. Experimental results on benchmark datasets demonstrate qualitatively and quantitatively that our model performs better than the state-of-the-art face completion models, and simultaneously generates realistic image content and high-frequency details. The code will be released soon.
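The CompletionNet data flow described in the abstract (an encoder CNN producing multi-level features, a recurrence that propagates information across those levels, and a decoder with skip links back to full resolution) can be sketched at the tensor-shape level. The following is a minimal NumPy illustration of that flow, not the authors' implementation: the map sizes, the single-channel features, and the tanh recurrence are all assumptions made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(x):
    # 2x2 average pooling on an (H, W) feature map
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbour 2x upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

def complete(image, mask):
    """Toy CompletionNet flow: encode multi-level features from the masked
    image, pass a hidden state through them coarse-to-fine (the recurrence),
    then decode with skip links back to the input resolution."""
    masked = image * mask                 # zero out the missing region
    # "encoder": features at three scales
    f1 = masked
    f2 = downsample(f1)
    f3 = downsample(f2)
    # "recurrence": one hidden state propagated across the levels
    h = np.tanh(f3)
    h = np.tanh(upsample(h) + f2)
    h = np.tanh(upsample(h) + f1)
    # "decoder" with a skip link: keep known pixels, fill only the hole
    return image * mask + h * (1 - mask)

image = rng.random((8, 8))
mask = np.ones((8, 8))
mask[2:5, 2:5] = 0.0                      # square hole to complete
out = complete(image, mask)
```

The last line of `complete` mirrors the usual inpainting convention: pixels outside the mask are copied from the input, so the network only has to synthesize the hole.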


Citations
Proceedings ArticleDOI

What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation

TL;DR: Zhang et al. developed a new unsupervised semantic transfer model including two complementary modules (i.e., T_D and T_F) for endoscopic lesions segmentation, which can alternatively determine where and how to explore transferable domain-invariant knowledge between a labeled source lesions dataset and an unlabeled target diseases dataset.
Posted Content

What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation

TL;DR: A new unsupervised semantic transfer model is developed for endoscopic lesions segmentation, including two complementary modules that can alternatively determine where and how to explore transferable domain-invariant knowledge between a labeled source lesions dataset and an unlabeled target diseases dataset.
Journal ArticleDOI

Generative Adversarial Networks for Face Generation: A Survey

TL;DR: This survey reviews facial GANs and the progress of their architectures, discusses the contributions and limits of each, and exposes the encountered problems together with proposed solutions to handle them.
Book ChapterDOI

CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain Adaptation

TL;DR: In this article, a new Critical Semantic-Consistent Learning (CSCL) model is proposed to mitigate the discrepancy of both domain-wise and category-wise distributions. However, the model is not designed to handle untransferable knowledge.
Journal ArticleDOI

Missing Data Imputation on IoT Sensor Networks: Implications for on-Site Sensor Calibration

TL;DR: Experimental results showed that a simple model trained on the imputed dataset can achieve state-of-the-art results on in-situ sensor calibration, improving the data quality of the sensor.
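The TL;DR above does not specify which imputation or calibration model was used, so the following is only a generic sketch of the idea on synthetic data: gaps in a biased low-cost sensor series are filled with the series mean, and a least-squares fit against a reference signal then performs the on-site calibration. All signals, noise levels, and the missingness rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic reference signal and a biased, noisy low-cost sensor with gaps
true_signal = np.sin(np.linspace(0, 6, 200)) * 10 + 25
sensor = 1.3 * true_signal + 2.0 + rng.normal(0, 0.5, 200)
sensor[rng.random(200) < 0.2] = np.nan        # ~20% missing readings

# simple imputation: fill gaps with the series mean
filled = np.where(np.isnan(sensor), np.nanmean(sensor), sensor)

# on-site calibration: least-squares fit mapping sensor -> reference
A = np.vstack([filled, np.ones_like(filled)]).T
gain, offset = np.linalg.lstsq(A, true_signal, rcond=None)[0]
calibrated = gain * filled + offset
```

Even this crude mean imputation keeps the calibration fit usable; the paper's point is that better imputation on IoT sensor networks further improves such downstream calibration.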
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
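The constant-error-carousel mechanism mentioned above can be sketched as a single NumPy LSTM step: the cell state is updated additively, gated by forget and input gates, so when the forget gate saturates near 1 the state (and the gradient through it) is carried unchanged across many steps. Layer sizes and the weight initialization below are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step. The additive update of the cell state c is the
    'constant error carousel': with forget gate ~1 and input gate ~0,
    c is carried unchanged across time steps."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)      # input gate
    f = sigmoid(W["f"] @ z)      # forget gate
    o = sigmoid(W["o"] @ z)      # output gate
    g = np.tanh(W["g"] @ z)      # candidate cell input
    c = f * c + i * g            # additive carousel update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "ifog"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(1000):                      # iterate across a long time lag
    h, c = lstm_step(rng.normal(size=n_in), h, c, W)
```

Because every nonlinearity is bounded and the cell update is gated, the state stays numerically stable even over 1000 steps, unlike a plain RNN whose repeated matrix products tend to explode or vanish.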
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
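The adversarial value function this framework optimizes, V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], can be evaluated directly on a toy 1-D example. The logistic discriminator, the shift-only generator, and the Gaussian data and noise distributions below are all assumptions made to keep the sketch self-contained; no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x, w, b):
    """Logistic discriminator: probability that x came from the data."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def G(z, theta):
    """Trivial generator: shifts the noise z by a learnable offset."""
    return z + theta

# toy 1-D data distribution N(2, 1) and noise prior N(0, 1)
x_real = rng.normal(2.0, 1.0, size=10_000)
z = rng.normal(0.0, 1.0, size=10_000)

def value(w, b, theta):
    """V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]"""
    return (np.mean(np.log(D(x_real, w, b)))
            + np.mean(np.log(1.0 - D(G(z, theta), w, b))))

# a fixed discriminator thresholding near x = 1 tells real data apart
# from an untrained generator far more easily than from a generator
# that has already matched the data mean
v_untrained = value(w=2.0, b=-2.0, theta=0.0)  # G outputs ~N(0, 1)
v_matched = value(w=2.0, b=-2.0, theta=2.0)    # G outputs ~N(2, 1)
```

The drop from `v_untrained` to `v_matched` is the generator's side of the minimax game: G improves by moving its samples where D can no longer separate them from the data.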
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
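The "arbitrary size in, correspondingly-sized output out" property follows from using only convolutions (no fixed-size fully connected layers). A minimal 1-D NumPy illustration of that property, with an arbitrary smoothing kernel chosen just for the sketch:

```python
import numpy as np

# a 3-tap kernel acting as a tiny 1-D "fully convolutional" layer
kernel = np.array([0.25, 0.5, 0.25])

def fcn_layer(x):
    """Convolution + ReLU: with no fully connected layer, any input
    length yields an output of the same length."""
    return np.maximum(np.convolve(x, kernel, mode="same"), 0.0)

# the same layer handles inputs of several different sizes
outputs = {n: fcn_layer(np.linspace(-1.0, 1.0, n)) for n in (5, 12, 33)}
```

A classification CNN with a dense head would require a fixed input size; converting that head to 1x1 convolutions is what lets an FCN emit dense, input-sized prediction maps.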
Posted Content

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.