
Hongyu Liu

Researcher at Hunan University

Publications: 19
Citations: 673

Hongyu Liu is an academic researcher from Hunan University. The author has contributed to research on topics including inpainting and computer science, has an h-index of 4, and has co-authored 11 publications receiving 187 citations.

Papers
Proceedings Article

Coherent Semantic Attention for Image Inpainting

TL;DR: This work investigates human behavior in repairing pictures and proposes a refined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which not only preserves contextual structure but also makes more effective predictions of missing parts by modeling the semantic relevance between the hole features.
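
As a rough PyTorch sketch of the general idea behind attention over hole features (not the authors' exact CSA layer, which additionally models coherence among the generated hole patches themselves), the function below replaces each hole feature with a similarity-weighted combination of the known features; all names and shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def attention_fill(features, hole_mask):
    # features: (B, C, H, W) feature map; hole_mask: (B, 1, H, W), 1 inside holes.
    # Assumes at least one known (non-hole) position per image.
    B, C, H, W = features.shape
    flat = features.view(B, C, H * W)                    # (B, C, N)
    keys_are_holes = hole_mask.view(B, 1, H * W).bool()  # (B, 1, N)

    # Cosine similarity between every pair of spatial feature vectors.
    normed = F.normalize(flat, dim=1)
    sim = torch.bmm(normed.transpose(1, 2), normed)      # (B, N, N)

    # Each position attends only to known (non-hole) positions.
    sim = sim.masked_fill(keys_are_holes, float('-inf'))
    attn = torch.softmax(sim, dim=-1)                    # (B, N, N)

    # Replace hole features with similarity-weighted sums of known features.
    filled = torch.bmm(flat, attn.transpose(1, 2)).view(B, C, H, W)
    return features * (1.0 - hole_mask) + filled * hole_mask
```
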
Book Chapter

Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations

TL;DR: Liu et al. propose a mutual encoder-decoder CNN for joint recovery of both structures and textures, addressing the limitation that the CNN features of each encoder in prior designs are learned to capture either missing structures or textures without considering them as a whole.
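
A minimal sketch of treating structure and texture features as a whole, assuming a shallow (texture) and a deep (structure) encoder feature map; the 1x1 fusion and the simple channel gate standing in for "feature equalization" are illustrative assumptions, not the paper's exact operation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, tex_ch, struct_ch, out_ch):
        super().__init__()
        # 1x1 conv fuses the concatenated texture and structure branches.
        self.fuse = nn.Conv2d(tex_ch + struct_ch, out_ch, kernel_size=1)
        # Channel gate re-weights the fused features so neither branch dominates.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, texture_feat, structure_feat):
        # Bring the deep (structure) map up to the shallow (texture) resolution.
        structure_up = F.interpolate(
            structure_feat, size=texture_feat.shape[-2:],
            mode='bilinear', align_corners=False)
        fused = self.fuse(torch.cat([texture_feat, structure_up], dim=1))
        return fused * self.gate(fused)
```
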
Proceedings Article

PD-GAN: Probabilistic Diverse GAN for Image Inpainting

TL;DR: PD-GAN modulates deep features of input random noise from coarse to fine by injecting an initially restored image and the hole regions at multiple scales, generating multiple inpainting results with diverse and visually realistic content.
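
A minimal SPADE-style sketch of this modulation idea, assuming the guidance is the RGB restored image concatenated with a one-channel hole mask; the layer sizes and normalization choice are assumptions, not PD-GAN's exact modulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedModulation(nn.Module):
    def __init__(self, feat_ch, guide_ch=4, hidden=64):
        super().__init__()
        # Predict per-pixel scale (gamma) and shift (beta) from the guidance.
        self.shared = nn.Sequential(
            nn.Conv2d(guide_ch, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)

    def forward(self, noise_feat, restored_img, hole_mask):
        # Resize guidance to the current scale of the noise features.
        guide = torch.cat([restored_img, hole_mask], dim=1)
        guide = F.interpolate(guide, size=noise_feat.shape[-2:],
                              mode='bilinear', align_corners=False)
        h = self.shared(guide)
        return self.norm(noise_feat) * (1 + self.to_gamma(h)) + self.to_beta(h)
```

Applying one such block per scale of the generator gives the coarse-to-fine injection the summary describes.
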
Posted Content

Coherent Semantic Attention for Image Inpainting

TL;DR: Liu et al. proposed a coherent semantic attention (CSA) layer to model the semantic relevance between the hole features, along with a consistency loss that enforces both the CSA layer and the corresponding CSA layer in the decoder to be close to the VGG feature layer of the ground-truth image.
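
A minimal sketch of such a consistency loss, assuming both feature maps already match the shape of a fixed VGG-16 relu4_3 target; the layer choice and the plain L2 distance are assumptions for illustration:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen VGG-16 trunk up to relu4_3 (index 22) used as the feature target.
vgg_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:23].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def consistency_loss(csa_feat, decoder_feat, ground_truth):
    # Pull both intermediate feature maps toward the fixed VGG features
    # of the ground-truth image; assumes matching spatial/channel shapes.
    with torch.no_grad():
        target = vgg_features(ground_truth)
    return F.mse_loss(csa_feat, target) + F.mse_loss(decoder_feat, target)
```
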
Proceedings Article

DeFLOCNet: Deep Image Editing via Flexible Low-level Controls

TL;DR: DeFLOCNet uses a deep encoder-decoder CNN to retain the guidance of low-level controls in the deep feature representations, and then concatenates the modulated features with the original decoder features for structure generation.
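
A minimal sketch of that concatenation step, with an assumed single-conv fusion; channel sizes and layout are illustrative, not the paper's exact decoder design:

```python
import torch
import torch.nn as nn

class GuidedDecoderBlock(nn.Module):
    def __init__(self, dec_ch, guide_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(dec_ch + guide_ch, out_ch, 3, padding=1),
            nn.ReLU(),
        )

    def forward(self, decoder_feat, modulated_guidance):
        # Concatenate the control-modulated features with the original
        # decoder features, then fuse them for structure generation.
        return self.fuse(torch.cat([decoder_feat, modulated_guidance], dim=1))
```
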