
Ning Wang

Researcher at Wuhan University

Publications: 8
Citations: 558

Ning Wang is an academic researcher from Wuhan University. The author has contributed to research in topics: Inpainting & Feature (computer vision). The author has an h-index of 4 and has co-authored 5 publications receiving 150 citations.

Papers
Proceedings Article

Recurrent Feature Reasoning for Image Inpainting

TL;DR: A Recurrent Feature Reasoning (RFR) network is proposed, constructed mainly from a plug-and-play RFR module and a knowledge-consistent attention module.
Posted Content

Recurrent Feature Reasoning for Image Inpainting

TL;DR: A Recurrent Feature Reasoning (RFR) network constructed mainly from a plug-and-play RFR module and a Knowledge Consistent Attention (KCA) module; it recurrently infers the hole boundaries of the convolutional feature maps and uses them as clues for further inference.
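The sketch below illustrates the recurrent hole-boundary idea the two RFR summaries describe. It is a minimal, hypothetical PyTorch sketch, not the authors' released code: the BoundaryReasoner name, the max-pooling mask update, and the layer sizes are all illustrative assumptions, and the knowledge consistent attention module is omitted.

```python
# Illustrative sketch (not the authors' code) of the recurrent idea:
# repeatedly infer only the feature pixels on the current hole boundary,
# then shrink the hole and repeat.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryReasoner(nn.Module):
    """One reasoning step: predict features for the ring of hole pixels
    that touch the known region, given the masked current features."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, mask):
        # mask: 1 = known, 0 = hole. Dilating the mask by one conv radius
        # marks the ring of hole pixels that can be inferred this step.
        new_mask = F.max_pool2d(mask, 3, stride=1, padding=1)
        inferred = self.conv(feat * mask)
        ring = new_mask - mask                   # newly reachable pixels
        feat = feat * mask + inferred * ring     # fill only the ring
        return feat, new_mask

def recurrent_feature_reasoning(feat, mask, step, max_iters=8):
    for _ in range(max_iters):
        if mask.min() >= 1:                      # hole fully filled
            break
        feat, mask = step(feat, mask)
    return feat, mask

feat = torch.randn(1, 32, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[..., 20:44, 20:44] = 0                      # rectangular hole
feat, mask = recurrent_feature_reasoning(feat, mask, BoundaryReasoner(32))
```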
Journal Article

Multistage attention network for image inpainting

TL;DR: A novel image inpainting method for large-scale irregular masks is proposed, with a special multistage attention module that accounts for both structural consistency and detail fineness; a partial convolution strategy is adopted to avoid the misuse of invalid data during convolution.
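The partial convolution strategy mentioned above follows the general recipe of Liu et al.'s partial convolutions: hole pixels are excluded from each window, the output is re-normalized by the count of valid pixels, and the mask is updated. A hedged PyTorch sketch of that recipe (variable names and layer sizes are mine, not the paper's):

```python
# Partial convolution sketch: convolve only over valid pixels, re-normalize
# by how many valid pixels fall in each window, then grow the mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    def forward(self, x, mask):
        # mask: (N, 1, H, W), 1 = valid pixel, 0 = hole
        kh, kw = self.kernel_size
        with torch.no_grad():
            ones = torch.ones(1, 1, kh, kw, device=x.device)
            valid = F.conv2d(mask, ones, stride=self.stride,
                             padding=self.padding, dilation=self.dilation)
            updated_mask = (valid > 0).float()
            # re-normalization: window size / number of valid pixels
            scale = (kh * kw) / valid.clamp(min=1.0)

        out = F.conv2d(x * mask, self.weight, bias=None, stride=self.stride,
                       padding=self.padding, dilation=self.dilation)
        out = out * scale * updated_mask
        if self.bias is not None:
            out = out + self.bias.view(1, -1, 1, 1) * updated_mask
        return out, updated_mask

x = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[..., 10:40, 10:40] = 0
pconv = PartialConv2d(3, 16, kernel_size=3, padding=1)
y, new_mask = pconv(x, mask)
```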
Journal Article

Dynamic Selection Network for Image Inpainting

TL;DR: A dynamic selection network (DSNet) is proposed to distinguish corrupted regions from valid ones throughout the entire network architecture, which helps make full use of the information in the known regions.
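The paper's own selection modules are not reproduced here; the sketch below only illustrates the general idea of letting every layer learn a spatial gate that favours valid features over corrupted ones (a gated-convolution-style formulation). Module names and sizes are hypothetical.

```python
# Generic learned spatial gate: NOT the paper's DSNet modules, only an
# illustration of "dynamic selection" in gated-convolution form.
import torch
import torch.nn as nn

class SelectiveConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        # gate in [0, 1]: ~1 where the layer trusts the features (valid
        # regions), ~0 where it suppresses them (corrupted regions)
        return self.feature(x) * torch.sigmoid(self.gate(x))

x = torch.randn(1, 4, 64, 64)   # e.g. RGB image concatenated with its mask
layer = SelectiveConv(4, 32)
y = layer(x)
```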
Proceedings Article

MUSICAL: Multi-Scale Image Contextual Attention Learning for Inpainting

TL;DR: A multi-scale image contextual attention learning (MUSICAL) strategy is proposed that flexibly exploits richer background information while avoiding its misuse; it is also observed that replacing some of the downsampling layers in the baseline network with stride-1 dilated convolution layers produces sharper and more finely detailed results.
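The remark about downsampling layers can be made concrete with a small PyTorch comparison: a stride-2 convolution halves the feature resolution, whereas a stride-1 dilated convolution grows the receptive field comparably while keeping full resolution. Channel counts below are illustrative, not taken from the paper.

```python
# Comparing a stride-2 downsampling convolution with a stride-1 dilated one.
import torch
import torch.nn as nn

# baseline: halves the spatial resolution
downsample = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)

# replacement: comparable receptive-field growth, but full resolution is
# kept, which helps preserve fine detail
dilated = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=2, dilation=2)

x = torch.randn(1, 64, 64, 64)
print(downsample(x).shape)   # torch.Size([1, 128, 32, 32])
print(dilated(x).shape)      # torch.Size([1, 128, 64, 64])
```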