Deng-Ping Fan

Researcher at Nankai University

Publications: 68
Citations: 8,697

Deng-Ping Fan is an academic researcher from Nankai University. The author has contributed to research topics including computer science and segmentation. The author has an h-index of 27 and has co-authored 40 publications receiving 3,607 citations.

Papers
Proceedings ArticleDOI

EGNet: Edge Guidance Network for Salient Object Detection

TL;DR: In this article, an edge guidance network (EGNet) is proposed for salient object detection. It models two kinds of complementary information, salient edge features and salient object features, in a single network through three steps, which helps locate salient objects, and especially their boundaries, more accurately.
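
As an illustration only, the sketch below shows one plausible way explicit edge features could guide object features inside a single network, in the spirit of EGNet; the module name, channel sizes, and fusion rule are assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class EdgeGuidedFusion(nn.Module):
    # Illustrative sketch (assumed design, not EGNet's exact module):
    # an edge branch predicts an edge map, and its features are
    # concatenated with the saliency features to sharpen boundaries.
    def __init__(self, channels=64):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)
        self.sal_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, edge_feat, sal_feat):
        edge_map = torch.sigmoid(self.edge_head(edge_feat))  # edge prediction
        fused = torch.relu(self.fuse(torch.cat([edge_feat, sal_feat], dim=1)))
        sal_map = torch.sigmoid(self.sal_head(fused))        # edge-guided saliency
        return edge_map, sal_map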
Proceedings ArticleDOI

Structure-Measure: A New Way to Evaluate Foreground Maps

TL;DR: In this paper, the structural similarity measure (Structure-measure) is proposed to evaluate non-binary foreground maps, which simultaneously evaluates region-aware and object-aware structural similarity between a saliency map and a ground-truth map.
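
For reference, the final score in the paper combines the two terms linearly, with α = 0.5 as the default weighting:

S = α · S_o + (1 − α) · S_r,   α = 0.5

where S_o is the object-aware structural similarity and S_r the region-aware structural similarity.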
Journal ArticleDOI

Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images

TL;DR: In this paper, a COVID-19 lung infection segmentation deep network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices, where a parallel partial decoder is used to aggregate the high-level features and generate a global map.
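
Purely to illustrate the partial-decoder idea (aggregating only high-level backbone features into one coarse global map), here is a minimal sketch; the ResNet-style channel counts, three-level input, and multiplicative fusion are assumptions rather than Inf-Net's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialDecoderSketch(nn.Module):
    # Assumed channel counts for the three deepest backbone levels.
    def __init__(self, channels=32):
        super().__init__()
        self.reduce3 = nn.Conv2d(512, channels, kernel_size=1)
        self.reduce4 = nn.Conv2d(1024, channels, kernel_size=1)
        self.reduce5 = nn.Conv2d(2048, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, f3, f4, f5):
        # Reduce channels, upsample the deeper maps to f3's resolution,
        # then fuse multiplicatively into a single coarse global map.
        size = f3.shape[2:]
        x3 = self.reduce3(f3)
        x4 = F.interpolate(self.reduce4(f4), size=size, mode='bilinear', align_corners=False)
        x5 = F.interpolate(self.reduce5(f5), size=size, mode='bilinear', align_corners=False)
        return self.out(x3 * x4 * x5)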
Proceedings ArticleDOI

Enhanced-alignment Measure for Binary Foreground Map Evaluation

TL;DR: In this article, a novel and effective E-measure (Enhanced-alignment measure) is proposed, which combines local pixel values with the image-level mean value in one term, jointly capturing image-level statistics and local pixel matching information.
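
A minimal NumPy sketch of this measure follows, based on the paper's formulation (centered bias matrices, an alignment matrix, and a quadratic enhancement); the epsilon for numerical stability is an added assumption.

import numpy as np

def e_measure(fm, gt, eps=1e-8):
    # fm, gt: binary foreground map and ground truth as float arrays in {0, 1}.
    # Bias matrices: subtracting each map's global mean injects
    # image-level statistics into every pixel.
    phi_fm = fm - fm.mean()
    phi_gt = gt - gt.mean()
    # Alignment matrix: large where the two centered maps agree.
    align = 2.0 * phi_fm * phi_gt / (phi_fm ** 2 + phi_gt ** 2 + eps)
    # Enhanced alignment: quadratic mapping of [-1, 1] into [0, 1].
    enhanced = (align + 1.0) ** 2 / 4.0
    return float(enhanced.mean())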
Proceedings ArticleDOI

Shifting More Attention to Video Salient Object Detection

TL;DR: A visual-attention-consistent Densely Annotated VSOD (DAVSOD) dataset is proposed, containing 226 videos with 23,938 frames that cover diverse realistic scenes, objects, instances, and motions, together with a baseline model equipped with a saliency-shift-aware ConvLSTM that can efficiently capture video saliency dynamics by learning human attention-shift behavior.
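
For orientation, a plain ConvLSTM cell is sketched below; the paper's saliency-shift-aware variant builds additional attention-shift machinery on top of such a recurrent cell, which is not reproduced here.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    # A standard ConvLSTM cell: the LSTM gates are computed with
    # convolutions so the hidden and cell states keep a spatial layout.
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update cell memory
        h = o * torch.tanh(c)           # new hidden state
        return h, (h, c)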