Yafeng Zhou
Researcher at Peking University
Publications - 12
Citations - 101
Yafeng Zhou is an academic researcher from Peking University. The author has contributed to research in topics: Storyboard & Comics. The author has an h-index of 5 and has co-authored 12 publications receiving 74 citations.
Papers
Proceedings ArticleDOI
A Faster R-CNN Based Method for Comic Characters Face Detection
TL;DR: Experimental results demonstrate that the proposed Faster R-CNN based method for face detection of comic characters not only performs better than existing methods, but also works for comic images with different drawing styles.
Proceedings ArticleDOI
Comic frame extraction via line segments combination
TL;DR: This work presents a method that identifies frame polygons via connected component labeling and line-segment combination, and optimizes an energy-like score function constrained by several rules to choose frames.
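The first stage of the pipeline described above, connected component labeling, can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic 4-connected BFS labeling over a small binary matrix that stands in for a thresholded comic page, with all names and the toy input invented for illustration.

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling via BFS on a binary grid.

    Illustrative sketch only: the paper operates on comic page images;
    here a small 0/1 matrix stands in for a thresholded page.
    Returns the component count and a per-pixel label map.
    """
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not labels[y][x]:
                current += 1                      # start a new component
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:                      # flood-fill its pixels
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

# Toy "page": two separate foreground regions.
page = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
n, lab = label_components(page)
print(n)  # → 2
```

In the paper's setting, each labeled region would then be a candidate frame whose boundary is refined by combining detected line segments and scoring the resulting polygons.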
Proceedings ArticleDOI
An End-to-End Quadrilateral Regression Network for Comic Panel Extraction
TL;DR: This work proposes an end-to-end, two-stage quadrilateral regression network architecture for comic panel detection, which inherits the architecture of Faster R-CNN, and demonstrates that the proposed method significantly outperforms existing comic panel detection methods on multiple datasets in F1-score and page accuracy.
Patent
Cartoon image layout recognition method and automatic recognition system
TL;DR: In this article, a cartoon image layout recognition method and an automatic recognition system are presented. The recognition system comprises a foreground and background segmentation module, an outline detection module, a straight-line segment detection module, a storyboard searching module, and a post-processing module.
Proceedings Article
SReN: Shape Regression Network for Comic Storyboard Extraction.
TL;DR: This work proposes a novel architecture based on a deep convolutional neural network, named Shape Regression Network (SReN), to detect storyboards within comic images; it outperforms the state-of-the-art methods by more than 10% in terms of F1-score and page correction rate.