Haiyang Wei

Researcher at Guangxi Normal University

Publications - 7
Citations - 83

Haiyang Wei is an academic researcher from Guangxi Normal University. The author has contributed to research in topics: Closed captioning & Language model. The author has an h-index of 3 and has co-authored 7 publications receiving 15 citations.

Papers
Journal ArticleDOI

Integrating Scene Semantic Knowledge into Image Captioning

TL;DR: This paper proposes an improved visual attention model that computes a focus-intensity coefficient for the attention mechanism from the model's context information and automatically adjusts the attention intensity through that coefficient to extract more accurate visual information.
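
As a rough illustration of the mechanism summarized above, the sketch below implements attention whose softmax is scaled by a context-derived focus-intensity coefficient. This is a minimal NumPy sketch of the general idea, not the paper's formulation; `focused_attention`, `w_focus`, `w_score`, and the softplus choice are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def focused_attention(features, context, w_focus, w_score):
    # features: (k, d) regional image features; context: (d,) decoder state.
    # Focus-intensity coefficient beta > 0, derived from the context
    # (softplus keeps it positive); beta > 1 sharpens the attention
    # distribution, beta < 1 smooths it.
    beta = np.log1p(np.exp(w_focus @ context))
    # Simplified relevance scores between each region and the context.
    scores = features @ (w_score * context)
    alpha = softmax(beta * scores)      # intensity-adjusted weights
    return alpha @ features             # attended visual vector
```

Scaling the pre-softmax scores by a positive coefficient is one common way to let the model decide, per step, how concentrated its attention should be.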
Journal ArticleDOI

The synergy of double attention: Combine sentence-level and word-level attention for image captioning

TL;DR: A double-attention model is proposed that combines a sentence-level attention model with a word-level attention model to generate more accurate captions; it outperforms many state-of-the-art image captioning approaches on various evaluation metrics.
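
The combination of the two attention levels can be pictured with the minimal NumPy sketch below, in which a sentence-level query and a word-level query each attend over the image regions and the two attended vectors are mixed. The names and the fixed `gate` mixing weight are hypothetical; the paper learns how the two levels interact.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(features, query):
    # Dot-product attention of a (d,) query over (k, d) region features.
    alpha = softmax(features @ query)
    return alpha @ features

def double_attention(features, sentence_state, word_state, gate=0.5):
    # Sentence-level attention gives a global, per-sentence focus;
    # word-level attention re-focuses at every generated word.
    v_sent = attend(features, sentence_state)
    v_word = attend(features, word_state)
    # `gate` is a hypothetical fixed mixing weight standing in for a
    # learned interaction between the two levels.
    return gate * v_sent + (1 - gate) * v_word
```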
Journal ArticleDOI

Boost image captioning with knowledge reasoning

TL;DR: This paper proposes word attention to improve the correctness of visual attention when generating descriptions word by word, and introduces a new strategy for injecting external knowledge extracted from a knowledge graph into the encoder-decoder framework to facilitate meaningful captioning.
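
The two ingredients named in the summary, word attention and knowledge injection, can be sketched as follows. This is a hedged NumPy illustration under assumed shapes; `word_attention`, `inject_knowledge`, and the averaging of entity vectors are stand-ins, not the paper's actual method.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_attention(word_embeddings, hidden):
    # Weight annotation words (n, d) by relevance to the decoder
    # state (d,), yielding a textual context that can help steer
    # visual attention toward the right image regions.
    alpha = softmax(word_embeddings @ hidden)
    return alpha @ word_embeddings

def inject_knowledge(entity_vectors, decoder_context):
    # Fold external knowledge-graph entity vectors (m, d) into the
    # decoding context; simple averaging stands in for the paper's
    # injection strategy.
    if len(entity_vectors) == 0:
        return decoder_context
    return decoder_context + np.mean(entity_vectors, axis=0)
```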
Posted Content

Boost Image Captioning with Knowledge Reasoning

TL;DR: This preprint version proposes word attention to improve the correctness of visual attention when generating descriptions word by word, making full use of internal annotation knowledge to assist in computing visual attention.
Proceedings ArticleDOI

Image Captioning Based On Sentence-Level And Word-Level Attention

TL;DR: Experimental results show that the proposed image captioning approach based on self-attention generates more accurate and richer captions, and outperforms many state-of-the-art image captioning approaches on various evaluation metrics.