
Mengshi Qi

Researcher at Beihang University

Publications: 23
Citations: 555

Mengshi Qi is an academic researcher from Beihang University. The author has contributed to research in topics including computer science and graph (abstract data type), has an h-index of 8, and has co-authored 14 publications receiving 282 citations. Previous affiliations of Mengshi Qi include Beijing University of Posts and Telecommunications and Baidu.

Papers
Book Chapter

stagNet: An Attentive Semantic RNN for Group Activity Recognition

TL;DR: A novel attentive semantic recurrent neural network (RNN) for understanding group activities in videos, dubbed stagNet, is proposed; built on spatio-temporal attention and a semantic graph, it attends to key persons and frames for improved performance.
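
As a rough illustration of the spatio-temporal attention idea in the TL;DR above, here is a minimal PyTorch-style sketch that attends over per-person features within each frame and over frames within a video. The module names, feature shapes, and pooling scheme are assumptions for illustration, not the authors' stagNet implementation.

```python
import torch
import torch.nn as nn

class AttentiveGroupRNN(nn.Module):
    """Hypothetical sketch: RNN over per-person features with person/frame attention."""
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=8):
        super().__init__()
        self.person_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.person_attn = nn.Linear(hidden_dim, 1)   # attend to key persons per frame
        self.frame_attn = nn.Linear(hidden_dim, 1)    # attend to key frames per video
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, person_feats):
        # person_feats: (batch, time, persons, feat_dim), e.g. RoI features from a CNN
        B, T, P, D = person_feats.shape
        x = person_feats.permute(0, 2, 1, 3).reshape(B * P, T, D)
        h, _ = self.person_rnn(x)                     # (B*P, T, hidden)
        h = h.reshape(B, P, T, -1)
        # soft attention over persons within each frame
        pw = torch.softmax(self.person_attn(h), dim=1)
        frame_repr = (pw * h).sum(dim=1)              # (B, T, hidden)
        # soft attention over frames for the video-level group representation
        fw = torch.softmax(self.frame_attn(frame_repr), dim=1)
        video_repr = (fw * frame_repr).sum(dim=1)     # (B, hidden)
        return self.classifier(video_repr)            # group-activity logits
```
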
Proceedings Article

Attentive Relational Networks for Mapping Images to Scene Graphs

TL;DR: An attentive relational network consisting of two key modules on top of an object detection backbone is proposed to automatically map an image into a semantic structural graph, which requires correctly labeling each extracted object and its interaction relationships.
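
As a rough illustration of relation inference over detected objects, below is a minimal PyTorch-style sketch that scores a predicate for every subject-object pair of detection features. The feature dimensions and the pairwise scoring head are assumptions, not the paper's Attentive Relational Network.

```python
import torch
import torch.nn as nn

class PairwiseRelationHead(nn.Module):
    """Hypothetical sketch: classify a relation predicate for each object pair."""
    def __init__(self, obj_dim=1024, hidden=512, num_predicates=51):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_predicates),
        )

    def forward(self, obj_feats):
        # obj_feats: (num_objects, obj_dim), e.g. RoI features from a detection backbone
        N, D = obj_feats.shape
        subj = obj_feats.unsqueeze(1).expand(N, N, D)   # subject features, broadcast over pairs
        obj = obj_feats.unsqueeze(0).expand(N, N, D)    # object features, broadcast over pairs
        pair = torch.cat([subj, obj], dim=-1)           # all subject-object pairs
        return self.fuse(pair)                          # (N, N, num_predicates) relation logits
```
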
Journal Article

stagNet: An Attentive Semantic RNN for Group Activity and Individual Action Recognition

TL;DR: A novel attentive semantic recurrent neural network (RNN), namely stagNet, is presented for understanding group activities and individual actions in videos by combining a spatio-temporal attention mechanism with semantic graph modeling.
Posted Content

Attentive Relational Networks for Mapping Images to Scene Graphs

TL;DR: A novel Attentive Relational Network consisting of two key modules on top of an object detection backbone is proposed to approach this problem; accurate scene graphs are produced by its relation inference module, which recognizes all entities and their corresponding relations.
Journal Article

Sports Video Captioning via Attentive Motion Representation and Group Relationship Modeling

TL;DR: A novel hierarchical recurrent neural network-based framework with an attention mechanism is presented for sports video captioning, in which a motion representation module captures individual pose attributes and dynamic trajectory-cluster information with additional professional sports knowledge.
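
For illustration, here is a minimal PyTorch-style sketch of a two-level recurrent captioner that attends over per-frame motion features while decoding words. The vocabulary size, dimensions, and decoding loop are assumptions, a rough analogue rather than the paper's framework.

```python
import torch
import torch.nn as nn

class HierarchicalCaptioner(nn.Module):
    """Hypothetical sketch: frame-level encoder RNN + attentive word-level decoder."""
    def __init__(self, feat_dim=512, hidden=512, vocab_size=10000):
        super().__init__()
        self.video_rnn = nn.GRU(feat_dim, hidden, batch_first=True)   # frame/motion level
        self.attn = nn.Linear(hidden * 2, 1)                          # score frames per step
        self.word_rnn = nn.GRUCell(hidden + hidden, hidden)           # word level
        self.embed = nn.Embedding(vocab_size, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, motion_feats, captions):
        # motion_feats: (batch, frames, feat_dim); captions: (batch, length) token ids
        enc, _ = self.video_rnn(motion_feats)          # (B, T, hidden)
        B, T, H = enc.shape
        h = enc.mean(dim=1)                            # initial decoder state
        logits = []
        for t in range(captions.size(1)):
            w = self.embed(captions[:, t])             # (B, hidden)
            # attention over encoded frames, conditioned on the decoder state
            q = h.unsqueeze(1).expand(B, T, H)
            a = torch.softmax(self.attn(torch.cat([enc, q], dim=-1)), dim=1)
            ctx = (a * enc).sum(dim=1)                 # (B, hidden) context vector
            h = self.word_rnn(torch.cat([w, ctx], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)              # (B, length, vocab_size)
```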