
Xingshan Zeng

Researcher at The Chinese University of Hong Kong

Publications - 33
Citations - 135

Xingshan Zeng is an academic researcher from The Chinese University of Hong Kong. The author has contributed to research in the topics of Computer science and Conversation, has an h-index of 4, and has co-authored 22 publications receiving 58 citations. Previous affiliations of Xingshan Zeng include Huawei.

Papers
Proceedings Article

Microblog Conversation Recommendation via Joint Modeling of Topics and Discourse

TL;DR: A statistical model is proposed that jointly captures topics, representing user interests and conversation content, and discourse modes, describing user replying behavior and conversation dynamics; the model outperforms methods that only model content without considering discourse.
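A minimal sketch of this kind of joint topic-and-discourse scoring, assuming a neural bag-of-words setup rather than the paper's actual statistical model; all module names, dimensions, and the scoring head are illustrative:

```python
# Hypothetical sketch (not the authors' released code): score a
# user/conversation pair by combining a topic mixture over the content
# with a discourse-mode mixture over the reply behavior.
import torch
import torch.nn as nn

class TopicDiscourseRecommender(nn.Module):
    def __init__(self, vocab_size, n_topics=50, n_discourse=10, hidden=64):
        super().__init__()
        # Topic path: bag-of-words -> soft topic mixture (interests / content).
        self.topic_encoder = nn.Sequential(
            nn.Linear(vocab_size, n_topics), nn.Softmax(dim=-1))
        # Discourse path: bag-of-words -> discourse-mode mixture
        # (e.g. question, agreement), standing in for replying behavior.
        self.discourse_encoder = nn.Sequential(
            nn.Linear(vocab_size, n_discourse), nn.Softmax(dim=-1))
        # Joint scorer over both representations.
        self.scorer = nn.Sequential(
            nn.Linear(n_topics * 2 + n_discourse, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, user_bow, conv_bow):
        user_topics = self.topic_encoder(user_bow)    # user interests
        conv_topics = self.topic_encoder(conv_bow)    # conversation content
        conv_disc = self.discourse_encoder(conv_bow)  # conversation dynamics
        joint = torch.cat([user_topics, conv_topics, conv_disc], dim=-1)
        return self.scorer(joint).squeeze(-1)         # higher = recommend

# Toy usage with random bag-of-words vectors.
model = TopicDiscourseRecommender(vocab_size=1000)
scores = model(torch.rand(4, 1000), torch.rand(4, 1000))
```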
Proceedings Article

Continuity of Topic, Interaction, and Query: Learning to Quote in Online Conversations

TL;DR: This work studies automatic quotation generation in online conversations, exploring how language consistency affects whether a quotation fits a given context; it captures the contextual consistency of a quotation in terms of latent topics, interactions with the dialogue history, and coherence to the query's existing content.
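A hedged sketch of ranking candidate quotations by the three consistency signals the TL;DR names; the encoders, attention layer, and learned weighting are assumptions, not the paper's implementation:

```python
# Hypothetical sketch: combine topic consistency, interaction with the
# dialogue history, and coherence to the query into one quotation score.
import torch
import torch.nn as nn

class QuotationRanker(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        # Attend from the quotation to turns in the dialogue history.
        self.history_attn = nn.MultiheadAttention(
            emb_dim, num_heads=4, batch_first=True)
        self.w = nn.Parameter(torch.ones(3))  # mixture over the three signals

    def forward(self, quote, query, topic_quote, topic_ctx, history):
        # 1) latent-topic consistency between quotation and context.
        topic_score = nn.functional.cosine_similarity(
            topic_quote, topic_ctx, dim=-1)
        # 2) interaction with the dialogue history via attention.
        attended, _ = self.history_attn(quote.unsqueeze(1), history, history)
        inter_score = nn.functional.cosine_similarity(
            attended.squeeze(1), quote, dim=-1)
        # 3) coherence to the query's existing content.
        query_score = nn.functional.cosine_similarity(quote, query, dim=-1)
        scores = torch.stack([topic_score, inter_score, query_score], dim=-1)
        return (scores * torch.softmax(self.w, dim=0)).sum(-1)

# Toy usage: batch of 2, history of 5 turns, 128-dim embeddings.
ranker = QuotationRanker()
s = ranker(torch.rand(2, 128), torch.rand(2, 128),
           torch.rand(2, 50), torch.rand(2, 50), torch.rand(2, 5, 128))
```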
Proceedings Article

Joint Effects of Context and User History for Predicting Online Conversation Re-entries

TL;DR: The authors propose a neural framework with three main layers, modeling the conversation context, the user's chatting history, and the interactions between them, respectively, to explore how context and history jointly shape users' re-entry behavior.
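A rough sketch of the three-part layout described above, with GRU encoders and a bilinear interaction term standing in for the paper's actual layers; names and shapes are illustrative:

```python
# Hypothetical sketch: context encoder + history encoder + interaction
# module, feeding a binary re-entry predictor.
import torch
import torch.nn as nn

class ReentryPredictor(nn.Module):
    def __init__(self, emb_dim=128, hidden=128):
        super().__init__()
        self.context_enc = nn.GRU(emb_dim, hidden, batch_first=True)  # turns so far
        self.history_enc = nn.GRU(emb_dim, hidden, batch_first=True)  # past chats
        self.interaction = nn.Bilinear(hidden, hidden, hidden)        # context x history
        self.out = nn.Linear(hidden * 3, 1)

    def forward(self, context_turns, history_turns):
        _, c = self.context_enc(context_turns)  # final state: (1, B, H)
        _, h = self.history_enc(history_turns)
        c, h = c.squeeze(0), h.squeeze(0)
        inter = self.interaction(c, h)          # how context and history combine
        logits = self.out(torch.cat([c, h, inter], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # P(user re-enters)

# Toy usage: 4 users, 6 context turns, 10 history turns, 128-dim embeddings.
model = ReentryPredictor()
p = model(torch.rand(4, 6, 128), torch.rand(4, 10, 128))
```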
Proceedings Article

Dynamic Online Conversation Recommendation

TL;DR: This work proposes a neural architecture that exploits changes in user interactions and interests over time to predict which discussions users are likely to enter; it significantly outperforms state-of-the-art models that assume static user interests.
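A minimal sketch of the dynamic-interest idea, assuming a recurrent tracker over the user's time-ordered interaction sequence; this is illustrative, not the paper's architecture:

```python
# Hypothetical sketch: model user interest as an evolving state rather
# than a static vector, then score candidate discussions against the
# most recent state.
import torch
import torch.nn as nn

class DynamicConversationRecommender(nn.Module):
    def __init__(self, emb_dim=128, hidden=128):
        super().__init__()
        # Recurrent state tracks how interests drift across interactions.
        self.interest_tracker = nn.GRU(emb_dim, hidden, batch_first=True)
        self.proj = nn.Linear(emb_dim, hidden)

    def forward(self, interaction_seq, candidate_convs):
        # interaction_seq: (B, T, E) past interactions in time order;
        # candidate_convs: (B, N, E) discussions to score.
        _, state = self.interest_tracker(interaction_seq)  # (1, B, H)
        current_interest = state.squeeze(0)                # latest, not static
        cands = self.proj(candidate_convs)                 # (B, N, H)
        return torch.einsum('bh,bnh->bn', current_interest, cands)  # entry scores
```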
Proceedings Article

SimulSLT: End-to-End Simultaneous Sign Language Translation

TL;DR: SimulSLT is presented as the first end-to-end simultaneous sign language translation model, translating sign language videos into target text concurrently; it is composed of a text decoder, a boundary predictor, and a masked encoder.
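A hedged sketch of the three listed components, assuming a causally masked Transformer encoder over video features so no future frames are seen (one plausible reading of "masked encoder"); all dimensions, the GRU decoder, and the boundary head are assumptions:

```python
# Hypothetical sketch: masked encoder over frames seen so far, a boundary
# predictor marking where sign segments end, and a text decoder emitting
# token logits as the video streams in.
import torch
import torch.nn as nn

class SimulSLTSketch(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, vocab=1000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.boundary = nn.Linear(d_model, 1)  # P(frame ends a sign segment)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, frames):
        # frames: (B, T, feat_dim) video features available so far.
        x = self.proj(frames)
        # Causal mask: each frame attends only to earlier frames, which is
        # what makes simultaneous (rather than offline) translation possible.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        enc = self.encoder(x, mask=mask)
        boundaries = torch.sigmoid(self.boundary(enc)).squeeze(-1)  # (B, T)
        dec_out, _ = self.decoder(enc)
        return self.out(dec_out), boundaries  # per-step token logits + boundaries

# Toy usage: batch of 2, 16 frames of 512-dim features.
model = SimulSLTSketch()
logits, bounds = model(torch.rand(2, 16, 512))
```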