
Siqi Bao

Researcher at Baidu

Publications: 52
Citations: 571

Siqi Bao is an academic researcher at Baidu. The author has contributed to research on topics including image segmentation and computer science, has an h-index of 8, and has co-authored 45 publications receiving 291 citations. Previous affiliations of Siqi Bao include the Hong Kong University of Science and Technology.

Papers
Proceedings ArticleDOI

PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable

TL;DR: The authors propose a dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogue, and conversational question answering, which adopts flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation.
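
As a rough illustration of the flexible attention described in this summary (not the paper's released code), the sketch below builds an attention mask in which context tokens attend bi-directionally while response tokens attend to the full context but only causally to earlier response tokens; the [context | response] token layout and mask convention are assumptions.

```python
import numpy as np

def build_flexible_attention_mask(context_len: int, response_len: int) -> np.ndarray:
    """Build a (seq, seq) mask where 1 means "may attend".

    Assumed layout: [context tokens | response tokens].
    - Context tokens attend bi-directionally within the context.
    - Response tokens attend to the full context and, causally,
      to themselves and earlier response tokens.
    """
    total = context_len + response_len
    mask = np.zeros((total, total), dtype=np.int64)

    # Bi-directional attention within the context block.
    mask[:context_len, :context_len] = 1

    # Response tokens see the whole context...
    mask[context_len:, :context_len] = 1

    # ...and attend causally within the response block.
    mask[context_len:, context_len:] = np.tril(
        np.ones((response_len, response_len), dtype=np.int64)
    )
    return mask

if __name__ == "__main__":
    print(build_flexible_attention_mask(context_len=3, response_len=2))
```
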
Posted Content

PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable

TL;DR: This work proposes a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge grounded dialogues, and conversational question answering, and introduces discrete latent variables to tackle the inherent one-to-many mapping problem in response generation.
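
To make the one-to-many point concrete, here is a hedged sketch of conditioning a toy decoder on a discrete latent variable: each latent value owns an embedding that is prepended to the input sequence, so different sampled values can steer generation toward different valid responses. The `LatentConditionedDecoder` module, its dimensions, and the GRU backbone are hypothetical stand-ins, not PLATO's transformer.

```python
import torch
import torch.nn as nn

K = 20        # number of discrete latent values (hypothetical choice)
D = 64        # embedding size (hypothetical)
VOCAB = 1000  # toy vocabulary size

class LatentConditionedDecoder(nn.Module):
    """Toy decoder whose output distribution is conditioned on a discrete latent z."""

    def __init__(self) -> None:
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB, D)
        self.latent_emb = nn.Embedding(K, D)   # one embedding per latent value
        self.rnn = nn.GRU(D, D, batch_first=True)
        self.out = nn.Linear(D, VOCAB)

    def forward(self, tokens: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Prepend the latent embedding so every decoding step can condition on z.
        x = torch.cat([self.latent_emb(z).unsqueeze(1), self.token_emb(tokens)], dim=1)
        h, _ = self.rnn(x)
        return self.out(h)

decoder = LatentConditionedDecoder()
context = torch.randint(0, VOCAB, (1, 5))   # toy context token ids
for z_value in (0, 1, 2):                   # different z -> different response tendencies
    z = torch.tensor([z_value])
    logits = decoder(context, z)            # shape (1, 6, VOCAB): latent slot + 5 tokens
    print(z_value, logits.argmax(-1).tolist())
```
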
Posted Content

PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning

TL;DR: To build a high-quality open-domain chatbot, this work introduces an effective curriculum-learning-based training process for PLATO-2, achieving new state-of-the-art results.
Journal ArticleDOI

Multi-scale structured CNN with label consistency for brain MR image segmentation

TL;DR: Comprehensive evaluations on two publicly available data sets indicate that the proposed method for brain MR image segmentation obtains better segmentation quality efficiently.
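
As a sketch of the multi-scale idea only (not the paper's architecture), the block below runs parallel convolution branches with different receptive fields over the same input and concatenates their features, so the per-pixel prediction combines fine local texture with broader spatial context; all channel counts, kernel sizes, and the number of tissue classes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel conv branches with different receptive fields, concatenated."""

    def __init__(self, in_ch: int = 1, branch_ch: int = 8) -> None:
        super().__init__()
        # Three branches: small, medium, and large receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2),
            nn.Conv2d(in_ch, branch_ch, kernel_size=7, padding=3),
        ])
        # 1x1 conv maps the concatenated features to per-pixel class scores.
        self.classify = nn.Conv2d(3 * branch_ch, 4, kernel_size=1)  # e.g. 4 tissue classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.classify(feats)

# Toy usage on a single-channel 64x64 "MR slice".
model = MultiScaleBlock()
scores = model(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 4, 64, 64])
```
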
Proceedings ArticleDOI

PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning

TL;DR: In this paper, a coarse-grained generation model is first trained to learn response generation under the simplified one-to-one mapping framework; a fine-grained generative model augmented with latent variables and an evaluation model are then trained to generate diverse responses and to select the best response, respectively.
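
The generate-then-select step in this summary can be illustrated with a small sample-and-rank sketch: a latent-conditioned generator proposes one candidate per latent value, and a separate evaluation model scores each candidate against the context to pick the final reply. Both functions below are hypothetical stand-ins (a canned generator and a word-overlap scorer), not the PLATO-2 models.

```python
import random

def generate_with_latent(context: str, z: int) -> str:
    """Hypothetical fine-grained generator: one candidate response per latent value z."""
    canned = [
        "That sounds great, tell me more.",
        "I have not tried that yet, is it fun?",
        "Interesting! Why do you like it?",
    ]
    return canned[z % len(canned)]

def coherence_score(context: str, response: str) -> float:
    """Hypothetical evaluation model: stands in for a learned coherence estimator."""
    shared = set(context.lower().split()) & set(response.lower().split())
    return len(shared) + random.random() * 0.1  # tiny noise to break ties

def select_response(context: str, num_latent: int = 3) -> str:
    candidates = [generate_with_latent(context, z) for z in range(num_latent)]
    return max(candidates, key=lambda r: coherence_score(context, r))

print(select_response("I like hiking in the mountains, do you?"))
```
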