Qian Chen

Researcher at Alibaba Group

Publications: 53
Citations: 2,592

Qian Chen is an academic researcher at Alibaba Group whose work spans computer science and engineering. The author has an h-index of 16 and has co-authored 34 publications receiving 1,978 citations. Previous affiliations include the University of Science and Technology of China.

Papers
Proceedings Article

Enhanced LSTM for Natural Language Inference

TL;DR: This paper presents a new state-of-the-art result, achieving 88.6% accuracy on the Stanford Natural Language Inference dataset, and demonstrates that carefully designed sequential inference models based on chain LSTMs can outperform all previous models.
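
For context, the sketch below illustrates the ESIM-style architecture the TL;DR describes: BiLSTM input encoding, soft alignment between the two sentences, enhanced local inference features (concatenation, difference, element-wise product), a second BiLSTM for inference composition, and pooling into a classifier. All dimensions and hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ESIM(nn.Module):
    """Minimal sketch of an Enhanced Sequential Inference Model (ESIM).

    vocab_size, embed_dim, and hidden_dim are placeholder values,
    not the configuration reported in the paper.
    """
    def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Input encoding: a BiLSTM reads each sentence independently.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Inference composition: a second BiLSTM over the enhanced local-inference features.
        self.composer = nn.LSTM(8 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, premise_ids, hypothesis_ids):
        a, _ = self.encoder(self.embed(premise_ids))      # (B, La, 2H)
        b, _ = self.encoder(self.embed(hypothesis_ids))   # (B, Lb, 2H)
        # Soft alignment: attention scores between every premise/hypothesis pair.
        e = torch.bmm(a, b.transpose(1, 2))               # (B, La, Lb)
        a_tilde = torch.bmm(F.softmax(e, dim=2), b)       # hypothesis content aligned to premise
        b_tilde = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)
        # Enhanced local inference: concatenation, difference, element-wise product.
        ma = torch.cat([a, a_tilde, a - a_tilde, a * a_tilde], dim=-1)
        mb = torch.cat([b, b_tilde, b - b_tilde, b * b_tilde], dim=-1)
        va, _ = self.composer(ma)
        vb, _ = self.composer(mb)
        # Pooling: average and max over time for both sentences, then classify.
        v = torch.cat([va.mean(1), va.max(1).values, vb.mean(1), vb.max(1).values], dim=-1)
        return self.classifier(v)
```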
Posted Content

BERT for Joint Intent Classification and Slot Filling

TL;DR: This work proposes a joint intent classification and slot filling model based on BERT that achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to attention-based recurrent neural network models and slot-gated models.
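
The joint formulation described above can be sketched as a shared BERT encoder with two heads: the pooled [CLS] representation predicts the intent, while per-token representations predict slot labels. The head names and `num_intents`/`num_slots` parameters below are hypothetical; training would minimize the sum of the two cross-entropy losses, fine-tuning BERT end to end.

```python
import torch.nn as nn
from transformers import BertModel

class JointBert(nn.Module):
    """Minimal sketch of joint intent classification and slot filling on BERT."""

    def __init__(self, num_intents, num_slots, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # classifies the pooled [CLS] vector
        self.slot_head = nn.Linear(hidden, num_slots)      # labels every token position

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)    # (B, num_intents)
        slot_logits = self.slot_head(out.last_hidden_state)    # (B, L, num_slots)
        return intent_logits, slot_logits
```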
Proceedings Article

Neural Natural Language Inference Models Enhanced with External Knowledge

TL;DR: This paper enriches state-of-the-art neural natural language inference models with external knowledge and demonstrates that the proposed models achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
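
One way external knowledge can enter such a model is as a bias on the soft-alignment attention scores, with lexical-relation features (e.g., derived from WordNet) scoring each word pair. The sketch below is an illustration under that assumption; `relation_feats` and `lam` are hypothetical names, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def knowledge_enriched_attention(a, b, relation_feats, lam=1.0):
    """Sketch: bias co-attention with external lexical-relation features.

    a: (B, La, D) encoded premise; b: (B, Lb, D) encoded hypothesis;
    relation_feats: (B, La, Lb) scalar scores for word pairs, e.g., from
    WordNet relations (hypothetical feature form). lam scales the bias.
    """
    e = torch.bmm(a, b.transpose(1, 2)) + lam * relation_feats
    a_tilde = torch.bmm(F.softmax(e, dim=2), b)   # knowledge-aware alignment of b to a
    b_tilde = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)
    return a_tilde, b_tilde
```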
Proceedings Article

Enhanced LSTM for Natural Language Inference

TL;DR: This article showed that carefully designed sequential inference models based on chain LSTMs can outperform all previous models, and further showed that explicitly incorporating recursive architectures in both local inference modeling and inference composition yields additional improvement.
Posted Content

Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference.

TL;DR: This paper presents a new state-of-the-art result, achieving 88.3% accuracy on the standard benchmark, the Stanford Natural Language Inference dataset, through an enhanced sequential encoding model that outperforms the previous best model despite that model's more complicated network architecture.