
Qiongkai Xu

Researcher at Australian National University

Publications: 36
Citations: 2847

Qiongkai Xu is an academic researcher at the Australian National University. He has contributed to research on topics including computer science and natural language generation, has an h-index of 9, and has co-authored 28 publications receiving 2,105 citations. His previous affiliations include IBM and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Papers
Proceedings Article

GraRep: Learning Graph Representations with Global Structural Information

TL;DR: A novel model for learning vertex representations of weighted graphs that integrates global structural information of the graph into the learning process, significantly outperforming other state-of-the-art methods on downstream tasks.
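GraRep's core idea, as described in the paper, is to factorise log-transformed k-step transition probability matrices and concatenate the per-step embeddings, so that each vertex representation captures structural information at multiple distances. A simplified NumPy sketch of that procedure follows; this is not the authors' code, and the function name, defaults, and the `beta` prior are illustrative choices:

```python
import numpy as np

def grarep(adj, dim=2, K=3, beta=1.0):
    """Simplified GraRep sketch: for each step k = 1..K, build the k-step
    transition matrix, take a shifted positive log transform, factorise it
    with SVD, and concatenate the per-step embeddings."""
    n = adj.shape[0]
    # Row-normalise the adjacency matrix into a 1-step transition matrix.
    P = adj / adj.sum(axis=1, keepdims=True)
    Pk = np.eye(n)
    blocks = []
    for k in range(1, K + 1):
        Pk = Pk @ P  # k-step transition probabilities
        # Column-normalised log probabilities, shifted by a prior and
        # clipped at zero (negative entries are discarded, as in the paper).
        X = np.log(np.maximum(Pk / Pk.sum(axis=0, keepdims=True), 1e-12))
        X = np.maximum(X - np.log(beta / n), 0.0)
        # Low-rank factorisation: keep the top `dim` singular directions.
        U, S, _ = np.linalg.svd(X)
        blocks.append(U[:, :dim] * np.sqrt(S[:dim]))
    # Final embedding is the concatenation over all K steps.
    return np.hstack(blocks)  # shape (n, dim * K)
```

Because each step k is factorised independently, the final embedding keeps local (k=1) and increasingly global (larger k) structure in separate blocks rather than blending them into one objective.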
Proceedings Article

Deep neural networks for learning graph representations

TL;DR: A novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by directly capturing the graph's structural information, and which outperforms other state-of-the-art models on such tasks.
Proceedings Article

Using Deep Linguistic Features for Finding Deceptive Opinion Spam

TL;DR: This work proposes a novel model that integrates deep linguistic features derived from syntactic dependency parse trees to discriminate deceptive opinions from truthful ones, producing state-of-the-art results on both topics studied.
Proceedings Article

Semantic Documents Relatedness using Concept Graph Representation

TL;DR: This work deals with the problem of document representation for the task of measuring semantic relatedness between documents; it outperforms state-of-the-art methods including ESA (Explicit Semantic Analysis), while its concept graphs are much smaller than the concept vectors generated by ESA.
Posted Content

D-PAGE: Diverse Paraphrase Generation

TL;DR: This paper proposes a simple method Diverse Paraphrase Generation (D-PAGE), which extends neural machine translation models to support the generation of diverse paraphrases with implicit rewriting patterns and demonstrates that this model generates at least one order of magnitude more diverse outputs than the baselines.