Fangli Xu
Researcher at College of William & Mary
Publications - 25
Citations - 458
Fangli Xu is an academic researcher from College of William & Mary. The author has contributed to research on topics including Graph (abstract data type) and Computer science. The author has an h-index of 7, co-authored 24 publications receiving 259 citations. Previous affiliations of Fangli Xu include Nanjing University.
Papers
Posted Content
Word Mover's Embedding: From Word2Vec to Document Embedding
Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael Witbrock +7 more
TL;DR: The Word Mover’s Embedding (WME) is proposed, a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings that consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.
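The WME idea summarized above can be sketched in a few lines: embed a document as its (kernelized) Word Mover's distances to a set of short random documents. This is a minimal illustration, not the authors' implementation; to stay self-contained it substitutes the cheap "relaxed" WMD lower bound (nearest-word assignment) for the exact optimal-transport WMD, and the names `relaxed_wmd` and `wme_features` and all parameter values are illustrative assumptions.

```python
import numpy as np

def relaxed_wmd(X, Y):
    """Relaxed Word Mover's Distance lower bound: each word vector in one
    document moves entirely to its nearest word vector in the other."""
    # Pairwise Euclidean distances between word vectors of the two documents.
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

def wme_features(doc, R=128, d=4, max_len=5, gamma=1.0, seed=0):
    """Map a document (an array of word vectors) to an R-dimensional
    embedding: phi_r(doc) = exp(-gamma * WMD(doc, omega_r)), where each
    omega_r is a short random 'document' of random word vectors."""
    rng = np.random.default_rng(seed)
    feats = np.empty(R)
    for r in range(R):
        L = rng.integers(1, max_len + 1)          # random document length
        omega = rng.uniform(-1, 1, size=(L, d))   # random word vectors
        feats[r] = np.exp(-gamma * relaxed_wmd(doc, omega))
    return feats / np.sqrt(R)                     # Monte Carlo normalization

rng = np.random.default_rng(1)
doc = rng.normal(size=(6, 4))  # toy document: 6 words, 4-dim embeddings
emb = wme_features(doc)
```

The inner product of two such embeddings then approximates a positive-definite document kernel, so a linear classifier on `emb` stands in for an expensive WMD-based kernel machine.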
Proceedings ArticleDOI
Word Mover’s Embedding: From Word2Vec to Document Embedding
Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael Witbrock +7 more
TL;DR: The authors proposed the Word Mover's Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings.
Journal ArticleDOI
Deep Graph Matching and Searching for Semantic Code Retrieval
Xiang Ling, Lingfei Wu, Saizhuo Wang, Gaoning Pan, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji +8 more
TL;DR: An end-to-end deep graph matching and searching (DGMS) model based on graph neural networks is proposed for the task of semantic code retrieval; it outperforms state-of-the-art baseline models by a large margin on both datasets.
Posted Content
Random Warping Series: A Random Features Method for Time-Series Embedding.
TL;DR: This work studies a family of alignment-aware positive definite (p.d.) kernels, with its feature embedding given by a distribution of Random Warping Series (RWS), which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in terms of both the number and the length of time-series.
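The RWS construction summarized above can be sketched similarly: embed a time series via its (exponentiated) alignment distances to a set of short random series. This is a minimal sketch, not the paper's implementation; it assumes plain Gaussian random series and uses `exp(-DTW)` as the alignment-aware similarity, and the names `dtw` and `rws_features` are illustrative. Because each random series has bounded length, each feature costs time linear in the input length, which is the complexity reduction the TL;DR describes.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series,
    computed by the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rws_features(series, R=64, max_len=10, sigma=1.0, seed=0):
    """Embed a series as similarities to R short random warping series:
    phi_r(x) = exp(-dtw(x, omega_r)). Bounded-length omega_r keeps each
    feature linear in len(series)."""
    rng = np.random.default_rng(seed)
    feats = np.empty(R)
    for r in range(R):
        L = rng.integers(2, max_len + 1)        # random series length
        omega = rng.normal(0.0, sigma, size=L)  # random series values
        feats[r] = np.exp(-dtw(series, omega))
    return feats / np.sqrt(R)

x = np.sin(np.linspace(0, 2 * np.pi, 30))  # toy input series
emb = rws_features(x)
```

As with any random-features method, inner products of these embeddings approximate the underlying positive-definite kernel, so downstream models can stay linear.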
Proceedings ArticleDOI
Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem.
TL;DR: Graph2Tree combines a graph encoder with a hierarchical tree decoder: it encodes an augmented graph-structured input and decodes a tree-structured output.