
Yongji Wu

Researcher at Duke University

Publications: 15
Citations: 338

Yongji Wu is an academic researcher at Duke University. His research spans topics including computer science and matrix decomposition. He has an h-index of 3 and has co-authored 9 publications receiving 72 citations. His previous affiliations include the University of Science and Technology of China.

Papers
Proceedings ArticleDOI

Geography-Aware Sequential Location Recommendation

TL;DR: This work proposes a new loss function based on importance sampling that addresses the sparsity issue by emphasizing informative negative samples, and introduces geography-aware negative samplers to increase the informativeness of those samples.
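The idea of emphasizing informative negatives can be illustrated with a minimal sketch, assuming a model that scores candidate negative items (all names here are hypothetical, not the paper's implementation): harder (higher-scored) negatives are sampled more often, and importance weights correct for the non-uniform proposal.

```python
import numpy as np

def sample_informative_negatives(scores, num_neg, rng=None):
    """Pick negative items with probability proportional to
    exp(score): higher-scored (harder) negatives are more
    informative for a ranking loss."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Softmax over candidate scores gives the sampling proposal.
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = rng.choice(len(scores), size=num_neg, replace=False, p=probs)
    # Importance weights correct for the non-uniform proposal,
    # keeping the loss an estimate over the uniform distribution.
    weights = (1.0 / len(scores)) / probs[idx]
    return idx, weights
```

Here `scores` would come from the recommendation model's current predictions for candidate negative locations.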
Journal ArticleDOI

Graph Convolutional Networks with Markov Random Field Reasoning for Social Spammer Detection

TL;DR: This paper proposes a novel social spammer detection model based on Graph Convolutional Networks (GCNs) that operates on directed social graphs by explicitly considering three types of neighbors, and demonstrates that the method outperforms state-of-the-art approaches.
Proceedings ArticleDOI

Graph Convolutional Networks on User Mobility Heterogeneous Graphs for Social Relationship Inference

TL;DR: A novel model is proposed that uses Graph Convolutional Networks (GCNs) to learn user embeddings on a User Mobility Heterogeneous Graph in an unsupervised manner; the model propagates relations layer by layer and combines the rich structural information of the heterogeneous graph with predictive node features.
Proceedings ArticleDOI

How Powerful is Graph Convolution for Recommendation

TL;DR: In this article, a unified graph convolution-based framework for collaborative filtering (GF-CF) is proposed, whose solution can be obtained in closed form rather than through expensive training with back-propagation.
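A closed-form graph-filter recommender of this flavor can be sketched as follows: normalize the user-item interaction matrix and project it onto its top singular directions, which acts as a low-pass graph filter. This is a simplified illustration of the general idea, not the paper's exact GF-CF formulation.

```python
import numpy as np

def graph_filter_predict(R, k=2):
    """Closed-form collaborative filtering sketch: project the
    degree-normalized interaction matrix onto its top-k right
    singular vectors (an ideal low-pass graph filter)."""
    user_deg = np.maximum(R.sum(axis=1, keepdims=True), 1e-12)
    item_deg = np.maximum(R.sum(axis=0, keepdims=True), 1e-12)
    R_norm = R / np.sqrt(user_deg) / np.sqrt(item_deg)
    # SVD gives the filter basis; no gradient training is needed.
    _, _, Vt = np.linalg.svd(R_norm, full_matrices=False)
    V = Vt[:k].T
    return R_norm @ V @ V.T  # smoothed user-item scores
```

The returned matrix scores every user-item pair in one shot, contrasting with iterative back-propagation training.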
Proceedings ArticleDOI

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

TL;DR: The authors propose LISA (LInear-time Self Attention), which scales linearly with the sequence length while enabling full contextual attention by computing differentiable histograms of codeword distributions.
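The linear-time idea can be sketched with hard codeword assignments (a simplification; the paper's LISA uses differentiable soft assignments): once each key is replaced by its codeword, attention only needs per-codeword counts (a histogram) and per-codeword value sums, so the cost is O(n · num_codewords) instead of O(n²). All names below are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def codeword_histogram_attention(queries, values, assignments, codebook):
    """Linear-time attention sketch with quantized keys: attend
    over codewords, weighting each by its histogram count."""
    C = len(codebook)
    # Histogram of codeword usage and summed values per codeword.
    counts = np.bincount(assignments, minlength=C).astype(float)
    value_sums = np.zeros((C, values.shape[1]))
    np.add.at(value_sums, assignments, values)
    # log(count) in the logits reproduces full attention over all
    # n tokens whose keys collapse onto the same codeword.
    logits = queries @ codebook.T + np.log(np.maximum(counts, 1e-12))
    attn = softmax(logits, axis=-1)
    return attn @ (value_sums / np.maximum(counts, 1e-12)[:, None])
```

With hard assignments this is exactly equivalent to full softmax attention where every key is replaced by its codeword, but computed in time linear in the sequence length.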