
Kaize Ding

Researcher at Arizona State University

Publications: 65
Citations: 1232

Kaize Ding is an academic researcher at Arizona State University. He has contributed to research in the areas of computer science and anomaly detection, has an h-index of 8, and has co-authored 34 publications that have received 320 citations.

Papers
Book Chapter

Deep Anomaly Detection on Attributed Networks

TL;DR: This paper develops a novel deep model that seamlessly captures both the topological structure and the nodal attributes for node embedding learning with the prevalent graph convolutional network (GCN), and tailors it to anomaly detection through a deep autoencoder that leverages the learned embeddings to reconstruct the original data.
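A minimal sketch of this reconstruction-based scheme in PyTorch, assuming a two-layer GCN encoder, decoders for structure (inner product) and attributes, and a weighted per-node reconstruction error as the anomaly score; the layer sizes and the `alpha` balance term are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_norm, x):
        # a_norm: normalized adjacency (N x N); x: node features (N x F)
        return F.relu(self.lin(a_norm @ x))

class GCNAutoencoder(nn.Module):
    def __init__(self, n_feats, hidden=64):
        super().__init__()
        self.enc1 = GCNLayer(n_feats, hidden)
        self.enc2 = GCNLayer(hidden, hidden)
        self.attr_dec = GCNLayer(hidden, n_feats)    # rebuild node attributes

    def forward(self, a_norm, x):
        z = self.enc2(a_norm, self.enc1(a_norm, x))  # node embeddings
        a_hat = torch.sigmoid(z @ z.t())             # rebuild structure
        x_hat = self.attr_dec(a_norm, z)
        return a_hat, x_hat

def anomaly_scores(a, a_hat, x, x_hat, alpha=0.5):
    # Per-node reconstruction error; a larger score means more anomalous.
    struct_err = ((a - a_hat) ** 2).sum(dim=1).sqrt()
    attr_err = ((x - x_hat) ** 2).sum(dim=1).sqrt()
    return alpha * struct_err + (1 - alpha) * attr_err
```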
Proceedings Article

Next-item Recommendation with Sequential Hypergraphs

TL;DR: The proposed model is equipped with a fusion layer that incorporates both the dynamic item embedding and the short-term user intent into the representation of each interaction, and it significantly outperforms the state of the art in predicting the next item of interest for each user.
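The fusion step can be pictured as a small gated layer mixing a dynamic item embedding with a short-term intent vector; the gating parameterization below is an assumption for illustration, not the paper's exact fusion layer:

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, item_emb, intent):
        # item_emb, intent: (batch, dim) -- dynamic item embedding and
        # short-term user intent for the same interaction
        g = torch.sigmoid(self.gate(torch.cat([item_emb, intent], dim=-1)))
        return g * item_emb + (1 - g) * intent   # fused representation
```

The fused vector can then be scored against candidate item embeddings (e.g., by dot product) to rank next items.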
Proceedings Article

Interactive Anomaly Detection on Attributed Networks

TL;DR: This paper investigates anomaly detection on attributed networks in an interactive setting, where the system proactively communicates with a human expert through a limited number of queries about ground-truth anomalies, and develops a novel collaborative contextual bandit algorithm named GraphUCB.
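A plain LinUCB-style loop illustrates the interactive query setting; GraphUCB additionally exploits the network structure among node contexts, which this simplified sketch omits. The `oracle` feedback interface and parameter names are assumptions:

```python
import numpy as np

def interactive_queries(contexts, oracle, n_queries=10, alpha=1.0):
    # contexts: (N, d) per-node features; oracle(i) -> 1 if node i is anomalous
    n, d = contexts.shape
    A = np.eye(d)                       # ridge-regression covariance
    b = np.zeros(d)
    queried, found = set(), []
    for _ in range(n_queries):
        theta = np.linalg.solve(A, b)   # current reward-model estimate
        A_inv = np.linalg.inv(A)
        ucb = contexts @ theta + alpha * np.sqrt(
            np.einsum('nd,dk,nk->n', contexts, A_inv, contexts))
        ucb[list(queried)] = -np.inf    # never show the same node twice
        i = int(np.argmax(ucb))         # most promising node to query
        r = oracle(i)                   # human expert returns the true label
        A += np.outer(contexts[i], contexts[i])
        b += r * contexts[i]
        queried.add(i)
        if r:
            found.append(i)
    return found
```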
Proceedings Article

Graph Prototypical Networks for Few-shot Learning on Attributed Networks

TL;DR: By constructing a pool of semi-supervised node classification tasks that mimic the real test environment, GPN performs meta-learning on an attributed network and derives a highly generalizable model for the target classification task.
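The classification step of a prototypical network on node embeddings can be sketched in a few lines, assuming every class appears in the support set; the episode sampling and the GNN encoder that produces the embeddings are omitted:

```python
import torch

def prototypical_logits(support_emb, support_labels, query_emb, n_classes):
    # support_emb: (S, d); support_labels: (S,); query_emb: (Q, d)
    protos = torch.stack([
        support_emb[support_labels == c].mean(dim=0)   # class prototype
        for c in range(n_classes)
    ])                                                 # (C, d)
    dists = torch.cdist(query_emb, protos)             # Euclidean distances
    return -dists                                      # closer -> higher logit
```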
Posted Content

Be More with Less: Hypergraph Attention Networks for Inductive Text Classification

TL;DR: This paper proposes a principled model, hypergraph attention networks (HyperGAT), which obtains more expressive power at lower computational cost for text representation learning.
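A compact sketch of the dual-level attention this describes: attend over the nodes inside each hyperedge to build edge representations, then over the hyperedges incident to each node to update node representations. The dimensions and the attention parameterization are assumptions, not the exact HyperGAT layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphAttnLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim)
        self.a1 = nn.Linear(out_dim, 1)       # node-level attention
        self.w2 = nn.Linear(out_dim, out_dim)
        self.a2 = nn.Linear(2 * out_dim, 1)   # edge-level attention

    def forward(self, x, incidence):
        # x: (N, in_dim) node features; incidence: (N, E) 0/1 node-edge matrix
        h = self.w1(x)
        # Node -> hyperedge: attend over the nodes inside each hyperedge.
        scores = self.a1(h).squeeze(-1)                        # (N,)
        mask = incidence.t()                                   # (E, N)
        node_logits = scores.unsqueeze(0).expand(mask.size(0), -1)
        attn = F.softmax(node_logits.masked_fill(mask == 0, -1e9), dim=1)
        edge_h = self.w2(attn @ h)                             # (E, out_dim)
        # Hyperedge -> node: attend over the edges incident to each node.
        pair = torch.cat([h.unsqueeze(1).expand(-1, edge_h.size(0), -1),
                          edge_h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1)
        e_scores = self.a2(pair).squeeze(-1)                   # (N, E)
        e_attn = F.softmax(e_scores.masked_fill(incidence == 0, -1e9), dim=1)
        return F.elu(e_attn @ edge_h)                          # updated nodes
```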