Open Access Proceedings ArticleDOI

Sequential Recommendation with Graph Neural Networks

TLDR
Wang et al. propose a graph neural network model called SURGE (short for SeqUential Recommendation with Graph neural nEtworks) to address two main challenges in sequential recommendation.
Abstract
Sequential recommendation aims to leverage users' historical behaviors to predict their next interaction. Existing works have not yet addressed two main challenges in sequential recommendation. First, user behaviors in rich historical sequences are often implicit and noisy preference signals that cannot sufficiently reflect users' actual preferences. Second, users' dynamic preferences often change rapidly over time, making it difficult to capture user patterns in their historical sequences. In this work, we propose a graph neural network model called SURGE (short for SeqUential Recommendation with Graph neural nEtworks) to address these two issues. Specifically, SURGE integrates different types of preferences in long-term user behaviors into clusters in the graph by reconstructing loose item sequences into tight item-item interest graphs based on metric learning. This helps explicitly distinguish users' core interests by forming dense clusters in the interest graph. Then, we perform cluster-aware and query-aware graph convolutional propagation and graph pooling on the constructed graph, which dynamically fuses and extracts users' currently activated core interests from noisy user behavior sequences. We conduct extensive experiments on both public and proprietary industrial datasets. Experimental results demonstrate significant performance gains of our proposed method over state-of-the-art methods. Further studies on sequence length confirm that our method can model long behavioral sequences effectively and efficiently.
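To make the graph-construction step concrete, below is a minimal sketch of turning a loose item sequence into an item-item interest graph via metric learning. It is illustrative, not the authors' implementation: the weighted-cosine metric, the single weight vector w, and the keep_ratio sparsification rule are all assumptions made for exposition.

import numpy as np

def build_interest_graph(item_emb, w, keep_ratio=0.5):
    # item_emb: (n, d) embeddings of one user's behavior sequence
    # w: (d,) learnable metric weights (assumed weighted-cosine similarity)
    # keep_ratio: fraction of the strongest pairs kept as edges (illustrative)
    h = item_emb * w                                   # re-weight each embedding dimension
    h = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)
    sim = h @ h.T                                      # pairwise weighted-cosine similarity
    k = max(1, int(keep_ratio * sim.size))
    thresh = np.sort(sim.ravel())[-k]                  # keep only the strongest similarities,
    adj = (sim >= thresh).astype(np.float32)           # so dense clusters (core interests) emerge
    np.fill_diagonal(adj, 1.0)                         # self-loops for later graph propagation
    return adj

The resulting adjacency matrix would then feed the cluster-aware and query-aware propagation and pooling stages described above.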


Citations
Journal ArticleDOI

Criteria Tell You More than Ratings: Criteria Preference-Aware Light Graph Convolution for Effective Multi-Criteria Recommendation

TL;DR: Zhang et al. propose a criteria preference-aware light graph convolution (CPA-LGC) method that precisely captures users' criteria preferences as well as the collaborative signal in complex high-order connectivities.
Proceedings ArticleDOI

Global Interest Transfer Guided Session-based Recommendation

TL;DR: Wang et al. propose Global Interest Transfer Guided Session-based Recommendation (GITG), which uses global information to learn interest representations and transfer rules between interests to aid recommendation.
Proceedings ArticleDOI

Joint Internal Multi-Interest Exploration and External Domain Alignment for Cross Domain Sequential Recommendation

TL;DR: The authors propose a cross-domain sequential recommendation model (IESRec) comprising two main modules: an internal multi-interest exploration module and an external domain alignment module.
Journal ArticleDOI

Multi-View Attention Networks with Contrastive Predictive Coding for Sequential Recommendation

TL;DR: A multi-view attention network with contrastive predictive coding (MVACPC) is proposed to address the problem of critical information missing from the sequence, yielding more comprehensive user representations.
Journal ArticleDOI

CoRec: An Efficient Internet Behavior-based Recommendation Framework with Edge-cloud Collaboration on Deep Convolution Neural Networks

TL;DR: This work proposes CoRec, an efficient internet behavior-based recommendation framework with edge-cloud collaboration on deep CNNs, to improve both the accuracy and speed of mobile recommendation. It introduces a novel convolutional interest network (CIN) that improves accuracy by modeling long- and short-term interests and accelerates prediction through parallel-friendly convolutions.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
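For reference, the update rule can be restated in a few lines; this sketch follows the published algorithm, with the default hyperparameters being the paper's suggested values.

import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the 1-based step count; m and v start at zero
    m = beta1 * m + (1 - beta1) * grad        # biased first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2     # biased second-moment estimate
    m_hat = m / (1 - beta1**t)                # bias corrections for zero initialization
    v_hat = v / (1 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v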
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
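The sketch below shows one step of an LSTM cell in its now-standard form; note that the forget gate is a later addition to the 1997 architecture, so treat the exact gate layout as illustrative of the idea rather than the original formulation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    # W: (4H, d) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    # with rows stacked as [input gate, forget gate, output gate, candidate]
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*H:1*H])            # input gate
    f = sigmoid(z[1*H:2*H])            # forget gate
    o = sigmoid(z[2*H:3*H])            # output gate
    g = np.tanh(z[3*H:4*H])            # candidate cell update
    c = f * c_prev + i * g             # additive cell path: the constant error carousel
    h = o * np.tanh(c)                 # exposed hidden state
    return h, c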
Proceedings Article

Attention is All you Need

TL;DR: This paper proposes the Transformer, a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
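Its core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, sketched below without batching or masking for clarity.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k) queries, K: (n_k, d_k) keys, V: (n_k, d_v) values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # scaled query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # attention-weighted sum of values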
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: The authors conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and propose to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
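The soft-search is realized as additive attention over the encoder annotations, following the paper's alignment model e_j = v^T tanh(W s_prev + U h_j); the sketch below simplifies shapes and omits the surrounding decoder.

import numpy as np

def additive_attention(s_prev, enc_states, W, U, v):
    # s_prev: (H,) previous decoder state; enc_states: (T, H) encoder annotations;
    # W, U: (H, H) matrices and v: (H,) vector are the alignment-model parameters
    e = np.tanh(enc_states @ U.T + W @ s_prev) @ v   # alignment score per source position
    a = np.exp(e - e.max())
    a /= a.sum()                                     # softmax: soft-search weights
    return a @ enc_states                            # context vector: expected annotation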
Posted Content

Semi-Supervised Classification with Graph Convolutional Networks

TL;DR: A scalable approach for semi-supervised learning on graph-structured data is presented, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
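The paper's layer-wise propagation rule, H^(l+1) = sigma(D^-1/2 (A + I) D^-1/2 H^(l) W^(l)), can be sketched as follows; dense matrices are used here for clarity, whereas the method relies on sparse operations for scalability.

import numpy as np

def gcn_layer(A, H, W):
    # A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out) weights
    A_hat = A + np.eye(A.shape[0])                   # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D = np.diag(d_inv_sqrt)                          # symmetric degree normalization
    return np.maximum(D @ A_hat @ D @ H @ W, 0.0)    # renormalized propagation + ReLU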