Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation
References
Birds of a Feather: Homophily in Social Networks
Representation Learning with Contrastive Predictive Coding
A Comprehensive Survey on Graph Neural Networks
Social Influence: Compliance and Conformity
Recommender Systems with Social Regularization
Frequently Asked Questions (15)
Q2. What are the metrics used to evaluate the performance of all the methods?
To evaluate the performance of all methods, two relevancy-based metrics, Precision@10 and Recall@10, and one ranking-based metric, NDCG@10, are used.
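The three metrics above can be sketched for a single user as follows; this is an illustrative implementation with binary relevance, and the paper's exact evaluation protocol may differ in details such as candidate sampling.

```python
import numpy as np

def precision_recall_ndcg_at_k(ranked_items, relevant_items, k=10):
    """Compute Precision@k, Recall@k, and NDCG@k for one user.

    ranked_items: item ids ordered by predicted score (descending).
    relevant_items: set of ground-truth items for the user.
    Illustrative sketch with binary relevance, not the paper's code.
    """
    top_k = ranked_items[:k]
    hits = [1.0 if item in relevant_items else 0.0 for item in top_k]
    precision = sum(hits) / k
    recall = sum(hits) / max(len(relevant_items), 1)
    # DCG with binary gains; IDCG assumes all relevant items ranked first.
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant_items), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg
```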
Q3. What are the common ideas of MF-based social recommendation algorithms?
The common ideas of MF-based social recommendation algorithms can be categorized into three groups: co-factorization methods [22, 46], ensemble methods [20], and regularization methods [23].
Q4. What is the relevant work to ours?
The most relevant work to ours is GroupIM [32], which maximizes mutual information between representations of groups and group members to overcome the sparsity problem of group interactions.
Q5. What is the purpose of the self-supervised task?
The self-supervised task serves as the auxiliary task to improve the recommendation task by maximizing hierarchical mutual information between the user, user-centered sub-hypergraph, and hypergraph representations.
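A minimal sketch of this hierarchical mutual-information objective is shown below. The paper's discriminator uses a pairwise-ranking formulation; the InfoNCE-style variant here is an illustrative stand-in, and all variable names are assumptions rather than the authors' code.

```python
import numpy as np

def hierarchical_mi_loss(user_emb, subgraph_emb, graph_emb, tau=0.2):
    """InfoNCE-style sketch of hierarchical MI maximization (not the
    paper's exact pairwise-ranking objective).

    user_emb:     (n, d) user representations
    subgraph_emb: (n, d) user-centered sub-hypergraph representations
    graph_emb:    (d,)   global hypergraph representation
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    u, s = normalize(user_emb), normalize(subgraph_emb)
    g = graph_emb / np.linalg.norm(graph_emb)

    # Level 1: each user against all sub-hypergraphs (in-batch negatives).
    logits = u @ s.T / tau                        # (n, n)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    local_loss = -np.mean(np.diag(log_probs))

    # Level 2: sub-hypergraph representations against the global one.
    global_loss = -np.mean(s @ g / tau)

    return local_loss + global_loss
```

Matched user/sub-hypergraph pairs yield a lower loss than mismatched ones, which is the signal the auxiliary task exploits.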
Q6. What is the way to set up a self-supervised task?
Contrasting congruent and incongruent views of graphs with mutual information maximization [29, 37] is another way to set up a self-supervised task, which has also shown promising results.
Q7. How many social triangles are there in the dataset?
Despite the benefits of hypergraph convolution, there are a huge number of motif-induced hyperedges (e.g. 19,385 social triangles in the LastFM dataset used), which would cause a high cost to build the incidence matrix H_s.
Q8. What is the purpose of the proposed hypergraph convolutional network?
As the authors define multiple categories of motifs which concretize different types of high-order relations such as "having a mutual friend", "friends purchasing the same item", and "strangers but purchasing the same item" in social recommender systems, each channel of the proposed hypergraph convolutional network undertakes the task of encoding a different motif-induced hypergraph.
Q9. What is the advantage of using the strengths of hypergraph convolutional networks?
By replacing H with any of H_s, H_j, and H_p, the authors can borrow the strengths of hypergraph convolutional networks to learn user representations that encode high-order information in the corresponding channel.
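The channel-wise propagation can be sketched as a simplified hypergraph convolution driven by the channel's motif-induced adjacency matrix. This is a minimal sketch assuming a propagation rule of the form D^{-1} A_k P; layer weight matrices and nonlinearities are omitted for clarity and the function name is hypothetical.

```python
import numpy as np

def hypergraph_channel_conv(A_k, P, layers=2):
    """Simplified one-channel hypergraph convolution (illustrative sketch).

    A_k: (n, n) motif-induced adjacency matrix for this channel
    P:   (n, d) input user embeddings
    Propagates embeddings as D^{-1} A_k P for a few layers; weight
    matrices and activations are omitted (assumption, not the paper's code).
    """
    deg = A_k.sum(axis=1)
    deg[deg == 0] = 1.0            # avoid division by zero for isolated nodes
    D_inv = np.diag(1.0 / deg)
    out = P
    for _ in range(layers):
        out = D_inv @ A_k @ out    # propagate along motif-induced relations
    return out
```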
Q10. What is the simplest way to learn comprehensive user embeddings?
Then the authors use the attention mechanism [36] to selectively aggregate information from different channel-specific user embeddings to form the comprehensive user embeddings.
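A common way to realize such channel-level attention is a softmax over per-channel scores, sketched below. The parameter names (att_W, att_b, att_q) are hypothetical placeholders for learned attention parameters, not the paper's identifiers.

```python
import numpy as np

def attention_aggregate(channel_embs, att_W, att_b, att_q):
    """Attention-weighted fusion of channel-specific user embeddings
    (illustrative sketch; parameter names are assumptions).

    channel_embs: list of C arrays, each (n, d)
    att_W: (d, d), att_b: (d,), att_q: (d,) learned attention parameters
    Returns (n, d) comprehensive user embeddings.
    """
    stacked = np.stack(channel_embs, axis=1)            # (n, C, d)
    scores = np.tanh(stacked @ att_W + att_b) @ att_q   # (n, C) channel scores
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over channels
    return (weights[..., None] * stacked).sum(axis=1)   # weighted sum
```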
Q11. How much improvement does MHCN achieve in the general recommendation task?
On average, S²-MHCN achieves about 5.389% improvement in the general recommendation task and 9.442% improvement in the cold-start recommendation task compared with MHCN.
Q12. How can the authors avoid the negative interference from the auxiliary task in gradient propagating?
As the authors adopt the primary & auxiliary paradigm, to avoid negative interference from the auxiliary task during gradient propagation, they can only choose small values for β.
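Under this paradigm the total objective is the recommendation loss plus the self-supervised loss scaled by a small β, so the auxiliary gradients cannot overwhelm the primary task. A one-line sketch (variable names are assumptions, not the paper's code):

```python
def joint_loss(rec_loss, ssl_loss, beta=0.01):
    """Primary & auxiliary paradigm: a small beta keeps the auxiliary
    self-supervised loss from dominating the recommendation objective.
    (Illustrative sketch; beta's default value is an assumption.)"""
    return rec_loss + beta * ssl_loss
```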
Q13. What is the reason why the authors will work against it?
Considering that over-smoothed representations could be a pervasive problem in hypergraph convolutional network-based models, the authors will work against it in the future.
Q14. How many vertices can appear in the same instance of M_k?
As two vertices can appear in multiple instances of M_k, (A_k)_{i,j} is computed by: (A_k)_{i,j} = #(i, j occur in the same instance of M_k).
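For the undirected-triangle motif, this count can be obtained by matrix operations without enumerating hyperedges: a minimal sketch, assuming a symmetric 0/1 social adjacency matrix B, where (B @ B) * B counts, for each linked pair (i, j), the common neighbours that close a triangle.

```python
import numpy as np

def triangle_motif_adjacency(B):
    """(A_k)_{i,j} = number of triangle-motif instances containing (i, j).

    B: (n, n) symmetric 0/1 social adjacency matrix.
    (B @ B)[i, j] counts paths of length 2 from i to j; masking by B keeps
    only pairs that are themselves linked, i.e. pairs closing a triangle.
    Sketch for the undirected-triangle motif only; the paper defines
    multiple motif categories.
    """
    return (B @ B) * B
```

Summing the resulting matrix and dividing by 6 (each triangle contributes to 6 ordered pairs) recovers the total triangle count.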
Q15. How do the authors investigate the multi-channel setting?
The authors first investigate the multi-channel setting by removing any of the three channels from S²-MHCN and leaving the other two to observe the changes in performance.