Open Access · Proceedings Article (DOI)

Meta-learning on Heterogeneous Information Networks for Cold-start Recommendation

TL;DR: This work proposes a novel semantic-enhanced task constructor and a co-adaptation meta-learner to address two open questions: how to capture HIN-based semantics in the meta-learning setting, and how to learn general knowledge that can be easily adapted to multifaceted semantics.
Abstract
Cold-start recommendation has been a challenging problem due to sparse user-item interactions for new users or items. Existing efforts have alleviated the cold-start issue to some extent, most approaching the problem at the data level. Earlier methods often incorporate auxiliary data as user or item features, while more recent methods leverage heterogeneous information networks (HINs) to capture richer semantics via higher-order graph structures. On the other hand, the recent meta-learning paradigm sheds light on addressing cold-start recommendation at the model level, given its ability to rapidly adapt to new tasks with scarce labeled data, or, in the context of cold-start recommendation, to new users and items with very few interactions. Thus, we are inspired to develop a novel meta-learning approach named MetaHIN to address cold-start recommendation on HINs, exploiting the power of meta-learning at the model level and of HINs at the data level simultaneously. The solution is non-trivial: how to capture HIN-based semantics in the meta-learning setting, and how to learn general knowledge that can be easily adapted to multifaceted semantics, remain open questions. In MetaHIN, we propose a novel semantic-enhanced task constructor and a co-adaptation meta-learner to address these two questions. Extensive experiments demonstrate that MetaHIN significantly outperforms state-of-the-art methods in various cold-start scenarios. (Code and dataset are available at https://github.com/rootlu/MetaHIN.)
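To make the task formulation concrete, here is a minimal sketch of how a cold-start recommendation problem can be framed for meta-learning: each user becomes one task, and that user's few observed interactions are split into a support set (for adaptation) and a query set (for evaluation). This illustrates only the generic task format; MetaHIN additionally enriches the support and query sets with HIN-based semantic contexts, which is omitted here. The `build_task` helper and the toy interaction logs are illustrative, not from the paper.

```python
def build_task(interactions, support_size):
    """Split one user's rated items into (support, query) sets."""
    items = sorted(interactions.items())      # deterministic order for the sketch
    support = dict(items[:support_size])      # few interactions to adapt on
    query = dict(items[support_size:])        # held-out interactions to evaluate on
    return support, query

# Toy interaction logs: item id -> rating, with very few ratings per user,
# mimicking the cold-start regime.
user_logs = {
    "u1": {"i3": 5.0, "i7": 3.0, "i9": 4.0, "i2": 1.0},
    "u2": {"i1": 2.0, "i5": 4.0, "i8": 5.0},
}
tasks = {u: build_task(logs, support_size=2) for u, logs in user_logs.items()}
```

A meta-learner then adapts on each task's support set and is optimized to perform well on the corresponding query set.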


Citations
Proceedings Article (DOI)

AutoDebias: Learning to Debias for Recommendation

TL;DR: AutoDebias as mentioned in this paper leverages another (small) set of uniform data to optimize the debiasing parameters by solving a bi-level optimization problem with meta-learning, which provides a valuable opportunity to develop a universal solution for debiasing.
Proceedings Article (DOI)

Contrastive Meta Learning with Behavior Multiplicity for Recommendation

TL;DR: A multi-behavior contrastive learning framework to distill transferable knowledge across different types of behaviors via the constructed contrastive loss, and a contrastive meta network to encode the customized behavior heterogeneity for different users are proposed.
Proceedings Article (DOI)

Learning to Warm Up Cold Item Embeddings for Cold-start Recommendation with Meta Scaling and Shifting Networks

TL;DR: Wang et al. as mentioned in this paper proposed Meta Scaling and Shifting Networks to generate a scaling function and a shifting function for each item, which transform cold item ID embeddings into the warm feature space so that they fit the model better.
Proceedings Article (DOI)

Task-adaptive Neural Process for User Cold-Start Recommendation

TL;DR: TaNP as discussed by the authors is a new member of the neural process family, where making recommendations for each user is associated with a corresponding stochastic process; it directly maps the observed interactions of each user to a predictive distribution, sidestepping some training issues in gradient-based meta-learning models.
References
Proceedings Article (DOI)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT as mentioned in this paper pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Posted Content

Semi-Supervised Classification with Graph Convolutional Networks

TL;DR: A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operates directly on graphs and outperforms related methods by a significant margin.
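The layer-wise propagation rule from that paper can be sketched in a few lines: add self-loops to the adjacency matrix, normalize it symmetrically, aggregate neighbor features, then apply a linear transform and a nonlinearity. The tiny path graph, features, and weights below are illustrative stand-ins, not data from any experiment.

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN layer: relu(D^-1/2 (A + I) D^-1/2 @ H @ W)."""
    n = len(adj)
    # Add self-loops: A_hat = A + I
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    # Symmetric normalization: D^-1/2 A_hat D^-1/2
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbor features: norm @ H
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear transform then ReLU: relu(agg @ W)
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]

adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]                  # a 3-node path graph 0-1-2
feats = [[1.0], [0.0], [1.0]]      # one scalar feature per node
weight = [[1.0]]                   # identity-like 1x1 weight
h = gcn_layer(adj, feats, weight)
```

The middle node ends up with a larger value because it aggregates from both neighbors.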
Proceedings Article

Model-agnostic meta-learning for fast adaptation of deep networks

TL;DR: An algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning is proposed.
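The bi-level structure of that algorithm can be sketched with a scalar linear model: an inner loop adapts parameters to each task's support set, and an outer loop updates the shared initialization using the query-set gradients. This is a first-order, hand-differentiated toy (the tasks, learning rates, and model are all illustrative), not the full second-order method from the paper.

```python
def loss_and_grad(w, data):
    """Mean squared error of y ~ w * x, plus its gradient w.r.t. w."""
    n = len(data)
    loss = sum((w * x - y) ** 2 for x, y in data) / n
    grad = sum(2 * (w * x - y) * x for x, y in data) / n
    return loss, grad

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.05):
    """One meta-update: adapt on each support set, evaluate on each query set,
    then move the shared parameter by the averaged query gradient."""
    meta_grad = 0.0
    for support, query in tasks:
        _, g = loss_and_grad(w, support)
        w_task = w - inner_lr * g            # inner loop: task-specific adaptation
        _, gq = loss_and_grad(w_task, query)
        meta_grad += gq                      # outer loop: accumulate query gradient
    return w - outer_lr * meta_grad / len(tasks)

def make_task(slope):
    pts = [(x, slope * x) for x in (-1.0, -0.5, 0.5, 1.0)]
    return pts[:2], pts[2:]                  # (support, query) split

tasks = [make_task(s) for s in (1.8, 2.0, 2.2)]
w = 0.0
for _ in range(200):
    w = maml_step(w, tasks)
```

With tasks whose slopes cluster around 2.0, the shared parameter converges near that value, from which each task needs only a small adaptation step.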
Proceedings Article

Prototypical Networks for Few-shot Learning

TL;DR: Prototypical Networks as discussed by the authors learn a metric space in which classification can be performed by computing distances to prototype representations of each class, and achieve state-of-the-art results on the CU-Birds dataset.
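The classification rule from that paper is simple enough to sketch directly: each class prototype is the mean of its support embeddings, and a query point is assigned to the nearest prototype under squared Euclidean distance. The 2-D embeddings below are illustrative stand-ins for the output of a learned encoder.

```python
def prototype(embeddings):
    """Mean of a class's support embeddings."""
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def classify(query, protos):
    """Label of the prototype nearest to the query (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sq_dist(query, protos[label]))

support = {
    "cat": [[0.0, 1.0], [0.2, 0.8]],
    "dog": [[1.0, 0.0], [0.8, 0.2]],
}
protos = {label: prototype(embs) for label, embs in support.items()}
pred = classify([0.1, 0.9], protos)
```

In the full method, the encoder producing the embeddings is trained so that this nearest-prototype rule generalizes to unseen classes.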
Proceedings Article (DOI)

Neural Collaborative Filtering

TL;DR: This work strives to develop techniques based on neural networks to tackle the key problem in recommendation --- collaborative filtering --- on the basis of implicit feedback, and presents a general framework named NCF, short for Neural network-based Collaborative Filtering.
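The core idea of that framework can be sketched as follows: instead of scoring a user-item pair by the inner product of their embeddings (as in classical matrix factorization), feed the concatenated embeddings through a small neural network ending in a sigmoid, giving a predicted interaction probability. The embeddings, hidden weights, and output weights here are illustrative constants, not learned parameters.

```python
import math

def ncf_score(user_vec, item_vec, hidden_w, out_w):
    """Score a user-item pair with a one-hidden-layer network over the
    concatenated embeddings, in the spirit of the NCF framework."""
    x = user_vec + item_vec                  # concatenate the two embeddings
    hidden = [max(0.0, sum(w[i] * x[i] for i in range(len(x))))  # ReLU units
              for w in hidden_w]
    logit = sum(out_w[j] * hidden[j] for j in range(len(hidden)))
    return 1.0 / (1.0 + math.exp(-logit))    # sigmoid -> interaction probability

user = [0.5, 1.0]
item = [1.0, 0.5]
hidden_w = [[1.0, 0.0, 0.0, 1.0],
            [0.0, 1.0, 1.0, 0.0]]
out_w = [1.0, -1.0]
score = ncf_score(user, item, hidden_w, out_w)
```

In training, the weights and embeddings would be fit jointly to implicit-feedback data with a binary log loss.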