Open Access · Journal ArticleDOI

A survey of collaborative filtering techniques

TLDR
From basic techniques to the state-of-the-art, this paper attempts to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area.
Abstract
As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences of other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (which combine CF with other recommendation techniques), with examples of representative algorithms in each category and an analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area.
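To make the memory-based category concrete, the sketch below is a minimal, illustrative user-based neighborhood method: it computes cosine similarities between users on a toy ratings matrix and predicts an unknown rating as a similarity-weighted average over the most similar users who rated the item. The ratings matrix, neighborhood size, and function names are illustrative assumptions, not code from the survey.

```python
# Minimal sketch of memory-based (user-based) collaborative filtering.
# Illustrative only: the toy ratings matrix, neighborhood size, and function
# names are assumptions, not code from the survey.
import numpy as np

# Rows = users, columns = items; 0 marks an unknown rating.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(u, v):
    """Cosine similarity computed over the items both users have rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(ratings, user, item, k=2):
    """Predict ratings[user, item] from the k most similar users who rated the item."""
    sims = np.array([cosine_sim(ratings[user], ratings[v]) for v in range(len(ratings))])
    sims[user] = 0.0                                  # do not use the user as their own neighbor
    neighbors = np.where(ratings[:, item] > 0)[0]     # only users who rated the item
    top = neighbors[np.argsort(sims[neighbors])[::-1][:k]]
    if sims[top].sum() == 0.0:
        return float(ratings[ratings > 0].mean())     # fall back to the global mean rating
    return float(sims[top] @ ratings[top, item] / sims[top].sum())

print(predict(ratings, user=0, item=2))               # fill in one unknown cell
```

In practice, the memory-based algorithms covered by the survey typically use Pearson correlation or adjusted cosine similarity on mean-centered ratings and much larger neighborhoods; the sketch only shows the shape of the computation.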


Citations
Journal ArticleDOI

A novel Collaborative Filtering recommendation approach based on Soft Co-Clustering

TL;DR: This paper proposes the Soft K-indicators Alternative Projection algorithm, which can efficiently solve high-dimensional soft clustering problems, to generate a sparse partition matrix from which a Top-N recommendation list is derived; results show that the multi-label classification framework describes the data better than the classical Co-Clustering framework.
Journal ArticleDOI

Increasing prediction accuracy in collaborative filtering with initialized factor matrices

TL;DR: A new preprocessing method initializes the latent factor matrices of users and items, which alleviates data sparsity and speeds up matrix factorization convergence (a plain matrix-factorization sketch follows this citation list).
Book ChapterDOI

A Novel Social Event Recommendation Method Based on Social and Collaborative Friendships

TL;DR: This paper proposes a social event recommendation method that exploits users' social and collaborative friendships to recommend events of interest; experiments show that the proposed method is effective and outperforms many well-known recommendation methods.
Posted Content

A Comparative Study of Matrix Factorization and Random Walk with Restart in Recommender Systems

TL;DR: In this article, a comparative study of matrix factorization and random walk with restart (RWR) in recommender systems is presented; the authors evaluate the performance of each method in terms of various measures.
Journal ArticleDOI

Evaluating Recommender Systems: Survey and Framework

TL;DR: The FEVR framework provides a structured foundation for adopting evaluation configurations that encompass the required multi-facetedness of recommender system evaluation and provides a basis for advancing the field.
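Two of the citing papers above (on initialized factor matrices and on the matrix factorization vs. random walk with restart comparison) build on latent factor models. As a hedged illustration of what plain matrix factorization does in that setting, the toy sketch below fits user and item factor matrices to a handful of observed ratings by stochastic gradient descent; the random initialization, learning rate, and regularization values are illustrative assumptions, not the initialization scheme or evaluation protocol of those papers.

```python
# Minimal sketch of rating prediction via matrix factorization trained with SGD.
# Illustrative assumptions: random initialization and toy hyperparameters; this is
# not the preprocessing/initialization method of the citing papers above.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 4, 4, 2

# Observed (user, item, rating) triples; all other cells are unknown.
observed = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
            (2, 3, 5.0), (3, 2, 5.0), (3, 3, 4.0)]

P = 0.1 * rng.standard_normal((n_users, n_factors))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item latent factors

lr, reg = 0.02, 0.05
for epoch in range(200):
    for u, i, r in observed:
        err = r - P[u] @ Q[i]                    # error on one observed rating
        P[u] += lr * (err * Q[i] - reg * P[u])   # gradient step with L2 regularization
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))   # full predicted rating matrix, including unknown cells
```

The contribution of the initialized-factor-matrices paper is precisely how P and Q are set before such training; here they are simply drawn from a small random normal distribution.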
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Journal ArticleDOI

Latent Dirichlet allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Proceedings Article

Latent Dirichlet Allocation

TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models, including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).

Some methods for classification and analysis of multivariate observations

TL;DR: The k-means algorithm partitions an N-dimensional population into k sets on the basis of a sample; the procedure is a generalization of the ordinary sample mean and is shown to give partitions that are reasonably efficient in the sense of within-class variance.
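As a hedged illustration of the procedure summarized in this reference (a Lloyd-style k-means loop, not necessarily the exact formulation of the original paper), the sketch below alternates between assigning points to their nearest centroid and recomputing each centroid as the mean of its assigned points; the toy data, seed, and iteration count are illustrative assumptions.

```python
# Minimal k-means sketch: alternate nearest-centroid assignment and mean updates.
# Toy data, seed, and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs of 20 points each.
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])

k = 2
centroids = X[rng.choice(len(X), size=k, replace=False)]   # k data points as initial centroids

for _ in range(10):
    # Assign each point to its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Recompute each centroid as the mean of its assigned points (keep it if the cluster is empty).
    centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                          for j in range(k)])

print(labels)      # cluster assignment for each point
print(centroids)   # final centroids, near the two blob centers
```

For a fixed assignment, the mean-update step minimizes the within-cluster sum of squares, which is why the reference describes the resulting partitions as efficient in the sense of within-class variance.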