Journal ArticleDOI

A Deep Latent Factor Model for High-Dimensional and Sparse Matrices in Recommender Systems

TLDR
A deep latent factor model (DLFM) is proposed for efficiently building a deep-structured RS on an HiDS matrix by sequentially connecting multiple latent factor (LF) models through a nonlinear activation function, rather than stacking multilayered neural networks.
Abstract
Recommender systems (RSs) commonly adopt a user-item rating matrix to describe users’ preferences on items. As the numbers of users and items explode, such a matrix is usually high-dimensional and sparse (HiDS). Recently, the idea of deep learning has been applied to RSs. However, current deep-structured RSs suffer from high computational complexity. Enlightened by the idea of deep forest, this paper proposes a deep latent factor model (DLFM) for building a deep-structured RS on an HiDS matrix efficiently. Its main idea is to construct a deep-structured model by sequentially connecting multiple latent factor (LF) models through a nonlinear activation function, instead of stacking multilayered neural networks. Thus, its computational complexity grows only linearly with the layer count, which keeps the model easy to train in practice. Experimental results on four HiDS matrices from industrial RSs demonstrate that, compared with state-of-the-art LF models and deep-structured RSs, DLFM can well balance prediction accuracy and computational efficiency, which fits the need of industrial RSs for fast and accurate recommendations.
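The cascading idea above can be made concrete with a short sketch. The code below is a minimal, illustrative approximation only: it assumes each layer is a plain rank-k LF model trained by SGD over observed entries, and that a tanh activation couples one layer's residual to the next. The function names (train_lf, train_dlfm_like, predict) and all hyperparameters are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def train_lf(entries, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=50):
    """Train one latent factor (LF) model by SGD over the observed
    (user, item, value) triples only, as is standard for HiDS matrices."""
    P = 0.1 * np.random.randn(n_users, k)
    Q = 0.1 * np.random.randn(n_items, k)
    for _ in range(epochs):
        for u, i, r in entries:
            e = r - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q

def train_dlfm_like(entries, n_users, n_items, layers=3, k=20):
    """Cascade of LF models: each layer fits what the previous layers could
    not explain, with tanh squashing the residual that is passed on (an
    assumption standing in for the paper's activation-based coupling).
    Training cost grows linearly with the number of layers."""
    models, current = [], list(entries)
    for _ in range(layers):
        P, Q = train_lf(current, n_users, n_items, k=k)
        models.append((P, Q))
        current = [(u, i, np.tanh(r - P[u] @ Q[i])) for u, i, r in current]
    return models

def predict(models, u, i):
    """Sum the layer-wise predictions; with the tanh coupling above this is
    an approximation rather than an exact inverse of the training transform."""
    return sum(P[u] @ Q[i] for P, Q in models)
```

Since each layer only revisits the observed entries once more, the total training cost scales linearly with the number of layers, which is the complexity behavior claimed in the abstract.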


Citations
Journal ArticleDOI

A Latent Factor Analysis-Based Approach to Online Sparse Streaming Feature Selection

TL;DR: This study proposes a latent-factor-analysis-based online sparse-streaming-feature selection algorithm (LOSSA), whose core idea is to apply latent factor analysis to pre-estimate the missing data in sparse streaming features before conducting feature selection, thereby addressing the missing-data issue effectively and efficiently.
Journal ArticleDOI

Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data

TL;DR: Zhang et al. as discussed by the authors proposed a smooth L1-norm-oriented latent factor (SL-LF) model, which is more robust to outlier data.
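The robustness claim comes from replacing the squared error with a smooth L1 surrogate. Below is a minimal sketch of one SGD step under a Charbonnier-style surrogate sqrt(e^2 + eps), which is one common smooth approximation of |e|; the exact smoothing used in SL-LF may differ, and the function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sgd_step_smooth_l1(P, Q, u, i, r, lr=0.01, reg=0.05, eps=1e-3):
    """One SGD step for an LF model under a smooth L1 loss sqrt(e^2 + eps).
    Its gradient e / sqrt(e^2 + eps) is bounded in (-1, 1), so a single
    outlier rating pulls the factors far less than under squared loss."""
    e = r - P[u] @ Q[i]
    g = e / np.sqrt(e * e + eps)   # bounded gradient: robust to outliers
    pu = P[u].copy()
    P[u] += lr * (g * Q[i] - reg * P[u])
    Q[i] += lr * (g * pu - reg * Q[i])
```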
Journal ArticleDOI

A Novel Group Recommendation Model With Two-Stage Deep Learning

TL;DR: A novel model, called group recommendation model with two-stage deep learning (GRMTDL), is proposed; it effectively absorbs knowledge of user preferences into the GPL process and employs a novel layered transfer learning (LTL) method that learns group preferences by alternately optimizing two subnetworks.
Journal ArticleDOI

An Overview of Recommendation Techniques and Their Applications in Healthcare

TL;DR: A comprehensive review of typical recommendation techniques and their applications in healthcare is presented in this article, covering three well-known families of techniques: content-based, collaborative filtering (CF)-based, and hybrid methods.
References
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal ArticleDOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: In this article, the authors describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal Article

Statistical Comparisons of Classifiers over Multiple Data Sets

TL;DR: A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
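Both recommended tests are available in SciPy. The sketch below uses made-up accuracy scores purely for illustration; the data, variable names, and any significance threshold are assumptions, not results from the cited paper.

```python
from scipy.stats import wilcoxon, friedmanchisquare

# Illustrative (made-up) accuracies of classifiers A, B, C on six data sets.
acc_a = [0.81, 0.74, 0.90, 0.68, 0.77, 0.85]
acc_b = [0.79, 0.75, 0.88, 0.66, 0.74, 0.83]
acc_c = [0.80, 0.72, 0.86, 0.69, 0.73, 0.82]

# Two classifiers: Wilcoxon signed-rank test on paired per-data-set scores.
stat, p = wilcoxon(acc_a, acc_b)
print(f"Wilcoxon: statistic={stat:.2f}, p={p:.3f}")

# More than two classifiers: Friedman test over the same data sets,
# followed (if significant) by post-hoc tests such as Nemenyi.
stat, p = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"Friedman: statistic={stat:.2f}, p={p:.3f}")
```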
Journal ArticleDOI

Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions

TL;DR: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches.
Journal ArticleDOI

Matrix Factorization Techniques for Recommender Systems

TL;DR: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.
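For reference, here is a minimal sketch of the biased matrix factorization model at the heart of this line of work, predicting r(u,i) ≈ mu + b_u + b_i + p_u·q_i and trained by SGD on observed ratings. The function name and hyperparameters are illustrative assumptions, and the extensions the summary mentions (implicit feedback, temporal effects, confidence levels) are omitted.

```python
import numpy as np

def train_biased_mf(ratings, n_users, n_items, k=40, lr=0.005, reg=0.02, epochs=30):
    """Biased matrix factorization: r_hat(u, i) = mu + bu[u] + bi[i] + P[u].Q[i],
    fit by SGD over the observed (user, item, rating) triples only."""
    mu = np.mean([r for _, _, r in ratings])
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    P = 0.1 * np.random.randn(n_users, k)
    Q = 0.1 * np.random.randn(n_items, k)
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            pu = P[u].copy()
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return mu, bu, bi, P, Q
```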