Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
RecSys 2016, pp. 191-198
TL;DR: This paper details a deep candidate generation model, then describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
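To make the two-stage dichotomy concrete, here is a minimal sketch of the pattern the abstract describes: a cheap candidate generator narrows a large corpus to a few hundred items, and a richer ranker re-scores that small set. All names, shapes, and the dot-product scoring are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
CORPUS = rng.normal(size=(100_000, 64))  # precomputed video embeddings (hypothetical)

def generate_candidates(user_vec: np.ndarray, k: int = 500) -> np.ndarray:
    """Stage 1: retrieve a few hundred candidates from the full corpus via
    dot-product similarity (brute force here; ANN search in practice)."""
    scores = CORPUS @ user_vec
    return np.argpartition(-scores, k)[:k]

def rank(user_vec: np.ndarray, candidate_ids: np.ndarray) -> np.ndarray:
    """Stage 2: re-score only the small candidate set with a richer model
    (stubbed here as the same dot product plus a per-item bias)."""
    bias = rng.normal(size=len(candidate_ids)) * 0.01  # stand-in for richer features
    scores = CORPUS[candidate_ids] @ user_vec + bias
    return candidate_ids[np.argsort(-scores)]

user = rng.normal(size=64)
top_10 = rank(user, generate_candidates(user))[:10]  # final recommendations
```

The point of the split is computational: the generator must be fast enough to scan millions of items, while the ranker can afford expensive features because it only sees hundreds.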


Citations
Proceedings Article

MARS: Memory Attention-Aware Recommender System

TL;DR: A Memory Attention-aware Recommender System (MARS) that uses a memory component and a novel attention mechanism to learn deep, adaptive user representations; it is trained end-to-end and adaptively summarizes users' interests.
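A hedged sketch of the attention idea this TL;DR describes: the user representation is an attention-weighted sum over memory slots (here, embeddings of past items), adapted to the candidate item being scored. The shapes and the dot-product attention are assumptions, not MARS's exact architecture.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_user_repr(memory: np.ndarray, item_vec: np.ndarray) -> np.ndarray:
    """memory: (n_slots, d) past-item embeddings; item_vec: (d,) candidate.
    The user vector changes depending on which item is being scored."""
    attn = softmax(memory @ item_vec)  # relevance of each memory slot
    return attn @ memory               # weighted sum: adaptive user vector

rng = np.random.default_rng(1)
memory, item = rng.normal(size=(20, 32)), rng.normal(size=32)
score = adaptive_user_repr(memory, item) @ item  # candidate relevance score
```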
Posted Content

A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation.

TL;DR: A systematic review of neural recommender models that summarizes the field from the perspective of recommendation modeling and discusses promising directions, including benchmarking recommender systems, graph-reasoning-based recommendation models, and explainable and fair recommendation for social good.
Proceedings Article

Show me the Cache: Optimizing Cache-Friendly Recommendations for Sequential Content Access

TL;DR: The authors propose a Markovian model of recommendation-driven user requests and formulate the problem of biasing the recommendation algorithm to minimize access cost while maintaining acceptable recommendation quality.
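A minimal sketch of the biasing idea, assuming a simple linear trade-off between recommendation quality and access cost; the penalty weight `lam` and the linear form are illustrative, not the paper's actual formulation.

```python
import numpy as np

def biased_scores(quality: np.ndarray, access_cost: np.ndarray,
                  lam: float = 0.3) -> np.ndarray:
    """Re-rank by quality minus a penalty for expensive (uncached) items."""
    return quality - lam * access_cost

quality = np.array([0.9, 0.85, 0.8, 0.7])   # recommender's relevance scores
cost = np.array([1.0, 0.0, 0.0, 1.0])       # 1 = not cached, 0 = cached
order = np.argsort(-biased_scores(quality, cost))  # cached items move up
```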
Proceedings Article

Learning to Embed Categorical Features without Embedding Tables for Recommendation

TL;DR: Deep Hash Embedding (DHE) replaces traditional embedding tables with a deep embedding network that computes embeddings on the fly, handling high-cardinality features and unseen feature values.
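A hedged sketch of the DHE idea: hash the raw id with several hash functions into a dense code, then map that code through a small network to produce the embedding on the fly. The hash choice and layer sizes here are assumptions.

```python
import numpy as np

K, D = 16, 32                        # number of hash functions, embedding dim
rng = np.random.default_rng(2)
W1 = rng.normal(size=(K, 64)) * 0.1  # small MLP weights (illustrative sizes)
W2 = rng.normal(size=(64, D)) * 0.1

def hash_code(feature_id: int) -> np.ndarray:
    """Encode the raw id with K deterministic hashes, scaled to about [-1, 1]."""
    codes = [hash((feature_id, i)) % 1000 for i in range(K)]
    return np.array(codes) / 500.0 - 1.0

def dhe_embedding(feature_id: int) -> np.ndarray:
    """Compute the embedding on the fly; no table, so unseen ids work too."""
    h = hash_code(feature_id)
    return np.maximum(h @ W1, 0.0) @ W2  # one-hidden-layer ReLU network

vec = dhe_embedding(1_234_567_890)  # any id maps to a D-dim embedding
```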
Posted Content

Practical Privacy Preserving POI Recommendation

TL;DR: Li et al. propose a privacy-preserving POI recommendation framework in which users' private data (features and actions) are kept on their own devices, e.g., a cellphone or tablet, while public data, which all users need to access, is kept off-device to reduce the storage cost of users' devices.
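A minimal sketch of the data split this TL;DR describes, assuming a federated-style setup in which the user's private profile never leaves the device and only public POI data is shared; the class name and linear scoring are illustrative, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(5)
public_poi_features = rng.normal(size=(1000, 16))  # shared by all users

class OnDeviceRecommender:
    """Holds the user's private state locally and scores public POIs."""
    def __init__(self, dim: int = 16):
        self.private_profile = np.zeros(dim)  # never uploaded anywhere

    def record_action(self, poi_id: int, weight: float = 0.1) -> None:
        # Private action: update the local profile only.
        self.private_profile += weight * public_poi_features[poi_id]

    def recommend(self, k: int = 5) -> np.ndarray:
        scores = public_poi_features @ self.private_profile
        return np.argsort(-scores)[:k]

device = OnDeviceRecommender()
device.record_action(42)
top_pois = device.recommend()
```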
References

Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
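For readers unfamiliar with the mechanism, here is a minimal sketch of training-time batch normalization: each feature is normalized over the mini-batch and then rescaled with learned parameters gamma and beta (the running statistics used at inference time are omitted for brevity).

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
               eps: float = 1e-5) -> np.ndarray:
    """x: (batch, features). Normalize per feature, then rescale."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(3).normal(loc=5.0, scale=2.0, size=(128, 10))
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))  # ~zero mean, unit variance
```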
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
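A minimal sketch of the negative-sampling objective: maximize the score of an observed (word, context) pair while pushing down k randomly sampled negative contexts. The vector dimensions and the uniform sampling here are placeholders for the paper's learned embeddings and noise distribution.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(w: np.ndarray, c_pos: np.ndarray,
                      c_negs: np.ndarray) -> float:
    """w, c_pos: (d,) word/context vectors; c_negs: (k, d) sampled negatives.
    Loss is low when the true pair scores high and the negatives score low."""
    pos = -np.log(sigmoid(w @ c_pos))
    neg = -np.log(sigmoid(-(c_negs @ w))).sum()
    return float(pos + neg)

rng = np.random.default_rng(4)
d = 50
loss = neg_sampling_loss(rng.normal(size=d), rng.normal(size=d),
                         rng.normal(size=(5, d)))
```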