Open Access · Proceedings Article · DOI

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
RecSys '16, pp. 191-198
TLDR
This paper details a deep candidate generation model, describes a separate deep ranking model, and provides practical lessons and insights from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
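The two-stage dichotomy the abstract describes can be made concrete with a short sketch: a cheap candidate generator narrows the full corpus to a few hundred videos, and a separate ranker scores only those. Everything below (the embedding shapes, the dot-product recall, the trivial ranking model) is an illustrative assumption, not the paper's actual architecture.

```python
# Minimal sketch of a two-stage recommender: retrieve, then rank.
# All names, shapes, and scoring functions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_VIDEOS, DIM = 100_000, 64
video_embeddings = rng.standard_normal((NUM_VIDEOS, DIM)).astype(np.float32)

def generate_candidates(user_vec: np.ndarray, k: int = 500) -> np.ndarray:
    """Stage 1: narrow the full corpus to k candidates via dot-product recall."""
    scores = video_embeddings @ user_vec
    return np.argpartition(scores, -k)[-k:]        # indices of the k best

def rank(user_vec: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Stage 2: score only the candidates and return them best-first."""
    scores = video_embeddings[candidates] @ user_vec
    return candidates[np.argsort(-scores)]

user_vec = rng.standard_normal(DIM).astype(np.float32)
top_videos = rank(user_vec, generate_candidates(user_vec))[:25]
```

In production the candidate stage is typically served from an approximate nearest-neighbor index rather than the brute-force matrix multiply above.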


Citations
Proceedings Article · DOI

Click-through rate prediction with the user memory network

TL;DR: Memory Augmented DNN (MA-DNN) proposes two external memory vectors for each user, memorizing high-level abstractions of what the user possibly likes and dislikes.
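A hedged sketch of that idea follows, assuming an exponential-moving-average write rule (the paper's exact read/write mechanism may differ): each user carries a "like" and a "dislike" memory vector, and both are concatenated with the item features as input to the CTR network.

```python
# Sketch of per-user external memory for CTR prediction, after MA-DNN's
# high-level idea. The moving-average update rule and dimensions are
# assumptions for illustration, not the paper's exact mechanism.
import numpy as np

DIM, ALPHA = 32, 0.1
user_memory: dict[int, dict[str, np.ndarray]] = {}

def update_memory(user_id: int, item_vec: np.ndarray, clicked: bool) -> None:
    """Write: fold the item into the user's like- or dislike-memory."""
    mem = user_memory.setdefault(
        user_id, {"like": np.zeros(DIM), "dislike": np.zeros(DIM)}
    )
    key = "like" if clicked else "dislike"
    mem[key] = (1 - ALPHA) * mem[key] + ALPHA * item_vec

def ctr_input(user_id: int, item_vec: np.ndarray) -> np.ndarray:
    """Read: concatenate item features with both memory vectors for the DNN."""
    mem = user_memory.get(user_id, {"like": np.zeros(DIM), "dislike": np.zeros(DIM)})
    return np.concatenate([item_vec, mem["like"], mem["dislike"]])
```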
Proceedings Article · DOI

Theoretical Understandings of Product Embedding for E-commerce Machine Learning

TL;DR: This paper takes an e-commerce-oriented view of product embeddings and develops a complete theoretical account from both the representation-learning and the learning-theory perspectives.
Proceedings Article · DOI

Distributed Equivalent Substitution Training for Large-Scale Recommender Systems

TL;DR: Distributed Equivalent Substitution (DES) training replaces weights-rich operators with computationally equivalent sub-operators and aggregates partial results instead of transmitting the huge sparse weights directly over the network.
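The core trick can be illustrated with a toy sharded embedding lookup, assuming a simple modulo sharding scheme (hypothetical, for illustration only): each worker computes the partial sum its own shard can produce, so only small dense partial results cross the network instead of the sparse weight table.

```python
# Toy illustration of aggregating partial results across embedding shards.
# The sharding scheme, shapes, and in-process "workers" are assumptions.
import numpy as np

NUM_WORKERS, VOCAB, DIM = 4, 1_000, 16
rng = np.random.default_rng(1)
# Worker r owns the rows of the embedding table with id % NUM_WORKERS == r.
shards = [{i: rng.standard_normal(DIM) for i in range(r, VOCAB, NUM_WORKERS)}
          for r in range(NUM_WORKERS)]

def partial_sum(rank: int, feature_ids: list[int]) -> np.ndarray:
    """One worker's forward pass: sum the embeddings it owns."""
    shard = shards[rank]
    return sum((shard[i] for i in feature_ids if i in shard), np.zeros(DIM))

feature_ids = [3, 7, 42, 999]
# Only NUM_WORKERS small DIM-vectors are exchanged, never the full table.
pooled = sum(partial_sum(r, feature_ids) for r in range(NUM_WORKERS))
```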
Posted Content

Communication Optimization Strategies for Distributed Deep Learning: A Survey

TL;DR: A comprehensive survey of communication strategies from both the algorithm and the computer-network perspective, covering how to reduce the number of communication rounds and the bits transmitted per round, and how to overlap computation with communication.
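One concrete instance of the "fewer transmitted bits per round" family the survey covers is top-k gradient sparsification; the sketch below, with an assumed 1% keep ratio, shows the idea.

```python
# Top-k gradient sparsification: send only the k largest-magnitude entries.
# The 1% keep ratio is an illustrative assumption.
import numpy as np

def sparsify_topk(grad: np.ndarray, ratio: float = 0.01):
    """Return (indices, values) of the top `ratio` fraction of |grad|."""
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]          # this pair is all that gets transmitted

grad = np.random.default_rng(0).standard_normal(1_000_000)
idx, vals = sparsify_topk(grad)    # ~10k floats instead of 1M
```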
Book Chapter · DOI

A Recurrent Neural Network Survival Model: Predicting Web User Return Time

TL;DR: A novel RNN survival model combines a recurrent network with survival analysis to predict web-user return time on a large e-commerce dataset, discriminating between returning and non-returning users better than either method applied in isolation.
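A hedged sketch of how a recurrent encoder and a survival loss fit together, assuming an exponential return-time distribution (the chapter's actual model may use a different distribution): the RNN's final state predicts a log hazard rate, and censored users, who have not yet returned, contribute only the survival term.

```python
# Survival negative log-likelihood for return-time prediction, assuming an
# exponential time-to-return distribution. The RNN itself is omitted; its
# final state would produce `log_rate`.
import numpy as np

def survival_nll(log_rate: np.ndarray, time: np.ndarray,
                 returned: np.ndarray) -> float:
    """log_rate: predicted log hazard per user.
    time: return time if observed, else time since last visit (censored).
    returned: 1 if the user came back (uncensored), 0 if censored."""
    rate = np.exp(log_rate)
    log_pdf = log_rate - rate * time   # log f(t) for observed returns
    log_surv = -rate * time            # log S(t) for censored users
    return float(-(returned * log_pdf + (1 - returned) * log_surv).mean())

loss = survival_nll(np.array([-1.0, 0.2]), np.array([3.0, 1.5]),
                    np.array([1.0, 0.0]))
```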
References
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
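The transform itself is compact enough to state directly; this NumPy rendering covers the training-time forward pass only (running statistics for inference are omitted for brevity).

```python
# Batch normalization forward pass: normalize each feature over the
# mini-batch, then apply the learned scale (gamma) and shift (beta).
import numpy as np

def batch_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
               eps: float = 1e-5) -> np.ndarray:
    mean = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta              # learned scale and shift

x = np.random.default_rng(0).standard_normal((128, 10))
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
```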
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
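The negative-sampling objective is easy to sketch: push the score of an observed (center, context) pair up and the scores of k randomly drawn negatives down. The uniform negative sampler below is a simplification; the paper draws negatives from a smoothed unigram distribution.

```python
# Skip-gram with negative sampling: one (center, context) loss term.
# Uniform negative sampling simplifies the paper's unigram^0.75 sampler.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, K = 10_000, 100, 5
W_in = rng.standard_normal((VOCAB, DIM)) * 0.01   # center-word vectors
W_out = rng.standard_normal((VOCAB, DIM)) * 0.01  # context-word vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_loss(center: int, context: int) -> float:
    negatives = rng.integers(0, VOCAB, size=K)
    pos = np.log(sigmoid(W_out[context] @ W_in[center]))
    neg = np.log(sigmoid(-(W_out[negatives] @ W_in[center]))).sum()
    return float(-(pos + neg))

loss = neg_sampling_loss(center=42, context=7)
```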