Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
pp. 191-198
TL;DR
This paper details a deep candidate generation model, then describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
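As an illustration of the two-stage dichotomy the abstract describes, here is a minimal NumPy sketch: a cheap nearest-neighbor pass over the whole corpus produces a few hundred candidates, and a second, more expensive model scores only those. All sizes, names, and the stand-in linear ranker are illustrative assumptions, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: embeddings for 100k items and one user (sizes are illustrative).
n_items, dim = 100_000, 64
item_emb = rng.normal(size=(n_items, dim)).astype(np.float32)
user_emb = rng.normal(size=dim).astype(np.float32)

# Stage 1 -- candidate generation: a cheap nearest-neighbor search over
# all items reduces the corpus to a few hundred candidates.
scores = item_emb @ user_emb
k = 500
candidates = np.argpartition(scores, -k)[-k:]

# Stage 2 -- ranking: a more expensive model (here a stand-in linear model)
# orders only the surviving candidates and keeps the final top N.
rank_weights = rng.normal(size=dim).astype(np.float32)
rank_scores = item_emb[candidates] @ rank_weights
top_n = candidates[np.argsort(rank_scores)[::-1][:25]]
print(top_n[:10])
```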


Citations
Journal Article

Music Video Recommendation Based on Link Prediction Considering Local and Global Structures of a Network

TL;DR: By using the LP-LGSN to predict the degree to which users desire music videos, the proposed method can recommend the videos users want; experimental results on a real-world dataset constructed from YouTube-8M show the effectiveness of the method.
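The LP-LGSN model itself is not detailed in this summary; the following is a generic link-prediction sketch that blends a local signal (common neighbors) with a global one (a truncated Katz index), which is one plausible reading of "local and global structures of a network". The adjacency matrix, decay beta, and blend weight alpha are all illustrative assumptions.

```python
import numpy as np

# Tiny undirected graph as an adjacency matrix (illustrative data).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Local structure: common-neighbor counts (paths of length 2).
local = A @ A

# Global structure: truncated Katz index, summing decayed path counts.
beta, katz = 0.1, np.zeros_like(A)
Ak = A.copy()
for l in range(1, 6):
    katz += (beta ** l) * Ak
    Ak = Ak @ A

# Blend the two views; a learned weight would replace alpha in practice.
alpha = 0.5
link_score = alpha * local + (1 - alpha) * katz
print(link_score.round(3))
```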
Book Chapter

How Facebook and Google Accidentally Created a Perfect Ecosystem for Targeted Disinformation

TL;DR: This chapter provides examples and discusses the mechanisms and interactions by which optimizing for metrics like dwell time, watch time, or “engagement” can promote disinformation and propaganda content.
Proceedings Article

Recommendations and User Agency: The Reachability of Collaboratively-Filtered Information

TL;DR: The authors consider the information availability problem through the lens of user recourse, propose a computationally efficient audit for top-N linear recommender models, and describe the relationship between model complexity and the effort necessary for users to exert control over their recommendations.
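A hedged sketch of what a reachability (recourse) check can look like for a top-N linear recommender: with scores linear in the user's ratings, we ask by brute force whether a target item can enter the top-N when one mutable rating is varied. The factor model, indices, and grid are toy assumptions; the paper's audit is computationally more efficient than this.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, dim, N = 50, 8, 5

# Item factors of a linear (matrix-factorization style) recommender.
V = rng.normal(size=(n_items, dim))
ratings = np.zeros(n_items)
ratings[rng.choice(n_items, 10, replace=False)] = rng.uniform(1, 5, 10)

def top_n(r):
    # Scores are linear in the ratings: s = V (V^T r).
    s = V @ (V.T @ r)
    return set(np.argsort(s)[::-1][:N])

# Brute-force recourse check: can the user pull `target` into the top-N
# by changing the single mutable rating at index `mutable`?
target, mutable = 3, 7
reachable = any(
    target in top_n(np.where(np.arange(n_items) == mutable, v, ratings))
    for v in np.linspace(0, 5, 51)
)
print("target reachable:", reachable)
```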
Posted Content

CPR: Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery

TL;DR: CPR relaxes the consistency requirement by enabling non-failed nodes to proceed without loading checkpoints when a node fails during training, reducing failure-related overhead; experiments suggest that CPR can speed up training on a real production-scale cluster without notably degrading accuracy.
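A toy sketch of the partial-recovery idea as summarized above; the class names and flow are assumptions, not CPR's implementation. On failure, only the failed worker rolls back to its checkpointed shard, while healthy workers keep their fresher in-memory state and training proceeds with slight inconsistency.

```python
import copy

class Worker:
    def __init__(self, wid):
        self.wid, self.step, self.shard = wid, 0, {"w": 0.0}

    def train_step(self):
        self.step += 1
        self.shard["w"] += 0.1  # stand-in for a gradient update

    def checkpoint(self):
        # Snapshot this worker's parameter shard and progress counter.
        self.saved = (self.step, copy.deepcopy(self.shard))

    def recover(self):
        # Partial recovery: only this worker rolls back to its snapshot.
        self.step, self.shard = self.saved

workers = [Worker(i) for i in range(4)]
for w in workers:
    w.checkpoint()
for _ in range(100):
    for w in workers:
        w.train_step()

workers[2].recover()  # node 2 fails; the others do NOT reload checkpoints
print([(w.wid, w.step) for w in workers])
```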
Proceedings Article

An Homophily-based Approach for Fast Post Recommendation in Microblogging Systems

TL;DR: After a thorough study of a large Twitter dataset, this work presents a propagation model that relies on homophily to propose post recommendations, built on a similarity graph constructed from retweet behaviors on top of the Twitter graph.
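One plausible reading of the similarity-graph construction, sketched minimally: connect users whose sets of retweeted posts overlap enough, so recommendations can propagate along these homophily edges. The data, threshold, and the choice of Jaccard similarity are illustrative assumptions.

```python
from itertools import combinations

# Toy retweet logs: user -> set of post ids they retweeted (illustrative).
retweets = {
    "alice": {1, 2, 3, 4},
    "bob":   {2, 3, 4, 5},
    "carol": {7, 8},
    "dave":  {3, 4, 5, 9},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Similarity graph: keep edges between users whose retweet sets overlap;
# posts then propagate along these homophily edges for recommendation.
threshold = 0.3
edges = [
    (u, v, round(jaccard(retweets[u], retweets[v]), 2))
    for u, v in combinations(retweets, 2)
    if jaccard(retweets[u], retweets[v]) >= threshold
]
print(edges)
```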
References

Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
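The core transform is easy to state: normalize each feature over the mini-batch, then apply a learned scale and shift. A minimal NumPy version of the training-time forward pass (the running statistics used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift
    # with the learned parameters gamma and beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# Each feature now has (approximately) zero mean and unit variance.
print(y.mean(axis=0).round(6), y.std(axis=0).round(6))
```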
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
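Negative sampling replaces the full softmax with a handful of binary discriminations: maximize log sigma(v_ctx . v_c) for the observed (center, context) pair and log sigma(-v_neg . v_c) for a few sampled negatives. A minimal NumPy sketch of the per-pair loss; the vectors here are random placeholders for learned embeddings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_loss(center, context, negatives):
    # Skip-gram negative sampling objective:
    # log sigma(v_ctx . v_c) + sum_i log sigma(-v_neg_i . v_c),
    # negated so it is a loss to minimize.
    pos = np.log(sigmoid(context @ center))
    neg = np.log(sigmoid(-(negatives @ center))).sum()
    return -(pos + neg)

rng = np.random.default_rng(0)
dim = 16
loss = neg_sampling_loss(rng.normal(size=dim),      # center word vector
                         rng.normal(size=dim),      # true context vector
                         rng.normal(size=(5, dim))) # 5 sampled negatives
print(round(loss, 4))
```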