Open Access · Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
RecSys '16, pp. 191–198
TL;DR
This paper details a deep candidate generation model, describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
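
For concreteness, the following is a minimal sketch of the two-stage pattern the abstract describes: a candidate generator narrows a huge corpus to a few hundred items, and a separate ranker orders that shortlist. The corpus size, embedding dimension, dot-product candidate scorer, and stand-in ranker below are illustrative assumptions, not the paper's actual models.

```python
# Two-stage sketch: candidate generation narrows the corpus, a separate
# ranker orders the shortlist. All sizes, features, and scorers below are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_VIDEOS, DIM = 100_000, 64

# Candidate generation: nearest neighbors of the user embedding in a
# shared embedding space (served in practice with an approximate NN index).
video_emb = rng.standard_normal((N_VIDEOS, DIM)).astype(np.float32)

def generate_candidates(user_emb: np.ndarray, k: int = 200) -> np.ndarray:
    scores = video_emb @ user_emb            # dot-product similarity
    return np.argpartition(-scores, k)[:k]   # top-k video ids, unordered

# Ranking: a richer model scores only the shortlist; here a stand-in
# elementwise-interaction scorer.
def rank(candidates: np.ndarray, user_emb: np.ndarray) -> np.ndarray:
    feats = video_emb[candidates] * user_emb  # placeholder features
    return candidates[np.argsort(-feats.sum(axis=1))]

user = rng.standard_normal(DIM).astype(np.float32)
print(rank(generate_candidates(user), user)[:10])
```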


Citations
Proceedings Article

Graph Structure Aware Contrastive Knowledge Distillation for Incremental Learning in Recommender Systems

TL;DR: This paper proposes Graph Structure Aware Contrastive Knowledge Distillation for incremental learning in recommender systems, focusing on the rich relational information in the recommendation context; it combines a contrastive distillation formulation with intermediate-layer distillation to inject layer-level supervision.
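
A hedged sketch of what such a combined objective could look like, assuming an InfoNCE-style contrastive term between student and teacher embeddings plus an MSE term on intermediate layers; the temperature, loss weights, and tensor shapes are placeholders, not the paper's exact formulation.

```python
# Contrastive distillation + layer-level supervision, sketched. Shapes,
# temperature, and weights are assumptions for illustration.
import torch
import torch.nn.functional as F

def contrastive_distill_loss(student_z, teacher_z, tau=0.1):
    """InfoNCE-style term: each student embedding should match its own
    teacher embedding against the other items in the batch."""
    s = F.normalize(student_z, dim=1)
    t = F.normalize(teacher_z, dim=1)
    logits = s @ t.T / tau                                # [B, B]
    targets = torch.arange(s.size(0), device=s.device)    # diagonal matches
    return F.cross_entropy(logits, targets)

def layer_distill_loss(student_layers, teacher_layers):
    """Layer-level supervision: match intermediate representations."""
    return sum(F.mse_loss(s, t) for s, t in zip(student_layers, teacher_layers))

# Combined objective (alpha and beta are tunable weights, an assumption here):
# loss = task_loss + alpha * contrastive_distill_loss(z_s, z_t) \
#        + beta * layer_distill_loss(h_s, h_t)
```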
Proceedings Article

On Distributed Adaptive Optimization with Gradient Compression

TL;DR: Convergence analysis of COMP-AMS shows that this compressed gradient averaging strategy yields the same convergence rate as standard AMSGrad and exhibits a linear speedup effect w.r.t. the number of local workers.
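
A rough sketch of the compressed-gradient-averaging idea, assuming top-k sparsification as the compressor and a standard AMSGrad update applied to the averaged gradient; the compressor choice and all constants are assumptions, not necessarily the paper's setup.

```python
# Compressed gradient averaging feeding an AMSGrad-style update. Top-k
# sparsification and all constants are illustrative assumptions.
import numpy as np

def topk_compress(g: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude gradient entries."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def amsgrad_step(w, g_avg, m, v, v_hat, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update using the averaged (compressed) gradient."""
    m[:] = b1 * m + (1 - b1) * g_avg
    v[:] = b2 * v + (1 - b2) * g_avg ** 2
    np.maximum(v_hat, v, out=v_hat)   # AMSGrad: non-decreasing second moment
    w -= lr * m / (np.sqrt(v_hat) + eps)

# Server side: average the workers' compressed gradients, then update.
# g_avg = np.mean([topk_compress(g_i, k) for g_i in worker_grads], axis=0)
```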
Journal Article

Plausibility of Using a Checklist With YouTube to Facilitate the Discovery of Acute Low Back Pain Self-Management Content: Exploratory Study.

TL;DR: The study suggests that a simple checklist may facilitate the discovery of guideline-concordant acute low back pain (ALBP) self-management content on YouTube, and identifies the clinical contexts in which using such a checklist with YouTube is feasible.
Posted Content

Specializing Joint Representations for the task of Product Recommendation

TL;DR: In this article, a unified product embedding is proposed for retrieval-based product recommendation; it is optimized for the recommendation task by fusing modality-specific product embeddings into a joint product embedding.
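
A minimal sketch of one plausible fusion scheme, assuming concatenation of modality-specific embeddings followed by a learned projection; the dimensions are placeholders and the paper's exact fusion mechanism may differ.

```python
# Fusing modality-specific product embeddings into one joint embedding.
# Concatenate-then-project is one plausible choice, assumed here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointProductEmbedding(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, joint_dim=128):
        super().__init__()
        self.proj = nn.Linear(text_dim + image_dim, joint_dim)

    def forward(self, text_emb, image_emb):
        fused = torch.cat([text_emb, image_emb], dim=-1)  # concat modalities
        return F.normalize(self.proj(fused), dim=-1)      # unit-norm joint emb

# Retrieval then scores a query embedding against these joint product
# embeddings with a dot product, as in standard retrieval-based recommendation.
```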
Proceedings Article

End-to-End Deep Attentive Personalized Item Retrieval for Online Content-sharing Platforms

TL;DR: This paper proposes an end-to-end deep attentive model (EDAM) for personalized item retrieval on online content-sharing platforms that uses only discrete personal item histories and queries, and demonstrates that the approach significantly outperforms several competitive baseline methods.
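
A minimal sketch of the core attentive step such a model needs: scoring a user's item-history embeddings against the query and forming a weighted context vector. The single-head, scaled dot-product form is a simplifying assumption, not EDAM's exact architecture.

```python
# Attending over a user's item history conditioned on the query: the core
# step of an attentive personalized retrieval model, heavily simplified.
import torch
import torch.nn.functional as F

def attend_history(query_emb: torch.Tensor, history_embs: torch.Tensor):
    """query_emb: [D]; history_embs: [T, D] -> personalized context [D]."""
    scores = history_embs @ query_emb / query_emb.size(0) ** 0.5  # [T]
    weights = F.softmax(scores, dim=0)   # attention over history items
    return weights @ history_embs        # weighted sum of history
```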
References

Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
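
To make the TL;DR concrete, here is a from-scratch sketch of the batch normalization transform for a single mini-batch; gamma and beta are the learned scale and shift parameters.

```python
# The batch normalization transform for one mini-batch, from scratch.
# gamma and beta are the learned scale and shift; eps avoids divide-by-zero.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: [batch, features] -> normalized, scaled, and shifted activations."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta
```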
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
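
A small sketch of the negative-sampling objective mentioned above: the center word's vector is pushed toward its observed context vector and away from a few randomly drawn negative words. How the negatives are sampled is omitted for brevity.

```python
# Negative-sampling objective for one (center, context) pair: raise the
# score of the true pair, lower it for K randomly drawn negatives.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context, negatives):
    """center, context: [D]; negatives: [K, D] -> scalar loss."""
    pos = -np.log(sigmoid(center @ context))           # true pair term
    neg = -np.log(sigmoid(-negatives @ center)).sum()  # negatives pushed away
    return pos + neg
```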