Open Access Proceedings ArticleDOI

Deep Neural Networks for YouTube Recommendations

Paul Covington, +2 more
pp. 191-198
TLDR
This paper details a deep candidate generation model, describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
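To make the two-stage dichotomy concrete, here is a minimal sketch of how candidate generation and ranking compose, assuming a simple embedding dot-product retrieval stage. All names (embed_user, generate_candidates, rank) and the mean-of-history user embedding are illustrative assumptions, not the paper's actual models, which are deep networks trained on YouTube data.

```python
import numpy as np

# Illustrative two-stage pipeline: candidate generation narrows a large
# corpus to hundreds of videos, then a ranker reorders that short list.
# The embeddings and scoring here are toy stand-ins for the paper's
# deep candidate-generation and ranking networks.

rng = np.random.default_rng(0)
CORPUS = rng.normal(size=(100_000, 64)).astype(np.float32)  # one row per video

def embed_user(watch_history):
    # Stand-in user tower: average the embeddings of watched videos.
    return CORPUS[watch_history].mean(axis=0)

def generate_candidates(user_vec, k=200):
    # Stage 1: cheap nearest-neighbor retrieval over the whole corpus.
    scores = CORPUS @ user_vec
    return np.argpartition(-scores, k)[:k]

def rank(candidates, user_vec, n=10):
    # Stage 2: rescore only the short list (the real system uses a
    # separate deep network with many more features at this stage).
    scores = CORPUS[candidates] @ user_vec
    return candidates[np.argsort(-scores)][:n]

user_vec = embed_user([3, 14, 159, 2653])
print(rank(generate_candidates(user_vec), user_vec))
```

The point of the split is scale: stage 1 must touch millions of videos cheaply, while stage 2 can afford a much heavier model because it only ever sees a few hundred candidates.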


Citations
Proceedings ArticleDOI

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

TL;DR: This work theoretically justifies the neural collapse phenomenon for normalized features, and simplifies the empirical loss function in a multi-class classification task into a nonconvex optimization problem over the Riemannian manifold by constraining all features and classifiers to the sphere.
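For concreteness, the constrained problem being analyzed can plausibly be written as follows; the notation is ours, hedged from the TL;DR alone (K classes, n features per class, classifiers w_k and features h_{k,i} both confined to the unit sphere), not necessarily the paper's:

```latex
% Hedged formalization in our own notation: cross-entropy over K classes
% with n features per class, where both the classifiers w_k and the
% features h_{k,i} are constrained to the unit sphere, making this a
% nonconvex problem over a product of Riemannian manifolds.
\min_{\{w_k\},\, \{h_{k,i}\}} \;
  \frac{1}{Kn} \sum_{k=1}^{K} \sum_{i=1}^{n}
  \mathcal{L}_{\mathrm{CE}}\bigl(W h_{k,i},\, y_k\bigr)
\quad \text{s.t.} \quad \lVert w_k \rVert_2 = 1,\; \lVert h_{k,i} \rVert_2 = 1
```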
Journal ArticleDOI

User Response Prediction in Online Advertising

TL;DR: In this article, a comprehensive review of user response prediction in online advertising and related recommender applications is provided, focusing on the current progress of machine learning methods used in different online platforms.
Journal ArticleDOI

YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations

TL;DR: A systematic audit of YouTube's recommendation system finds that YouTube's recommendations do direct users, especially right-leaning users, to ideologically biased and increasingly radical content both on homepages and in up-next recommendations, but this bias can be mitigated through an intervention.
Posted Content

Beyond Views: Measuring and Predicting Engagement on YouTube Videos

TL;DR: This paper presents a first large-scale measurement of video-level aggregate engagement from publicly available data streams, on a collection of 5.3 million YouTube videos published over two months in 2016, and defines a new metric, relative engagement, that is calibrated against video properties and strongly correlates with recognized notions of quality.
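The TL;DR does not spell the metric out; one plausible reading of "calibrated against video properties" is a duration-calibrated rank percentile of average watch percentage. The log-spaced duration bins below are an assumption for illustration, not the paper's exact calibration procedure.

```python
import numpy as np

def relative_engagement(durations, avg_watch_pct, n_bins=20):
    """Hedged sketch: a video's rank percentile of average watch
    percentage among videos of similar duration. The log-spaced
    duration bins are an illustrative assumption; durations must
    be positive."""
    durations = np.asarray(durations, dtype=float)
    avg_watch_pct = np.asarray(avg_watch_pct, dtype=float)
    edges = np.logspace(np.log10(durations.min()),
                        np.log10(durations.max()), n_bins + 1)
    bin_ids = np.clip(np.digitize(durations, edges) - 1, 0, n_bins - 1)
    out = np.empty_like(avg_watch_pct)
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            vals = avg_watch_pct[mask]
            # fraction of same-duration-bin videos watched no more thoroughly
            out[mask] = (vals[:, None] >= vals[None, :]).mean(axis=1)
    return out
```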
Proceedings ArticleDOI

Matrix completion with preference ranking for top-n recommendation

TL;DR: A novel matrix completion method for top-N recommendation is proposed that integrates the low-rank and preference-ranking characteristics of the recommendation matrix under a self-recovery model, and that outperforms several state-of-the-art methods.
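The self-recovery model itself is not reproduced here. As a generic point of comparison, a standard soft-impute-style low-rank completion baseline (explicitly not the cited paper's method) shows the shape of the task: fill unobserved entries with a low-rank reconstruction, then recommend the highest-scoring unobserved items per user.

```python
import numpy as np

def soft_impute(R, mask, rank=10, shrink=1.0, iters=50):
    """Generic low-rank matrix completion via iterative SVD
    soft-thresholding (soft-impute). A standard baseline, NOT the
    self-recovery model proposed in the cited paper.
    R: ratings with zeros at unobserved entries; mask: 1 where observed."""
    X = R.astype(float).copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)[:rank]           # soft-threshold, truncate
        X_low = (U[:, :rank] * s) @ Vt[:rank]
        X = mask * R + (1 - mask) * X_low                # keep observed entries fixed
    return X

def top_n(X_hat, mask, n=10):
    # Recommend the n highest-scoring *unobserved* items per user.
    scores = np.where(mask == 1, -np.inf, X_hat)
    return np.argsort(-scores, axis=1)[:, :n]
```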
References
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
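The transform itself is compact. A minimal NumPy rendering of the training-mode forward pass follows; at inference, the mini-batch statistics are replaced by running averages, which is omitted here.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch Normalization (training-mode forward pass): normalize each
    feature over the mini-batch, then apply the learned scale (gamma)
    and shift (beta). x has shape (batch, features)."""
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # restore representational capacity
```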
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
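For reference, negative sampling replaces the full softmax with k sampled "noise" words: for an input word w_I and an observed context word w_O, with noise distribution P_n(w), one maximizes

```latex
% Negative sampling objective (paper's notation: input word w_I, context
% word w_O, noise distribution P_n(w), k negative samples, sigmoid sigma):
\log \sigma\!\left( {v'_{w_O}}^{\top} v_{w_I} \right)
  + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
    \left[ \log \sigma\!\left( -{v'_{w_i}}^{\top} v_{w_I} \right) \right]
```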
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce the internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: In this paper, the Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, together with extensions that improve both the quality of the vectors and the training speed.