Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
pp. 191-198
TLDR
This paper details a deep candidate generation model and then describes a separate deep ranking model and provides practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
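As a rough illustration of the two-stage dichotomy described in the abstract, the sketch below first retrieves a small candidate set by nearest-neighbor search in a learned embedding space and then re-scores that set in a separate ranking step. All names, dimensions, and the dot-product scoring are illustrative assumptions, not the paper's actual models.

    # Two-stage recommendation sketch: candidate generation followed by ranking.
    import numpy as np

    rng = np.random.default_rng(0)

    NUM_VIDEOS, EMBED_DIM = 10_000, 64
    video_embeddings = rng.normal(size=(NUM_VIDEOS, EMBED_DIM))  # stand-in for learned video vectors
    user_embedding = rng.normal(size=EMBED_DIM)                  # stand-in for a learned user vector

    def generate_candidates(user_vec, video_vecs, k=200):
        """Stage 1: retrieve the top-k videos by dot-product similarity (unordered)."""
        scores = video_vecs @ user_vec
        return np.argpartition(-scores, k)[:k]

    def rank_candidates(candidate_ids, user_vec, video_vecs):
        """Stage 2: re-score the small candidate set with a (here trivial) ranking model."""
        scores = video_vecs[candidate_ids] @ user_vec
        return candidate_ids[np.argsort(-scores)]

    candidates = generate_candidates(user_embedding, video_embeddings)
    ranked = rank_candidates(candidates, user_embedding, video_embeddings)
    print("top 10 recommended video ids:", ranked[:10])

In the production system both stages are deep networks trained on user activity; the sketch only shows the shape of the pipeline: a cheap, high-recall retrieval step followed by a more precise ranking step over a few hundred items.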


Citations
Journal Article

ParaX: boosting deep learning for big data analytics on many-core CPUs

TL;DR: ParaX designs an ultralight scheduling policy that sufficiently overlaps the access-intensive layers with the compute-intensive ones to avoid contention, and proposes a NUMA-aware gradient server mechanism for training that leverages shared memory to substantially reduce the overhead of per-iteration parameter synchronization.
Proceedings Article

RSLIME: An Efficient Feature Importance Analysis Approach for Industrial Recommendation Systems

TL;DR: This paper describes the short video recommendation system of iQIYI at a high level and proposes Recommendation System Boosted Local Interpretable Model-Agnostic Explanations (RSLIME) for real-time feature interpretation and importance evaluation.
Proceedings Article

Towards a Personalized Movie Recommendation System: A Deep Learning Approach

TL;DR: This paper proposes a personalized movie recommendation system that uses a deep neural network to process discrete features and fully exploit the interactions between user and movie features.
Proceedings Article

UKD: Debiasing Conversion Rate Estimation via Uncertainty-regularized Knowledge Distillation

TL;DR: This paper proposes an uncertainty-regularized knowledge distillation (UKD) framework that debiases CVR estimation by distilling knowledge from unclicked ads; experiments on billion-scale datasets show that UKD outperforms previous debiasing methods.
Proceedings Article

Salary Predictor System for Thailand Labour Workforce using Deep Learning

TL;DR: The purpose of this research is to build a Salary Predictor System that predicts the monthly salary of employees in Thailand using a deep learning approach, which has attracted rapidly increasing attention in the machine learning field.
References
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
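As an illustration of the negative-sampling alternative named in this summary, the sketch below scores one (center, context) word pair against a few randomly drawn negative words instead of computing a full softmax over the vocabulary. The vocabulary size, embedding dimension, and uniform negative sampler are assumptions made for the example.

    # Skip-gram negative-sampling loss for a single (center, context) pair.
    import numpy as np

    rng = np.random.default_rng(0)

    VOCAB, DIM, NUM_NEG = 1_000, 50, 5
    in_embed = rng.normal(scale=0.1, size=(VOCAB, DIM))   # center-word vectors
    out_embed = rng.normal(scale=0.1, size=(VOCAB, DIM))  # context-word vectors

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def negative_sampling_loss(center, context, negatives):
        """Pull the true context toward the center word; push the sampled negatives away."""
        v_c = in_embed[center]
        pos = -np.log(sigmoid(out_embed[context] @ v_c))
        neg = -np.log(sigmoid(-(out_embed[negatives] @ v_c))).sum()
        return pos + neg

    negatives = rng.integers(0, VOCAB, size=NUM_NEG)  # toy uniform sampler; the paper draws from a smoothed unigram distribution
    print("example loss:", negative_sampling_loss(center=3, context=17, negatives=negatives))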
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
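For reference, a minimal NumPy sketch of the per-mini-batch normalization described in this summary is given below; the function name, shapes, and parameters are illustrative, and the running statistics used at inference time are omitted.

    # Batch normalization of one mini-batch of activations.
    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        """x: (batch, features) activations for one training mini-batch."""
        mean = x.mean(axis=0)                      # per-feature mean over the mini-batch
        var = x.var(axis=0)                        # per-feature variance over the mini-batch
        x_hat = (x - mean) / np.sqrt(var + eps)    # normalize to ~zero mean, ~unit variance
        return gamma * x_hat + beta                # learned scale and shift restore expressiveness

    rng = np.random.default_rng(0)
    x = rng.normal(loc=3.0, scale=2.0, size=(32, 8))            # 32 examples, 8 features
    out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
    print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1 per feature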
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: In this paper, the Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, together with extensions that improve both the quality of the vectors and the training speed.