Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
pp. 191-198
TLDR
This paper details a deep candidate generation model, then describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
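To make the two-stage dichotomy concrete, here is a minimal sketch of candidate generation followed by ranking. It assumes dot-product scoring over pre-computed user and video embeddings with brute-force search; the corpus size, embedding width, and function names are illustrative assumptions, not the paper's actual deep models or serving infrastructure.

```python
import numpy as np

# --- Stage 1: candidate generation (illustrative) ---
rng = np.random.default_rng(0)
num_videos, dim = 100_000, 64                      # corpus size and embedding width are assumptions
video_embeddings = rng.normal(size=(num_videos, dim)).astype(np.float32)
user_embedding = rng.normal(size=dim).astype(np.float32)

def generate_candidates(user_vec, video_vecs, k=200):
    """Return indices of the k videos with the highest dot-product score."""
    scores = video_vecs @ user_vec                 # brute-force nearest-neighbour search
    return np.argpartition(-scores, k)[:k]         # top-k candidates, unordered

# --- Stage 2: ranking (illustrative) ---
def rank_candidates(candidate_ids, user_vec, video_vecs):
    """Order the small candidate set by a (here: dot-product) ranking score."""
    scores = video_vecs[candidate_ids] @ user_vec
    order = np.argsort(-scores)
    return candidate_ids[order], scores[order]

candidates = generate_candidates(user_embedding, video_embeddings)
ranked_ids, _ = rank_candidates(candidates, user_embedding, video_embeddings)
print(ranked_ids[:10])                             # ten highest-scoring recommendations
```

In the real system both stages are learned deep networks; the dot products here only stand in for their scoring functions.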


Citations
Proceedings Article

How the Design of YouTube Influences User Sense of Agency

TL;DR: In this article, the authors focus on how the internal mechanisms of an app can support user agency, taking the popular YouTube mobile app as a test case, and find that autoplay and recommendations primarily undermine sense of agency, while playlists and search support it.
Proceedings Article

Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction

TL;DR: Wang et al. propose a novel Feature Generation by Convolutional Neural Network (FGCNN) model with two components, Feature Generation and Deep Classifier, where Feature Generation leverages the strength of CNNs to extract local patterns and recombine them into new features.
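As a rough, hedged illustration of that idea (not the authors' implementation), the sketch below slides a small convolution over a matrix of field embeddings to extract local patterns, recombines them with a dense layer into new features, and feeds raw plus generated features to a classifier; every shape and weight is a made-up assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
num_fields, emb_dim = 10, 8                          # assumed number of feature fields / embedding width
field_embeddings = rng.normal(size=(num_fields, emb_dim))

# 1-D convolution along the field axis (window of 3 fields, 4 output channels, weights shared across dims)
kernel = rng.normal(size=(3, 4)) * 0.1
patterns = np.stack([
    np.tanh(field_embeddings[i:i + 3].T @ kernel)    # (emb_dim, 4) local pattern block per window
    for i in range(num_fields - 2)
])                                                   # (num_fields - 2, emb_dim, 4)

# Recombination: a dense layer mixes the local patterns into new cross features.
recombine_w = rng.normal(size=(patterns.size, 16)) * 0.01
new_features = np.tanh(patterns.reshape(-1) @ recombine_w)     # 16 generated features

# Deep classifier over raw + generated features (a single linear layer here for brevity).
inputs = np.concatenate([field_embeddings.reshape(-1), new_features])
w_out = rng.normal(size=inputs.size) * 0.01
ctr = 1.0 / (1.0 + np.exp(-(inputs @ w_out)))        # predicted click-through probability
print(round(float(ctr), 4))
```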
Proceedings Article

Towards Neural Mixture Recommender for Long Range Dependent User Sequences

TL;DR: A neural Multi-temporal-range Mixture Model (M3) is proposed as a tailored solution to deal with both short-term and long-term dependencies and consistently outperforms state-of-the-art sequential recommendation methods.
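A hedged sketch of the general "mixture of temporal ranges" idea (not the M3 architecture itself): one encoder that looks only at the most recent item, one that averages the whole history, and a gate that mixes them into a single user representation. All dimensions and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, dim = 50, 16
item_sequence = rng.normal(size=(seq_len, dim))      # embedded user history (assumed)

# Two encoders with different temporal ranges.
short_range = item_sequence[-1]                      # only the most recent item
long_range = item_sequence.mean(axis=0)              # the entire history

# A tiny gate (randomly initialized here; learned in practice) weighs the two ranges.
gate_w = rng.normal(size=(2 * dim, 2)) * 0.1
logits = np.concatenate([short_range, long_range]) @ gate_w
gate = np.exp(logits) / np.exp(logits).sum()         # softmax over the two encoders

user_representation = gate[0] * short_range + gate[1] * long_range
print(user_representation.shape, gate.round(2))
```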
Posted Content

A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos.

TL;DR: A classifier for automatically determining whether a video is conspiratorial is developed, and a year-long picture of the videos actively promoted by YouTube is assembled to trace trends in the so-called filter-bubble effect for conspiracy theories.
Posted Content

Artificial Intelligence for Social Good: A Survey

TL;DR: This work quantitatively analyzes the distribution and trends of the AI4SG literature in terms of application domains and AI techniques used, and proposes three conceptual methods to systematically group the existing literature and analyze the eight AI4SG application domains in a unified framework.
References

Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps and beats the original model by a significant margin.
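The transform behind those numbers is compact; the sketch below shows the training-time batch-normalization step for one mini-batch, with learned per-feature scale and shift (gamma, beta) and a small epsilon for numerical stability. It is a generic illustration rather than the paper's exact training recipe.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x: (batch_size, num_features) activations for one mini-batch.
    gamma, beta: learned per-feature scale and shift.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)          # zero mean, unit variance per feature
    return gamma * x_hat + beta

x = np.random.default_rng(3).normal(loc=5.0, scale=3.0, size=(32, 4))
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))   # approximately 0 and 1 per feature
```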
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
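The negative-sampling objective mentioned above replaces the full softmax with a few binary logistic terms per training pair. The sketch below computes that loss for one (center, context) pair with randomly drawn negatives; the vocabulary size, embedding width, and uniform negative distribution are simplifying assumptions (word2vec draws negatives from a smoothed unigram distribution).

```python
import numpy as np

rng = np.random.default_rng(4)
vocab_size, dim, num_negatives = 10_000, 100, 5
in_vectors = rng.normal(scale=0.1, size=(vocab_size, dim))    # "input" (center word) embeddings
out_vectors = rng.normal(scale=0.1, size=(vocab_size, dim))   # "output" (context word) embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negative_sampling_loss(center_id, context_id):
    """Skip-gram with negative sampling: one positive pair plus k random negatives."""
    v_c = in_vectors[center_id]
    pos = -np.log(sigmoid(out_vectors[context_id] @ v_c))      # push the true pair together
    neg_ids = rng.integers(0, vocab_size, size=num_negatives)  # uniform negatives (simplification)
    neg = -np.log(sigmoid(-(out_vectors[neg_ids] @ v_c))).sum()  # push random pairs apart
    return pos + neg

print(round(float(negative_sampling_loss(center_id=42, context_id=7)), 4))
```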
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet classification.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: In this paper, extensions to the Skip-gram model are used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, improving both the quality of the vectors and the training speed.