Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
- RecSys 2016, pp. 191-198
TLDR
This paper details a deep candidate generation model, then describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
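As a rough illustration of this two-stage split, the sketch below retrieves a few hundred candidates with a cheap dot-product search and then rescores only those candidates with a richer model. Everything here (embedding sizes, the toy ranker) is a hypothetical stand-in; the paper learns both stages with deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 10,000 videos embedded in a 32-d space (random stand-ins for
# the learned embeddings a candidate-generation network would produce).
video_vecs = rng.normal(size=(10_000, 32))
user_vec = rng.normal(size=32)

def generate_candidates(user_vec, video_vecs, n=100):
    """Stage 1 (candidate generation): cheap dot-product retrieval that
    narrows the full corpus to a few hundred plausible videos."""
    scores = video_vecs @ user_vec
    return np.argsort(-scores)[:n]

def rank(candidate_ids, user_vec, video_vecs, ranker):
    """Stage 2 (ranking): a richer model rescores only the candidates."""
    scores = np.array([ranker(user_vec, video_vecs[i]) for i in candidate_ids])
    return candidate_ids[np.argsort(-scores)]

# Hypothetical stand-in ranker; the paper uses a deep network here.
toy_ranker = lambda u, v: float(u @ v) - 0.01 * float(np.linalg.norm(v))

candidates = generate_candidates(user_vec, video_vecs)
print(rank(candidates, user_vec, video_vecs, toy_ranker)[:10])
```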

Citations
Proceedings Article

Benchmarking and Analyzing Deep Neural Network Training

TL;DR: This work proposes a new benchmark suite for DNN training, called TBD, and presents a new toolchain for performance analysis of these models that combines targeted usage of existing performance analysis tools, careful selection of performance metrics, and methodologies to analyze the results.
Proceedings Article

Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction

TL;DR: A novel Feature Generation by Convolutional Neural Network (FGCNN) model with two components, Feature Generation and Deep Classifier, significantly outperforms nine state-of-the-art models on three large-scale datasets.
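A loose PyTorch sketch of that two-part structure, assuming illustrative layer sizes and omitting the paper's pooling and recombination details: a convolutional block synthesizes new feature embeddings, which are concatenated with the raw embeddings before an MLP classifier.

```python
import torch
import torch.nn as nn

class FGCNNSketch(nn.Module):
    def __init__(self, n_fields=10, emb_dim=16, n_generated=6):
        super().__init__()
        # Feature Generation: convolve along the field axis to capture
        # local interactions between neighboring feature embeddings.
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1)
        # Project the conv output into n_generated new feature embeddings.
        self.recombine = nn.Linear(n_fields * emb_dim, n_generated * emb_dim)
        # Deep Classifier: an MLP over raw + generated features.
        self.mlp = nn.Sequential(
            nn.Linear((n_fields + n_generated) * emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )
        self.n_generated, self.emb_dim = n_generated, emb_dim

    def forward(self, field_embs):             # (batch, n_fields, emb_dim)
        h = self.conv(field_embs.transpose(1, 2)).transpose(1, 2)
        new = self.recombine(h.flatten(1))
        new = new.view(-1, self.n_generated, self.emb_dim)
        x = torch.cat([field_embs, new], dim=1).flatten(1)
        return torch.sigmoid(self.mlp(x))      # predicted click probability

model = FGCNNSketch()
print(model(torch.randn(4, 10, 16)).shape)     # torch.Size([4, 1])
```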
Proceedings Article

FedFast: Going Beyond Average for Faster Training of Federated Recommender Systems

TL;DR: A novel technique, FedFast, accelerates distributed learning and achieves good accuracy for all users early in the training process by sampling from a diverse set of participating clients in each training round and applying an active aggregation method that propagates the updated model to the other clients.
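A minimal sketch of the two mechanisms named above: diversity-aware client sampling and an aggregation step whose result is broadcast to every client. The bucketed sampler and toy local update are stand-ins for FedFast's actual procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 100, 8
global_model = np.zeros(dim)
client_optima = rng.normal(size=(n_clients, dim))  # toy per-client targets

def sample_diverse(n_pick=10):
    """Stand-in for diversity-aware sampling: pick one client from each of
    n_pick buckets instead of sampling uniformly at random."""
    buckets = np.array_split(rng.permutation(n_clients), n_pick)
    return [int(b[0]) for b in buckets]

def local_update(model, k):
    """One toy local step: move toward client k's optimum."""
    return model + 0.5 * (client_optima[k] - model)

for _ in range(20):
    picked = sample_diverse()
    updates = [local_update(global_model, k) for k in picked]
    # Aggregate, then broadcast: every client receives the new global model,
    # so non-participants also improve early in training.
    global_model = np.mean(updates, axis=0)

print(np.round(global_model, 3))
```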
Posted Content

Quantum Neuron: an elementary building block for machine learning on quantum computers

TL;DR: A small quantum circuit is proposed that naturally simulates neurons with threshold activation and defines a building block, the "quantum neuron", that can reproduce a variety of classical neural network constructions while maintaining the ability to process superpositions of inputs and preserve quantum coherence and entanglement.
Proceedings Article

An Efficient Adaptive Transfer Neural Network for Social-aware Recommendation

TL;DR: An Efficient Adaptive Transfer Neural Network (EATNN) is proposed that consistently outperforms state-of-the-art methods on the Top-K recommendation task, especially for cold-start users who have few item interactions, and shows significant advantages in training efficiency.
References
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
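The training-time transform itself is compact. A numpy sketch of one layer's batch normalization (at inference, running moment estimates replace the batch statistics):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """x: (batch, features) pre-activations for one mini-batch."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learned scale and shift

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(64, 4))
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))
```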
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
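The phrase-finding method scores a bigram by how much more often its words co-occur than chance, with a discount delta for rare words; bigrams above a threshold are merged into single tokens. A toy sketch (corpus, delta, and threshold are illustrative):

```python
from collections import Counter

corpus = ("new york is large . new york has people . "
          "toronto is large . new jersey is near new york .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def phrase_score(w1, w2, delta=1.0):
    # High when w1 and w2 co-occur far more often than their individual
    # frequencies predict; delta discounts very rare words.
    return (bigrams[(w1, w2)] - delta) / (unigrams[w1] * unigrams[w2])

for w1, w2 in bigrams:
    if "." in (w1, w2):
        continue
    s = phrase_score(w1, w2)
    if s > 0.05:                           # toy merge threshold
        print(f"{w1}_{w2}: {s:.3f}")
```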
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: The Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, together with extensions that improve both the quality of the vectors and the training speed.
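For a concrete sense of what Skip-gram trains on, the sketch below enumerates the (center word, context word) pairs it extracts from a sentence; the window size and sentence are illustrative, and the model then fits embeddings to these pairs (e.g., with negative sampling).

```python
sentence = "the quick brown fox jumps over the lazy dog".split()

def skipgram_pairs(tokens, window=2):
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield center, tokens[j]    # (input word, word to predict)

for pair in list(skipgram_pairs(sentence))[:6]:
    print(pair)
```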