Open Access Proceedings Article

Deep Neural Networks for YouTube Recommendations

Paul Covington, Jay Adams, Emre Sargin
pp. 191-198
TL;DR: This paper details a deep candidate generation model, describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Abstract
YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.
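
The two-stage structure described above can be sketched in a few lines. The following is a minimal illustration, not YouTube's implementation: the model objects, their score() method, and the pool sizes are hypothetical stand-ins, and production candidate generation uses approximate nearest-neighbor search in a learned embedding space rather than exhaustive scoring.

def recommend(user, corpus, candidate_model, ranking_model, top_k=10):
    # Stage 1: candidate generation narrows millions of videos down to a
    # few hundred broadly relevant candidates.
    coarse = sorted(corpus, key=lambda v: candidate_model.score(user, v),
                    reverse=True)[:100]
    # Stage 2: a heavier ranking model with access to richer features
    # orders the surviving candidates for presentation.
    ranked = sorted(coarse, key=lambda v: ranking_model.score(user, v),
                    reverse=True)
    return ranked[:top_k]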


Citations
Proceedings Article

MERCI: efficient embedding reduction on commodity hardware via sub-query memoization

TL;DR: The authors propose a memoization framework for efficient embedding reduction: partial aggregations of correlated embeddings are memoized so that the precomputed partial results can be retrieved at low cost.
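
A toy rendering of the sub-query memoization idea, assuming sum-pooling and a hypothetical cache keyed by groups of IDs chosen offline from co-occurrence statistics (MERCI's actual clustering and lookup machinery differ):

import numpy as np

partial_sums = {}  # frozenset of row IDs -> memoized pooled vector

def pooled_embedding(ids, table, memo_groups):
    # Sum-pool embedding rows, reusing memoized partial aggregations.
    total = np.zeros(table.shape[1])
    remaining = set(ids)
    for group in memo_groups:              # frozensets of correlated IDs
        if group <= remaining:
            if group not in partial_sums:  # memoize the partial aggregation once
                partial_sums[group] = table[list(group)].sum(axis=0)
            total += partial_sums[group]   # retrieve the partial result cheaply
            remaining -= group
    total += table[list(remaining)].sum(axis=0)  # leftover IDs summed directly
    return total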
Posted Content

Information Leakage in Embedding Models

TL;DR: The authors demonstrate that embeddings, in addition to encoding generic semantics, often also leak sensitive information about the input data, such as the authorship of a text, which can be easily extracted by training an inference model on a handful of labeled embedding vectors.
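
The attack described here needs little machinery. A sketch, where embed stands in for the victim embedding model and labeled_examples for the attacker's small labeled set (both hypothetical):

from sklearn.linear_model import LogisticRegression

# Attribute inference: train a classifier on a handful of labeled embeddings.
X = [embed(text) for text, _ in labeled_examples]  # embedding vectors
y = [author for _, author in labeled_examples]     # sensitive labels, e.g. authorship

attacker = LogisticRegression(max_iter=1000).fit(X, y)
predicted_author = attacker.predict([embed(unseen_text)])[0]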
Proceedings Article

DreamShard: Generalizable Embedding Table Placement for Recommender Systems

TL;DR: DreamShard is a reinforcement learning (RL) approach to embedding table placement that substantially outperforms existing human-expert and RNN-based strategies, with up to a 19% speedup over the strongest baseline on large-scale synthetic tables and the authors' production tables.
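
For context on what "placement" means here: embedding tables must be assigned to devices so that no device becomes a lookup bottleneck. The sketch below is a greedy load-balancing baseline, not DreamShard's method, which instead learns both a cost model and a placement policy with RL; the per-table cost estimates are assumed given.

import heapq

def place_tables(table_costs, num_devices):
    # Greedy baseline: assign tables, costliest first, to the least-loaded device.
    heap = [(0.0, d) for d in range(num_devices)]  # (current load, device id)
    heapq.heapify(heap)
    placement = {}
    for tid, cost in sorted(table_costs.items(), key=lambda kv: -kv[1]):
        load, dev = heapq.heappop(heap)
        placement[tid] = dev
        heapq.heappush(heap, (load + cost, dev))
    return placement

For example, place_tables({"t0": 3.0, "t1": 1.5, "t2": 1.2}, 2) puts t0 alone on one device and t1, t2 on the other; a learned approach can improve on such heuristics because real costs depend on which tables share a device.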
Journal Article

A Deep Learning-Based Approach for Inappropriate Content Detection and Classification of YouTube Videos

01 Jan 2022
TL;DR: A novel deep learning-based architecture is proposed for the detection and classification of inappropriate content in videos. It employs an ImageNet pre-trained convolutional neural network (CNN), EfficientNet-B7, to extract video descriptors, which are then fed to a bidirectional long short-term memory (BiLSTM) network to learn effective video representations and perform multiclass video classification.
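
The described pipeline can be sketched with torchvision; the hidden size, class count, and frame handling below are assumptions rather than the paper's exact configuration:

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b7

class InappropriateContentClassifier(nn.Module):
    def __init__(self, num_classes=4, hidden=256):
        super().__init__()
        cnn = efficientnet_b7(weights=None)  # ImageNet-pre-trained in the paper
        self.backbone = nn.Sequential(cnn.features, cnn.avgpool, nn.Flatten())
        self.bilstm = nn.LSTM(2560, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # per-frame 2560-d descriptors
        seq, _ = self.bilstm(feats.view(b, t, -1))   # temporal video representation
        return self.head(seq[:, -1])                 # multiclass logits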
Journal Article

Interpretable and Lightweight 3-D Deep Learning Model for Automated ACL Diagnosis

TL;DR: The authors propose an interpretable and lightweight 3-D deep neural network model that diagnoses anterior cruciate ligament (ACL) tears from a knee MRI exam.
References

Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
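
For reference, the batch normalizing transform from the paper, applied per activation over a mini-batch {x_1, ..., x_m} with learned scale \gamma and shift \beta:

\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta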
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
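
Concretely, negative sampling replaces the full softmax with a per-pair objective: for an observed (input, output) word pair (w_I, w_O) and k noise words drawn from a distribution P_n(w), the model maximizes

\log \sigma\left(v'^{\top}_{w_O} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma\left(-v'^{\top}_{w_i} v_{w_I}\right) \right]

where \sigma is the logistic sigmoid and v, v' are the input and output vector representations.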
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, achieving state-of-the-art performance on ImageNet.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: The Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, together with extensions that improve both the quality of the vectors and the training speed.
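
The underlying Skip-gram objective maximizes the average log probability of the context words within a window of radius c around each training word:

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\ j \ne 0} \log p\left(w_{t+j} \mid w_t\right)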