Open Access · Proceedings Article

DeepFM: a factorization-machine based neural network for CTR prediction

TL;DR
This paper shows that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions, and combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture.
Abstract
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expert feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need for feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
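The abstract's core idea, a "wide" FM component and a "deep" MLP component reading the same embeddings and summed before a sigmoid, can be sketched numerically. This is a minimal illustration with toy dimensions and randomly initialized weights, not the paper's implementation; all names (`fm_part`, `deep_part`, the layer sizes) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: m one-hot feature fields share ONE embedding table of size k.
# Both components read the same embeddings, which is the "shared input"
# DeepFM uses instead of manually engineered cross features.
m, k = 4, 3                      # fields, embedding size
w0, w = 0.1, rng.normal(size=m)  # FM bias and first-order weights
V = rng.normal(size=(m, k))      # shared embedding vectors, one per active feature

def fm_part(V_active):
    # Second-order FM term: sum over pairs <v_i, v_j>, computed with the
    # O(m*k) identity 0.5 * (||sum_i v_i||^2 - sum_i ||v_i||^2).
    s = V_active.sum(axis=0)
    return 0.5 * float((s * s - (V_active * V_active).sum(axis=0)).sum())

def deep_part(V_active, W1, b1, w2, b2):
    # A one-hidden-layer MLP over the concatenated embeddings.
    h = np.maximum(0.0, W1 @ V_active.reshape(-1) + b1)  # ReLU
    return float(w2 @ h + b2)

W1 = rng.normal(size=(8, m * k)); b1 = np.zeros(8)
w2 = rng.normal(size=8); b2 = 0.0

y_fm = w0 + w.sum() + fm_part(V)         # low-order interactions
y_dnn = deep_part(V, W1, b1, w2, b2)     # high-order interactions
ctr = 1.0 / (1.0 + np.exp(-(y_fm + y_dnn)))  # sigmoid(y_FM + y_DNN)
```

Because the two parts share the embedding table, gradients from both the low-order and high-order objectives shape the same feature representations, which is what removes the need for separate feature engineering for the wide part.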



Citations
Journal Article

Vertical Semi-Federated Learning for Efficient Online Advertising

TL;DR: In this article, a joint privileged learning framework (JPL) is proposed to alleviate the absence of the passive party's features and to adapt to the whole sample space; it achieves the best performance over baseline methods, validating its superiority in the Semi-VFL setting.
Proceedings Article

Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction

TL;DR: A cascading reinforcement learning method that leverages three cascading Markov Decision Processes to learn optimal generation policies, automating the selection of features and operations as well as feature crossing from a group-wise reinforcement generation perspective.
Proceedings Article

HIEN: Hierarchical Intention Embedding Network for Click-Through Rate Prediction

TL;DR: A novel approach, Hierarchical Intention Embedding Network (HIEN), which considers dependencies of attributes via bottom-up tree aggregation in the constructed attribute graph, and captures user intents for different item attributes, as well as item intents, through the proposed hierarchical attention mechanism.
Journal Article

Click-through rate prediction using transfer learning with fine-tuned parameters

TL;DR: In this paper, the authors propose an end-to-end transfer learning framework with fine-tuned parameters for CTR prediction, called Automatic Fine-Tuning (AutoFT): a set of learnable transfer policies that independently determine how instance-specific fine-tuning should be trained, deciding the routing in the embedding representations and in the high-order feature representations, layer by layer, in a deep CTR model.
Proceedings Article

Hierarchically Fusing Long and Short-Term User Interests for Click-Through Rate Prediction in Product Search

TL;DR: A new approach named Hierarchical Interests Fusing Network (HIFN), consisting of four basic modules, a Short-term Interest Extractor (SIE), a Long-term Interest Extractor (LIE), an Interest Fusion Module (IFM), and an Interest Disentanglement Module (IDM), proposed to resolve challenges in personalized product search.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
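The dropout technique summarized above can be illustrated with a short sketch. This is the standard "inverted dropout" formulation, not code from the cited paper; the function name and dimensions are illustrative.

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p and scale the
    survivors by 1/(1-p), so the expected activation is unchanged and the
    layer becomes the identity at inference time."""
    if not train or p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

rng = np.random.default_rng(42)
h = np.ones((1000, 64))          # a batch of hidden activations
out = dropout(h, p=0.5, rng=rng)
# Roughly half the units are zeroed, yet the mean activation stays near 1.
```

By randomly thinning the network each step, dropout prevents units from co-adapting, which is the overfitting mechanism the paper targets.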
Proceedings Article

Deep Neural Networks for YouTube Recommendations

TL;DR: This paper details a deep candidate-generation model, then describes a separate deep ranking model, and provides practical lessons and insights derived from designing, iterating on, and maintaining a massive recommendation system with enormous user-facing impact.
Proceedings Article

Factorization Machines

TL;DR: Factorization Machines (FM) are introduced which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models and can mimic these models just by specifying the input data (i.e. the feature vectors).
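The FM model summarized above scores an input as a bias, a linear term, and pairwise interactions factorized through latent vectors; the pairwise sum can be computed in O(nk) rather than O(n²k). A minimal sketch, with toy dimensions and random weights (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 4                  # number of features, latent dimension
w0 = 0.2                     # global bias
w = rng.normal(size=n)       # linear (first-order) weights
V = rng.normal(size=(n, k))  # latent factors, one row v_i per feature

def fm_predict(x):
    # y = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j, where the pairwise
    # term uses the identity
    #   sum_{i<j} <v_i, v_j> x_i x_j
    #     = 0.5 * (||V^T x||^2 - sum_i x_i^2 ||v_i||^2),
    # reducing the cost from O(n^2 k) to O(n k).
    xv = x @ V
    pair = 0.5 * float(xv @ xv - (x * x) @ (V * V).sum(axis=1))
    return w0 + float(w @ x) + pair

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # sparse, one-hot-style input
y = fm_predict(x)
```

Because interaction weights are factorized as inner products of latent vectors, FMs can estimate interactions between feature pairs never observed together, which is what makes them effective on the sparse inputs typical of CTR data.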
Proceedings Article

Restricted Boltzmann machines for collaborative filtering

TL;DR: This paper shows how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBMs), can be used to model tabular data, such as users' ratings of movies, and demonstrates that RBMs can be successfully applied to the Netflix data set.