Real-Time Bidding by Reinforcement Learning in Display Advertising
Han Cai, Kan Ren, Weinan Zhang, Kleanthis Malialis, Jun Wang, Yong Yu, Defeng Guo
pp. 661-670
TLDR
In this article, the authors formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set.

Abstract
The majority of online display ads are served through real-time bidding (RTB): each ad impression is auctioned off in real time, at the moment it is generated by a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm that cleverly bids for each ad impression in real time. Most previous works treat the bid decision as a static optimization problem, either valuing each impression independently or setting a bid price for each segment of ad volume. However, bidding for a given ad campaign happens repeatedly throughout its life span until the budget runs out. As such, the bids are strategically correlated through the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. It is therefore of great interest to devise an optimal sequential bidding strategy, so that the campaign budget can be dynamically allocated across all available impressions on the basis of both immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov decision process (MDP) framework for learning the optimal bidding policy that optimizes advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem arising from the large real-world auction volume and campaign budget is handled by approximating the state value function with neural networks. An empirical study on two large-scale real-world datasets and live A/B testing on a commercial platform demonstrate superior performance and high efficiency compared to state-of-the-art methods.
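The sequential formulation above can be sketched as a toy dynamic program: the state is (auctions remaining, budget remaining), the action is the bid price, and the optimal value function is tabulated by backward induction. The uniform market-price model, the constant click value, and all function names are illustrative assumptions for this sketch, not the authors' implementation (which additionally uses neural-network value approximation to scale).

```python
MAX_BID = 10          # bid prices discretized to 0..MAX_BID

def market_price_prob(delta):
    """Assumed market-price distribution: uniform over 0..MAX_BID."""
    return 1.0 / (MAX_BID + 1)

def click_value():
    """Assumed expected immediate reward (e.g. pCTR) per won impression."""
    return 1.0

def optimal_value(T, B):
    """V[t][b]: expected total reward with t auctions and budget b remaining."""
    V = [[0.0] * (B + 1) for _ in range(T + 1)]
    for t in range(1, T + 1):
        for b in range(B + 1):
            best = 0.0
            for a in range(min(b, MAX_BID) + 1):      # candidate bid price
                v = 0.0
                for d in range(MAX_BID + 1):          # possible market price
                    p = market_price_prob(d)
                    if d <= a:   # win: pay market price d, collect reward
                        v += p * (click_value() + V[t - 1][b - d])
                    else:        # lose: keep budget, move to next auction
                        v += p * V[t - 1][b]
                best = max(best, v)
            V[t][b] = best
    return V

V = optimal_value(T=20, B=30)
print(V[20][30])   # expected clicks under the optimal sequential policy
```

The greedy bid at any state is then the maximizing `a` in the inner loop; the point of the sequential view is that the optimal bid depends on how much budget and how many future auctions remain, not just on the current impression's value.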
Citations
Posted Content
Efficient Architecture Search by Network Transformation
TL;DR: In this paper, a meta-controller is employed to grow the network depth or layer width with function-preserving transformations, so that the weights of previously trained networks can be reused for further exploration, saving a large amount of computational cost.
Proceedings ArticleDOI
Conversational Recommender System
Yueming Sun, Yi Zhang
TL;DR: In this paper, a deep reinforcement learning framework is proposed to build a personalized conversational recommendation agent that optimizes a per session based utility function, where a user conversation history is represented as a semi-structured user query with facet-value pairs.
Journal ArticleDOI
Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, Todd Hester
TL;DR: This work identifies and formalizes a series of independent challenges that embody the difficulties that must be addressed for RL to be commonly deployed in real-world systems, and proposes an open-source benchmark.
Proceedings ArticleDOI
Real-Time Bidding with Multi-Agent Reinforcement Learning in Display Advertising
TL;DR: The results show cluster-based bidding would largely outperform single-agent and bandit approaches, and the coordinated bidding achieves better overall objectives than purely self-interested bidding agents.
Journal ArticleDOI
Online Display Advertising Markets: A Literature Review and Future Directions
TL;DR: In this article, the authors review the display advertising industry, a $50 billion market in which advertisers' (e.g., P&G, Geico) demand for impressions is matched with publishers' supply of them.
References
Journal ArticleDOI
Mastering the game of Go with deep neural networks and tree search
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis
TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Journal ArticleDOI
Technical Note: Q-Learning
Chris Watkins, Peter Dayan
TL;DR: This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989), showing that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely.
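The convergence result summarized above can be illustrated with a minimal tabular Q-learning loop on a toy two-state MDP. The MDP, the random seed, and the unit step size (valid here only because the toy transitions and rewards are deterministic; stochastic MDPs need decaying step sizes, per the theorem) are all illustrative assumptions for this sketch.

```python
import random

# Toy deterministic 2-state MDP (illustrative, not from the paper):
# action 1 always moves to state 1; reward 1 only for action 1 in state 1.
N_STATES, N_ACTIONS, GAMMA = 2, 2, 0.9

def step(s, a):
    s_next = 1 if a == 1 else 0
    r = 1.0 if (s == 1 and a == 1) else 0.0
    return r, s_next

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
s = 0
for _ in range(20000):
    a = random.randrange(N_ACTIONS)   # exploring policy: every action is
    r, s_next = step(s, a)            # repeatedly sampled in every state
    # Q-learning backup; step size 1.0 suffices because this MDP is
    # deterministic, so each update is an exact Bellman backup
    Q[s][a] = r + GAMMA * max(Q[s_next])
    s = s_next

print(Q)   # approaches the optimal action-values
```

With discount 0.9, the optimal action-values here are Q*(1,1) = 10, Q*(0,1) = 9, and Q*(0,0) = Q*(1,0) = 8.1, which the learned table converges to as every state-action pair keeps being sampled.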
Book ChapterDOI