Open Access Journal Article

Counterfactual reasoning and learning systems: the example of computational advertising

TLDR
This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment, predict the consequences of changes to the system, and allow both humans and algorithms to select the changes that would have improved system performance.
Abstract
This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine.
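One standard way to make such counterfactual predictions from logged data is inverse propensity scoring (importance weighting). The sketch below is illustrative only: the uniform logging policy, click-through rates, clipping threshold, and all variable names are invented, not taken from the paper or the Bing system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged data: action shown, observed click, and the probability
# the logging (production) policy assigned to that action. All values here
# are placeholders, not data from the paper.
n = 100_000
actions = rng.integers(0, 3, size=n)            # 3 candidate ads
p_log = np.full(n, 1.0 / 3.0)                   # uniform logging policy
true_ctr = np.array([0.02, 0.05, 0.03])         # hidden click-through rates
rewards = rng.binomial(1, true_ctr[actions])    # observed clicks

def ips_estimate(p_new, actions, rewards, p_log, cap=10.0):
    """Clipped inverse-propensity estimate of the reward an alternative
    policy (given as per-action probabilities) would have collected,
    computed purely from data logged under the old policy."""
    weights = np.minimum(p_new[actions] / p_log, cap)  # clip to limit variance
    return np.mean(weights * rewards)

# Counterfactual question: what if the system had always shown ad 1?
p_new = np.array([0.0, 1.0, 0.0])
print("estimated reward under the new policy:", ips_estimate(p_new, actions, rewards, p_log))
print("true expected reward under the new policy:", true_ctr[1])
```

The two printed numbers should agree closely, which is the point: the effect of a policy change is estimated without ever deploying the new policy.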


Citations
Proceedings Article

Introduction to Bandits in Recommender Systems

TL;DR: The aim of this tutorial is to provide a brief introduction to the bandit problem with an overview of the various applications of bandit algorithms in recommendation.
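As a concrete illustration of the bandit setting in recommendation, here is a minimal epsilon-greedy sketch; the item names, click probabilities, and exploration rate are hypothetical.

```python
import random

# Epsilon-greedy recommendation over three items with simulated Bernoulli
# clicks; the click probabilities below are made up for illustration.
CLICK_PROB = {"item_a": 0.04, "item_b": 0.07, "item_c": 0.02}

counts = {item: 0 for item in CLICK_PROB}
values = {item: 0.0 for item in CLICK_PROB}   # running mean reward per item
EPSILON = 0.1

def recommend():
    # Explore with probability EPSILON, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        return random.choice(list(CLICK_PROB))
    return max(values, key=values.get)

def update(item, reward):
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]

random.seed(0)
for _ in range(10_000):
    item = recommend()
    clicked = 1 if random.random() < CLICK_PROB[item] else 0
    update(item, clicked)

print(values)  # estimates for frequently shown items approach their true click rates
```

Contextual bandits, which condition the choice on user and item features, follow the same explore/exploit pattern with a learned model in place of the per-item means.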
Posted Content

Learning and Forgetting Using Reinforced Bayesian Change Detection

TL;DR: A model of behavioural automatization based on adaptive forgetting that encompasses many aspects of reinforcement learning (RL), such as temporal-difference RL and counterfactual learning, and accounts for the reduced computational cost of automatic behaviour.
Posted Content

CADET: A Systematic Method For Debugging Misconfigurations using Counterfactual Reasoning.

TL;DR: CADET (short for Causal Debugging Toolkit) is proposed to enable users to identify, explain, and fix the root causes of non-functional faults early and in a principled fashion; it is compared with state-of-the-art configuration optimization and ML-based debugging approaches.
Posted Content

Policy Evaluation with Latent Confounders via Optimal Balance

TL;DR: It is shown that, unlike in the unconfounded case, no single set of weights can give unbiased evaluation for all outcome models; yet a new algorithm is proposed that can still provably guarantee consistency by instead minimizing an adversarial balance objective.
Posted Content

Causal Deep Information Bottleneck.

TL;DR: This work proposes estimating the causal effect from the perspective of the information bottleneck principle by explicitly identifying a low-dimensional representation of latent confounding, and proves theoretically that the proposed model can be used to recover the average causal effect.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Monograph

Causality: models, reasoning, and inference

TL;DR: As this book discusses, the art and science of cause and effect have long been studied in the social sciences; topics include the theory of inferred causation, causal diagrams, and the identification of causal effects.
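To make the identification of causal effects concrete, the sketch below applies the back-door adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z), to synthetic data with one observed confounder; every number and variable name is invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic observational data: confounder Z influences both treatment X
# and outcome Y, so the naive contrast E[Y|X=1] - E[Y|X=0] is biased.
n = 200_000
z = rng.binomial(1, 0.4, size=n)
x = rng.binomial(1, np.where(z == 1, 0.7, 0.2))
y = rng.binomial(1, 0.1 + 0.3 * x + 0.2 * z)     # true causal effect of X is 0.3
df = pd.DataFrame({"z": z, "x": x, "y": y})

naive = df.loc[df.x == 1, "y"].mean() - df.loc[df.x == 0, "y"].mean()

def adjusted_mean(x_val):
    # Back-door adjustment: average E[Y | X=x, Z=z] over the marginal of Z.
    return sum(df.loc[(df.x == x_val) & (df.z == z_val), "y"].mean() * p_z
               for z_val, p_z in df["z"].value_counts(normalize=True).items())

adjusted = adjusted_mean(1) - adjusted_mean(0)
print(f"naive difference:    {naive:.3f}")     # inflated by the confounder
print(f"adjusted difference: {adjusted:.3f}")  # close to the true effect of 0.3
```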
Journal Article

Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, without explicitly computing gradient estimates.
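A minimal sketch of the gradient-following idea in an immediate-reinforcement task: a single Bernoulli-logistic unit choosing between two actions, updated with a REINFORCE-style rule. The reward probabilities, step size, and baseline schedule are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

REWARD_PROB = np.array([0.2, 0.8])   # made-up reward rates; action 1 is better
theta = 0.0                          # logit of choosing action 1
alpha = 0.1                          # learning rate
baseline = 0.0                       # running average reward (variance reduction)

for _ in range(5_000):
    p1 = 1.0 / (1.0 + np.exp(-theta))       # probability of action 1
    a = rng.binomial(1, p1)                 # stochastic unit samples an action
    r = rng.binomial(1, REWARD_PROB[a])     # immediate reinforcement
    # For a Bernoulli-logistic unit, d/dtheta log pi(a) = a - p1,
    # so this update follows the gradient of expected reinforcement.
    theta += alpha * (r - baseline) * (a - p1)
    baseline += 0.01 * (r - baseline)

print("final P(action 1):", 1.0 / (1.0 + np.exp(-theta)))  # approaches 1
```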
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.