Recursive partitioning for heterogeneous causal effects
Susan Athey, Guido W. Imbens, et al.
TLDR
This paper provides a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects, and proposes an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation.
Abstract
In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without “sparsity” assumptions. We propose an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the “ground truth” for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges between 7 and 22%.
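The core idea in the abstract, splitting the sample so that one part builds the partition and a disjoint part estimates leaf-level treatment effects, can be sketched in a few lines. The sketch below is illustrative only: it grows an off-the-shelf regression tree on outcomes rather than using the paper's modified splitting criterion and cross-validation, and the simulated data, parameter choices, and variable names are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))
W = rng.integers(0, 2, size=n)            # randomized binary treatment
tau = np.where(X[:, 0] > 0, 1.0, 0.0)     # heterogeneous treatment effect
Y = X[:, 1] + tau * W + rng.normal(size=n)

# Honest split: one half builds the partition, the other estimates effects.
X_tr, X_est, W_tr, W_est, Y_tr, Y_est = train_test_split(
    X, W, Y, test_size=0.5, random_state=0)

tree = DecisionTreeRegressor(max_leaf_nodes=8, min_samples_leaf=50)
tree.fit(X_tr, Y_tr)                      # partition built on the training half only

leaves = tree.apply(X_est)                # leaf membership for the estimation half
for leaf in np.unique(leaves):
    idx = leaves == leaf
    treated, control = idx & (W_est == 1), idx & (W_est == 0)
    if treated.any() and control.any():
        tau_hat = Y_est[treated].mean() - Y_est[control].mean()
        print(f"leaf {leaf}: estimated treatment effect {tau_hat:.2f} (n={idx.sum()})")
```

Because the estimation half plays no role in choosing the splits, the within-leaf difference in means behaves like an ordinary two-sample estimate, which is what allows the valid confidence intervals described in the abstract.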
Citations
Journal Article
Addressing Unobserved Selection Bias in Accounting Studies: The Bias Minimization Method
TL;DR: The minimum-biased estimator (MBE) introduced in this paper can be used to assess the robustness of regression or propensity-score-matched treatment estimates to unobserved selection bias in accounting studies.
Journal Article
Estimation and evaluation of linear individualized treatment rules to guarantee performance.
TL;DR: A robust machine learning method is proposed to estimate a linear treatment rule that is guaranteed to achieve the optimal reward among the class of all linear rules; it is shown to provide a large benefit for mildly and severely depressed patients but manifests a lack of fit for moderately depressed patients.
Journal Article
Estimating individual treatment effects by gradient boosting trees
Shonosuke Sugasawa, Hisashi Noma, et al.
TL;DR: An effective machine learning method is proposed to estimate ITEs using gradient boosting trees (GBT), which can flexibly capture the relationship between the clinical outcome and possibly high-dimensional covariates; it would be useful for identifying subpopulations of patients who would benefit from the treatment.
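As a rough illustration of the general idea (not necessarily the authors' exact estimator), gradient boosting trees can model the outcome separately under treatment and control, with the ITE estimated as the difference of the two predictions; the data and settings below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n, p = 1500, 20
X = rng.normal(size=(n, p))
W = rng.integers(0, 2, size=n)                         # binary treatment indicator
Y = X[:, 0] + (0.5 + X[:, 1]) * W + rng.normal(size=n)

# Fit separate outcome models for treated and control units, then take the
# difference of their predictions as the estimated individual treatment effect.
m1 = GradientBoostingRegressor().fit(X[W == 1], Y[W == 1])
m0 = GradientBoostingRegressor().fit(X[W == 0], Y[W == 0])
ite_hat = m1.predict(X) - m0.predict(X)
print(ite_hat[:5])
```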
Journal Article
The role of machine learning analytics and metrics in retailing research
TL;DR: It is believed that machine learning can enhance customer experience and, accordingly, it is proposed that the explanatory and machine learning approaches need not be mutually exclusive.
Proceedings Article
Reconsidering Generative Objectives For Counterfactual Reasoning
TL;DR: This work presents a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching, and causal estimation, and derives a reformulated variational bound that explicitly targets causal effect estimation rather than specific predictive goals.
References
Journal Article
Random Forests
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and they are also applicable to regression.
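The internal (out-of-bag) error estimates mentioned in the summary can be reproduced with any standard random forest implementation; the snippet below is a minimal sketch using scikit-learn's RandomForestRegressor on synthetic data, with illustrative parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=1000)

# Out-of-bag predictions give an internal error estimate without a held-out set.
rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("out-of-bag R^2:", round(rf.oob_score_, 3))
```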
Journal Article
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models, the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
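A minimal sketch of the same idea in practice, assuming scikit-learn: the constrained formulation described above is usually fit via its equivalent penalized (Lagrangian) form, which is what sklearn.linear_model.Lasso implements; the data and the penalty value alpha are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:3] = [2.0, -1.5, 1.0]                 # sparse true coefficients
y = X @ beta + rng.normal(size=200)

# scikit-learn solves the penalized form min (1/2n)||y - Xb||^2 + alpha * sum|b_j|,
# the Lagrangian counterpart of the constrained form described above.
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(lasso.coef_))
```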
Book
The Nature of Statistical Learning Theory
TL;DR: Topics covered include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Statistical learning theory
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of learning processes, the author covers function estimation from small data pools, applying these estimates to real-life problems, and much more.
Journal Article
The central role of the propensity score in observational studies for causal effects
TL;DR: The authors discuss the central role of propensity scores and balancing scores in the analysis of observational studies and show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates.
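A minimal sketch of one standard way to use the propensity score (inverse-propensity weighting rather than matching or stratification, so not necessarily the adjustment the paper emphasizes), assuming scikit-learn; the simulated data and logistic model are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 5))
p_treat = 1 / (1 + np.exp(-X[:, 0]))        # treatment assignment depends on X
W = rng.binomial(1, p_treat)
Y = X[:, 0] + 1.0 * W + rng.normal(size=n)  # true average effect is 1.0

# Estimate the scalar propensity score, then adjust by inverse-propensity weighting.
e_hat = LogisticRegression().fit(X, W).predict_proba(X)[:, 1]
ate_ipw = (np.average(Y, weights=W / e_hat)
           - np.average(Y, weights=(1 - W) / (1 - e_hat)))
print("IPW estimate of the average treatment effect:", round(ate_ipw, 2))
```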
Related Papers (5)
Estimation and Inference of Heterogeneous Treatment Effects using Random Forests
Stefan Wager, Susan Athey, et al.