Open Access · Journal Article · DOI

Recursive partitioning for heterogeneous causal effects

Susan Athey, +1 more
05 Jul 2016
Vol. 113, Iss. 27, pp. 7353–7360
TLDR
This paper provides a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects, and proposes an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation.
Abstract
In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without “sparsity” assumptions. We propose an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the “ground truth” for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges from 7% to 22%.
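The core idea of honest estimation in the abstract can be illustrated with a toy example: one sample chooses the partition (here, a single covariate split that maximizes the contrast in difference-in-means treatment effects), and a second, independent sample re-estimates the effect within each leaf. This is a minimal pure-Python sketch under simplifying assumptions (one covariate, one split, difference in means as the effect estimator), not the authors' full causal-tree algorithm; all function names and the simulated data-generating process are illustrative.

```python
import random

def treatment_effect(rows):
    """Difference in mean outcomes between treated (w=1) and control (w=0) units.
    Each row is a tuple (w, y, x)."""
    t = [y for w, y, _ in rows if w == 1]
    c = [y for w, y, _ in rows if w == 0]
    return sum(t) / len(t) - sum(c) / len(c)

def best_split(rows):
    """Pick the covariate threshold maximizing the squared difference in
    estimated treatment effects between the two child subpopulations."""
    best_thr, best_gain = None, -1.0
    for thr in sorted({x for _, _, x in rows})[1:]:
        left = [r for r in rows if r[2] < thr]
        right = [r for r in rows if r[2] >= thr]
        # each side needs both treated and control units
        if any(len({w for w, _, _ in side}) < 2 for side in (left, right)):
            continue
        gain = (treatment_effect(left) - treatment_effect(right)) ** 2
        if gain > best_gain:
            best_thr, best_gain = thr, gain
    return best_thr

def honest_effects(train, estimate):
    """Honest estimation: the partition is chosen on `train`,
    but leaf treatment effects are estimated on the separate `estimate` sample."""
    thr = best_split(train)
    leaves = {"low":  [r for r in estimate if r[2] < thr],
              "high": [r for r in estimate if r[2] >= thr]}
    return thr, {name: treatment_effect(rows) for name, rows in leaves.items()}

# Simulated experiment: true effect is 2 for x >= 0.5 and 0 otherwise.
random.seed(0)
def draw():
    x = random.random()
    w = random.randint(0, 1)
    tau = 2.0 if x >= 0.5 else 0.0
    return (w, tau * w + random.gauss(0, 0.1), x)

train = [draw() for _ in range(400)]
est = [draw() for _ in range(400)]
thr, effects = honest_effects(train, est)
print(thr, effects)  # threshold near 0.5; effects near 0 (low) and 2 (high)
```

Because the estimation sample played no role in choosing the split, the leaf estimates are free of the adaptive selection bias that motivates the paper's honest approach; the price, as the abstract notes, is that each stage works with a smaller sample.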


Citations
Posted Content

Why Do Defaults Affect Behavior? Experimental Evidence from Afghanistan

TL;DR: This paper found evidence that the default effect is driven largely by a combination of present-biased preferences and the cognitive cost of calculating alternate savings scenarios, and that default assignment also causes employees to develop savings habits that outlive the experiment: they are more likely to believe that saving is important, less likely to report being too financially constrained to save, and more likely to make an active decision to save at the end of the trial.
Posted Content

Aggregating Distributional Treatment Effects: A Bayesian Hierarchical Analysis of the Microcredit Literature

TL;DR: The authors developed methods to aggregate evidence on distributional treatment effects from multiple studies conducted in different settings, and applied them to the microcredit literature and found that microcredit has negligible impact on the distribution of various household outcomes below the 75th percentile.
Posted Content

A comparison of methods for model selection when estimating individual treatment effects

TL;DR: This work provides a didactic framework that elucidates the relationships between the different approaches and compares them all using a variety of simulations of both randomized and observational data, showing that researchers estimating heterogeneous treatment effects need not limit themselves to a single model-fitting algorithm.
Journal ArticleDOI

Targeting Policy-Compliers with Machine Learning: An Application to a Tax Rebate Programme in Italy

TL;DR: This paper proposes an application of machine-learning-based targeting, using the massive tax rebate scheme introduced in Italy in 2014 to target the policy-compliers.
Journal ArticleDOI

Uplift Modeling for preventing student dropout in higher education

TL;DR: The results demonstrate the virtues of uplift modeling in tailoring retention efforts in higher education over conventional predictive modeling approaches.
References
Journal ArticleDOI

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
Book

The Nature of Statistical Learning Theory

TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of a learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Journal ArticleDOI

The central role of the propensity score in observational studies for causal effects

Paul R. Rosenbaum, +1 more
01 Apr 1983
TL;DR: The authors discuss the central role of propensity scores and balancing scores in the analysis of observational studies and show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates.