Identification and Estimation of Local Average Treatment Effects
TL;DR: The authors investigate conditions sufficient for identification of average treatment effects using instrumental variables and show that the existence of valid instruments is not, by itself, sufficient to identify any meaningful average treatment effect.
Abstract
We investigate conditions sufficient for identification of average treatment effects using instrumental variables. First we show that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect. We then establish that the combination of an instrument and a condition on the relation between the instrument and participation status is sufficient to identify a local average treatment effect for those who can be induced to change their participation status by changing the value of the instrument. Finally, we derive the probability limit of the standard IV estimator under these conditions; it is a weighted average of local average treatment effects.
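The abstract's central claim can be illustrated numerically. The sketch below is a hypothetical simulation (not from the paper): a binary instrument, a monotonicity condition (no one is pushed *out* of treatment by the instrument), and the Wald/IV ratio of the reduced-form difference to the first-stage difference, which recovers the average effect for compliers, i.e. the local average treatment effect. All variable names and distributional choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Binary instrument Z and potential treatment statuses D0 <= D1
# (monotonicity: the instrument can only induce units INTO treatment).
z = rng.integers(0, 2, n)
d0 = (rng.random(n) < 0.2).astype(int)                   # always-takers
d1 = np.maximum(d0, (rng.random(n) < 0.5).astype(int))   # compliers added when Z = 1
d = np.where(z == 1, d1, d0)                             # realized treatment

# Heterogeneous unit-level treatment effects tau; realized outcome Y.
tau = rng.normal(2.0, 1.0, n)
y = 1.0 + tau * d + rng.normal(0.0, 1.0, n)

# Wald / IV estimand: reduced-form difference over first-stage difference.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

# The target the paper identifies: the average effect among compliers.
compliers = (d1 == 1) & (d0 == 0)
late = tau[compliers].mean()
print(f"Wald IV estimate: {wald:.3f}   complier average effect: {late:.3f}")
```

With a single binary instrument the IV estimand equals the complier average effect; with multiple instrument values, as the abstract notes, the probability limit of the standard IV estimator is a weighted average of such local average treatment effects.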
Citations
Journal Article
Identification of Causal effects Using Instrumental Variables
TL;DR: In this paper, the authors propose a framework for causal inference in settings where assignment to a binary treatment is ignorable but compliance with the assignment is imperfect, so that receipt of treatment is nonignorable.
Journal Article
Identification of Causal Effects Using Instrumental Variables
TL;DR: It is shown that the instrumental variables (IV) estimand can be embedded within the Rubin Causal Model (RCM) and that, under some simple and easily interpretable assumptions, the IV estimand is the average causal effect for a subgroup of units, the compliers.
Posted Content
Regression Discontinuity Designs in Economics
TL;DR: In this article, the authors provide an introduction and user guide to regression discontinuity (RD) designs for empirical researchers, covering the basic theory behind RD designs and detailing when RD is likely to be valid or invalid given economic incentives.
Book ChapterDOI
The Economics and Econometrics of Active Labor Market Programs
TL;DR: In this paper, the authors examine the impacts of active labor market policies, such as job training, job search assistance, and job subsidies, and the methods used to evaluate their effectiveness.
Journal Article
Recent developments in the econometrics of program evaluation
TL;DR: In the last two decades, much research has been done on the econometric and statistical analysis of the causal effects of programs; this literature has reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization, and other areas of empirical microeconomics.
References
Journal Article
Estimating causal effects of treatments in randomized and nonrandomized studies.
TL;DR: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented, with the objective of specifying the benefits of randomization in estimating causal effects of treatments.
Journal Article
Identification of Causal effects Using Instrumental Variables
TL;DR: In this paper, the authors propose a framework for causal inference in settings where assignment to a binary treatment is ignorable but compliance with the assignment is imperfect, so that receipt of treatment is nonignorable.
Journal Article
Does Compulsory School Attendance Affect Schooling and Earnings
TL;DR: This paper found that season of birth is related to educational attainment and earnings, and estimated that roughly 25 percent of potential dropouts remain in school because of compulsory schooling laws, using season of birth as an instrument for the effect of compulsory attendance on schooling and earnings.
Posted Content
Evaluating the Econometric Evaluations of Training Programs with Experimental Data
TL;DR: The National Supported Work Program employed an experimental design that randomly assigned some participants to a treatment group receiving training and the rest to a control group receiving none; the difference between the post-training earnings of the two groups provides an unbiased estimate of the program's impact.