Author

Joshua D. Angrist

Bio: Joshua D. Angrist is an academic researcher from Massachusetts Institute of Technology. The author has contributed to research on topics including instrumental variables and earnings. The author has an h-index of 89 and has co-authored 304 publications receiving 59,505 citations. Previous affiliations of Joshua D. Angrist include Hebrew University of Jerusalem & Boston University.


Papers
Posted Content
01 Jan 2009
TL;DR: The core methods in today's econometric toolkit are linear regression for statistical control, instrumental variables methods for the analysis of natural experiments, and differences-in-differences methods that exploit policy changes.
Abstract: The core methods in today's econometric toolkit are linear regression for statistical control, instrumental variables methods for the analysis of natural experiments, and differences-in-differences methods that exploit policy changes. In the modern experimentalist paradigm, these techniques address clear causal questions such as: Do smaller classes increase learning? Should wife batterers be arrested? How much does education raise wages? Mostly Harmless Econometrics shows how the basic tools of applied econometrics allow the data to speak. In addition to econometric essentials, Mostly Harmless Econometrics covers important new extensions, including regression-discontinuity designs and quantile regression, as well as how to get standard errors right. Joshua Angrist and Jorn-Steffen Pischke explain why fancier econometric techniques are typically unnecessary and even dangerous. The applied econometric methods emphasized in this book are easy to use and relevant for many areas of contemporary social science. The book offers an irreverent review of econometric essentials; a focus on the tools applied researchers use most; chapters on regression-discontinuity designs, quantile regression, and standard errors; many empirical examples; and a clear, concise treatment with wide applications.

7,192 citations

Journal Article
TL;DR: In this paper, a framework is outlined for causal inference in settings where assignment to a binary treatment is ignorable but compliance with the assignment is imperfect, so that receipt of treatment is nonignorable.

4,129 citations

Journal ArticleDOI
TL;DR: It is shown that the instrumental variables (IV) estimand can be embedded within the Rubin Causal Model (RCM) and that under some simple and easily interpretable assumptions, the IV estimand is the average causal effect for a subgroup of units, the compliers.
Abstract: We outline a framework for causal inference in settings where assignment to a binary treatment is ignorable, but compliance with the assignment is not perfect so that the receipt of treatment is nonignorable. To address the problems associated with comparing subjects by the ignorable assignment—an “intention-to-treat analysis”—we make use of instrumental variables, which have long been used by economists in the context of regression models with constant treatment effects. We show that the instrumental variables (IV) estimand can be embedded within the Rubin Causal Model (RCM) and that under some simple and easily interpretable assumptions, the IV estimand is the average causal effect for a subgroup of units, the compliers. Without these assumptions, the IV estimand is simply the ratio of intention-to-treat causal estimands with no interpretation as an average causal effect. The advantages of embedding the IV approach in the RCM are that it clarifies the nature of critical assumptions needed for a...

3,978 citations
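To make the complier result concrete, here is a minimal simulation sketch (numpy only; the strata shares, effect sizes, and all names are invented for illustration, not taken from the paper). It shows the IV estimand, computed as the ratio of two intention-to-treat contrasts, recovering the compliers' average effect rather than the population average effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Principal strata under monotonicity: compliers take the treatment iff
# assigned; never-takers and always-takers ignore the assignment.
stratum = rng.choice(["complier", "never", "always"], size=n, p=[0.5, 0.3, 0.2])
z = rng.integers(0, 2, size=n)                       # randomized assignment
d = np.where(stratum == "always", 1,
             np.where(stratum == "never", 0, z))     # treatment received

# Heterogeneous effects: compliers benefit more than the other strata here.
effect = np.where(stratum == "complier", 2.0, 0.5)
y = 1.0 + effect * d + rng.normal(size=n)

# IV (Wald) estimand: ratio of the two intention-to-treat contrasts.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
print("IV estimate:   ", itt_y / itt_d)   # ~2.0, the complier average effect
print("Population ATE:", effect.mean())   # ~1.25, not what IV recovers
```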

ReportDOI
TL;DR: In this article, the authors investigated conditions sufficient for identification of average treatment effects using instrumental variables and showed that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect.
Abstract: We investigate conditions sufficient for identification of average treatment effects using instrumental variables. First we show that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect. We then establish that the combination of an instrument and a condition on the relation between the instrument and the participation status is sufficient for identification of a local average treatment effect for those who can be induced to change their participation status by changing the value of the instrument. Finally we derive the probability limit of the standard IV estimator under these conditions. It is seen to be a weighted average of local average treatment effects.

2,940 citations
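A hedged simulation sketch of both claims (numpy only; the latent-cost design below is invented for illustration): even with a valid instrument, the population ATE is not recovered, and with a multivalued instrument satisfying monotonicity, the standard IV estimand is a weighted average of the local average treatment effects for adjacent instrument values, so it falls between them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300_000

# Three-valued instrument; higher z raises the take-up probability, so
# monotonicity holds. Treatment effects vary with the latent cost u.
z = rng.integers(0, 3, size=n)
u = rng.uniform(size=n)
d = (u < np.array([0.2, 0.5, 0.9])[z]).astype(float)   # P(D=1|z) rises in z

effect = 1.0 + 2.0 * u
y = effect * d + rng.normal(size=n)

# LATE for each adjacent pair of instrument values: the mean effect among
# the units whose participation that move of the instrument switches.
late_01 = effect[(u >= 0.2) & (u < 0.5)].mean()   # ~1.7
late_12 = effect[(u >= 0.5) & (u < 0.9)].mean()   # ~2.4

iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]      # standard IV estimand
print("LATEs:", late_01, late_12, "IV:", iv)      # ~2.1, between the LATEs
print("ATE:  ", effect.mean())                    # ~2.0, not what IV recovers
```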

Journal ArticleDOI
TL;DR: This paper finds that season of birth is related to educational attainment and earnings, and that roughly 25 percent of potential dropouts remain in school because of compulsory schooling laws; quarter of birth is then used as an instrument for schooling to estimate the return to education.
Abstract: We establish that season of birth is related to educational attainment because of school start age policy and compulsory school attendance laws. Individuals born in the beginning of the year start school at an older age, and can therefore drop out after completing less schooling than individuals born near the end of the year. Roughly 25 percent of potential dropouts remain in school because of compulsory schooling laws. We estimate the impact of compulsory schooling on earnings by using quarter of birth as an instrument for education. The instrumental variables estimate of the return to education is close to the ordinary least squares estimate, suggesting that there is little bias in conventional estimates. Every developed country in the world has a compulsory schooling requirement, yet little is known about the effect these laws have on educational attainment and earnings. This paper exploits an unusual natural experiment to estimate the impact of compulsory schooling laws in the United States. The experiment stems from the fact that children born in different months of the year start school at different ages, while compulsory schooling laws generally require students to remain in school until their sixteenth or seventeenth birthday. In effect, the interaction of school-entry requirements and compulsory schooling laws compel students born

2,475 citations
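As a concrete sketch of the research design described above, here is a 2SLS computation on simulated data (numpy only; the data-generating process, coefficient values, and variable names are invented for illustration and are not the paper's Census estimates). Quarter-of-birth dummies serve as instruments for schooling, removing the bias that an unobserved ability term puts into OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Simulated quarter of birth shifts completed schooling slightly (as the
# compulsory-attendance argument implies); unobserved ability confounds OLS.
qob = rng.integers(1, 5, size=n)
ability = rng.normal(size=n)
school = 12 + 0.1 * qob + 0.5 * ability + rng.normal(size=n)
log_wage = 0.08 * school + 0.3 * ability + rng.normal(scale=0.5, size=n)

Z = np.column_stack([np.ones(n)] + [(qob == q).astype(float) for q in (2, 3, 4)])
X = np.column_stack([np.ones(n), school])

# 2SLS by hand: the first stage projects schooling on the instruments, then
# the second stage regresses log wages on the fitted values.
school_hat = Z @ np.linalg.lstsq(Z, school, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), school_hat])
beta_2sls = np.linalg.lstsq(X_hat, log_wage, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, log_wage, rcond=None)[0]
print("OLS return to schooling: ", beta_ols[1])    # biased upward by ability
print("2SLS return to schooling:", beta_2sls[1])   # close to the true 0.08
```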


Cited by
Book
01 Jan 2001
TL;DR: The second edition of Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press) provides a unified treatment of cross section and panel data methods.
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: Acemoglu, Johnson, and Robinson (AJR) proposed using estimates of potential European settler mortality as an instrument for institutional variation in former European colonies today, following the lead of Curtin, who compiled data on the death rates faced by European soldiers in various overseas postings.
Abstract: In Acemoglu, Johnson, and Robinson, henceforth AJR (2001), we advanced the hypothesis that the mortality rates faced by Europeans in different parts of the world after 1500 affected their willingness to establish settlements and choice of colonization strategy. Places that were relatively healthy (for Europeans) were—when they fell under European control—more likely to receive better economic and political institutions. In contrast, places where European settlers were less likely to go were more likely to have “extractive” institutions imposed. We also posited that this early pattern of institutions has persisted over time and influences the extent and nature of institutions in the modern world. On this basis, we proposed using estimates of potential European settler mortality as an instrument for institutional variation in former European colonies today. Data on settlers themselves are unfortunately patchy—particularly because not many went to places they believed, with good reason, to be most unhealthy. We therefore followed the lead of Curtin (1989 and 1998) who compiled data on the death rates faced by European soldiers in various overseas postings. Curtin’s data were based on pathbreaking data collection and statistical work initiated by the British military in the mid-nineteenth century. These data became part of the foundation of both contemporary thinking about public health (for soldiers and for civilians) and the life insurance industry (as actuaries and executives considered the

6,495 citations

Journal ArticleDOI
TL;DR: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects; it is widely applied when evaluating labour market policies, empirical examples can be found in very diverse fields of study, and each implementation step involves many decisions among competing approaches.
Abstract: Propensity score matching (PSM) has become a popular approach to estimate causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he is confronted with a lot of questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that, one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed and treatment effects and their standard errors have to be estimated. Furthermore, questions like 'what to do if there is choice-based sampling?' or 'when to measure effects?' can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves many decisions, and different approaches can be considered. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.

5,510 citations
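A compact sketch of the implementation steps listed above, on simulated data (numpy only; the data-generating process and all names are illustrative, not from the paper): estimate the propensity score by logistic regression, impose common support, match each treated unit to its nearest control on the score, and average the matched differences to get the ATT.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4_000

# Simulated observational data: x drives both treatment take-up and outcomes.
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
d = (rng.uniform(size=n) < p_true).astype(int)
y = 1.0 * d + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

# Step 1: propensity score via logistic regression (a few Newton steps).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (d - p))
pscore = 1 / (1 + np.exp(-X @ beta))

# Step 2: common support -- drop treated units outside the control score range.
lo, hi = pscore[d == 0].min(), pscore[d == 0].max()
treated = np.where((d == 1) & (pscore >= lo) & (pscore <= hi))[0]
controls = np.where(d == 0)[0]

# Step 3: one-to-one nearest-neighbour matching on the score, with replacement.
nn = controls[np.abs(pscore[controls][None, :]
                     - pscore[treated][:, None]).argmin(axis=1)]

# Step 4: ATT is the mean outcome gap across matched pairs (true effect: 1.0).
print("ATT estimate:", (y[treated] - y[nn]).mean())
```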

ReportDOI
TL;DR: In this paper, the authors developed asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero.
Abstract: This paper develops asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero. Asymptotic representations are provided for various instrumental variable statistics, including the two-stage least squares (TSLS) and limited information maximum likelihood (LIML) estimators and their t-statistics. The asymptotic distributions are found to provide good approximations to sampling distributions with just 20 observations per instrument. Even in large samples, TSLS can be badly biased, but LIML is, in many cases, approximately median unbiased. The theory suggests concrete quantitative guidelines for applied work. These guidelines help to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approach the OLS estimate of 6%, the more reliable LIML and TSLS estimates with fewer instruments fall between 8% and 10%, with a typical confidence interval of (6%, 14%).

5,249 citations
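A small Monte Carlo sketch of the weak-instrument problem the paper analyzes (numpy only; the parameter values are illustrative, not the paper's design): when the first-stage coefficient is near zero, TSLS is biased toward OLS even with a sizeable sample.

```python
import numpy as np

rng = np.random.default_rng(4)
true_beta, n, reps = 1.0, 500, 2_000

def tsls(pi):
    """One draw of the TSLS estimate with first-stage strength pi."""
    z = rng.normal(size=n)
    # Correlated errors make x endogenous, so OLS overshoots true_beta.
    u, v = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n).T
    x = pi * z + v
    y = true_beta * x + u
    x_hat = z * (z @ x) / (z @ z)          # first-stage fitted values
    return (x_hat @ y) / (x_hat @ x)       # second stage

for pi in (1.0, 0.05):
    est = np.array([tsls(pi) for _ in range(reps)])
    print(f"pi={pi}: median TSLS = {np.median(est):.2f}")
# Strong instrument: median near the true 1.0. Weak instrument: pulled
# toward the OLS probability limit of about 1.8.
```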

Posted Content
TL;DR: A theme of the text is the use of artificial regressions for estimation, inference, and specification testing of nonlinear models, including diagnostic tests for parameter constancy, serial correlation, heteroscedasticity, and other types of mis-specification.
Abstract: Offering a unifying theoretical perspective not readily available in any other text, this innovative guide to econometrics uses simple geometrical arguments to develop students' intuitive understanding of basic and advanced topics, emphasizing throughout the practical applications of modern theory and nonlinear techniques of estimation. One theme of the text is the use of artificial regressions for estimation, inference, and specification testing of nonlinear models, including diagnostic tests for parameter constancy, serial correlation, heteroscedasticity, and other types of mis-specification. Explaining how estimates can be obtained and tests can be carried out, the authors go beyond a mere algebraic description to one that can be easily translated into the commands of a standard econometric software package. Covering an unprecedented range of problems with a consistent emphasis on those that arise in applied work, this accessible and coherent guide to the most vital topics in econometrics today is indispensable for advanced students of econometrics and students of statistics interested in regression and related topics. It will also suit practising econometricians who want to update their skills. Flexibly designed to accommodate a variety of course levels, it offers both complete coverage of the basic material and separate chapters on areas of specialized interest.

4,284 citations
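To illustrate the auxiliary-regression idea behind such specification tests with one familiar member of the family (a sketch, not code from the book): a Breusch-Pagan-style heteroskedasticity test runs a second, artificial regression of squared OLS residuals on the regressors and compares n·R² to a chi-squared critical value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 2_000

# Simulated regression whose error spread grows with x: heteroskedastic.
x = rng.uniform(1, 3, size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n) * x

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Artificial regression: squared residuals on the same regressors; under
# homoskedasticity, n * R^2 is asymptotically chi-squared with 1 d.o.f.
e2 = resid**2
fitted = X @ np.linalg.lstsq(X, e2, rcond=None)[0]
r2 = 1 - ((e2 - fitted) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
lm = n * r2
print("LM statistic:", lm, "p-value:", stats.chi2.sf(lm, df=1))
```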