Author

Alberto Abadie

Bio: Alberto Abadie is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research on topics including Estimator and Matching (statistics). The author has an h-index of 43 and has co-authored 89 publications receiving 21,077 citations. Previous affiliations of Alberto Abadie include the National Bureau of Economic Research and Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors investigate the economic effects of conflict, using the terrorist conflict in the Basque Country as a case study, and find that, after the outbreak of terrorism in the late 1960s, per capita GDP in the Basque Country declined about 10 percentage points relative to a synthetic control region without terrorism.
Abstract: This article investigates the economic effects of conflict, using the terrorist conflict in the Basque Country as a case study. We find that, after the outbreak of terrorism in the late 1960s, per capita GDP in the Basque Country declined about 10 percentage points relative to a synthetic control region without terrorism. In addition, we use the 1998-1999 truce as a natural experiment. We find that stocks of firms with a significant part of their business in the Basque Country showed a positive relative performance when the truce became credible, and a negative relative performance at the end of the cease-fire.
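The synthetic control comparison in this study fits in a few lines. The sketch below is illustrative only, not the authors' code: it assumes a vector x1 of pre-treatment characteristics for the treated region and a matrix X0 with the same characteristics for untreated donor regions, and picks non-negative donor weights summing to one that best reproduce the treated region before the intervention.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of the synthetic control idea (not the authors' code).
def synthetic_control_weights(x1, X0):
    """x1: (k,) pre-treatment predictors of the treated unit.
    X0: (k, J) the same predictors for J untreated donor units."""
    J = X0.shape[1]
    loss = lambda w: np.sum((x1 - X0 @ w) ** 2)   # pre-treatment fit
    res = minimize(loss, np.full(J, 1.0 / J),     # start from equal weights
                   bounds=[(0.0, 1.0)] * J,       # weights are non-negative
                   constraints={"type": "eq",     # ...and sum to one
                                "fun": lambda w: w.sum() - 1.0})
    return res.x
```

The estimated effect in any post-treatment year is then the gap between the treated region's outcome and the weighted average of the donors' outcomes, here the roughly 10-percentage-point decline in per capita GDP.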

3,128 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the applicability of synthetic control methods to comparative case studies and found that, following Proposition 99, tobacco consumption fell markedly in California relative to a comparable synthetic control region, and that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99.
Abstract: Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We discuss the advantages of these methods and apply them to study the effects of Proposition 99, a large-scale tobacco control program that California implemented in 1988. We demonstrate that, following Proposition 99, tobacco consumption fell markedly in California relative to a comparable synthetic control region. We estimate that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99. Using new inferential methods proposed in this article, we demonstrate the significance of our estimates. Given that many policy interventions and events of interest in social sciences take place at an aggregate level (countries, regions, cities, etc.) and affect a small number of aggregate units, the potential applicability of synthetic control methods to comparative case studies is very large.
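The inferential exercise the abstract refers to is a placebo-in-space permutation. A hedged sketch follows, reusing synthetic_control_weights from the sketch above; the data layout and function names are assumptions for illustration.

```python
import numpy as np

# Placebo-in-space sketch: treat each donor unit as if it had received the
# intervention, recompute its synthetic control gap, and ask how unusual
# the genuinely treated unit's gap is among the placebos.
def placebo_test(outcomes, predictors, treated, post):
    """outcomes: (T, N) panel; predictors: (k, N); treated: column index
    of the treated unit; post: (T,) boolean mask for post-treatment years."""
    N = outcomes.shape[1]
    gaps = np.empty(N)
    for unit in range(N):
        donors = [j for j in range(N) if j != unit]
        w = synthetic_control_weights(predictors[:, unit],
                                      predictors[:, donors])
        gap = outcomes[:, unit] - outcomes[:, donors] @ w
        gaps[unit] = np.abs(gap[post]).mean()
    # Share of units with a gap at least as large: a permutation-style p-value.
    return np.mean(gaps >= gaps[treated])
```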

2,815 citations

Journal ArticleDOI
TL;DR: In this article, the authors developed new methods for analyzing the large sample properties of matching estimators and established a number of new results, such as the following: matching estimators with replacement with a fixed number of matches are not N^(1/2)-consistent in general.
Abstract: Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^(1/2)-consistent in general and describe conditions under which matching estimators do attain N^(1/2)-consistency. Second, we show that even in settings where matching estimators are N^(1/2)-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
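A minimal sketch of the simple matching estimator the abstract analyzes, assuming covariates X, outcomes y, and a binary treatment d; this is illustrative, not the authors' published software.

```python
import numpy as np

# Simple matching estimator with a fixed number of matches M: each unit's
# missing potential outcome is imputed by the mean outcome of its M nearest
# neighbours (Euclidean distance) in the opposite treatment arm.
def matching_ate(X, y, d, M=1):
    n = len(y)
    imputed = np.empty(n)
    for i in range(n):
        others = np.flatnonzero(d != d[i])              # opposite arm
        dist = np.linalg.norm(X[others] - X[i], axis=1)
        imputed[i] = y[others[np.argsort(dist)[:M]]].mean()
    y1 = np.where(d == 1, y, imputed)                   # outcome under treatment
    y0 = np.where(d == 0, y, imputed)                   # outcome under control
    return np.mean(y1 - y0)
```

Because M stays fixed as the sample grows, the imputation step is a nonsmooth functional of the data, and the matching bias can shrink more slowly than N^(-1/2) when matching on several continuous covariates; that is why this plain estimator is not N^(1/2)-consistent in general.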

2,207 citations

Journal ArticleDOI
TL;DR: The difference-in-differences estimator rests on a simple idea: naive comparisons of pre-treatment and post-treatment outcomes for individuals exposed to a treatment are likely to be contaminated by temporal trends in the outcome variable, or by the effect of events other than the treatment that occurred between the two periods.
Abstract: The use of natural experiments to evaluate treatment effects in the absence of truly experimental data has gained wide acceptance in empirical research in economics and other social sciences. Simple comparisons of pre-treatment and post-treatment outcomes for those individuals exposed to a treatment are likely to be contaminated by temporal trends in the outcome variable or by the effect of events, other than the treatment, that occurred between the two periods. However, when only a fraction of the population is exposed to the treatment, an untreated comparison group can be used to identify temporal variation in the outcome that is not due to treatment exposure. The difference-in-differences (DID) estimator is based on this simple idea. Card and Krueger (1994) assess the employment effects of a raise in the minimum wage in New Jersey, using a neighbouring state, Pennsylvania, to identify the variation in employment that New Jersey would have experienced in the absence of a raise in the minimum wage. Other applications of DID include studies of the effects of immigration on native wages and employment (Card, 1990), the effects of temporary disability benefits on time out of work after an injury (Meyer, Viscusi and Durbin, 1995), and the effect of anti-takeover laws on firms' leverage (Garvey and Hanka, 1999). It is well known that the conventional DID estimator is based on strong assumptions.
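The 2x2 version of the idea fits in a few lines. The sketch below is illustrative, with assumed boolean indicators treated and post (for example, New Jersey establishments and the period after the minimum-wage raise in the Card and Krueger design).

```python
import numpy as np

# Difference-in-differences with two groups and two periods:
# DID = (treated post - treated pre) - (control post - control pre).
def did_estimate(y, treated, post):
    cell = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell(True, True) - cell(True, False)) \
         - (cell(False, True) - cell(False, False))
```

The untreated group's pre/post change absorbs the common temporal trend, so the estimator identifies the treatment effect only under the strong assumption that, absent treatment, both groups would have evolved in parallel.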

1,694 citations

Journal ArticleDOI
TL;DR: In this article, the authors present an implementation of matching estimators for average treatment effects in Stata; the nnmatch command allows the user to estimate the average effect for all units or only for the treated or control units, to choose the number of matches, to specify the distance metric, to select a bias adjustment, and to use heteroskedasticity-robust variance estimators.
Abstract: This paper presents an implementation of matching estimators for average treatment effects in Stata. The nnmatch command allows you to estimate the average effect for all units or only for the treated or control units; to choose the number of matches; to specify the distance metric; to select a bias adjustment; and to use heteroskedasticity-robust variance estimators.
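nnmatch itself is a Stata command; as a language-neutral illustration of one of the options it exposes, the Python sketch below shows the kind of regression-based bias adjustment its bias-adjustment option performs. Names and data layout are assumptions for illustration, not the command's internals.

```python
import numpy as np

# Regression-based bias adjustment for a matched pair: fit an OLS regression
# of outcomes on covariates within the match's treatment arm, then shift the
# matched outcome by the predicted effect of the covariate gap between the
# unit and its match.
def adjust_match(y_match, x_match, x_unit, X_arm, y_arm):
    Z = np.column_stack([np.ones(len(y_arm)), X_arm])
    beta, *_ = np.linalg.lstsq(Z, y_arm, rcond=None)   # OLS coefficients
    mu = lambda x: beta[0] + x @ beta[1:]              # fitted regression
    return y_match + (mu(x_unit) - mu(x_match))        # bias-corrected outcome
```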

1,371 citations


Cited by
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: In this article, the authors randomly generate placebo laws in state-level data on female wages from the Current Population Survey and use OLS to compute the DD estimate of its "effect" as well as the standard error of this estimate.
Abstract: Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the autocorrelation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre”- and “post”-period and explicitly takes into account the effective sample size works well even for small numbers of states.
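The correction the abstract singles out as working even with few states is easy to state in code. The sketch below is illustrative, with assumed pandas column names (state, treated, post, outcome): average each state's outcome within its pre and post periods, then run the DID regression on the collapsed panel.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Collapse each state's time series into one "pre" and one "post" mean,
# removing the serially correlated year-level variation, then run DID on
# the collapsed two-observations-per-state panel.
def collapsed_did(df):
    collapsed = (df.groupby(["state", "treated", "post"], as_index=False)
                   ["outcome"].mean())
    fit = smf.ols("outcome ~ treated * post", data=collapsed).fit()
    return fit.params["treated:post"], fit.bse["treated:post"]
```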

9,397 citations

Journal ArticleDOI
TL;DR: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects; it is widely applied when evaluating labour market policies, empirical examples can be found in very diverse fields of study, and each implementation step involves many decisions among competing approaches.
Abstract: Propensity score matching (PSM) has become a popular approach to estimate causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, they are confronted with many questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that, one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed, and treatment effects and their standard errors have to be estimated. Furthermore, questions like "what to do if there is choice-based sampling?" or "when to measure effects?" can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves many decisions, and different approaches can be taken. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.
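The first few implementation steps the abstract lists (estimating the score, imposing common support, matching on the estimated score) can be sketched as follows. The data layout is an assumption, and the sketch estimates the average effect on the treated with one-to-one nearest-neighbour matching.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# PSM pipeline sketch: (1) estimate the propensity score by logit,
# (2) keep treated units inside the controls' score range (common support),
# (3) match each treated unit to the control with the closest score.
def psm_att(X, y, d):
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    lo, hi = ps[d == 0].min(), ps[d == 0].max()
    treated = np.flatnonzero((d == 1) & (ps >= lo) & (ps <= hi))
    controls = np.flatnonzero(d == 0)
    diffs = [y[i] - y[controls[np.argmin(np.abs(ps[controls] - ps[i]))]]
             for i in treated]
    return float(np.mean(diffs))
```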

5,510 citations

Journal ArticleDOI
TL;DR: This work proposes an entirely non-recursive variational mode decomposition model in which the modes are extracted concurrently; the model is a generalization of the classic Wiener filter into multiple, adaptive bands.
Abstract: During the late 1990s, Huang introduced the algorithm called Empirical Mode Decomposition, which is widely used today to recursively decompose a signal into different modes of unknown but separate spectral bands. EMD is known for limitations like sensitivity to noise and sampling. These limitations could only partially be addressed by more mathematical attempts at this decomposition problem, like synchrosqueezing, empirical wavelets or recursive variational decomposition. Here, we propose an entirely non-recursive variational mode decomposition model, where the modes are extracted concurrently. The model looks for an ensemble of modes and their respective center frequencies, such that the modes collectively reproduce the input signal, while each being smooth after demodulation into baseband. In the Fourier domain, this corresponds to a narrow-band prior. We show important relations to Wiener filter denoising. Indeed, the proposed method is a generalization of the classic Wiener filter into multiple, adaptive bands. Our model provides a solution to the decomposition problem that is theoretically well founded and still easy to understand. The variational model is efficiently optimized using an alternating direction method of multipliers approach. Preliminary results show attractive performance with respect to existing mode decomposition models. In particular, our proposed model is much more robust to sampling and noise. Finally, we show promising practical decomposition results on a series of artificial and real data.
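The alternating updates behind VMD are compact enough to sketch. The code below is a simplified illustration of the frequency-domain loop the abstract describes (omitting refinements such as boundary mirroring); parameter names and default values are assumptions.

```python
import numpy as np

# Simplified VMD sketch: each mode is updated by a Wiener-filter-like
# narrow-band shrinkage around its centre frequency, the centre frequency
# moves to the mode's spectral centre of mass, and a dual variable enforces
# that the modes collectively reproduce the signal.
def vmd(signal, K=3, alpha=2000.0, tau=0.1, n_iter=500):
    T = len(signal)
    f_hat = np.fft.fft(signal)
    freqs = np.fft.fftfreq(T)                       # frequency grid
    u_hat = np.zeros((K, T), dtype=complex)         # mode spectra
    omega = np.linspace(0.05, 0.45, K)              # initial centre frequencies
    lam = np.zeros(T, dtype=complex)                # Lagrange multiplier

    for _ in range(n_iter):
        for k in range(K):
            residual = f_hat - u_hat.sum(axis=0) + u_hat[k]
            # Wiener-filter-like update with a narrow-band prior at omega[k]
            u_hat[k] = (residual + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            pos = freqs > 0
            power = np.abs(u_hat[k, pos]) ** 2      # positive-frequency energy
            omega[k] = (freqs[pos] @ power) / power.sum()
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))   # dual ascent

    return np.real(np.fft.ifft(u_hat, axis=1)), omega  # time-domain modes
```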

4,111 citations

Journal ArticleDOI
TL;DR: A structure for thinking about matching methods and guidance on their use is provided, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed.
Abstract: When estimating causal effects using observational data, it is desirable to replicate a randomized experiment as closely as possible by obtaining treated and control groups with similar covariate distributions. This goal can often be achieved by choosing well-matched samples of the original treated and control groups, thereby reducing bias due to the covariates. Since the 1970s, work on matching methods has examined how best to choose treated and control subjects for comparison. Matching methods are gaining popularity in fields such as economics, epidemiology, medicine, and political science. However, until now the literature and related advice have been scattered across disciplines. Researchers who are interested in using matching methods, or developing methods related to matching, do not have a single place to turn to learn about past and current research. This paper provides a structure for thinking about matching methods and guidance on their use, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed.

3,952 citations