Open Access Book
Econometric Analysis of Cross Section and Panel Data
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract:
The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research: cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not.
The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.
Citations
Journal Article
How Much Should We Trust Differences-In-Differences Estimates?
TL;DR: In this article, the authors randomly generate placebo laws in state-level data on female wages from the Current Population Survey and use OLS to compute the DD estimate of its "effect" as well as the standard error of this estimate.
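The placebo exercise described above can be sketched in a few lines of numpy. This is a toy simulation with invented parameters, not the authors' CPS data or code: generate a state-by-year panel with serially correlated shocks, assign a random placebo "law," and recover the two-way fixed-effects DD coefficient by OLS. Its true effect is zero by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 20

# State and year effects plus AR(1) state-level shocks -- the serial
# correlation that makes conventional DD standard errors misleading.
state_fe = rng.normal(size=n_states)
year_fe = np.linspace(0.0, 1.0, n_years)
shocks = np.zeros((n_states, n_years))
for t in range(1, n_years):
    shocks[:, t] = 0.8 * shocks[:, t - 1] + rng.normal(scale=0.5, size=n_states)
y = state_fe[:, None] + year_fe[None, :] + shocks

# Placebo "law": half the states randomly treated from a random year on.
treated_states = rng.choice(n_states, size=n_states // 2, replace=False)
law_year = int(rng.integers(5, 15))
d = np.zeros((n_states, n_years))
d[treated_states, law_year:] = 1.0

# Two-way fixed-effects OLS: regress y on the placebo dummy plus state
# and year dummies; the coefficient on d is the DD estimate.
s_idx = np.repeat(np.arange(n_states), n_years)
t_idx = np.tile(np.arange(n_years), n_states)
X = np.column_stack([
    d.ravel(),
    np.eye(n_states)[s_idx],        # state dummies
    np.eye(n_years)[t_idx][:, 1:],  # year dummies (drop one)
])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
dd_estimate = beta[0]  # estimated "effect" of a law that does nothing
```

Repeating this over many placebo draws and counting how often the conventional t-test rejects is the core of the paper's exercise.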
Journal Article
Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches
TL;DR: In this article, the authors examine the different methods used in the literature, explain when the approaches yield the same (and correct) standard errors and when they diverge, and give researchers guidance for their use.
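One of the approaches this literature compares is the cluster-robust (Liang-Zeger) sandwich estimator. Below is a minimal numpy sketch under invented firm-year data with a firm-level error component; the `cluster_robust_se` helper and all parameters are illustrative, not the paper's code.

```python
import numpy as np

def cluster_robust_se(X, y, clusters):
    """OLS coefficients with cluster-robust (Liang-Zeger) standard errors.

    X: (n, k) regressors, y: (n,) outcome, clusters: (n,) cluster labels.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum over clusters of (X_g' u_g)(X_g' u_g)'
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        idx = clusters == g
        s = X[idx].T @ resid[idx]
        meat += np.outer(s, s)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Hypothetical firm-year panel whose errors share a firm component, so
# conventional OLS standard errors would be too small.
rng = np.random.default_rng(1)
n_firms, n_years = 200, 10
firm = np.repeat(np.arange(n_firms), n_years)
x = rng.normal(size=n_firms * n_years)
u = rng.normal(size=n_firms)[firm] + rng.normal(size=n_firms * n_years)
y = 1.0 + 0.5 * x + u
X = np.column_stack([np.ones_like(x), x])
beta, se = cluster_robust_se(X, y, firm)
```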
Journal Article
Some practical guidance for the implementation of propensity score matching
Marco Caliendo, Sabine Kopeinig, et al.
TL;DR: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects, with empirical examples in very diverse fields of study; each implementation step involves many decisions, and the authors discuss the different approaches available at each step.
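One of the simplest implementation choices the PSM literature discusses is 1-nearest-neighbour matching on the propensity score. The sketch below uses invented data and, for brevity, the true propensity score rather than an estimated one; in practice the score is estimated first (e.g. by logit).

```python
import numpy as np

def nn_match_att(ps, treat, y):
    """Average treatment effect on the treated via 1-nearest-neighbour
    matching on an (already obtained) propensity score `ps`."""
    controls = np.where(treat == 0)[0]
    effects = []
    for i in np.where(treat == 1)[0]:
        j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
        effects.append(y[i] - y[j])
    return float(np.mean(effects))

# Toy data: one confounder x drives both treatment and outcome.
rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-x))                    # true propensity
treat = (rng.random(n) < p_true).astype(int)
y = 2.0 * treat + x + rng.normal(scale=0.5, size=n)  # true effect = 2

naive = y[treat == 1].mean() - y[treat == 0].mean()  # confounded comparison
att = nn_match_att(p_true, treat, y)                 # matched comparison
```

The naive difference in means is biased upward by the confounder; matching on the score recovers an estimate close to the true effect of 2.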
Journal Article
A Note on the Theme of Too Many Instruments
TL;DR: This article reviews the evidence on the effects of instrument proliferation, describes and simulates simple ways to control it, and illustrates the dangers by replicating Forbes [American Economic Review (2000), Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000), Vol. 46, pp. 31–77] on financial sector development.
References
Book Chapter
Regression Models and Life-Tables
TL;DR: The analysis of censored failure times is considered in this paper, where the hazard function is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time.
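The estimator this summary describes (the Cox proportional hazards model) maximizes a partial likelihood in which the arbitrary baseline hazard cancels out. A minimal numpy sketch of that objective, for untied failure times and invented toy data:

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, times, x, event):
    """Negative log partial likelihood for the Cox model
    h(t | x) = h0(t) * exp(x @ beta), with the baseline hazard h0 left
    entirely unspecified. Assumes no tied failure times; event == 1
    marks an observed failure, event == 0 a censored observation."""
    eta = x @ beta
    order = np.argsort(times)                 # sort by time
    eta, event = eta[order], event[order]
    # Risk set at each time: all subjects still at risk (time >= t),
    # so the cumulative sum runs from the largest time backwards.
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1]))[::-1]
    return -np.sum((eta - log_risk)[event == 1])

# Toy data: exponential failure times with rate exp(0.8 * x),
# independently censored.
rng = np.random.default_rng(7)
n = 3000
x = rng.normal(size=(n, 1))
beta_true = np.array([0.8])
t_fail = rng.exponential(np.exp(-(x @ beta_true)))
t_cens = rng.exponential(2.0, size=n)
event = (t_fail <= t_cens).astype(int)
times = np.minimum(t_fail, t_cens)

nll_true = cox_neg_log_partial_likelihood(beta_true, times, x, event)
nll_zero = cox_neg_log_partial_likelihood(np.zeros(1), times, x, event)
```

Minimizing this objective over beta (here it is merely evaluated) gives the Cox estimate; note the baseline hazard never appears.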
Journal Article
Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations.
Manuel Arellano, Stephen Bond, et al.
TL;DR: In this article, the generalized method of moments (GMM) estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables.
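The full GMM estimator exploits all available lagged-level moment restrictions; the simpler just-identified IV version sketched below (in the spirit of Anderson-Hsiao, not the paper's full estimator) illustrates the core idea: first-difference out the individual effect, then instrument the lagged differenced outcome with a twice-lagged level. Data and parameters are invented.

```python
import numpy as np

# Simulate a dynamic panel y_it = rho * y_{i,t-1} + mu_i + e_it.
rng = np.random.default_rng(3)
n, T, rho = 500, 8, 0.5
mu = rng.normal(size=n)
y = np.zeros((n, T))
y[:, 0] = mu + rng.normal(size=n)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(size=n)

# First-difference to sweep out mu_i, then instrument the endogenous
# Delta y_{i,t-1} with the level y_{i,t-2}, which is valid under no
# serial correlation in e_it -- one of the moment restrictions the
# GMM estimator exploits in full.
dy = np.diff(y, axis=1)              # Delta y at times 1..T-1
lhs = dy[:, 1:].ravel()              # Delta y_{i,t}
endo = dy[:, :-1].ravel()            # Delta y_{i,t-1} (endogenous)
inst = y[:, :-2].ravel()             # y_{i,t-2} (instrument)
rho_iv = (inst @ lhs) / (inst @ endo)

# OLS on the differenced equation is badly biased by construction,
# since Delta y_{i,t-1} is correlated with Delta e_{i,t}.
rho_ols = (endo @ lhs) / (endo @ endo)
```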
Journal Article
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
TL;DR: In this article, a parameter covariance matrix estimator is presented that is consistent even when the disturbances of a linear regression model are heteroskedastic, and that does not depend on a formal model of the structure of the heteroskedasticity.
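The estimator described above (in its HC0 form) is short enough to sketch directly: the usual (X'X)^{-1} "bread" around a meat term X' diag(u_i^2) X built from squared residuals. The data below are invented, with error variance growing in |x| so the homoskedastic formula would understate the slope's standard error.

```python
import numpy as np

def ols_with_hc0(X, y):
    """OLS coefficients with White's heteroskedasticity-consistent (HC0)
    covariance: (X'X)^{-1} X' diag(u_i^2) X (X'X)^{-1}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = (X * (u ** 2)[:, None]).T @ X   # X' diag(u^2) X
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Toy regression with variance increasing in |x|.
rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1.0 + np.abs(x))
X = np.column_stack([np.ones(n), x])
beta, se_hc0 = ols_with_hc0(X, y)
```

Finite-sample refinements (HC1-HC3) rescale the squared residuals but keep the same sandwich structure.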
Journal Article
Sample Selection Bias as a Specification Error
TL;DR: In this article, the bias that results from using non-randomly selected samples to estimate behavioral relationships is discussed as an ordinary specification error or "omitted variables" bias, and the asymptotic distribution of the estimator is derived.
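The "omitted variable" in question is the inverse Mills ratio, and adding it back is the heart of the two-step correction. The sketch below uses invented data and, for brevity, plugs in the true selection index rather than estimating it with a first-stage probit (as the actual procedure does); all names and parameters are illustrative.

```python
import math
import numpy as np

# Standard normal pdf and cdf (cdf via math.erf, vectorized).
def norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(size=n)   # outcome regressor
w = rng.normal(size=n)   # selection-only variable (exclusion restriction)
# Correlated errors: e enters the outcome, v the selection equation.
e, v = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n).T
selected = (0.5 + w + x + v) > 0      # selection rule
y = 1.0 + 2.0 * x + e                 # outcome; observed only if selected

# OLS on the selected subsample is biased: selection depends on x and
# the errors e and v are correlated.
ones = np.ones(selected.sum())
naive = np.linalg.lstsq(np.column_stack([ones, x[selected]]),
                        y[selected], rcond=None)[0]

# Correction: append the inverse Mills ratio lambda(z) = phi(z)/Phi(z)
# at the selection index. The TRUE index 0.5 + w + x is used here for
# brevity; the two-step procedure estimates it by probit first.
z = 0.5 + w[selected] + x[selected]
lam = norm_pdf(z) / norm_cdf(z)
corrected = np.linalg.lstsq(np.column_stack([ones, x[selected], lam]),
                            y[selected], rcond=None)[0]
```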
Journal Article
The central role of the propensity score in observational studies for causal effects
TL;DR: The authors discuss the central role of propensity scores and balancing scores in the analysis of observational studies and show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates.
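One adjustment method this line of work studies is subclassification: stratify units on the score and average within-stratum treated-control contrasts. A minimal numpy sketch with invented data, using the true score for brevity (in practice the score is estimated):

```python
import numpy as np

def stratified_effect(ps, treat, y, n_strata=5):
    """Treatment-effect estimate via subclassification on the propensity
    score: compare treated and control means within each score stratum,
    then weight strata by their size."""
    edges = np.quantile(ps, np.linspace(0.0, 1.0, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    est, total = 0.0, 0
    for s in range(n_strata):
        m = strata == s
        t, c = m & (treat == 1), m & (treat == 0)
        if t.any() and c.any():
            est += m.sum() * (y[t].mean() - y[c].mean())
            total += m.sum()
    return est / total

# Toy data: a single covariate x drives both treatment and outcome.
rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
ps = 1.0 / (1.0 + np.exp(-1.5 * x))             # true propensity score
treat = (rng.random(n) < ps).astype(int)
y = 1.0 * treat + 2.0 * x + rng.normal(size=n)  # true effect = 1

naive = y[treat == 1].mean() - y[treat == 0].mean()
adjusted = stratified_effect(ps, treat, y)
```

Adjusting on the scalar score alone removes most of the confounding bias, illustrating the balancing property, even though the adjustment never uses x directly.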