Author

James J. Heckman

Bio: James J. Heckman is an academic researcher from the University of Chicago. He has contributed to research in topics including earnings and human capital. He has an h-index of 175 and has co-authored 766 publications receiving 156,816 citations. Previous affiliations of James J. Heckman include University College Dublin and the American Bar Association.


Papers
Journal ArticleDOI
TL;DR: In this article, the bias that results from using non-randomly selected samples to estimate behavioral relationships is treated as an ordinary specification error or "omitted variables" bias, and the asymptotic distribution of the estimator is derived.
Abstract: Sample selection bias as a specification error. This paper discusses the bias that results from using non-randomly selected samples to estimate behavioral relationships as an ordinary specification error or "omitted variables" bias. A simple, consistent two-stage estimator is considered that enables analysts to use simple regression methods to estimate behavioral functions by least squares. The asymptotic distribution of the estimator is derived.
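The two-stage procedure described in the abstract can be sketched in code. The sketch below uses simulated data and illustrative variable names (none are from the paper): step one fits a probit for the selection indicator, step two runs least squares on the selected sample with the inverse Mills ratio added as a regressor to absorb the selection bias.

```python
# Hedged sketch of a two-step selection correction ("heckit"-style),
# assuming a probit selection equation and a linear outcome equation.
# All data and names are simulated for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Outcome y = 1 + 2*x + u, observed only when s = 1; u and v correlated,
# which is what produces the selection bias.
x = rng.normal(size=n)
z = rng.normal(size=n)                        # exclusion restriction
u, v = rng.multivariate_normal([0, 0], [[1, .7], [.7, 1]], n).T
s = (0.5 + 1.0 * z + v > 0).astype(float)     # selection equation
y = 1.0 + 2.0 * x + u
Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

# Step 1: probit MLE for the selection equation.
def neg_loglik(g):
    p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
    return -(s * np.log(p) + (1 - s) * np.log(1 - p)).sum()

gamma = minimize(neg_loglik, np.zeros(2)).x

# Step 2: least squares on the selected sample, augmented with the
# inverse Mills ratio lambda(Z @ gamma) = phi / Phi.
xb = Z @ gamma
imr = norm.pdf(xb) / norm.cdf(xb)
sel = s == 1
W = np.column_stack([X[sel], imr[sel]])
beta = np.linalg.lstsq(W, y[sel], rcond=None)[0]
print(beta[:2])  # intercept and slope, roughly [1, 2] after correction
```

Dropping the inverse Mills ratio column from `W` reproduces the biased naive regression, which is the specification-error interpretation the paper emphasizes.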

23,995 citations

Journal ArticleDOI
TL;DR: This paper decomposes the conventional measure of evaluation bias into several components and finds that bias due to selection on unobservables, commonly called selection bias in econometrics, is empirically less important than other components, although it is still a sizeable fraction of the estimated programme impact.
Abstract: This paper considers whether it is possible to devise a nonexperimental procedure for evaluating a prototypical job training programme. Using rich nonexperimental data, we examine the performance of a two-stage evaluation methodology that (a) estimates the probability that a person participates in a programme and (b) uses the estimated probability in extensions of the classical method of matching. We decompose the conventional measure of programme evaluation bias into several components and find that bias due to selection on unobservables, commonly called selection bias in econometrics, is empirically less important than other components, although it is still a sizeable fraction of the estimated programme impact. Matching methods applied to comparison groups located in the same labour markets as participants and administered the same questionnaire eliminate much of the bias as conventionally measured, but the remaining bias is a considerable fraction of experimentally-determined programme impact estimates. We test and reject the identifying assumptions that justify the classical method of matching. We present a nonparametric conditional difference-in-differences extension of the method of matching that is consistent with the classical index-sufficient sample selection model and is not rejected by our tests of identifying assumptions. This estimator is effective in eliminating bias, especially when it is due to temporally-invariant omitted variables.
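The two-stage evaluation methodology described above — (a) estimate the probability of participation, (b) match on that estimated probability — can be illustrated with a minimal nearest-neighbour version. The data, functional forms, and names below are simulated assumptions, not taken from the paper.

```python
# Illustrative propensity-score matching: logit participation model,
# then nearest-neighbour matching of each participant to the
# non-participant with the closest estimated probability.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
d = (rng.uniform(size=n) < expit(0.8 * x)).astype(float)  # participation
y = 1.5 * d + 1.0 * x + rng.normal(size=n)                # true effect 1.5
X = np.column_stack([np.ones(n), x])

# (a) logit propensity score by maximum likelihood
def neg_ll(b):
    p = np.clip(expit(X @ b), 1e-10, 1 - 1e-10)
    return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()

p_hat = expit(X @ minimize(neg_ll, np.zeros(2)).x)

# (b) match each participant to the nearest non-participant on p_hat
treated = np.where(d == 1)[0]
control = np.where(d == 0)[0]
dist = np.abs(p_hat[treated][:, None] - p_hat[control][None, :])
matches = control[dist.argmin(axis=1)]

att = (y[treated] - y[matches]).mean()  # mean impact on the treated
print(att)
```

Because participation here depends only on observables, matching removes the bias; the paper's point is precisely that with real data the remaining components (support, questionnaire, and labour-market differences) also matter.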

5,069 citations

Journal ArticleDOI
TL;DR: In this article, a rigorous distribution theory for kernel-based matching is presented, and the method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic.
Abstract: This paper develops the method of matching as an econometric evaluation estimator. A rigorous distribution theory for kernel-based matching is presented. The method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic. We focus on the method of propensity score matching and show that it is not necessarily better, in the sense of reducing the variance of the resulting estimator, to use the propensity score method even if the propensity score is known. We extend the statistical literature on the propensity score by considering the case when it is estimated both parametrically and nonparametrically. We examine the benefits of separability and exclusion restrictions in improving the efficiency of the estimator. Our methods also apply to the econometric selection bias estimator. Matching is a widely-used method of evaluation. It is based on the intuitively attractive idea of contrasting the outcomes of programme participants (denoted Y1) with the outcomes of "comparable" nonparticipants (denoted Y0). Differences in the outcomes between the two groups are attributed to the programme. Let I0 and I1 denote the sets of indices for nonparticipants and participants, respectively. The following framework describes conventional matching methods as well as the smoothed versions of these methods analysed in this paper. To estimate a treatment effect for each treated person i ∈ I1, outcome Y1i is compared to an average of the outcomes Y0j for matched persons j ∈ I0 in the untreated sample. Matches are constructed on the basis of observed characteristics X in R^d. Typically, when the observed characteristics of an untreated person are closer to those of the treated person i ∈ I1, using a specific distance measure, the untreated person gets a higher weight in constructing the match. The estimated gain for each person i in the treated sample is
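The weighting framework sketched in the abstract — each treated outcome Y1 compared with a weighted average of untreated outcomes Y0, with higher weight on closer matches — corresponds to kernel matching. A minimal sketch, with simulated data and an illustrative bandwidth:

```python
# Kernel-matching sketch: for each treated unit i, the counterfactual is a
# Gaussian-kernel-weighted average of untreated outcomes, with weights
# declining in the distance between observed characteristics X.
# Data, names, and bandwidth are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n0, n1, h = 1000, 500, 0.2              # untreated, treated, bandwidth

x0 = rng.normal(size=n0)                # characteristics of untreated
x1 = rng.normal(size=n1)                # characteristics of treated
y0 = 1.0 * x0 + rng.normal(size=n0)     # untreated outcomes Y0
y1 = 2.0 + 1.0 * x1 + rng.normal(size=n1)  # treated outcomes Y1, gain = 2

# Weight w_ij proportional to K((x1_i - x0_j)/h), normalised over j.
K = np.exp(-0.5 * ((x1[:, None] - x0[None, :]) / h) ** 2)
w = K / K.sum(axis=1, keepdims=True)

gain = y1 - w @ y0                      # estimated gain for each treated i
print(gain.mean())                      # close to the true gain of 2
```

Nearest-neighbour matching is the limiting case in which all weight goes to the single closest untreated unit; the smoothed weights are what make the rigorous distribution theory of the paper applicable.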

3,861 citations

Book ChapterDOI
TL;DR: In this paper, the authors examine the impacts of active labor market policies, such as job training, job search assistance, and job subsidies, and the methods used to evaluate their effectiveness.
Abstract: Policy makers view public sector-sponsored employment and training programs and other active labor market policies as tools for integrating the unemployed and economically disadvantaged into the work force. Few public sector programs have received such intensive scrutiny, and been subjected to so many different evaluation strategies. This chapter examines the impacts of active labor market policies, such as job training, job search assistance, and job subsidies, and the methods used to evaluate their effectiveness. Previous evaluations of policies in OECD countries indicate that these programs usually have at best a modest impact on participants’ labor market prospects. But at the same time, they also indicate that there is considerable heterogeneity in the impact of these programs. For some groups, a compelling case can be made that these policies generate high rates of return, while for other groups these policies have had no impact and may have been harmful. Our discussion of the methods used to evaluate these policies has more general interest. We believe that the same issues arise generally in the social sciences and are no easier to address elsewhere. As a result, a major focus of this chapter is on the methodological lessons learned from evaluating these programs. One of the most important of these lessons is that there is no inherent method of choice for conducting program evaluations. The choice between experimental and non-experimental methods or among alternative econometric estimators should be guided by the underlying economic models, the available data, and the questions being addressed. Too much emphasis has been placed on formulating alternative econometric methods for correcting for selection bias and too little given to the quality of the underlying data. Although it is expensive, obtaining better data is the only way to solve the evaluation problem in a convincing way. However, better data are not synonymous with social experiments. 
© 1999 Elsevier Science B.V. All rights reserved.

3,352 citations


Cited by
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research: cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not.
The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with single or multiple receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with single or multiple receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. Each haploid genome encodes roughly 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

ReportDOI
TL;DR: In this paper, the authors show that the stock of human capital determines the rate of growth, that too little human capital is devoted to research in equilibrium, that integration into world markets will increase growth rates, and that having a large population is not sufficient to generate growth.
Abstract: Growth in this model is driven by technological change that arises from intentional investment decisions made by profit-maximizing agents. The distinguishing feature of the technology as an input is that it is neither a conventional good nor a public good; it is a nonrival, partially excludable good. Because of the nonconvexity introduced by a nonrival good, price-taking competition cannot be supported. Instead, the equilibrium is one with monopolistic competition. The main conclusions are that the stock of human capital determines the rate of growth, that too little human capital is devoted to research in equilibrium, that integration into world markets will increase growth rates, and that having a large population is not sufficient to generate growth.

12,469 citations

Posted Content
TL;DR: In this paper, the authors show that the stock of human capital determines the rate of growth, that too little human capital is devoted to research in equilibrium, that integration into world markets will increase growth rates, and that having a large population is not sufficient to generate growth.
Abstract: Growth in this model is driven by technological change that arises from intentional investment decisions made by profit-maximizing agents. The distinguishing feature of the technology as an input is that it is neither a conventional good nor a public good; it is a nonrival, partially excludable good. Because of the nonconvexity introduced by a nonrival good, price-taking competition cannot be supported, and instead the equilibrium is one with monopolistic competition. The main conclusions are that the stock of human capital determines the rate of growth, that too little human capital is devoted to research in equilibrium, that integration into world markets will increase growth rates, and that having a large population is not sufficient to generate growth.

11,095 citations