Author

Richard Robb

Other affiliations: University of Chicago
Bio: Richard Robb is an academic researcher from Columbia University. The author has contributed to research in the topics of Nonparametric statistics and Consistent estimator. The author has an h-index of 7 and has co-authored 15 publications receiving 2,732 citations. Previous affiliations of Richard Robb include the University of Chicago.

Papers
Journal ArticleDOI
TL;DR: Methods for estimating the impact of training on earnings when non-random selection characterizes the enrollment of persons into training are presented and the robustness of the estimators to choice-based sampling and contamination bias is examined.

1,020 citations

Book ChapterDOI
01 Jan 1986
TL;DR: The inability of social scientists to use laboratory methods to independently vary treatments to eliminate or isolate spurious channels of causation places a fundamental limitation on the possibility of objective knowledge in the social sciences as mentioned in this paper.
Abstract: Social scientists never have access to true experimental data of the type sometimes available to laboratory scientists. Our inability to use laboratory methods to independently vary treatments to eliminate or isolate spurious channels of causation places a fundamental limitation on the possibility of objective knowledge in the social sciences. In place of laboratory experimental variation, social scientists use subjective thought experiments. Assumptions replace data. In the jargon of modern econometrics, minimal identifying assumptions are invoked.

288 citations

Book ChapterDOI
01 Jan 1985
TL;DR: The literature on the determinants of earnings suggests an earnings function for individual i which depends on age $a_i$, year $t$, "vintage" or "cohort" $c_i$, schooling level $s_i$, and experience $e_i$, as mentioned in this paper.
Abstract: The literature on the determinants of earnings suggests an earnings function for individual i which depends on age $a_i$, year $t$, "vintage" or "cohort" $c_i$, schooling level $s_i$, and experience $e_i$. Adopting a linear function to facilitate exposition, we may write $$Y_i(t, a_i, c_i, e_i, s_i) = \alpha_0 + \alpha_1 a_i + \alpha_2 t + \alpha_3 e_i + \alpha_4 s_i + \alpha_5 c_i \qquad (1)$$ where $e_i$ is experience, usually defined for males as age minus schooling ($e_i = a_i - s_i$), and $Y_i$ may be any monotone transformation of earnings.
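To make the identification problem concrete, here is a minimal numerical sketch (not from the chapter), assuming the usual definitions of experience as $e_i = a_i - s_i$ and cohort as year of birth, $c_i = t - a_i$: under these definitions the regressors in equation (1) are exactly collinear, so the coefficients $\alpha_1, \dots, \alpha_5$ are not separately identified without further restrictions.

# Minimal sketch (assumed setup, not from the chapter): build the regressors of
# equation (1) for simulated individuals and check the rank of the design matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = rng.integers(1970, 1990, n).astype(float)   # calendar year of observation
a = rng.integers(25, 60, n).astype(float)       # age
s = rng.integers(8, 17, n).astype(float)        # years of schooling
e = a - s                                       # experience = age minus schooling
c = t - a                                       # cohort ("vintage"), i.e. year of birth

X = np.column_stack([np.ones(n), a, t, e, s, c])
print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))
# 6 columns but rank 4: e = a - s and c = t - a leave the alphas unidentified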

272 citations

Journal Article
TL;DR: In this paper, the authors present nonparametric methods for testing the hypothesis that duration data can be represented by a mixture of exponential distributions, and a consistent estimator for the number of points of support of a discrete mixture is developed.
Abstract: This article presents nonparametric methods for testing the hypothesis that duration data can be represented by a mixture of exponential distributions. Both Bayesian and classical tests are developed. A variety of apparently distinct models can be written in mixture of exponentials form. This raises a fundamental identification problem. A consistent estimator for the number of points of support of a discrete mixture is developed. A consistent method-of-moments estimator for the mixing distribution is derived from the testing criteria and is evaluated in a Monte Carlo study.
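As a purely illustrative sketch of what a discrete mixture of exponentials looks like in practice, the following simulates durations from a two-point mixture and recovers the mixing probabilities and rates by EM; the article itself develops method-of-moments estimators and formal tests, which this is not.

# Illustrative only: simulate durations from a two-point mixture of exponentials
# and recover the mixing distribution by EM (the article's own approach is
# method-of-moments estimation plus formal tests, not EM).
import numpy as np

rng = np.random.default_rng(1)
z = rng.random(2000) < 0.3                                    # latent component label
t = np.where(z, rng.exponential(1 / 0.5, 2000), rng.exponential(1 / 2.0, 2000))

p, lam = np.array([0.5, 0.5]), np.array([1.0, 3.0])           # starting values
for _ in range(200):
    dens = p * lam * np.exp(-np.outer(t, lam))                # n x 2 component densities
    post = dens / dens.sum(axis=1, keepdims=True)             # E-step: posterior weights
    p = post.mean(axis=0)                                     # M-step: mixing probabilities
    lam = post.sum(axis=0) / (post * t[:, None]).sum(axis=0)  # M-step: component rates
print("mixing probs:", p.round(3), "rates:", lam.round(3))    # roughly (0.3, 0.7) and (0.5, 2.0)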

53 citations


Cited by
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Book
01 Jan 2003
TL;DR: In this paper, the authors describe the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation, and compare simulation-assisted estimation procedures, including maximum simulated likelihood, method of simulated moments, and method of simulated scores.
Abstract: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant, Gibbs sampling. No other book incorporates all these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
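The core simulation idea the book builds on can be shown in a few lines: a mixed logit choice probability is approximated by averaging standard logit probabilities over draws of the random coefficients. The sketch below uses plain pseudo-random normal draws and made-up data purely for illustration (the book also covers Halton and antithetic draws, and estimation by maximum simulated likelihood).

# Illustrative sketch of simulating a mixed logit choice probability: average
# standard logit probabilities over R draws of the random coefficients.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 2))            # one decision maker, 4 alternatives, 2 attributes
mu = np.array([1.0, -0.5])             # means of the random coefficients
sigma = np.array([0.5, 0.3])           # standard deviations of the random coefficients
R = 1000

probs = np.zeros(4)
for _ in range(R):
    beta = mu + sigma * rng.normal(size=2)   # one draw of the coefficients
    v = X @ beta                             # systematic utilities at this draw
    ev = np.exp(v - v.max())
    probs += ev / ev.sum()                   # standard logit probabilities at this draw
probs /= R                                   # simulated mixed logit probabilities
print(probs.round(3), probs.sum())           # probabilities sum to 1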

7,768 citations

Journal ArticleDOI
TL;DR: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects, as discussed by the authors; empirical examples can be found in very diverse fields of study, and each implementation step involves many decisions among different possible approaches.
Abstract: Propensity score matching (PSM) has become a popular approach to estimate causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he is confronted with a lot of questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed and treatment effects and their standard errors have to be estimated. Furthermore, questions like 'what to do if there is choice-based sampling?' or 'when to measure effects?' can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves a lot of decisions and different approaches can be thought of. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.
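A stripped-down sketch of the pipeline the paper walks through, under assumed variable names (y outcome, d treatment indicator, X covariates): estimate the propensity score by logistic regression, impose common support, match each treated unit to its nearest control on the score, and average the outcome differences to obtain the ATT. This is one of many possible implementations, not the paper's recommended one.

# Sketch of a basic PSM estimator of the ATT (assumed variable names):
# propensity score by logistic regression, common support, 1-nearest-neighbour
# matching on the score, then average treated-minus-matched outcome differences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(y, d, X):
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
    # common support: keep treated units whose score lies inside the control range
    keep = (ps[treated] >= ps[control].min()) & (ps[treated] <= ps[control].max())
    treated = treated[keep]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]              # matching with replacement
    return np.mean(y[treated] - y[matched])     # ATT estimate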

5,510 citations

Journal ArticleDOI
TL;DR: This paper decomposes the conventional measure of evaluation bias into several components and finds that bias due to selection on unobservables, commonly called selection bias in econometrics, is empirically less important than other components, although it is still a sizeable fraction of the estimated programme impact.
Abstract: This paper considers whether it is possible to devise a nonexperimental procedure for evaluating a prototypical job training programme. Using rich nonexperimental data, we examine the performance of a two-stage evaluation methodology that (a) estimates the probability that a person participates in a programme and (b) uses the estimated probability in extensions of the classical method of matching. We decompose the conventional measure of programme evaluation bias into several components and find that bias due to selection on unobservables, commonly called selection bias in econometrics, is empirically less important than other components, although it is still a sizeable fraction of the estimated programme impact. Matching methods applied to comparison groups located in the same labour markets as participants and administered the same questionnaire eliminate much of the bias as conventionally measured, but the remaining bias is a considerable fraction of experimentally-determined programme impact estimates. We test and reject the identifying assumptions that justify the classical method of matching. We present a nonparametric conditional difference-in-differences extension of the method of matching that is consistent with the classical index-sufficient sample selection model and is not rejected by our tests of identifying assumptions. This estimator is effective in eliminating bias, especially when it is due to temporally-invariant omitted variables.
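The conditional difference-in-differences matching idea can be sketched by reusing the matching machinery from the PSM sketch above but differencing pre- and post-programme outcomes, so that temporally-invariant unobservables drop out. Variable names are assumed; this is a rough illustration, not the authors' exact nonparametric estimator.

# Rough illustration (assumed names, not the authors' exact estimator): the same
# matching step as above, but applied to pre/post outcome changes so that
# time-invariant unobserved differences between groups cancel out.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def did_matching_att(y_pre, y_post, d, X):
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]
    dy_t = y_post[treated] - y_pre[treated]     # outcome change for treated units
    dy_c = y_post[matched] - y_pre[matched]     # change for matched comparisons
    return np.mean(dy_t - dy_c)                 # difference-in-differences ATT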

5,069 citations

Posted Content
TL;DR: In this article, the authors developed an estimation algorithm that takes into account the relationship between productivity on the one hand, and both input demand and survival on the other, guided by a dynamic equilibrium model that generates the exit and input demand equations needed to correct for the simultaneity and selection problems.
Abstract: Technological change and deregulation have caused a major restructuring of the telecommunications equipment industry over the last two decades. We estimate the parameters of a production function for the equipment industry and then use those estimates to analyze the evolution of plant-level productivity over this period. The restructuring involved significant entry and exit and large changes in the sizes of incumbents. Since firms' choices on whether to liquidate, and on the quantities of inputs demanded should they continue, depend on their productivity, we develop an estimation algorithm that takes into account the relationship between productivity on the one hand, and both input demand and survival on the other. The algorithm is guided by a dynamic equilibrium model that generates the exit and input demand equations needed to correct for the simultaneity and selection problems. A fully parametric estimation algorithm based on these decision rules would be both computationally burdensome and require a host of auxiliary assumptions. So we develop a semiparametric technique which is both consistent with a quite general version of the theoretical framework and easy to use. The algorithm produces markedly different estimates of both production function parameters and of productivity movements than traditional estimation procedures. We find an increase in the rate of industry productivity growth after deregulation. This is in spite of the fact that there was no increase in the average of the plants' rates of productivity growth, and there was actually a fall in our index of the efficiency of the allocation of variable factors conditional on the existing distribution of fixed factors. Deregulation was, however, followed by a reallocation of capital towards more productive establishments (by a downsizing, often a shutdown, of unproductive plants and by a disproportionate growth of productive establishments) which more than offset the other factors' negative impacts on aggregate productivity.
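A highly simplified sketch of the first-stage idea behind this kind of proxy-variable production function estimator (names assumed; the full algorithm also corrects for exit/selection and involves a second stage): regress log output on log labour plus a flexible polynomial in log capital and log investment, letting the polynomial absorb unobserved productivity so that the labour coefficient can be recovered.

# Very simplified sketch of the first stage of a proxy-variable production
# function estimator (assumed names; the full algorithm also corrects for exit):
# a polynomial in log capital and log investment stands in for unobserved
# productivity, so the labour coefficient can be estimated by least squares.
import numpy as np

def first_stage_labour_elasticity(ln_y, ln_l, ln_k, ln_i, degree=3):
    poly = [ln_k**p * ln_i**q
            for p in range(degree + 1) for q in range(degree + 1 - p)]
    Z = np.column_stack([ln_l] + poly)            # labour first, then the polynomial controls
    coef, *_ = np.linalg.lstsq(Z, ln_y, rcond=None)
    return coef[0]                                # estimated labour elasticity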

4,380 citations