
Journal ArticleDOI

Estimating causal effects of treatments in randomized and nonrandomized studies.

01 Oct 1974-Journal of Educational Psychology (American Psychological Association)-Vol. 66, Iss: 5, pp 688-701

Abstract: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact randomized experiments are not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which …
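The framework this paper introduced is now usually written in potential-outcomes notation. The following is a minimal sketch in that now-standard notation (the symbols are the conventional ones, not quoted from the paper itself):

```latex
% Each unit i has two potential outcomes, of which at most one is observed:
% Y_i(1) under treatment and Y_i(0) under control.
\tau_i = Y_i(1) - Y_i(0)   % unit-level causal effect (never directly observable)

% The average treatment effect over the population of units:
\tau = \mathbb{E}\left[ Y(1) - Y(0) \right]

% Randomization makes the assignment Z independent of the potential outcomes,
% so the simple difference in observed group means identifies \tau:
Z \perp \left( Y(1), Y(0) \right)
\;\Longrightarrow\;
\tau = \mathbb{E}[\, Y \mid Z = 1 \,] - \mathbb{E}[\, Y \mid Z = 0 \,]
```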
Citations

Book
01 Jan 2001-
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.
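As a concrete taste of the nonlinear methods the book treats, a probit model can be fit in a few lines. The sketch below uses statsmodels on synthetic data; the setup and variable names are illustrative, not taken from the book:

```python
# Minimal probit example: simulate from the latent-variable model
# y = 1{X @ beta + e > 0}, e ~ N(0, 1), then recover beta by MLE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 2)))   # intercept plus two regressors
beta = np.array([0.5, 1.0, -0.7])
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

probit = sm.Probit(y, X).fit(disp=0)
print(probit.params)    # estimates should be close to (0.5, 1.0, -0.7)
```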

28,263 citations


Journal ArticleDOI
01 Apr 1983-Biometrika
Abstract: The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment group. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: matched sampling on the univariate propensity score, which is equal percent bias reducing under more general conditions than required for discriminant matching; multivariate adjustment by subclassification on balancing scores, where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations; and visual representation of multivariate adjustment by a two-dimensional plot.
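In compact modern notation, the abstract's central definitions read as follows (a conventional paraphrase, with Z the treatment indicator and X the observed covariates):

```latex
% Propensity score: conditional probability of treatment given covariates.
e(x) = \Pr(Z = 1 \mid X = x), \qquad 0 < e(x) < 1

% Balancing property: conditional on the score, the observed covariates
% are independent of treatment assignment.
X \perp Z \mid e(X)

% Hence, if assignment is strongly ignorable given X, it is also strongly
% ignorable given the scalar e(X), so adjusting for e(X) removes bias
% due to all observed covariates:
\left( Y(1), Y(0) \right) \perp Z \mid X
\;\Longrightarrow\;
\left( Y(1), Y(0) \right) \perp Z \mid e(X)
```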

20,430 citations


BookDOI
Abstract: While heated arguments between practitioners of qualitative and quantitative research have begun to test the very integrity of the social sciences, Gary King, Robert Keohane, and Sidney Verba have produced a farsighted and timely book that promises to sharpen and strengthen a wide range of research performed in this field. These leading scholars, each representing diverse academic traditions, have developed a unified approach to valid descriptive and causal inference in qualitative research, where numerical measurement is either impossible or undesirable. Their book demonstrates that the same logic of inference underlies both good quantitative and good qualitative research designs, and their approach applies equally to each. Providing precepts intended to stimulate and discipline thought, the authors explore issues related to framing research questions, measuring the accuracy of data and uncertainty of empirical inferences, discovering causal effects, and generally improving qualitative research. Among the specific topics they address are interpretation and inference, comparative case studies, constructing causal theories, dependent and explanatory variables, the limits of random selection, selection bias, and errors in measurement. Mathematical notation is occasionally used to clarify concepts, but no prior knowledge of mathematics or statistics is assumed. The unified logic of inference that this book explicates will be enormously useful to qualitative researchers of all traditions and substantive fields.

6,057 citations


Journal ArticleDOI
Peter C. Austin
TL;DR: The propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates will be similar between treated and untreated subjects, and different causal average treatment effects and their relationship with propensity score analyses are described.
Abstract: The propensity score is the probability of treatment assignment conditional on observed baseline characteristics. The propensity score allows one to design and analyze an observational (nonrandomized) study so that it mimics some of the particular characteristics of a randomized controlled trial. In particular, the propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates will be similar between treated and untreated subjects. I describe 4 different propensity score methods: matching on the propensity score, stratification on the propensity score, inverse probability of treatment weighting using the propensity score, and covariate adjustment using the propensity score. I describe balance diagnostics for examining whether the propensity score model has been adequately specified. Furthermore, I discuss differences between regression-based methods and propensity score-based methods for the analysis of observational data. I describe different causal average treatment effects and their relationship with propensity score analyses.
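Of the four methods listed, inverse probability of treatment weighting is the quickest to illustrate in code. The toy sketch below uses scikit-learn on simulated data with a known treatment effect of 2.0; the data-generating process and names are hypothetical, not from the paper:

```python
# Toy IPTW estimate of the average treatment effect (ATE) under confounding:
# covariates X drive both treatment assignment Z and outcome Y.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 3))                              # observed baseline covariates
Z = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))
Y = 2.0 * Z + X @ [1.0, 1.0, 0.0] + rng.normal(size=n)   # true ATE = 2.0

# Step 1: estimate the propensity score e(X) = Pr(Z = 1 | X).
e_hat = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]

# Step 2: weight each subject by the inverse probability of the treatment
# actually received, then compare weighted outcome means.
w = Z / e_hat + (1 - Z) / (1 - e_hat)
ate = (np.average(Y[Z == 1], weights=w[Z == 1])
       - np.average(Y[Z == 0], weights=w[Z == 0]))
print(f"IPTW ATE estimate: {ate:.2f}")                   # roughly 2.0
```

The unweighted difference in means would be biased upward here, because subjects with larger covariate values are both more likely to be treated and likely to have higher outcomes; the weights undo that imbalance.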

5,778 citations


Cites methods from "Estimating causal effects of treatm..."

  • ...I first describe the potential outcomes framework, which has also been described as the Rubin Causal Model (Rubin, 1974)....

    [...]

  • ...Because propensity score methods allow one to mimic some of the characteristics of an RCT in the context of an observational study, I begin this article by describing the potential outcomes framework, which has also been described as the Rubin Causal Model (Rubin, 1974)....

    [...]


Journal ArticleDOI
Marco Caliendo, Sabine Kopeinig
Abstract: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he or she is confronted with many questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that, one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed, and treatment effects and their standard errors have to be estimated. Furthermore, questions like 'what to do if there is choice-based sampling?' or 'when to measure effects?' can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves many decisions, and different approaches can be taken. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.
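To make those implementation steps concrete, here is an illustrative sketch of one possible pipeline: a logit propensity score, a crude min/max common-support restriction, one-to-one nearest-neighbour matching with replacement, and an estimate of the average treatment effect on the treated (ATT). It follows the steps the paper discusses but is not the authors' code; a real application would add balance diagnostics and appropriate standard errors:

```python
# Toy propensity score matching (PSM) pipeline with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=(n, 2))
Z = rng.binomial(1, 1 / (1 + np.exp(-X @ [1.0, -1.0])))
Y = 1.5 * Z + X.sum(axis=1) + rng.normal(size=n)     # true ATT = 1.5

# Step 1: estimate the propensity score with a logit model.
e = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]

# Step 2: common support -- keep treated units whose score lies within
# the range of scores observed among the controls.
lo, hi = e[Z == 0].min(), e[Z == 0].max()
treated = np.where((Z == 1) & (e >= lo) & (e <= hi))[0]
controls = np.where(Z == 0)[0]

# Step 3: one-to-one nearest-neighbour matching on the score, with replacement.
nn = NearestNeighbors(n_neighbors=1).fit(e[controls].reshape(-1, 1))
_, idx = nn.kneighbors(e[treated].reshape(-1, 1))
matched = controls[idx.ravel()]

# Step 4: ATT as the mean outcome gap between treated units and their matches.
att = (Y[treated] - Y[matched]).mean()
print(f"Matched ATT estimate: {att:.2f}")            # roughly 1.5
```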

4,814 citations


Cites methods from "Estimating causal effects of treatm..."

  • ...The standard framework in evaluation analysis to formalize this problem is the potential outcome approach or Roy–Rubin model (Roy, 1951; Rubin, 1974)....

    [...]

  • ...See e.g. Rubin (1974), Rosenbaum and Rubin (1983, 1985a) or Lechner (1998)....

    [...]


References

Book
01 Jan 1925-
Abstract: The prime object of this book is to put into the hands of research workers, and especially of biologists, the means of applying statistical tests accurately to numerical data accumulated in their own laboratories or available in the literature.

11,300 citations


Book
Leonard J. Savage
01 Jan 1954-

7,540 citations


Book
01 Jan 1959-
Abstract: Table of contents: The General Decision Problem; The Probability Background; Uniformly Most Powerful Tests; Unbiasedness: Theory and First Applications; Unbiasedness: Applications to Normal Distributions; Invariance; Linear Hypotheses; The Minimax Principle; Multiple Testing and Simultaneous Inference; Conditional Inference; Basic Large Sample Theory; Quadratic Mean Differentiable Families; Large Sample Optimality; Testing Goodness of Fit; General Large Sample Methods.

6,474 citations


Book
01 Jan 1950-

5,819 citations


Journal ArticleDOI
Henry Scheffé
01 Jun 1960-Soil Science
Abstract: Originally published in 1959, this classic volume has had a major impact on generations of statisticians. Newly issued in the Wiley Classics Series, the book examines the basic theory of analysis of variance by considering several different mathematical models. Part I looks at the theory of fixed-effects models with independent observations of equal variance, while Part II begins to explore the analysis of variance in the case of other models.
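As a pointer to the models Part I treats, the one-way fixed-effects layout with independent observations of equal variance can be written as (standard notation, not quoted from the book):

```latex
% Observation j in group i: grand mean, fixed group effect, iid normal error.
y_{ij} = \mu + \alpha_i + \varepsilon_{ij},
\qquad \varepsilon_{ij} \overset{\text{iid}}{\sim} N(0, \sigma^2),
\qquad \sum_{i=1}^{k} \alpha_i = 0

% The classical F statistic compares between-group to within-group variation
% across k groups and N observations in total:
F = \frac{\mathrm{SS}_{\text{between}} / (k - 1)}{\mathrm{SS}_{\text{within}} / (N - k)}
```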

5,719 citations


Performance Metrics

No. of citations received by the Paper in previous years:

Year    Citations
2022    10
2021    674
2020    678
2019    592
2018    518
2017    486