Journal ArticleDOI

Estimating causal effects of treatments in randomized and nonrandomized studies.

01 Oct 1974-Journal of Educational Psychology (American Psychological Association)-Vol. 66, Iss: 5, pp 688-701
TL;DR: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented in this paper, where the objective is to specify the benefits of randomization in estimating causal effects of treatments.
Abstract: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which…
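The potential-outcomes reasoning this abstract describes can be illustrated with a small synthetic example (all data and function names below are invented for illustration, not taken from the paper): each unit has two potential outcomes, Y(1) and Y(0), but a study observes only one of them per unit, so the causal effect must be recovered from a group comparison.

```python
# Minimal sketch of the potential-outcomes view of causal effects.
# Synthetic data; function names are illustrative, not from Rubin (1974).

def average_causal_effect(y1, y0):
    """True average causal effect: mean of the unit-level differences Y(1) - Y(0).
    Unobservable in practice, since each unit reveals only one potential outcome."""
    return sum(a - b for a, b in zip(y1, y0)) / len(y1)

def difference_in_means(y1, y0, treated):
    """Estimate from observed data: treated units reveal Y(1), controls reveal Y(0)."""
    t = [y for y, d in zip(y1, treated) if d == 1]
    c = [y for y, d in zip(y0, treated) if d == 0]
    return sum(t) / len(t) - sum(c) / len(c)

# Four units; treatment adds exactly 2 to every unit's outcome.
y0 = [1.0, 2.0, 3.0, 4.0]
y1 = [3.0, 4.0, 5.0, 6.0]

print(average_causal_effect(y1, y0))              # 2.0
print(difference_in_means(y1, y0, [1, 0, 0, 1]))  # 2.0: balanced assignment
print(difference_in_means(y1, y0, [1, 1, 0, 0]))  # 0.0: assignment correlated with outcomes hides the effect
```

The last line is the paper's core concern in miniature: when treatment assignment is correlated with the potential outcomes, the simple difference in means no longer recovers the true effect, which is the bias that randomization removes and that careful matching or covariate control attempts to repair.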
Citations
Journal ArticleDOI
TL;DR: A general analytical framework for combining moderation and mediation, integrating moderated regression analysis and path analysis, is presented; it clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models.
Abstract: Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated and the mediated effects under investigation. This article presents a general analytical framework for combining moderation and mediation that integrates moderated regression analysis and path analysis. This framework clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models. The authors empirically illustrate this framework and give step-by-step instructions for estimation and interpretation. They summarize the advantages of their framework over current approaches, explain how it subsumes moderated mediation and mediated moderation, and describe how it can accommodate additional moderator and mediator variables, curvilinear relationships, and structural equation models with latent variables.

3,624 citations


Cites background or methods from "Estimating causal effects of treatm..."

  • ...One useful approach to evaluate causality for models such as those in our framework has been developed by Rubin and colleagues (Holland, 1986, 1988; Little & Rubin, 2000; Rubin, 1974, 1978) and has been called the Rubin causal model (RCM)....

    [...]

  • ...Conditions for establishing causality have been discussed extensively (Holland, 1986, 1988; Little & Rubin, 2000; Marini & Singer, 1988; Pearl, 2000; Rubin, 1974, 1978; Shadish, Cook, & Campbell, 2002; Sobel, 1996; West, Biesanz, & Pitts, 2000) and have implications for interpreting results from…...

    [...]


  • ...Without randomization, this assumption is difficult to defend, although it becomes more tenable when individuals are matched on variables that might be confounded with the predictor or when such variables are statistically controlled (Rubin, 1974, 1978)....

    [...]


Journal ArticleDOI
TL;DR: A unified approach is proposed that makes it possible for researchers to preprocess data with matching and then to apply the best parametric techniques they would have used anyway; this procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
Abstract: Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
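The preprocessing idea in this abstract can be sketched as follows (this is an illustration of the general technique, not the authors' software; covariates, outcomes, and function names are invented). One-nearest-neighbor matching pairs each treated unit with the closest control on a covariate x, silently discarding controls far from any treated unit, and the estimate is then computed on the matched sample:

```python
# Illustrative sketch of matching as preprocessing: 1-nearest-neighbor
# matching on a single covariate x, then a difference in means over the
# matched pairs. All data and names are synthetic.

def nn_match(treated, control):
    """For each treated unit (x, y), find the control unit with the closest x."""
    pairs = []
    for xt, yt in treated:
        xc, yc = min(control, key=lambda u: abs(u[0] - xt))
        pairs.append((yt, yc))
    return pairs

def matched_effect(pairs):
    """Average treated-minus-control outcome difference over matched pairs."""
    return sum(yt - yc for yt, yc in pairs) / len(pairs)

treated = [(1.0, 5.0), (2.0, 7.0), (3.0, 9.0)]            # (covariate, outcome)
control = [(1.1, 3.2), (2.2, 5.1), (2.9, 6.8), (8.0, 20.0)]

pairs = nn_match(treated, control)
print(round(matched_effect(pairs), 3))  # 1.967
```

Note that the control unit at x = 8.0 is never matched; pruning such off-support observations before any parametric modeling is precisely what, in the authors' argument, makes the subsequent regression step less model-dependent.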

3,601 citations


Cites background from "Estimating causal effects of treatm..."

  • ...b + Xᵢc); μ₀(Xᵢ) ≡ E[Yᵢ(0) | Tᵢ = 0, Xᵢ] = g(a + Xᵢc). (7) We can produce estimates of these quantities by assuming independence over observations and forming a likelihood function. These last two assumptions are sometimes known as the "stable unit treatment value assumption" or SUTVA (Rubin 1974)....

    [...]

  • ...3.1.1), but key aspects of the ideas originate with many others, especially Neyman (1923), Fisher (1935), Cox (1958), Rubin (1974), and Holland (1986) in statistics; Roy (1951) and Quandt (1972) in econometrics; and Lewis (1973) in philosophy....

    [...]

Book ChapterDOI
TL;DR: In this chapter, the authors examine the impacts of active labor market policies, such as job training, job search assistance, and job subsidies, and the methods used to evaluate their effectiveness.
Abstract: Policy makers view public sector-sponsored employment and training programs and other active labor market policies as tools for integrating the unemployed and economically disadvantaged into the work force. Few public sector programs have received such intensive scrutiny, and been subjected to so many different evaluation strategies. This chapter examines the impacts of active labor market policies, such as job training, job search assistance, and job subsidies, and the methods used to evaluate their effectiveness. Previous evaluations of policies in OECD countries indicate that these programs usually have at best a modest impact on participants’ labor market prospects. But at the same time, they also indicate that there is considerable heterogeneity in the impact of these programs. For some groups, a compelling case can be made that these policies generate high rates of return, while for other groups these policies have had no impact and may have been harmful. Our discussion of the methods used to evaluate these policies has more general interest. We believe that the same issues arise generally in the social sciences and are no easier to address elsewhere. As a result, a major focus of this chapter is on the methodological lessons learned from evaluating these programs. One of the most important of these lessons is that there is no inherent method of choice for conducting program evaluations. The choice between experimental and non-experimental methods or among alternative econometric estimators should be guided by the underlying economic models, the available data, and the questions being addressed. Too much emphasis has been placed on formulating alternative econometric methods for correcting for selection bias and too little given to the quality of the underlying data. Although it is expensive, obtaining better data is the only way to solve the evaluation problem in a convincing way. However, better data are not synonymous with social experiments. 
© 1999 Elsevier Science B.V. All rights reserved.

3,352 citations

Journal ArticleDOI
TL;DR: The statistical similarities among mediation, confounding, and suppression are described and methods to determine the confidence intervals for confounding and suppression effects are proposed based on methods developed for mediated effects.
Abstract: This paper describes the statistical similarities among mediation, confounding, and suppression. Each is quantified by measuring the change in the relationship between an independent and a dependent variable after adding a third variable to the analysis. Mediation and confounding are identical statistically and can be distinguished only on conceptual grounds. Methods to determine the confidence intervals for confounding and suppression effects are proposed based on methods developed for mediated effects. Although the statistical estimation of effects and standard errors is the same, there are important conceptual differences among the three types of effects.
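The "change after adding a third variable" quantity this abstract describes can be computed directly with ordinary least squares. The sketch below uses synthetic data in which Y depends on X both directly and through M; all variable names and coefficients are invented for illustration:

```python
# Sketch of the "change in coefficient" logic: the mediated (or confounded)
# part of X's effect on Y is the drop in X's coefficient once M is added.
# Synthetic data; all names and values are illustrative.

def slope(x, y):
    """OLS slope of y on a single predictor x (intercept included)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slope(x, m, y):
    """Coefficient of x in the regression of y on both x and m, via
    Frisch-Waugh: regress the m-residuals of y on the m-residuals of x.
    (Constant offsets cancel in slope(), so intercepts can be dropped here.)"""
    ry = [b - slope(m, y) * a for a, b in zip(m, y)]
    rx = [b - slope(m, x) * a for a, b in zip(m, x)]
    return slope(rx, ry)

# Y = 0.3*X + 0.4*M exactly, with M only partly determined by X.
x = [0.0, 1.0, 2.0, 3.0]
m = [0.0, 1.5, 0.0, 1.5]
y = [0.3 * a + 0.4 * b for a, b in zip(x, m)]

total  = slope(x, y)             # X's coefficient ignoring M
direct = partial_slope(x, m, y)  # X's coefficient controlling for M
print(round(total, 6), round(direct, 6), round(total - direct, 6))  # 0.42 0.3 0.12
```

The drop of 0.12 equals the product of the X-to-M slope (0.3) and M's coefficient (0.4), the familiar indirect-path product from mediation analysis; whether that 0.12 is "mediation" or "confounding" is, as the abstract argues, a conceptual rather than statistical distinction.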

3,285 citations


Cites background from "Estimating causal effects of treatm..."

  • ...Holland (1988) and Robins and Greenland (1992) have proposed several alternatives to establishing mediation and confounding based on Rubin’s causal model (Rubin, 1974) and related methods....

    [...]

Journal ArticleDOI
TL;DR: In the last two decades, much research has been done on the econometric and statistical analysis of the causal effects of programs or policies; this literature has reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization, and other areas of empirical microeconomics.
Abstract: Many empirical questions in economics and other social sciences depend on causal effects of programs or policies. In the last two decades, much research has been done on the econometric and statistical analysis of such causal effects. This recent theoreti- cal literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization, and other areas of empirical microeconomics. In this review, we discuss some of the recent developments. We focus primarily on practical issues for empirical research- ers, as well as provide a historical overview of the area and give references to more technical research.

3,175 citations

References
Book
01 Jan 1925
TL;DR: The prime object of this book, as discussed by the authors, is to put into the hands of research workers, and especially of biologists, the means of applying statistical tests accurately to numerical data accumulated in their own laboratories or available in the literature.
Abstract: The prime object of this book is to put into the hands of research workers, and especially of biologists, the means of applying statistical tests accurately to numerical data accumulated in their own laboratories or available in the literature.

11,308 citations

Book
01 Jan 1954

7,545 citations

Book
01 Jan 1959
TL;DR: The General Decision Problem, the Probability Background, Uniformly Most Powerful Tests, Unbiasedness: Theory and First Applications, Unbiasedness: Applications to Normal Distributions, Invariance, and Linear Hypotheses, as discussed by the authors.
Abstract: The General Decision Problem.- The Probability Background.- Uniformly Most Powerful Tests.- Unbiasedness: Theory and First Applications.- Unbiasedness: Applications to Normal Distributions.- Invariance.- Linear Hypotheses.- The Minimax Principle.- Multiple Testing and Simultaneous Inference.- Conditional Inference.- Basic Large Sample Theory.- Quadratic Mean Differentiable Families.- Large Sample Optimality.- Testing Goodness of Fit.- General Large Sample Methods.

6,480 citations

Book
01 Jan 1950

5,820 citations

Journal ArticleDOI
TL;DR: This book examines the basic theory of analysis of variance by considering several different mathematical models, including fixed-effects models with independent observations of equal variance, as well as other models.
Abstract: Originally published in 1959, this classic volume has had a major impact on generations of statisticians. Newly issued in the Wiley Classics Series, the book examines the basic theory of analysis of variance by considering several different mathematical models. Part I looks at the theory of fixed-effects models with independent observations of equal variance, while Part II begins to explore the analysis of variance in the case of other models.

5,728 citations