Book

Natural Experiments in the Social Sciences: A Design-Based Approach

08 Oct 2012
TL;DR: In this book, the author explores the role of qualitative evidence in the design of natural experiments and shows how to evaluate how plausibly treatment assignment is as-if random, how credible the underlying model is, and how relevant the intervention is to the question of interest.
Abstract (table of contents):
1. Introduction: why natural experiments?
Part I. Discovering Natural Experiments: 2. Standard natural experiments; 3. Regression-discontinuity designs; 4. Instrumental-variables designs.
Part II. Analyzing Natural Experiments: 5. Simplicity and transparency: keys to quantitative analysis; 6. Sampling processes and standard errors; 7. The central role of qualitative evidence.
Part III. Evaluating Natural Experiments: 8. How plausible is as-if random? 9. How credible is the model? 10. How relevant is the intervention?
Part IV. Conclusion: 11. Building strong research designs through multi-method research.
Citations
Posted Content
TL;DR: In this paper, the authors investigated conditions sufficient for identification of average treatment effects using instrumental variables and showed that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect.
Abstract: We investigate conditions sufficient for identification of average treatment effects using instrumental variables. First we show that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect. We then establish that the combination of an instrument and a condition on the relation between the instrument and the participation status is sufficient for identification of a local average treatment effect for those who can be induced to change their participation status by changing the value of the instrument. Finally we derive the probability limit of the standard IV estimator under these conditions. It is seen to be a weighted average of local average treatment effects.
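
As a concrete illustration of the estimator the abstract describes, here is a minimal simulation sketch in Python; the instrument, compliance rate, and effect size are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical simulation (not from the paper): binary instrument Z,
# treatment D, outcome Y. Compliers take the treatment iff encouraged;
# everyone else's treatment status ignores Z, so monotonicity holds.
rng = np.random.default_rng(0)
n = 100_000
Z = rng.binomial(1, 0.5, n)
complier = rng.random(n) < 0.6
D = np.where(complier, Z, rng.binomial(1, 0.3, n))
Y = 1.5 * D + rng.normal(size=n)  # true treatment effect is 1.5

# Wald / IV estimator: reduced-form effect of Z on Y divided by the
# first-stage effect of Z on D. Under the paper's conditions this
# converges to the local average treatment effect for compliers.
late = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())
print(f"estimated LATE: {late:.2f}")  # close to 1.5
```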

3,154 citations

01 Jan 1994
TL;DR: Green and Shapiro assess rational choice theory where it is reputed to be most successful: the study of collective action, the behavior of political parties and politicians, and such phenomena as voting cycles and Prisoner's Dilemmas.
Abstract: This is the first comprehensive critical evaluation of the use of rational choice theory in political science. Writing in an accessible and nontechnical style, Donald P. Green and Ian Shapiro assess rational choice theory where it is reputed to be most successful: the study of collective action, the behavior of political parties and politicians, and such phenomena as voting cycles and Prisoner's Dilemmas. In their hard-hitting critique, Green and Shapiro demonstrate that the much heralded achievements of rational choice theory are in fact deeply suspect and that fundamental rethinking is needed if rational choice theorists are to contribute to the understanding of politics. In their final chapters, they anticipate and respond to a variety of possible rational choice responses to their arguments, thereby initiating a dialogue that is bound to continue for some time.

883 citations

Journal ArticleDOI
TL;DR: A research agenda on nature contact and health is proposed, identifying principal domains of research and key questions that, if answered, would provide the basis for evidence-based public health interventions.
Abstract: Background: At a time of increasing disconnectedness from nature, scientific interest in the potential health benefits of nature contact has grown. Research in recent decades has yielded substantia...

653 citations

Journal ArticleDOI
29 Jan 2018
TL;DR: Correlation does not imply causation, yet observational data are often the only option when the research question at hand involves causality; this article discusses causal inference based on observational data.
Abstract: Correlation does not imply causation; but often, observational data are the only option, even though the research question at hand involves causality. This article discusses causal inference based ...

434 citations

References
Journal ArticleDOI
TL;DR: In this paper, a different approach to problems of multiple significance testing is presented, which calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate; this error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise.
Abstract: The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.
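
The "simple sequential Bonferroni-type procedure" referred to here is the Benjamini-Hochberg step-up rule; a minimal Python sketch, with made-up example p-values:

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Step-up procedure sketched in the abstract: sort the m p-values,
    find the largest k with p_(k) <= (k/m) * q, and reject the hypotheses
    with the k smallest p-values. Controls the FDR at level q for
    independent test statistics."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()  # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))
# rejects the two smallest p-values at q = 0.05
```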

83,420 citations

Journal ArticleDOI
TL;DR: The authors discuss the central role of propensity scores and balancing scores in the analysis of observational studies and show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates.
Abstract: The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment groups. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large- and small-sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: matched sampling on the univariate propensity score, which is equal percent bias reducing under more general conditions than required for discriminant matching; multivariate adjustment by subclassification on balancing scores, where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations; and visual representation of multivariate adjustment by a two-dimensional plot.
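
A minimal sketch of subclassification on an estimated propensity score, in Python with simulated data; the data-generating process and the quintile choice are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical observational data (not from the paper): covariates X,
# nonrandom treatment D that depends on X, outcome Y with true effect 2.
rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 3))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2 * D + X[:, 0] + rng.normal(size=n)

# The propensity score e(x) = P(D = 1 | X = x), estimated here by
# logistic regression.
e = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# Subclassification on the scalar score: compare treated and control
# outcomes within quintiles of e, then average across strata.
strata = np.digitize(e, np.quantile(e, [0.2, 0.4, 0.6, 0.8]))
diffs, sizes = [], []
for s in np.unique(strata):
    m = strata == s
    if 0 < D[m].sum() < m.sum():  # need both groups in the stratum
        diffs.append(Y[m & (D == 1)].mean() - Y[m & (D == 0)].mean())
        sizes.append(m.sum())
print(np.average(diffs, weights=sizes))  # near the true effect of 2
```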

23,744 citations

Book
01 Jan 1963
TL;DR: A survey of social-science research covering correlational, ex post facto, true experimental, and quasi-experimental designs, with methodological recommendations.
Abstract: A survey drawn from social-science research which deals with correlational, ex post facto, true experimental, and quasi-experimental designs and makes methodological recommendations.

10,916 citations

Journal ArticleDOI
TL;DR: In this paper, the simple FDR-controlling procedure for independent test statistics is shown to also control the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses.
Abstract: Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate $t$. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased.
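
A sketch of the conservative modification the abstract mentions for arbitrary dependence, reusing the step-up logic from the Benjamini-Hochberg sketch above (same assumptions):

```python
import numpy as np

def benjamini_yekutieli(pvalues, q=0.05):
    """The conservative modification described in the abstract: the BH
    thresholds are deflated by c(m) = 1 + 1/2 + ... + 1/m, which gives
    FDR control under arbitrary dependence among the test statistics."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    c_m = np.sum(1.0 / np.arange(1, m + 1))
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / (m * c_m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()
        reject[order[: k + 1]] = True
    return reject
```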

9,335 citations

Book
01 Jan 2006
TL;DR: Data Analysis Using Regression and Multilevel/Hierarchical Models is a comprehensive manual for the applied researcher who wants to perform data analysis using linear and nonlinear regression and multilevel models.
Abstract: Data Analysis Using Regression and Multilevel/Hierarchical Models is a comprehensive manual for the applied researcher who wants to perform data analysis using linear and nonlinear regression and multilevel models. The book introduces a wide variety of models, whilst at the same time instructing the reader in how to fit these models using available software packages. The book illustrates the concepts by working through scores of real data examples that have arisen from the authors' own applied research, with programming code provided for each one. Topics covered include causal inference, including regression, poststratification, matching, regression discontinuity, and instrumental variables, as well as multilevel logistic regression and missing-data imputation. Practical tips regarding building, fitting, and understanding these models are provided throughout.
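
For flavor, a minimal varying-intercept model of the kind the book covers, fit here with Python's statsmodels rather than the software the book itself uses; the data are simulated and all names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical grouped data (not from the book): 20 groups of 30
# observations, a group-level random intercept, and one predictor x.
rng = np.random.default_rng(2)
g = np.repeat(np.arange(20), 30)
alpha = rng.normal(scale=0.5, size=20)[g]   # group intercepts
x = rng.normal(size=g.size)
y = 1.0 + 0.8 * x + alpha + rng.normal(size=g.size)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# Varying-intercept multilevel model: y ~ x with a random intercept
# for each group, fit by restricted maximum likelihood.
fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
print(fit.summary())
```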

9,098 citations