scispace - formally typeset
Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher at Tsinghua University. He has contributed to research in the topics of causal inference and missing data, has an h-index of 132, and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include the University of Chicago and Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the stable unit treatment value assumption (SUTVA) is presented as the key a priori assumption for causal inference: the value of the outcome variable for each unit u when exposed to treatment t will be the same no matter what mechanism is used to assign treatment t to unit u and no matter what treatments the other units receive.
Abstract: I congratulate my friend Paul Holland on his lucidly clear description of the basic perspective for causal inference referred to as Rubin's model. I have been advocating this general perspective for defining problems of causal inference since Rubin (1974), and with very little modification since Rubin (1978). The one point concerning the definition of causal effects that has continued to evolve in my thinking is the key role of the stable unit treatment value assumption (SUTVA, as labeled in Rubin 1980) for deciding which questions are formulated well enough to have causal answers. Under SUTVA, the model's representation of outcomes is adequate. More explicitly, consider the situation with N units indexed by u = 1, ..., N; T treatments indexed by t = 1, ..., T; and outcome variable Y, whose possible values are represented by Y_tu (t = 1, ..., T; u = 1, ..., N). SUTVA is simply the a priori assumption that the value of Y for unit u when exposed to treatment t will be the same no matter what mechanism is used to assign treatment t to unit u and no matter what treatments the other units receive, and this holds for all u = 1, ..., N and all t = 1, ..., T. SUTVA is violated when, for example, there exist unrepresented versions of treatments (Y_tu depends on which version of treatment t was received) or interference between units (Y_tu depends on whether unit u' received treatment t or t').
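Using the abstract's notation, SUTVA can be stated compactly. The formalization below is my paraphrase, not a quotation from the paper, with a denoting a full assignment of treatments to all N units:

```latex
% SUTVA: potential outcomes are stable across assignment mechanisms
% and unaffected by the treatments received by other units.
Y_{tu}(a) = Y_{tu}(a') = Y_{tu}
\quad \text{for all assignments } a, a' \text{ that give unit } u \text{ treatment } t,
\qquad u = 1,\dots,N,\; t = 1,\dots,T.
```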

309 citations

Journal ArticleDOI
TL;DR: Patients suitable for vaginal or laparoscopic mesh placement should be selected for these procedures on the basis of prior history, and only once they provide informed consent for surgery.
Abstract: Clinical Pharmacology & Therapeutics (1995) 57, 6–15; doi:

297 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a procedure for computing significance levels from data sets whose missing values have been multiply imputed, using moment-based statistics, m ≤ 3 repeated imputations, and an F reference distribution.
Abstract: We present a procedure for computing significance levels from data sets whose missing values have been multiply imputed. This procedure uses moment-based statistics, m ≤ 3 repeated imputations, and an F reference distribution. When m = ∞, we show first that our procedure is essentially the same as the ideal procedure in cases of practical importance and, second, that its deviations from the ideal are basically a function of the coefficient of variation of the canonical ratios of complete to observed information. For small m our procedure's performance is largely governed by this coefficient of variation and the mean of these ratios. Using simulation techniques with small m, we compare our procedure's actual and nominal large-sample significance levels and conclude that it is essentially calibrated and thus represents a definite improvement over previously available procedures. Furthermore, we compare the large-sample power of the procedure as a function of m and other factors, such as the di...
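For context, the standard combining rules from Rubin's multiple-imputation framework, which the moment-based F procedure above builds on, can be sketched in a few lines of Python. The numbers at the bottom are hypothetical, chosen only to exercise the function:

```python
from statistics import mean, variance

def pool_estimates(qs, us):
    """Combine m completed-data point estimates qs and their variances us
    using Rubin's combining rules: pooled estimate, total variance, and
    an approximate degrees of freedom for interval estimation."""
    m = len(qs)
    q_bar = mean(qs)                 # pooled point estimate
    u_bar = mean(us)                 # average within-imputation variance
    b = variance(qs)                 # between-imputation variance
    t = u_bar + (1 + 1 / m) * b      # total variance
    r = (1 + 1 / m) * b / u_bar      # relative increase in variance
    df = (m - 1) * (1 + 1 / r) ** 2  # Rubin's degrees of freedom
    return q_bar, t, df

# Hypothetical estimates and variances from m = 3 imputed analyses.
q_bar, t, df = pool_estimates([1.1, 0.9, 1.0], [0.04, 0.05, 0.045])
```

Note that r (and hence df) is undefined when the between-imputation variance is zero; production code would guard that edge case.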

297 citations

Journal ArticleDOI
15 Nov 1995-JAMA
TL;DR: Phenobarbital exposure during early development can have long-term deleterious effects on cognitive performance and Physicians are urged to use increased caution in prescribing such medications during pregnancy.
Abstract: Objective. —To test whether exposure to phenobarbital in utero is associated with deficits in intelligence scores in adult men and whether the magnitude of the postnatal effect is mediated by exposure parameters and/or postnatal environmental factors. Design. —Two double-blind studies were conducted on independent samples of adult men prenatally exposed to phenobarbital and matched control samples using different measures of general intelligence. Based on data from control subjects, regression models were built relating intelligence scores to relevant pre-exposure matching variables and age at testing. Models generated predicted scores for each exposed subject. Group mean differences between the individually predicted and observed scores estimated exposure effects. Setting. —Copenhagen, Denmark. Participants. —Exposed subjects were adult men born at the largest hospital in Copenhagen between 1959 and 1961 who were exposed to phenobarbital during gestation via maternal medical treatment and whose mothers had no history of a central nervous system disorder and no treatment during pregnancy with any other psychopharmacological drug. Study 1 included 33 men and study 2, 81 men. Controls were unexposed members of the same birth cohort matched on a wide spectrum of maternal variables recorded prenatally and perinatally. Controls for studies 1 and 2 included 52 and 101 men, respectively. Main Outcome Measures. —In study 1: Wechsler Adult Intelligence Scale (Danish version); in study 2: Danish Military Draft Board Intelligence Test (Børge Priens Prøve). Results. —Men exposed prenatally to phenobarbital had significantly lower verbal intelligence scores (approximately 0.5 SD) than predicted. Lower socioeconomic status and being the offspring of an "unwanted" pregnancy increased the magnitude of the negative effects. Exposure that included the last trimester was the most detrimental. Conclusion.
—Phenobarbital exposure during early development can have long-term deleterious effects on cognitive performance. Detrimental environmental conditions can interact with prenatal biological insult to magnify negative outcomes. Physicians are urged to use increased caution in prescribing such medications during pregnancy. (JAMA. 1995;274:1518-1525)

296 citations

Journal ArticleDOI
TL;DR: In this paper, the authors analyze a randomized school choice voucher experiment (the School Choice Scholarships Foundation Program in New York), motivated by debates over the educational system in the inner cities of the United States and the lack of solid empirical evidence on the true impact of educational initiatives.
Abstract: The precarious state of the educational system in the inner cities of the United States, as well as its potential causes and solutions, have been popular topics of debate in recent years. Part of the difficulty in resolving this debate is the lack of solid empirical evidence regarding the true impact of educational initiatives. The efficacy of so-called “school choice” programs has been a particularly contentious issue. A current multimillion dollar program, the School Choice Scholarship Foundation Program in New York, randomized the distribution of vouchers in an attempt to shed some light on this issue. This is an important time for school choice, because on June 27, 2002 the U.S. Supreme Court upheld the constitutionality of a voucher program in Cleveland that provides scholarships both to secular and religious private schools. Although this study benefits immensely from a randomized design, it suffers from complications common to such research with human subjects: noncompliance with assigned “treatmen...
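One standard way to handle the noncompliance mentioned above is the instrumental-variables (Wald) ratio of intention-to-treat effects, which estimates the complier average causal effect. This sketch uses hypothetical data and a much simpler estimator than the paper's approach:

```python
from statistics import mean

def wald_cace(z, d, y):
    """Complier average causal effect via the Wald/IV ratio:
    the ITT effect on the outcome divided by the ITT effect on take-up.
    z = voucher assignment (0/1), d = private-school attendance (0/1),
    y = test score. Hypothetical data, not the study's estimator."""
    y1 = mean(yi for zi, yi in zip(z, y) if zi == 1)
    y0 = mean(yi for zi, yi in zip(z, y) if zi == 0)
    d1 = mean(di for zi, di in zip(z, d) if zi == 1)
    d0 = mean(di for zi, di in zip(z, d) if zi == 0)
    return (y1 - y0) / (d1 - d0)

z = [1, 1, 1, 1, 0, 0, 0, 0]  # randomized voucher offers
d = [1, 1, 1, 0, 0, 0, 1, 0]  # actual attendance (noncompliance both ways)
y = [70, 72, 74, 60, 62, 64, 70, 60]
effect = wald_cace(z, d, y)
```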

296 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials, in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
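The "simple noniterative procedure" referred to is a moment-based random-effects calculation. A sketch of that style of estimator follows; it is my reconstruction of the general approach, not formulas verified against the paper's text, and the input numbers are hypothetical:

```python
def random_effects_meta(effects, variances):
    """Noniterative random-effects meta-analysis: a moment estimate of
    the between-study variance tau^2, then a pooled effect with weights
    that incorporate tau^2."""
    w = [1 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q measures excess dispersion around the fixed estimate.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)       # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return tau2, pooled

# Hypothetical per-study effect estimates and within-study variances.
tau2, pooled = random_effects_meta([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
```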

33,234 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
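LDA's three-level generative story described above, per-document topic proportions drawn from a Dirichlet, then a topic and a word drawn for each token, can be sketched with only the standard library. The topic-word probabilities below are hypothetical toy numbers:

```python
import random

random.seed(0)

def sample_dirichlet(alpha):
    """Dirichlet draw via normalized Gamma variates (stdlib only)."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def generate_document(n_words, topic_word, alpha):
    """LDA's generative process for one document: draw topic
    proportions theta ~ Dirichlet(alpha); for each word position,
    draw a topic z ~ Multinomial(theta), then a word w from that
    topic's word distribution."""
    theta = sample_dirichlet(alpha)
    vocab_size = len(topic_word[0])
    doc = []
    for _ in range(n_words):
        z = random.choices(range(len(theta)), weights=theta)[0]
        w = random.choices(range(vocab_size), weights=topic_word[z])[0]
        doc.append(w)
    return doc

# Two toy topics over a 4-word vocabulary (hypothetical numbers).
topics = [[0.5, 0.5, 0.0, 0.0],
          [0.0, 0.0, 0.5, 0.5]]
doc = generate_document(10, topics, alpha=[0.5, 0.5])
```

Inference (recovering the topics and proportions from observed documents) is the hard part; the paper's variational EM approach goes in the reverse direction of this sampler.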

30,570 citations