Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in topics: Causal inference & Missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include the University of Chicago and Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors derive an adjusted repeated-imputation degrees of freedom, ν̃_m, with the property that, for fixed m and estimated fraction of missing information, the adjusted degrees of freedom increase monotonically in ν_com.
Abstract: An appealing feature of multiple imputation is the simplicity of the rules for combining the multiple complete-data inferences into a final inference, the repeated-imputation inference (Rubin, 1987). This inference is based on a t distribution and is derived from a Bayesian paradigm under the assumption that the complete-data degrees of freedom, ν_com, are infinite, but the number of imputations, m, is finite. When ν_com is small and there is only a modest proportion of missing data, the calculated repeated-imputation degrees of freedom, ν_m, for the t reference distribution can be much larger than ν_com, which is clearly inappropriate. Following the Bayesian paradigm, we derive an adjusted degrees of freedom, ν̃_m, with the following three properties: for fixed m and estimated fraction of missing information, ν̃_m monotonically increases in ν_com; ν̃_m is always less than or equal to ν_com; and ν̃_m equals ν_m when ν_com is infinite. A small simulation study demonstrates the superior frequentist performance when using ν̃_m rather than ν_m.

684 citations
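This adjustment has a short closed form. The sketch below assumes the standard Barnard–Rubin expressions (classical repeated-imputation degrees of freedom (m − 1)/γ̂², an observed-data term ν̂_obs, and their harmonic combination); the function and argument names are illustrative, not the paper's own code.

```python
def adjusted_df(m, gamma_hat, nu_com):
    """Small-sample adjusted degrees of freedom for repeated-imputation
    inference (Barnard-Rubin style combination; names are illustrative).

    m         : number of imputations
    gamma_hat : estimated fraction of missing information
    nu_com    : complete-data degrees of freedom
    """
    # Classical repeated-imputation degrees of freedom (Rubin, 1987),
    # which can greatly exceed nu_com when nu_com is small.
    nu_m = (m - 1) / gamma_hat**2

    # Observed-data component, which is at most nu_com.
    nu_obs = (nu_com + 1) / (nu_com + 3) * nu_com * (1 - gamma_hat)

    # Harmonic-style combination: never exceeds nu_com, increases
    # monotonically in nu_com, and tends to nu_m as nu_com -> infinity.
    return 1.0 / (1.0 / nu_m + 1.0 / nu_obs)

# Example: 5 imputations, 20% missing information, 10 complete-data df.
print(adjusted_df(m=5, gamma_hat=0.2, nu_com=10))
```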

Book ChapterDOI
TL;DR: Propensity score matching as mentioned in this paper is a class of multivariate methods used in comparative studies to construct treated and matched control samples that have similar distributions on many observed covariates; unlike randomization, it cannot balance unobserved covariates.
Abstract: Propensity score matching refers to a class of multivariate methods used in comparative studies to construct treated and matched control samples that have similar distributions on many covariates. This matching is the observational study analog of randomization in ideal experiments, but is far less complete as it can only balance the distribution of observed covariates, whereas randomization balances the distribution of all covariates, both observed and unobserved. An important feature of propensity score matching is that it can be easily combined with model-based regression adjustments or with matching on a subset of special prognostic covariates or combinations of prognostic covariates that have been identified as being especially predictive of the outcome variables. We extend earlier results by developing approximations for the distributions of covariates in matched samples created with linear propensity score methods for the practically important situation where matching uses both the estimat...

679 citations
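Not the chapter's own derivation, but a minimal illustration of matching on the linear propensity score: estimate the score by logistic regression, move to the logit scale, and greedily pair each treated unit with its nearest control without replacement. The 1:1 greedy rule, the simulated data, and all names are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_on_linear_propensity(X, treated):
    """Greedy 1:1 matching of treated units to controls on the logit of
    the estimated propensity score (the 'linear propensity score')."""
    ps_model = LogisticRegression(max_iter=1000).fit(X, treated)
    # Work on the linear predictor (logit) scale rather than the raw
    # probability scale.
    logit_ps = X @ ps_model.coef_.ravel() + ps_model.intercept_[0]

    treated_idx = np.where(treated == 1)[0]
    control_idx = list(np.where(treated == 0)[0])
    pairs = []
    for t in treated_idx:
        j = min(control_idx, key=lambda c: abs(logit_ps[t] - logit_ps[c]))
        pairs.append((t, j))
        control_idx.remove(j)        # match without replacement
    return pairs

# Example with simulated covariates and covariate-dependent assignment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
treated = (rng.random(200) < 1 / (1 + np.exp(1 - X[:, 0]))).astype(int)
print(match_on_linear_propensity(X, treated)[:5])
```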

Journal ArticleDOI
TL;DR: In this paper, the authors consider studies in which assignment to treatment group is made solely on the basis of the value of a covariate, X, and argue that effort should be concentrated on estimating the conditional expectations of the dependent variable Y given X in the treatment and control groups.
Abstract: When assignment to treatment group is made solely on the basis of the value of a covariate, X, effort should be concentrated on estimating the conditional expectations of the dependent variable Y given X in the treatment and control groups. One then averages the difference between these conditional expectations over the distribution of X in the relevant population. There is no need for concern about “other” sources of bias, e.g., unreliability of X, unmeasured background variables. If the conditional expectations are parallel and linear, the proper regression adjustment is the simple covariance adjustment. However, since the quality of the resulting estimates may be sensitive to the adequacy of the underlying model, it is wise to search for nonparallelism and nonlinearity in these conditional expectations. Blocking on the values of X is also appropriate, although the quality of the resulting estimates may be sensitive to the coarseness of the blocking employed. In order for these techniques to be useful in practice, there must be either substantial overlap in the distribution of X in the treatment groups or strong prior information.
INTRODUCTION: In some studies, the experimental units are divided into two treatment groups solely on the basis of a covariate, X. By this we mean that if two units have the same value of X either they both must receive the same treatment or they must be randomly assigned (not necessarily with probability 0.5) to treatments.

676 citations
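A small sketch, with assumed variable names and simulated data, of the estimator the abstract describes: fit the conditional expectation of Y given X separately in each treatment group (so nonparallel lines are allowed) and average the difference of the fitted values over the distribution of X.

```python
import numpy as np

def covariate_adjusted_effect(x, y, z):
    """Average treatment effect when assignment depends only on x.

    x : covariate values
    y : outcomes
    z : treatment indicator (1 = treated, 0 = control)

    Fits separate linear regressions of y on x within each group, allowing
    nonparallel slopes, then averages the difference in predictions over
    the full distribution of x.
    """
    b1 = np.polyfit(x[z == 1], y[z == 1], deg=1)   # treated-group fit
    b0 = np.polyfit(x[z == 0], y[z == 0], deg=1)   # control-group fit
    return np.mean(np.polyval(b1, x) - np.polyval(b0, x))

# Example: assignment depends only on x (a sharp cutoff), true effect = 2.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
z = (x > 0).astype(int)
y = 1.0 + 0.5 * x + 2.0 * z + rng.normal(scale=0.3, size=500)
print(covariate_adjusted_effect(x, y, z))
```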

Journal ArticleDOI
TL;DR: In this paper, the authors argue that observational studies have to be carefully designed to approximate randomized experiments, in particular, without examining any final outcome data, and they use the framework of potential outcomes to define causal effects.
Abstract: For obtaining causal inferences that are objective, and therefore have the best chance of revealing scientific truths, carefully designed and executed randomized experiments are generally considered to be the gold standard. Observational studies, in contrast, are generally fraught with problems that compromise any claim for objectivity of the resulting causal inferences. The thesis here is that observational studies have to be carefully designed to approximate randomized experiments, in particular, without examining any final outcome data. Often a candidate data set will have to be rejected as inadequate because of lack of data on key covariates, or because of lack of overlap in the distributions of key covariates between treatment and control groups, often revealed by careful propensity score analyses. Sometimes the template for the approximating randomized experiment will have to be altered, and the use of principal stratification can be helpful in doing this. These issues are discussed and illustrated using the framework of potential outcomes to define causal effects, which greatly clarifies critical issues.

640 citations
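The design step emphasized here, assessing covariate overlap and balance between treatment groups before any outcome data are examined, can be illustrated with simple outcome-free diagnostics. This sketch (standardized mean differences plus the range of estimated propensity scores, with assumed names and simulated data) is one such check, not the paper's own procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def design_diagnostics(X, treated):
    """Outcome-free design checks: per-covariate standardized mean
    differences and the range of estimated propensity scores per group."""
    Xt, Xc = X[treated == 1], X[treated == 0]
    pooled_sd = np.sqrt((Xt.var(axis=0) + Xc.var(axis=0)) / 2)
    smd = (Xt.mean(axis=0) - Xc.mean(axis=0)) / pooled_sd

    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    overlap = (ps[treated == 1].min(), ps[treated == 1].max(),
               ps[treated == 0].min(), ps[treated == 0].max())
    return smd, overlap

# Large |SMD| or non-overlapping propensity-score ranges would be grounds
# for trimming or rejecting the candidate data set before outcomes are seen.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
treated = (rng.random(300) < 1 / (1 + np.exp(1 - X[:, 0]))).astype(int)
smd, overlap = design_diagnostics(X, treated)
print(np.round(smd, 2), np.round(overlap, 2))
```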

Journal ArticleDOI
TL;DR: In this paper, the authors present EM algorithms for maximum likelihood factor analysis for both exploratory and confirmatory models; the algorithm is essentially the same in both cases and involves only simple least squares regression operations, and the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors.
Abstract: The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.

608 citations
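A compact sketch of the EM iteration for the exploratory factor model x = Λz + ε with z ~ N(0, I_q) and diagonal Ψ, using only regression-style matrix operations. For brevity it inverts the p × p matrix ΛΛ' + Ψ directly, although (as the abstract notes) the essential inversion can be reduced to q × q. Names, the fixed iteration count, and the random start are mine.

```python
import numpy as np

def em_factor_analysis(S, q, n_iter=500):
    """EM for the factor model x = Lambda z + eps, z ~ N(0, I_q),
    eps ~ N(0, Psi) with Psi diagonal; S is the p x p sample covariance."""
    p = S.shape[0]
    rng = np.random.default_rng(0)
    Lam = rng.normal(scale=0.1, size=(p, q))     # random starting loadings
    Psi = np.diag(S).copy()                      # starting uniquenesses
    for _ in range(n_iter):
        # E-step: posterior regression of factors on observations.
        Sigma = Lam @ Lam.T + np.diag(Psi)
        Beta = Lam.T @ np.linalg.inv(Sigma)               # q x p
        Czz = np.eye(q) - Beta @ Lam + Beta @ S @ Beta.T  # avg E[zz' | x]
        Cxz = S @ Beta.T                                  # avg E[xz' | x]
        # M-step: least-squares-style updates; the essential inverse here
        # is the q x q matrix Czz.
        Lam = Cxz @ np.linalg.inv(Czz)
        Psi = np.diag(S - Lam @ Cxz.T)
    return Lam, Psi

# Example on the sample covariance of simulated data, q = 2 factors.
data = np.random.default_rng(1).normal(size=(200, 6))
Lam, Psi = em_factor_analysis(np.cov(data, rowvar=False), q=2)
print(Lam.shape, Psi.shape)
```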


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represent such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.

50,607 citations
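The formula-plus-data workflow described here is specific to lmer in R's lme4 package. As a loose analog only (an assumption, not lme4 itself), the Python sketch below uses statsmodels' MixedLM, which likewise takes a formula for the fixed effects, a grouping factor for random intercepts, and maximizes the REML criterion by default.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed toy data: response y, fixed-effect covariate x, grouping factor g
# for random intercepts (roughly analogous to the lmer formula y ~ x + (1 | g)).
df = pd.DataFrame({
    "y": [2.1, 2.9, 3.8, 1.5, 2.2, 3.1, 2.4, 3.3, 4.0],
    "x": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "g": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
})

model = smf.mixedlm("y ~ x", data=df, groups=df["g"])
result = model.fit(reml=True)   # REML criterion, as in lmer's default
print(result.summary())
```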

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials, in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.

33,234 citations
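The "simple noniterative procedure" referred to in this summary is commonly known as the DerSimonian–Laird method-of-moments estimator of the between-study variance; the sketch below implements that standard computation under illustrative names and made-up inputs.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis using the noniterative
    DerSimonian-Laird moment estimator of between-study variance."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)       # pooled effect
    se = np.sqrt(1.0 / np.sum(w_star))             # its standard error
    return mu, se, tau2

# Example: pooled effect across five hypothetical trials.
print(dersimonian_laird([0.2, 0.5, -0.1, 0.3, 0.4],
                        [0.04, 0.05, 0.06, 0.03, 0.05]))
```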

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.

30,570 citations
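A toy sketch of the three-level generative process the abstract describes: per-document topic proportions drawn from a Dirichlet, then a topic and a word drawn for each token. The dimensions, names, and hyperparameters are made up, and the variational inference side of the paper is not shown.

```python
import numpy as np

def generate_lda_corpus(n_docs=5, doc_len=20, n_topics=3, vocab_size=10,
                        alpha=0.5, seed=0):
    """Draw a toy corpus from the LDA generative model."""
    rng = np.random.default_rng(seed)
    # One multinomial over the vocabulary per topic.
    beta = rng.dirichlet(np.ones(vocab_size), size=n_topics)
    corpus = []
    for _ in range(n_docs):
        theta = rng.dirichlet(alpha * np.ones(n_topics))   # document's topic mixture
        words = []
        for _ in range(doc_len):
            z = rng.choice(n_topics, p=theta)              # topic for this token
            w = rng.choice(vocab_size, p=beta[z])          # word from that topic
            words.append(w)
        corpus.append(words)
    return corpus, beta

docs, beta = generate_lda_corpus()
print(docs[0])
```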