Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in topics: Causal inference & Missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include University of Chicago & Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors present a rejoinder on Causal Inference Through Potential Outcomes and Principal Stratification: Application to Studies with "Censoring" Due to Death by D. B. Rubin [math.ST/0612783].
Abstract: Rejoinder on Causal Inference Through Potential Outcomes and Principal Stratification: Application to Studies with "Censoring" Due to Death by D. B. Rubin [math.ST/0612783].

4 citations

Book ChapterDOI
01 Apr 2015
TL;DR: In this article, the authors discuss three key notions underlying causal inference: potential outcomes, the utility of the related stability assumption, and the central role of the assignment mechanism, which is crucial for inferring causal effects.
Abstract: INTRODUCTION. In this introductory chapter we set out our basic framework for causal inference. We discuss three key notions underlying our approach. The first notion is that of potential outcomes, each corresponding to one of the levels of a treatment or manipulation, following the dictum "no causation without manipulation" (Rubin, 1975, p. 238). Each of these potential outcomes is a priori observable, in the sense that it could be observed if the unit were to receive the corresponding treatment level. But, a posteriori, that is, once a treatment is applied, at most one potential outcome can be observed. Second, we discuss the necessity, when drawing causal inferences, of observing multiple units, and the utility of the related stability assumption, which we use throughout most of this book to exploit the presence of multiple units. Finally, we discuss the central role of the assignment mechanism, which is crucial for inferring causal effects, and which serves as the organizing principle for this book. POTENTIAL OUTCOMES. In everyday life, causal language is widely used in an informal way. One might say: "My headache went away because I took an aspirin," or "She got a good job last year because she went to college," or "She has long hair because she is a girl." Such comments are typically informed by observations on past exposures, for example, of headache outcomes after taking aspirin or not, or of characteristics of jobs of people with or without college educations, or the typical hair length of boys and girls. As such, these observations generally involve informal statistical analyses, drawing conclusions from associations between measurements of different quantities that vary from individual to individual, commonly called variables or random variables – language apparently first used by Yule (1897). Nevertheless, statistical theory has been relatively silent on questions of causality. Many, especially older, textbooks avoid any mention of the term other than in settings of randomized experiments.
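The chapter's framing can be made concrete numerically. The following is a minimal sketch, not taken from the chapter, that assumes made-up potential outcomes and a simple randomized assignment mechanism; it illustrates that only one potential outcome per unit is observed once treatment is assigned:

```python
# Minimal illustrative sketch (not from the chapter): each unit carries two
# potential outcomes, but after treatment assignment only one is observed.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical potential outcomes for n units (both are "a priori observable").
y0 = rng.normal(loc=10.0, scale=2.0, size=n)   # outcome if untreated
y1 = y0 + 1.5                                  # outcome if treated (true effect 1.5)

# A randomized assignment mechanism: treatment with probability 1/2.
w = rng.integers(0, 2, size=n)

# A posteriori, at most one potential outcome per unit is observed.
y_obs = np.where(w == 1, y1, y0)

true_ace = np.mean(y1 - y0)  # average causal effect (never observable in practice)
naive_diff = y_obs[w == 1].mean() - y_obs[w == 0].mean()

print(f"true average causal effect: {true_ace:.2f}")
print(f"difference in observed means: {naive_diff:.2f}")
```

Under randomization the observed difference in means is an unbiased estimate of the average causal effect, which is why the assignment mechanism plays the central role the abstract describes.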

4 citations

Book ChapterDOI
01 Jan 2000
TL;DR: What do a group of statisticians have to contribute to the subject of good costing practices, and in particular the United States Postal Service costing methodology?
Abstract: What do a group of statisticians have to contribute to the subject of good costing practices, and in particular the United States Postal Service (USPS) costing methodology? Credible cost estimates require the collection of high quality information as component inputs. Deciding what information to collect lies in the province of the economist, but exactly how to collect that information, when to collect it, how much of it to collect, as well as a significant part of the overall evaluation of the quality of the information itself, lies in the province of the statistician.

4 citations

Posted Content
TL;DR: The use of contrast-specific propensity scores (CSPS) is proposed, which allows the creation of treatment groups of units that are balanced with respect to bifurcations of the specified contrasts and the multivariate space spanned by these bifurcations.
Abstract: Basic propensity score methodology is designed to balance multivariate pre-treatment covariates when comparing one active treatment with one control treatment. Practical settings often involve comparing more than two treatments, where more complicated contrasts than the basic treatment-control one, (1, -1), are relevant. Here, we propose the use of contrast-specific propensity scores (CSPS). CSPS allow the creation of treatment groups of units that are balanced with respect to bifurcations of the specified contrasts and the multivariate space spanned by them.
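For orientation, here is a rough sketch of the basic two-treatment propensity-score workflow that the abstract takes as its starting point. It is not an implementation of the contrast-specific method proposed in the paper, and the covariates, model, and stratification scheme are illustrative assumptions:

```python
# Sketch of the basic two-treatment propensity-score setup (not the CSPS method):
# estimate e(x) = P(treated | covariates) with logistic regression and compare
# covariate means within strata of the estimated score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                              # pre-treatment covariates
p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
w = rng.binomial(1, p_treat)                             # treatment indicator

# Estimated propensity scores.
e_hat = LogisticRegression().fit(x, w).predict_proba(x)[:, 1]

# Stratify on the estimated score and check covariate balance within strata.
strata = np.digitize(e_hat, np.quantile(e_hat, [0.2, 0.4, 0.6, 0.8]))
for s in range(5):
    in_s = strata == s
    diff = x[in_s & (w == 1)].mean(axis=0) - x[in_s & (w == 0)].mean(axis=0)
    print(f"stratum {s}: mean covariate differences {np.round(diff, 3)}")
```

Small within-stratum differences indicate the covariate balance that propensity-score methods aim for; the paper's contribution is extending this balancing idea from the single treatment-control contrast to more general contrasts among several treatments.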

4 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms, and the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
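The paper itself documents R's lmer function. As a loose analogue in Python, the sketch below fits a random-intercept mixed model with statsmodels; the formula, column names, and simulated data are illustrative assumptions, not material from the paper:

```python
# Rough Python analogue of the model class described above: a linear mixed model
# with one fixed effect and a random intercept per group, fit by REML.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
groups = np.repeat(np.arange(30), 20)                   # 30 groups, 20 obs each
group_effect = rng.normal(scale=1.0, size=30)[groups]   # random intercepts
x = rng.normal(size=groups.size)
y = 2.0 + 0.5 * x + group_effect + rng.normal(scale=0.5, size=groups.size)

df = pd.DataFrame({"y": y, "x": x, "group": groups})

# Fixed effect for x, random intercept for each group; REML is the default criterion.
model = smf.mixedlm("y ~ x", data=df, groups=df["group"])
result = model.fit()
print(result.summary())
```

As in lmer, the formula and data determine the model's numerical representation, and the fitting routine optimizes the (profiled) REML criterion to produce the variance-component and fixed-effect estimates.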

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials intended to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
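The noniterative procedure referred to here is, in broad outline, a moment-based random-effects calculation. The sketch below shows one standard version of such a calculation on made-up study estimates; it is an illustration of the general idea, not code or data from the paper:

```python
# Standard moment-based random-effects meta-analysis sketch: estimate the
# between-study variance noniteratively, then pool the study effects.
import numpy as np

theta = np.array([0.30, 0.10, 0.45, 0.20, 0.05])   # per-study effect estimates (made up)
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])       # per-study sampling variances (made up)

w = 1.0 / v                                         # fixed-effect weights
theta_fe = np.sum(w * theta) / np.sum(w)            # fixed-effect pooled estimate
Q = np.sum(w * (theta - theta_fe) ** 2)             # heterogeneity statistic

k = theta.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance

w_re = 1.0 / (v + tau2)                             # random-effects weights
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"tau^2 = {tau2:.4f}, pooled effect = {theta_re:.3f} (SE {se_re:.3f})")
```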

33,234 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
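As a usage-level illustration only, the sketch below fits a small LDA model with scikit-learn's off-the-shelf implementation; the corpus is made up, and the code is unrelated to the paper's own variational-EM derivation:

```python
# Minimal LDA usage sketch: documents become word-count vectors, and the fitted
# model represents each document as a mixture over latent topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "aspirin relieved the headache after treatment",
    "the trial randomized patients to aspirin or placebo",
    "topic models represent documents as mixtures of topics",
    "latent dirichlet allocation is a hierarchical bayesian model",
]

# Each document becomes a vector of word counts (the "collection of discrete data").
X = CountVectorizer().fit_transform(docs)

# Fit a 2-topic model; doc_topic holds each document's mixture over topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)
print(doc_topic.round(2))
```

The per-document topic proportions printed at the end are the "explicit representation of a document" that the abstract refers to.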

30,570 citations