Author
Donald B. Rubin
Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in topics: Causal inference & Missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include University of Chicago & Harvard University.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the authors address the problem of calculating propensity scores when covariates can have missing values; in such cases, the pattern of missing covariates can itself be prognostically important.
Abstract: Investigators in observational studies have no control over treatment assignment. As a result, large differences can exist between the treatment and control groups on observed covariates, which can lead to badly biased estimates of treatment effects. Propensity score methods are an increasingly popular method for balancing the distribution of the covariates in the two groups to reduce this bias; for example, using matching or subclassification, sometimes in combination with model-based adjustment. To estimate propensity scores, which are the conditional probabilities of being treated given a vector of observed covariates, we must model the distribution of the treatment indicator given these observed covariates. Much work has been done in the case where covariates are fully observed. We address the problem of calculating propensity scores when covariates can have missing values. In such cases, which commonly arise in practice, the pattern of missing covariates can be prognostically important, and ...
294 citations
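The idea of letting the missingness pattern enter the propensity model can be sketched with the simplest device: a missingness-indicator encoding plus an ordinary logistic regression. This is an illustration, not the paper's own (more general) method; the data, encoding, and learning rate below are all hypothetical.

```python
import math, random

random.seed(0)

# Hypothetical data: one covariate x that is sometimes missing, where both
# x and the missingness pattern itself affect the chance of treatment.
n = 500
rows = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    missing = random.random() < 0.3
    logit = -0.2 + (1.0 if missing else 0.8 * x)
    treated = random.random() < 1 / (1 + math.exp(-logit))
    rows.append((None if missing else x, int(treated)))

# Encode each unit as (intercept, x filled with 0, missingness indicator),
# so the fitted propensity model can exploit the pattern of missingness.
def features(x):
    return [1.0, 0.0 if x is None else x, 1.0 if x is None else 0.0]

# Plain gradient ascent on the logistic log-likelihood.
w = [0.0, 0.0, 0.0]
for _ in range(1000):
    grad = [0.0, 0.0, 0.0]
    for x, t in rows:
        f = features(x)
        p = 1 / (1 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
        for j in range(3):
            grad[j] += (t - p) * f[j]
    w = [wi + 0.5 * g / n for wi, g in zip(w, grad)]

def propensity(x):
    z = sum(wi * fi for wi, fi in zip(w, features(x)))
    return 1 / (1 + math.exp(-z))

print([round(wi, 2) for wi in w])
```

The fitted coefficient on the missingness indicator captures the prognostic value of the pattern itself; units with missing x get a different estimated probability of treatment than any observed x alone would give.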
TL;DR: A practical example shows that the bias due to incomplete matching can be severe and, moreover, can be avoided entirely by using an appropriate multivariate nearest available matching algorithm, which, in the example, leaves only a small residual bias due to inexact matching.
Abstract: Observational studies comparing groups of treated and control units are often used to estimate the effects caused by treatments. Matching is a method for sampling a large reservoir of potential controls to produce a control group of modest size that is ostensibly similar to the treated group. In practice, there is a trade-off between the desires to find matches for all treated units and to obtain matched treated-control pairs that are extremely similar to each other. We derive expressions for the bias in the average matched pair difference due to (1) the failure to match all treated units—incomplete matching, and (2) the failure to obtain exact matches—inexact matching. A practical example shows that the bias due to incomplete matching can be severe, and moreover, can be avoided entirely by using an appropriate multivariate nearest available matching algorithm, which in the example, leaves only a small residual bias due to inexact matching.
283 citations
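The trade-off between incomplete and inexact matching can be made concrete with a greedy "nearest available" pass on a scalar score (e.g. a propensity score): every treated unit is matched, so there is no incomplete-matching bias, and only the small gaps within pairs remain as inexact-matching bias. The scores below are hypothetical.

```python
# Greedy nearest-available matching: each treated unit, in turn, takes the
# closest control that has not already been used.
def nearest_available_match(treated, controls):
    """Return (treated_score, matched_control_score) pairs."""
    available = list(controls)
    pairs = []
    for t in treated:
        best = min(available, key=lambda c: abs(c - t))
        available.remove(best)          # each control is used at most once
        pairs.append((t, best))
    return pairs

treated = [0.8, 0.5, 0.9]
controls = [0.1, 0.45, 0.55, 0.85, 0.95]
pairs = nearest_available_match(treated, controls)
print(pairs)
```

If the control reservoir were smaller than the treated group, some treated units would have to be dropped, and the estimate would incur the incomplete-matching bias the paper warns about.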
277 citations
TL;DR: In this paper, the authors illustrate Bayesian and empirical Bayesian techniques that can be used to summarize the evidence in such data about differences among treatments, thereby obtaining improved estimates of the treatment effect in each experiment, including the one having the largest observed effect.
Abstract: Many studies comparing new treatments to standard treatments consist of parallel randomized experiments. In the example considered here, randomized experiments were conducted in eight schools to determine the effectiveness of special coaching programs for the SAT. The purpose here is to illustrate Bayesian and empirical Bayesian techniques that can be used to help summarize the evidence in such data about differences among treatments, thereby obtaining improved estimates of the treatment effect in each experiment, including the one having the largest observed effect. Three main tools are illustrated: 1) graphical techniques for displaying sensitivity within an empirical Bayes framework, 2) simple simulation techniques for generating Bayesian posterior distributions of individual effects and the largest effect, and 3) methods for monitoring the adequacy of the Bayesian model specification by simulating the posterior predictive distribution in hypothetical replications of the same treatments in the same eig...
263 citations
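The simulation technique described in the abstract can be sketched for the hierarchical normal model: draw the between-school standard deviation tau from its marginal posterior on a grid, then the common mean mu, then each school effect. The estimates and standard errors below are the eight-schools values as reprinted in later textbook treatments of this example; the flat priors and grid are simplifying assumptions.

```python
import math, random

random.seed(1)

# Estimated coaching effects and standard errors for the eight schools.
y     = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]
sigma = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]

# Model: y_j ~ N(theta_j, sigma_j^2), theta_j ~ N(mu, tau^2),
# flat priors on mu and tau.
def log_post_tau(tau):
    """Log marginal posterior of tau, plus the conditional mean/var of mu."""
    prec = [1.0 / (s * s + tau * tau) for s in sigma]
    v_mu = 1.0 / sum(prec)
    mu_hat = v_mu * sum(p * yj for p, yj in zip(prec, y))
    lp = 0.5 * math.log(v_mu)
    for yj, p in zip(y, prec):
        lp += 0.5 * math.log(p) - 0.5 * p * (yj - mu_hat) ** 2
    return lp, mu_hat, v_mu

grid = [0.01 + 0.2 * i for i in range(150)]          # tau grid over (0, 30)
lps = [log_post_tau(t)[0] for t in grid]
m = max(lps)
weights = [math.exp(lp - m) for lp in lps]

def draw_thetas():
    tau = random.choices(grid, weights)[0]           # tau | y
    _, mu_hat, v_mu = log_post_tau(tau)
    mu = random.gauss(mu_hat, math.sqrt(v_mu))       # mu | tau, y
    thetas = []
    for yj, s in zip(y, sigma):                      # theta_j | mu, tau, y
        v = 1.0 / (1.0 / s**2 + 1.0 / tau**2)
        mean = v * (yj / s**2 + mu / tau**2)
        thetas.append(random.gauss(mean, math.sqrt(v)))
    return thetas

draws = [draw_thetas() for _ in range(2000)]
post_mean_A = sum(d[0] for d in draws) / len(draws)
print(round(post_mean_A, 1))
```

The posterior draws exhibit the shrinkage the paper emphasizes: the school with the largest observed effect (28 points) has a posterior mean far below its raw estimate, because the data are consistent with modest between-school variation.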
TL;DR: The validation method is shown to find errors in software when they exist and, moreover, the validation output can be informative about the nature and location of such errors.
Abstract: This article presents a simulation-based method designed to establish the computational correctness of software developed to fit a specific Bayesian model, capitalizing on properties of Bayesian posterior distributions. We illustrate the validation technique with two examples. The validation method is shown to find errors in software when they exist and, moreover, the validation output can be informative about the nature and location of such errors. We also compare our method with that of an earlier approach.
262 citations
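The validation logic can be sketched in miniature: repeatedly draw a parameter from its prior, simulate data, run the software to get posterior draws, and record where the true parameter ranks among them. For correct software these ranks are uniform; a bug shows up as a distorted rank distribution. Here a conjugate normal-normal sampler stands in for "the software", and the injected bug (posterior too narrow) is of course an assumption for illustration.

```python
import math, random

random.seed(2)

PRIOR_SD, NOISE_SD, N_DATA, N_POST = 1.0, 1.0, 5, 100

def posterior_draws(data, bug=False):
    # Exact conjugate normal-normal posterior, standing in for the software
    # under test; the "bug" deliberately understates posterior uncertainty.
    prec = 1 / PRIOR_SD**2 + len(data) / NOISE_SD**2
    mean = (sum(data) / NOISE_SD**2) / prec
    sd = math.sqrt(1 / prec)
    if bug:
        sd *= 0.3
    return [random.gauss(mean, sd) for _ in range(N_POST)]

def rank_statistics(bug, reps=500):
    # One replication: theta ~ prior, data ~ model(theta), then record the
    # fraction of posterior draws falling below the true theta.
    ranks = []
    for _ in range(reps):
        theta = random.gauss(0.0, PRIOR_SD)
        data = [random.gauss(theta, NOISE_SD) for _ in range(N_DATA)]
        ranks.append(sum(d < theta for d in posterior_draws(data, bug)) / N_POST)
    return ranks

def tail_fraction(ranks):
    # Should be near 0.10 for correct software; inflated when the software
    # understates uncertainty, since theta then lands in the tails too often.
    return sum(r < 0.05 or r > 0.95 for r in ranks) / len(ranks)

good = tail_fraction(rank_statistics(bug=False))
bad = tail_fraction(rank_statistics(bug=True))
print(good, bad)
```

The direction of the distortion is also diagnostic, matching the abstract's point that the output can be informative about the nature of an error: too many extreme ranks suggests the posterior is too narrow, too few suggests it is too wide.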
Cited by
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
50,607 citations
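The criterion lmer evaluates can be sketched for the simplest case, a balanced random-intercept model, where each group's marginal covariance is compound symmetric and the deviance has a closed form with the fixed effect profiled out. This is a Python sketch of the idea only, not lme4's implementation; the simulated data and grid search (standing in for lme4's constrained optimizer) are illustrative assumptions.

```python
import math, random

random.seed(3)

# Simulate balanced data from a random-intercept model:
#   y_ij = mu + b_i + e_ij,  b_i ~ N(0, tau^2),  e_ij ~ N(0, sigma^2)
MU, TAU, SIGMA, GROUPS, PER_GROUP = 5.0, 2.0, 1.0, 40, 8
data = []
for _ in range(GROUPS):
    b = random.gauss(0.0, TAU)
    data.append([MU + b + random.gauss(0.0, SIGMA) for _ in range(PER_GROUP)])

def deviance(sigma2, tau2):
    """-2 x marginal log-likelihood with the fixed effect mu profiled out.

    Each group's marginal covariance is sigma2*I + tau2*J (compound
    symmetry), whose determinant and quadratic form are closed-form."""
    n = PER_GROUP
    grand = sum(sum(g) for g in data) / (GROUPS * n)   # GLS estimate of mu
    dev = GROUPS * n * math.log(2 * math.pi)
    for g in data:
        ybar = sum(g) / n
        ss_within = sum((v - ybar) ** 2 for v in g)
        dev += (n - 1) * math.log(sigma2) + math.log(sigma2 + n * tau2)
        dev += ss_within / sigma2 + n * (ybar - grand) ** 2 / (sigma2 + n * tau2)
    return dev

# Crude grid search over the two variance components, standing in for the
# constrained optimizer that lmer applies to the profiled criterion.
best = min(((s * 0.1, t * 0.1) for s in range(1, 60) for t in range(1, 100)),
           key=lambda p: deviance(*p))
print(best)   # should land near (SIGMA**2, TAU**2) = (1.0, 4.0)
```

The point of the sketch is structural: once the fixed effects (and here the scale) can be profiled out, only the covariance parameters remain to be optimized numerically, which is what makes the lmer approach fast and stable.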
49,597 citations
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
38,208 citations
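The "hierarchy of concepts" idea can be shown in the smallest possible network: hidden units that compute two simple concepts (OR, AND) and an output unit that combines them into a more complicated one (XOR) that no single-layer unit could represent. The weights here are fixed by hand for illustration rather than learned.

```python
# A two-layer threshold network computing XOR: the hidden layer represents
# simple concepts (OR, AND) and the output combines them ("OR and not AND").
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)       # fires if either input is on
    h_and = step(x1 + x2 - 1.5)       # fires only if both inputs are on
    return step(h_or - h_and - 0.5)   # OR but not AND  ->  XOR

truth_table = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(truth_table)   # [0, 1, 1, 0]
```

Deep learning stacks many such layers and learns the weights from data, but the compositional principle is the same: complicated concepts built out of simpler ones.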
TL;DR: This paper examines eight published reviews, each reporting results from several related trials, in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
33,234 citations
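The noniterative procedure can be sketched as a method-of-moments random-effects calculation in the DerSimonian-Laird style: compute Cochran's Q from the fixed-effect fit, convert its excess over its degrees of freedom into a between-trial variance estimate, and re-pool with the inflated weights. The per-trial estimates and variances below are hypothetical.

```python
# Hypothetical per-trial effect estimates y and within-trial variances v.
y = [0.50, 0.10, 0.65, -0.20, 0.25]
v = [0.02, 0.03, 0.05, 0.04, 0.01]
k = len(y)

# Fixed-effect (inverse-variance) pooling.
w = [1 / vi for vi in v]
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Cochran's Q measures between-trial variation in excess of chance;
# its expected value under homogeneity is k - 1.
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

# Random-effects pooling: each trial's variance is inflated by tau2.
w_star = [1 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
print(round(tau2, 4), round(pooled, 4))
```

Because no iteration is involved, the whole calculation is a handful of weighted sums, which is what makes the procedure practical for summarizing many reviews at once.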
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
30,570 citations
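The three-level generative process described in the abstract can be simulated directly on a toy vocabulary: each document draws its own topic mixture theta from a Dirichlet, and each word first draws a topic z from theta, then a word from that topic's distribution. The vocabulary, topic-word distributions, and alpha below are hypothetical; inference (the paper's variational EM) is not shown.

```python
import random

random.seed(4)

VOCAB = ["gene", "dna", "cell", "ball", "goal", "team"]
TOPICS = [                                 # hypothetical word distributions
    [0.40, 0.35, 0.20, 0.02, 0.02, 0.01],  # a "biology" topic
    [0.01, 0.02, 0.02, 0.35, 0.30, 0.30],  # a "sports" topic
]
ALPHA = [0.5, 0.5]                         # Dirichlet hyperparameter

def dirichlet(alpha):
    # Normalized gamma draws give a Dirichlet sample.
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def categorical(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def generate_document(n_words):
    theta = dirichlet(ALPHA)               # per-document topic mixture
    words = []
    for _ in range(n_words):
        z = categorical(theta)             # topic for this word position
        words.append(VOCAB[categorical(TOPICS[z])])
    return theta, words

theta, doc = generate_document(10)
print(theta, doc)
```

This is the "finite mixture over topics" structure in miniature: theta is the explicit document representation the abstract refers to, and the learning problem LDA solves is recovering TOPICS and the per-document thetas from the words alone.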