Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in topics: Causal inference & Missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include University of Chicago & Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this article, a simple mathematical model for causal inference is proposed; from this perspective, the resolution of Lord's Paradox has two aspects, the first being that the descriptive, non-causal conclusions of the two hypothetical statisticians are both correct.
Abstract: Lord's Paradox is analyzed in terms of a simple mathematical model for causal inference. The resolution of Lord's Paradox from this perspective has two aspects. First, the descriptive, non-causal conclusions of the two hypothetical statisticians are both correct. They appear contradictory only because they describe quite different aspects of the data. Second, the causal inferences of the statisticians are neither correct nor incorrect since they are based on different assumptions that our mathematical model makes explicit but neither assumption can be tested using the data set that is described in the example. We identify these differing assumptions and show how each may be used to justify the differing causal conclusions of the two statisticians. In addition to analyzing the classic “diet” example which Lord used to introduce his paradox, we also examine three other examples that appear in the three papers where Lord discusses the paradox and related matters.

130 citations
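As a concrete illustration of the two statisticians' analyses (not code from the paper), the sketch below simulates Lord's "diet" setting and computes both the difference in mean weight gains and the covariance-adjusted (ANCOVA-style) group comparison; the group sizes, weights, and noise levels are all invented.

```python
# Illustrative only -- a simulation of Lord's "diet" example, with made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 500
g = np.repeat([0, 1], n)                      # group indicator (the two dining halls / sexes)
mu = np.where(g == 0, 150.0, 170.0)           # group-specific mean weight
sept = rng.normal(loc=mu, scale=10.0)         # September (initial) weights
# June weights: each student regresses halfway toward their group mean, so
# neither group gains weight on average -- the setting Lord described.
june = mu + 0.5 * (sept - mu) + rng.normal(scale=8.0, size=2 * n)

# Statistician 1: difference in mean weight gains between the groups.
gain_diff = (june - sept)[g == 1].mean() - (june - sept)[g == 0].mean()

# Statistician 2: ANCOVA -- regress June weight on September weight and group.
X = np.column_stack([np.ones(2 * n), sept, g])
beta, *_ = np.linalg.lstsq(X, june, rcond=None)

print(f"difference in mean gains:         {gain_diff:+.2f}")
print(f"covariance-adjusted group effect: {beta[2]:+.2f}")
```

On data like these the gain-score contrast is near zero while the covariance-adjusted contrast is not; both are correct descriptions of the same data, they simply answer different questions, which is the paper's point about the two statisticians' descriptive conclusions.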

Journal ArticleDOI
TL;DR: In this paper, the authors developed a likelihood-based approach to estimating the wage effect of the US federally-funded Job Corps training program using principal stratification, and formulated the estimands in terms of the effect of the training program on wages.
Abstract: Government-sponsored job-training programs must be subject to evaluation to assess whether their effectiveness justifies their cost to the public. The evaluation usually focuses on employment and total earnings, although the effect on wages is also of interest, because this effect reflects the increase in human capital due to the training program, whereas the effect on total earnings may be simply reflecting the increased likelihood of employment without any effect on wage rates. Estimating the effects of training programs on wages is complicated by the fact that, even in a randomized experiment, wages are “truncated” (or less accurately “censored”) by nonemployment, that is, they are only observed and well-defined for individuals who are employed. In this article, we develop a likelihood-based approach to estimate the wage effect of the US federally-funded Job Corps training program using “Principal Stratification”. Our estimands are formulated in terms of: (1) the effect of the training program on wages...

130 citations
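The toy simulation below is not the paper's likelihood-based estimator; it only illustrates why wages "truncated" by nonemployment make the naive comparison of employed treated versus employed controls misleading, and why a wage effect is well defined only for the "always-employed" principal stratum. The strata proportions, wage levels, and training effect are all invented.

```python
# A toy simulation (not the paper's likelihood-based method): bias from comparing
# observed wages of employed treated vs. employed controls when employment itself
# responds to training.  All parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Principal strata defined by potential employment statuses (monotonicity assumed):
#   AE = always-employed, IN = employed only if trained, NE = never-employed.
strata = rng.choice(["AE", "IN", "NE"], size=n, p=[0.5, 0.3, 0.2])
z = rng.integers(0, 2, size=n)                 # randomized training assignment

# Potential log-wages, well-defined only when employed.  The always-employed earn
# more on average; training raises their wages by 0.10 (the target estimand here).
w0 = np.where(strata == "AE", rng.normal(2.5, 0.3, n), rng.normal(2.1, 0.3, n))
w1 = w0 + np.where(strata == "AE", 0.10, 0.02)

employed = (strata == "AE") | ((strata == "IN") & (z == 1))
wage = np.where(z == 1, w1, w0)                # observed wage, defined if employed

naive = wage[employed & (z == 1)].mean() - wage[employed & (z == 0)].mean()
truth = (w1 - w0)[strata == "AE"].mean()

print(f"naive employed-vs-employed contrast: {naive:+.3f}")
print(f"true effect for the always-employed: {truth:+.3f}")
```

The naive contrast is pulled downward because the employed treated group also contains the lower-wage, training-induced stratum; the paper's approach targets the always-employed stratum directly, which this sketch only motivates.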

Journal ArticleDOI
TL;DR: In this article, the authors extend the usual approach to the assessment of test or rater reliability to situations that have previously not been appropriate for the application of this standard (Spearman-Brown) approach.
Abstract: The authors extend the usual approach to the assessment of test or rater reliability to situations that have previously not been appropriate for the application of this standard (Spearman-Brown) approach. Specifically, the authors (a) provide an accurate overall estimate of the reliability of a test (or a panel of raters) comprising 2 or more different kinds of items (or raters), a quite common situation, and (b) provide a simple procedure for constructing the overall instrument when it comprises 2 or more kinds of items, judges, or raters, each with its own costs and its own reliabilities.

126 citations
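For readers unfamiliar with the standard approach being extended, the sketch below shows the classic Spearman-Brown prophecy formula and the reliability of a composite of two different kinds of items under the usual assumption of uncorrelated errors; the numerical values are invented, and the paper's exact procedure for choosing the mix is not reproduced here.

```python
# Building blocks behind the extension described above: the classic Spearman-Brown
# formula and a standard composite-reliability calculation.  Values are invented.

def spearman_brown(r1: float, k: float) -> float:
    """Reliability of a test lengthened by a factor k, given single-unit reliability r1."""
    return k * r1 / (1.0 + (k - 1.0) * r1)

def composite_reliability(var1, rel1, var2, rel2, cov12):
    """Reliability of X1 + X2, assuming the two components' error terms are uncorrelated."""
    error_var = var1 * (1.0 - rel1) + var2 * (1.0 - rel2)
    total_var = var1 + var2 + 2.0 * cov12
    return 1.0 - error_var / total_var

# One rater with reliability 0.60, versus a panel of 4 such raters:
print(round(spearman_brown(0.60, 4), 3))                      # ~0.857

# A composite of a multiple-choice section and an essay section (toy values):
print(round(composite_reliability(25.0, 0.80, 16.0, 0.70, 10.0), 3))
```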

Journal ArticleDOI
TL;DR: The authors argue that a shift in focus from actual traits to perceptions of them can address both of these issues while facilitating articulation of other critical concepts, particularly the timing of treatment assignment.
Abstract: Despite their ubiquity, observational studies to infer the causal effect of a so-called immutable characteristic, such as race or sex, have struggled for coherence, given the unavailability of a manipulation analogous to a “treatment” in a randomized experiment and the danger of posttreatment bias. We demonstrate that a shift in focus from actual traits to perceptions of them can address both of these issues while facilitating articulation of other critical concepts, particularly the timing of treatment assignment. We illustrate concepts by discussing the designs of various studies of the role of race in trial court death penalty decisions.

122 citations

Journal ArticleDOI
TL;DR: Three statistical models are developed for multiply imputing the missing values of airborne particulate matter and it is expected that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
Abstract: Summary. Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.

121 citations
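A minimal sketch of the general workflow (not the paper's three models): draw each below-detection-limit value from a fitted lognormal truncated at the detection limit, repeat to create several completed data sets, and combine the resulting estimates with Rubin's rules. The data, detection limit, and number of imputations below are invented.

```python
# Minimal multiple-imputation sketch for values below a detection limit,
# assuming a lognormal concentration model.  Simulated data, not the Alert series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

true = rng.lognormal(mean=0.0, sigma=1.0, size=300)   # "true" concentrations
dl = 0.5                                              # detection limit
obs = np.where(true >= dl, true, np.nan)              # below-DL values are unobserved
censored = np.isnan(obs)

# Rough lognormal fit from the above-DL values only; a real analysis would use a
# censored-data likelihood or a Bayesian model here instead.
mu, sigma = np.log(obs[~censored]).mean(), np.log(obs[~censored]).std()
b = (np.log(dl) - mu) / sigma                         # standardized truncation point

m = 20
estimates, variances = [], []
for _ in range(m):
    # Draw each censored value from the fitted lognormal, truncated above at the DL.
    draws = np.exp(stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                                       size=int(censored.sum()), random_state=rng))
    completed = obs.copy()
    completed[censored] = draws
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / completed.size)

# Rubin's combining rules for the mean concentration.
qbar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1.0 + 1.0 / m) * between
print(f"mean concentration: {qbar:.3f} (se {np.sqrt(total_var):.3f})")
```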


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.

50,607 citations
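lme4 and lmer are R software; as a rough analogue only (not lmer itself), the snippet below fits a comparable linear mixed-effects model from a formula in Python using statsmodels' MixedLM. The data frame and column names are hypothetical.

```python
# Not lme4 -- a rough Python analogue: a random-intercept model fit by REML
# from a formula, using statsmodels' MixedLM.  Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
groups = np.repeat(np.arange(30), 10)                 # 30 subjects, 10 observations each
x = rng.normal(size=groups.size)
u = rng.normal(scale=0.8, size=30)[groups]            # random intercept per subject
y = 1.0 + 2.0 * x + u + rng.normal(scale=0.5, size=groups.size)
df = pd.DataFrame({"y": y, "x": x, "subject": groups})

# Fixed effect for x, random intercept for each subject (REML is the default).
model = smf.mixedlm("y ~ x", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```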

Book
18 Nov 2016
TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations
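As a minimal illustration of the "hierarchy of concepts" idea described above (not code from the book), the snippet below runs a forward pass through a small feedforward network in NumPy, where each layer builds a representation out of the previous one; the weights are random, so only the layered structure is shown, not a trained model.

```python
# A tiny feedforward network, illustrating concepts built out of simpler ones.
import numpy as np

rng = np.random.default_rng(4)

def relu(a):
    return np.maximum(a, 0.0)

x = rng.normal(size=(8, 32))                  # a batch of 8 raw input vectors
W1, b1 = rng.normal(size=(32, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 16)) * 0.1, np.zeros(16)
W3, b3 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)

h1 = relu(x @ W1 + b1)                        # simple features of the raw input
h2 = relu(h1 @ W2 + b2)                       # features built from features
logits = h2 @ W3 + b3                         # task-level outputs (3 classes)
print(logits.shape)                           # (8, 3)
```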

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.

33,234 citations
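The simple noniterative procedure referred to here is commonly known as the DerSimonian-Laird random-effects estimator; the sketch below implements the standard method-of-moments formula on invented study effects and variances.

```python
# DerSimonian-Laird random-effects meta-analysis on invented study-level data.
import numpy as np

y = np.array([0.10, -0.05, 0.25, 0.15, 0.02])     # per-study treatment effects
v = np.array([0.02, 0.03, 0.015, 0.05, 0.01])     # their within-study variances

w = 1.0 / v                                        # fixed-effect weights
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar) ** 2)                    # Cochran's heterogeneity statistic
k = y.size

# Method-of-moments estimate of the between-study variance tau^2.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)                          # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"tau^2 = {tau2:.4f}, pooled effect = {pooled:.3f} (se {se:.3f})")
```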

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.

30,570 citations
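Not the paper's own variational-EM implementation: the snippet below simply fits an LDA topic model to a toy corpus with scikit-learn's LatentDirichletAllocation to show the model in use; the documents and number of topics are invented.

```python
# Fitting an LDA topic model to a toy corpus with scikit-learn.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat with another cat",
    "dogs and cats are common household pets",
    "the stock market fell as investors sold shares",
    "bond yields and stock prices moved in opposite directions",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)                  # document-term count matrix
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
for k, topic in enumerate(lda.components_):
    top_words = vocab[topic.argsort()[::-1][:4]]  # highest-weight words per topic
    print(f"topic {k}: {', '.join(top_words)}")
```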