Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in topics: Causal inference & Missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include University of Chicago & Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the installation, commissioning, and characterization of the new injection kicker system in the Muon g−2 Experiment (E989) at Fermilab are described.
Abstract: We describe the installation, commissioning, and characterization of the new injection kicker system in the Muon g−2 Experiment (E989) at Fermilab, which makes a precision measurement of the muon magnetic anomaly. Three Blumlein pulsers drive each of the 1.27-m-long non-ferric kicker magnets, which reside in a storage ring vacuum (SRV) that is subjected to a 1.45 T magnetic field. The new system has been redesigned relative to Muon g−2's predecessor experiment, and we present those details in this manuscript.

2 citations

Journal ArticleDOI
TL;DR: The authors generalize Rubin's method of least squares estimation of missing values in any analysis of variance; the method produces not only least squares estimates of all parameters and the residual mean square, but also the correct least squares standard error and t-test of any contrast, as well as the least squares sum of squares and F-test due to any collection of contrasts.
Abstract: This article generalizes Rubin's method of least squares estimation of missing values in any analysis of variance. The general method produces not only least squares estimates of all parameters and the residual mean square, but also the correct least squares standard error and t-test of any contrast, as well as the least squares sum of squares and F-test due to any collection of contrasts. The method is noniterative and requires only those subroutines designed to handle complete data plus a subroutine to find the inverse of an m × m symmetric matrix, where m is the number of missing values.
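The noniterative fill has a compact linear-algebra form: if H is the hat matrix of the complete-data design, the least squares estimates of the m missing values satisfy a zero-residual condition that needs exactly one m × m symmetric inverse. Below is a minimal numpy sketch of that characterization (illustrative names, not the paper's code), assuming a full-rank design matrix for the complete layout:

```python
import numpy as np

def fill_missing_anova(X, y, miss_idx):
    """Least squares estimates of missing responses (hedged sketch).

    X        : (n, p) full-rank design matrix of the complete layout
    y        : (n,) responses; values at missing positions are ignored
    miss_idx : indices of the m missing observations
    """
    n = len(y)
    obs_idx = np.setdiff1d(np.arange(n), miss_idx)
    # Hat matrix of the complete-data design: H = X (X'X)^{-1} X'
    H = X @ np.linalg.solve(X.T @ X, X.T)
    # Zero-residual condition at the missing cells:
    #   y_m = H_mo y_o + H_mm y_m  =>  (I - H_mm) y_m = H_mo y_o
    A = np.eye(len(miss_idx)) - H[np.ix_(miss_idx, miss_idx)]  # m x m, symmetric
    b = H[np.ix_(miss_idx, obs_idx)] @ y[obs_idx]
    return np.linalg.solve(A, b)
```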

2 citations

Proceedings ArticleDOI
TL;DR: In this article, a network causal inference framework is proposed for influence estimation on social media networks and applied to the real-world problem of characterizing active influence operations on Twitter during the 2017 French presidential elections.
Abstract: Estimating influence on social media networks is an important practical and theoretical problem, especially because this new medium is widely exploited as a platform for disinformation and propaganda. This paper introduces a novel approach to influence estimation on social media networks and applies it to the real-world problem of characterizing active influence operations on Twitter during the 2017 French presidential elections. The new influence estimation approach attributes impact by accounting for narrative propagation over the network using a network causal inference framework applied to data arising from graph sampling and filtering. This causal framework infers the difference in outcome as a function of exposure, in contrast to existing approaches that attribute impact to activity volume or topological features, which neither explicitly measure nor necessarily indicate actual network influence. Cramér-Rao estimation bounds are derived for parameter estimation as a step in the causal analysis, and used to gain geometric insight into the causal inference problem. The ability to infer high causal influence is demonstrated on real-world social media accounts that are later independently confirmed to be either directly affiliated or correlated with foreign influence operations using evidence supplied by the U.S. Congress and journalistic reports.
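The core contrast, difference in outcome as a function of exposure, can be illustrated with a toy estimator. The sketch below (hypothetical names, far simpler than the paper's actual framework) compares narrative-adoption rates between network-exposed and unexposed accounts:

```python
import numpy as np

def exposure_contrast(adj, shared_early, shared_late):
    """Toy exposure-vs-outcome contrast on a network (illustrative only).

    adj          : (n, n) 0/1 adjacency matrix from graph sampling
    shared_early : (n,) array, 1 if the account pushed the narrative early
    shared_late  : (n,) array, 1 if the account pushed it in a later window
    """
    exposed = (adj @ shared_early) > 0  # has at least one early-sharing neighbor
    # Difference in later adoption rate between exposed and unexposed accounts
    return shared_late[exposed].mean() - shared_late[~exposed].mean()
```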

2 citations

01 Jan 2012
TL;DR: The authors present graphical displays, based on the tipping-point analysis first introduced in Yan et al. (2009), that help visualize the results of a set of sensitivity analyses for missing outcomes in studies that compare two treatments.
Abstract: Assumptions about the missingness mechanism often cannot be assessed empirically, which calls for sensitivity analyses. However, few studies with missing values are subjected to such analyses, due to the lack of clear guidelines on a systematic exploration of alternative assumptions and the difficulty of formulating plausible missing not at random (MNAR) models. We present graphical displays, based on the "tipping-point" analysis first introduced in Yan et al. (2009), that help us visualize the results of a set of sensitivity analyses for missing outcomes in studies that compare two treatments. The resulting "enhanced tipping-point displays" provide compact, simultaneous summaries of the conclusions drawn under different alternative assumptions about the missingness mechanism. A recent use of these enhanced displays in a medical device clinical trial helped lead to FDA approval.
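The computation underlying a tipping-point display is mechanical: sweep assumed values for the missing outcomes in each arm and record where the study conclusion flips. A hedged Python sketch of such a grid, assuming a simple two-sample setting with a t-test (the paper's displays summarize grids of this kind):

```python
import numpy as np
from scipy import stats

def tipping_point_grid(x_a, x_b, n_miss_a, n_miss_b, deltas):
    """p-values for a two-sample t-test under every MNAR fill-in (sketch).

    x_a, x_b           : observed outcomes in the two treatment arms
    n_miss_a, n_miss_b : counts of missing outcomes per arm
    deltas             : grid of shifts applied to the arm means when
                         imputing the missing values
    """
    grid = np.empty((len(deltas), len(deltas)))
    for i, da in enumerate(deltas):
        for j, db in enumerate(deltas):
            fill_a = np.concatenate([x_a, np.full(n_miss_a, x_a.mean() + da)])
            fill_b = np.concatenate([x_b, np.full(n_miss_b, x_b.mean() + db)])
            grid[i, j] = stats.ttest_ind(fill_a, fill_b).pvalue
    return grid  # cells where p crosses 0.05 mark the tipping points
```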

2 citations

Posted Content
TL;DR: In this article, the authors propose leveraging principal component analysis (PCA) to identify proper subspaces in which Mahalanobis distance should be calculated; this can effectively reduce the dimensionality in high-dimensional cases while capturing most of the information in the covariates.
Abstract: Mahalanobis distance between treatment group and control group covariate means is often adopted as a balance criterion when implementing a rerandomization strategy. However, this criterion may not work well for high-dimensional cases because it balances all orthogonalized covariates equally. Here, we propose leveraging principal component analysis (PCA) to identify proper subspaces in which Mahalanobis distance should be calculated. Not only can PCA effectively reduce the dimensionality for high-dimensional cases while capturing most of the information in the covariates, but it also provides computational simplicity by focusing on the top orthogonal components. We show that our PCA rerandomization scheme has desirable theoretical properties on balancing covariates and thereby on improving the estimation of average treatment effects. We also show that this conclusion is supported by numerical studies using both simulated and real examples.
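A minimal sketch of the scheme, under the assumption of a simple two-group design (function name and acceptance rule are illustrative, not the authors' code): project the covariates onto the top k principal components and redraw assignments until the Mahalanobis distance between group means in that subspace falls below a threshold.

```python
import numpy as np

def pca_rerandomize(X, n_treat, k, threshold, seed=0):
    """Rerandomization on the top-k principal components (sketch).

    X         : (n, p) covariate matrix
    n_treat   : number of treated units
    k         : number of leading PCs to balance
    threshold : accept an assignment when the Mahalanobis distance
                between group means in PC space is below this value
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    Z = Xc @ Vt[:k].T                                  # top-k PC scores
    cov_inv = np.linalg.inv(np.atleast_2d(np.cov(Z, rowvar=False)))
    n = len(X)
    while True:  # redraw assignments until the balance criterion is met
        treat = np.zeros(n, dtype=bool)
        treat[rng.choice(n, n_treat, replace=False)] = True
        diff = Z[treat].mean(axis=0) - Z[~treat].mean(axis=0)
        if diff @ cov_inv @ diff < threshold:
            return treat
```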

2 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
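The same formula-driven workflow exists outside R. As a hedged illustration (using statsmodels rather than lme4, with made-up toy data), a random-intercept model fit by REML looks like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: a response measured over days, nested within subjects
df = pd.DataFrame({
    "reaction": [250.0, 261.5, 310.2, 298.7, 271.3, 305.9,
                 240.1, 255.8, 290.4],
    "days":     [0, 1, 2, 0, 1, 2, 0, 1, 2],
    "subject":  ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
})

# Fixed effect of days, random intercept per subject, REML criterion,
# in the spirit of lmer(reaction ~ days + (1 | subject)) in lme4
fit = smf.mixedlm("reaction ~ days", df, groups=df["subject"]).fit(reml=True)
print(fit.summary())
```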

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials, in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
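A sketch of one such noniterative, moment-based estimator (the DerSimonian and Laird form; it is an assumption here that this is the procedure the summary refers to), given per-study effect estimates y with within-study variances v:

```python
import numpy as np

def random_effects_pool(y, v):
    """Noniterative moment-based random-effects pooling (sketch).

    y : numpy array of per-study treatment effect estimates
    v : numpy array of per-study within-study variances
    """
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    Q = np.sum(w * (y - y_fixed) ** 2)          # heterogeneity statistic
    k = len(y)
    # Moment estimate of the between-study variance, truncated at zero
    tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```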

33,234 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
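As a hedged illustration of fitting such a model in practice (scikit-learn's online variational implementation, not the authors' code, on a made-up four-document corpus):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny hypothetical corpus; real applications use thousands of documents
docs = [
    "muon storage ring magnet field",
    "missing data imputation variance analysis",
    "magnet field kicker pulse vacuum",
    "imputation sensitivity missing outcomes",
]

counts = CountVectorizer().fit_transform(docs)  # bag-of-words count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # per-document topic mixtures
print(doc_topics.round(2))
```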

30,570 citations