scispace - formally typeset
Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research in the topics of causal inference and missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262632 citations. Previous affiliations of Donald B. Rubin include University of Chicago & Harvard University.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors employ empirical Bayes techniques to obtain admitting equations that are better than the least squares admitting equations in two ways: for each law school, the empirical Bayes admitting equations are more stable over time than the least squares admitting equations, and they predict student performance more accurately.
Abstract: The law school validity studies are primarily concerned with the prediction of first-year average in law school from Law School Aptitude Test score and undergraduate grade point average. Traditionally, a separate admitting equation is estimated in each law school by the method of least squares based on data from students who attended the law school in recent years. These least squares equations can fluctuate rather wildly from year to year. This study employs empirical Bayes techniques to obtain admitting equations that are better than the least squares admitting equations in two ways: for each law school, the empirical Bayes admitting equations are more stable in time than the least squares admitting equations; and the empirical Bayes admitting equations predict student performance more accurately than the least squares admitting equations.
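The stabilization described above can be illustrated with a small numpy sketch: noisy per-school least squares slope estimates are shrunk toward the pooled mean, weighted by the estimated between-school variance. The data, variances, and shrinkage form here are hypothetical and much simpler than the paper's full empirical Bayes regression model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each school's true slope varies around 0.5,
# and its least squares (OLS) estimate adds sampling noise.
n_schools = 20
true_slopes = rng.normal(loc=0.5, scale=0.1, size=n_schools)
sampling_var = 0.04  # assumed sampling variance of each OLS estimate
ols_slopes = true_slopes + rng.normal(scale=np.sqrt(sampling_var), size=n_schools)

# Empirical Bayes shrinkage toward the grand mean: the shrinkage factor
# is the estimated fraction of observed variance that is real signal.
grand_mean = ols_slopes.mean()
between_var = max(ols_slopes.var(ddof=1) - sampling_var, 0.0)
shrink = between_var / (between_var + sampling_var)  # 0 = full pooling, 1 = none
eb_slopes = grand_mean + shrink * (ols_slopes - grand_mean)

# The shrunken estimates are never more variable than the raw OLS estimates,
# which is the source of the year-to-year stability noted in the abstract.
```

Because the shrinkage factor lies in [0, 1], the empirical Bayes estimates compress the spread of the raw estimates, trading a little bias for a large reduction in variance.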

195 citations

Journal ArticleDOI
TL;DR: Multiple imputation is applied to a demographic data set with coarse age measurements for Tanzanian children using a simple naive model and a new, relatively complex model that relates true age to the observed values of heaped age, sex, and anthropometric variables.
Abstract: Multiple imputation is applied to a demographic data set with coarse age measurements for Tanzanian children. The heaped ages are multiply imputed with plausible true ages using (a) a simple naive model and (b) a new, relatively complex model that relates true age to the observed values of heaped age, sex, and anthropometric variables. The imputed true ages are used to create valid inferences under the models and compare inferences across models, thereby revealing sensitivity of inferences to prior specifications, from naive to complex. In addition, diagnostic analyses applied to the imputed data are used to suggest which models appear most appropriate. Because it is not clear just what set of heaping intervals should be used, the models are applied under various assumptions about the heaping: rounding (to the nearest year or half year) versus a combination of rounding and truncation as practiced in the United States, and medium versus wide heaping interval sizes. The most striking conclusions ar...
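The mechanics of multiple imputation under the "simple naive model" (a) can be sketched in a few lines: draw several plausible true ages uniformly within each heaping interval, then combine the per-imputation estimates with Rubin's combining rules. The ages, interval width, and number of imputations below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heaped ages in years, rounded to the nearest half year.
heaped = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 3.0, 3.5])

m = 5  # number of imputations
estimates, variances = [], []
for _ in range(m):
    # Naive model: true age uniform within the half-year heaping interval.
    true_age = heaped + rng.uniform(-0.25, 0.25, size=heaped.size)
    estimates.append(true_age.mean())
    variances.append(true_age.var(ddof=1) / true_age.size)

# Rubin's combining rules: total variance is within-imputation variance
# plus (1 + 1/m) times the between-imputation variance.
q_bar = float(np.mean(estimates))
within = float(np.mean(variances))
between = float(np.var(estimates, ddof=1))
total_var = within + (1 + 1 / m) * between
```

The between-imputation term is what propagates uncertainty about the unknown true ages into the final inference, which a single imputation would ignore.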

193 citations

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the survey data under the missing-at-random assumption for the missing responses, obtaining remarkably accurate estimates of the eventual plebiscite outcome, substantially better than ad hoc methods and a nonignorable model that allows nonresponse to depend on the intended vote.
Abstract: The critical step in the drive toward an independent Slovenia was the plebiscite held in December 1990, at which the citizens of Slovenia voted overwhelmingly in favor of a sovereign and independent state. The Slovenian Public Opinion (SPO) survey of November/December 1990 was used by the government of Slovenia to prepare for the plebiscite. Because the plebiscite counted as “YES voters” only those voters who attended and voted for independence (nonvoters counted as “NO voters”), “Don't Know” survey responses can be thought of as missing data—the true intention of the voter is unknown but must be either “YES” or “NO.” An analysis of the survey data under the missing-at-random assumption for the missing responses provides remarkably accurate estimates of the eventual plebiscite outcome, substantially better than ad hoc methods and a nonignorable model that allows nonresponse to depend on the intended vote.
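In its simplest form, the missing-at-random treatment of "Don't Know" answers splits them in the same proportion as the observed YES/NO answers. The counts below are hypothetical, not the actual SPO survey data, and the sketch ignores the attendance dimension of the real analysis.

```python
# Hypothetical survey counts (not the actual Slovenian Public Opinion data).
yes, no, dont_know = 2100, 300, 600

# Missing at random: "Don't Know" responses are allocated in the same
# YES/NO proportion as the observed responses.
p_yes_observed = yes / (yes + no)
yes_total = yes + dont_know * p_yes_observed

# Estimated share of YES voters among all respondents.
estimate = yes_total / (yes + no + dont_know)
```

A nonignorable model would instead let the YES probability among the "Don't Know" group differ from the observed proportion; the paper's finding is that this extra flexibility did not improve accuracy here.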

191 citations

Journal ArticleDOI
TL;DR: Monte Carlo methods are used to study the ability of nearest available Mahalanobis metric matching to make the means of matching variables more similar in matched samples than in random samples.
Abstract: Monte Carlo methods are used to study the ability of nearest available Mahalanobis metric matching to make the means of matching variables more similar in matched samples than in random samples.
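Nearest available matching is a greedy procedure: each treated unit takes its closest remaining control under the Mahalanobis metric, and that control leaves the pool. A minimal sketch with simulated two-dimensional covariates (all sizes and distributions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical covariates: treated units are shifted relative to controls.
treated = rng.normal(loc=0.3, size=(20, 2))
control = rng.normal(loc=0.0, size=(100, 2))

# Mahalanobis metric based on the pooled sample covariance.
pooled = np.vstack([treated, control])
cov_inv = np.linalg.inv(np.cov(pooled, rowvar=False))

def mahalanobis(x, y):
    d = x - y
    return float(d @ cov_inv @ d)

# Nearest available matching: each treated unit is matched to its closest
# remaining control, which is then removed from the available pool.
available = list(range(len(control)))
matches = []
for t in treated:
    j = min(available, key=lambda i: mahalanobis(t, control[i]))
    matches.append(j)
    available.remove(j)

matched_means = control[matches].mean(axis=0)
```

Because each treated unit pulls in a nearby control, the matched control means tend to sit closer to the treated means than a random control sample of the same size, which is the balance property the Monte Carlo study quantifies.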

188 citations

Book ChapterDOI
TL;DR: This presentation provides a brief overview of the Bayesian approach to the estimation of causal effects of treatments based on the concept of potential outcomes.
Abstract: A central problem in statistics is how to draw inferences about the causal effects of treatments (i.e., interventions) from randomized and nonrandomized data. For example, does the new job-training program really improve the quality of jobs for those trained, or does exposure to that chemical in drinking water increase cancer rates? This presentation provides a brief overview of the Bayesian approach to the estimation of such causal effects based on the concept of potential outcomes.
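The potential-outcomes framing can be made concrete with a simulation: every unit has two potential outcomes, treatment reveals exactly one of them, and randomization makes the simple difference in means an unbiased estimate of the average causal effect. The outcome model and effect size below are hypothetical, and this frequentist sketch stands in for the paper's Bayesian estimation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical potential outcomes: Y(0) under control, Y(1) under treatment.
n = 1000
y0 = rng.normal(loc=10.0, scale=2.0, size=n)
y1 = y0 + 1.5  # assumed constant unit-level causal effect of 1.5

# Randomized assignment reveals one potential outcome per unit;
# the other remains missing -- the "fundamental problem" of causal inference.
z = rng.integers(0, 2, size=n)
y_obs = np.where(z == 1, y1, y0)

# Under randomization, the difference in observed group means estimates
# the average causal effect.
ate_hat = y_obs[z == 1].mean() - y_obs[z == 0].mean()
```

A Bayesian analysis, as in the chapter, would instead place a model on the joint distribution of the potential outcomes and impute the missing ones, but the estimand is the same.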

188 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms, and the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
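The paper describes lmer's profiled-deviance machinery in R; as a much smaller numpy-only illustration of the same variance decomposition, a balanced one-way random-intercept model can be estimated by the classical ANOVA method of moments. The simulated data and the moment estimator here are assumptions for illustration, not lme4's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated balanced random-intercept data (hypothetical):
# y_ij = mu + b_i + e_ij, with b_i ~ N(0, tau^2) and e_ij ~ N(0, sigma^2).
n_groups, n_per = 30, 10
mu, tau, sigma = 5.0, 1.0, 0.5
b = rng.normal(scale=tau, size=n_groups)
y = mu + np.repeat(b, n_per) + rng.normal(scale=sigma, size=n_groups * n_per)
Y = y.reshape(n_groups, n_per)  # groups are contiguous by construction

# ANOVA method of moments: within-group mean square estimates sigma^2;
# the excess of the between-group mean square estimates n_per * tau^2.
msw = Y.var(axis=1, ddof=1).mean()
msb = n_per * Y.mean(axis=1).var(ddof=1)
sigma2_hat = msw
tau2_hat = max((msb - msw) / n_per, 0.0)
```

lmer generalizes far beyond this balanced case (crossed and nested effects, random slopes, unbalanced data) by optimizing the profiled deviance or REML criterion numerically rather than relying on closed-form moments.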

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews each reporting results from several related trials in order to evaluate the efficacy of a certain treatment for a specified medical condition and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
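The "simple noniterative procedure" referred to above is the DerSimonian-Laird moment estimator of the between-study variance, computed from Cochran's Q statistic. A sketch with hypothetical study-level effect estimates and sampling variances:

```python
import numpy as np

# Hypothetical study-level effect estimates and their sampling variances.
effects = np.array([0.30, 0.10, 0.45, 0.20, 0.05, 0.35])
variances = np.array([0.04, 0.02, 0.09, 0.03, 0.02, 0.06])

# Fixed-effect (inverse-variance) pooled estimate and Cochran's Q.
w = 1.0 / variances
theta_fe = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - theta_fe) ** 2)

# DerSimonian-Laird moment estimator of the between-study variance tau^2,
# truncated at zero; no iteration is required.
k = len(effects)
tau2 = max((Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)), 0.0)

# Random-effects pooled estimate with the updated weights.
w_re = 1.0 / (variances + tau2)
theta_re = np.sum(w_re * effects) / np.sum(w_re)
```

When tau^2 is zero the random-effects estimate reduces to the fixed-effect one; when it is large, the weights flatten and small studies count nearly as much as large ones.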

33,234 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
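LDA with variational inference, as described in the abstract, is available off the shelf; a minimal sketch using scikit-learn (whose `LatentDirichletAllocation` implements a variational Bayes scheme in the spirit of the paper's approximate inference) on a toy corpus. The corpus and topic count are hypothetical.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# A toy corpus (hypothetical); real applications use thousands of documents.
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stocks fell as markets closed",
    "investors sold shares in the market",
]

# LDA operates on word counts (the bag-of-words representation).
counts = CountVectorizer().fit_transform(docs)

# Fit a two-topic model by variational inference.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
# Each row of doc_topics is a document's approximate topic distribution.
```

Each document is thus represented as a mixture over topics, which is the explicit low-dimensional representation the abstract contrasts with the mixture-of-unigrams and probabilistic LSI models.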

30,570 citations