Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher from Tsinghua University. The author has contributed to research on topics including causal inference and missing data. The author has an h-index of 132 and has co-authored 515 publications receiving 262,632 citations. Previous affiliations of Donald B. Rubin include the University of Chicago and Harvard University.


Papers
Other
29 Sep 2014

55 citations

Journal Article
TL;DR: In this article, two extensions of the general location model are obtained: the first replaces the common covariance matrix with different but proportional covariance matrices, with proportionality constants to be estimated; the second replaces the multivariate normal distributions with multivariate t distributions.
Abstract: The general location model (Olkin & Tate, 1961; Krzanowski, 1980, 1982; Little & Schluchter, 1985) has categorical variables marginally distributed as a multinomial and continuous variables conditionally normally distributed with different means across cells defined by the categorical variables but a common covariance matrix across cells. Two extensions of the general location model are obtained. The first replaces the common covariance matrix with different but proportional covariance matrices, where the proportionality constants are to be estimated. The second replaces the multivariate normal distributions of the first extension with multivariate t distributions, where the degrees of freedom can also vary across cells and are to be estimated. The t distribution is just one example of more general ellipsoidally symmetric distributions that can be used in place of the normal. These extensions can provide more accurate fits to real data and can be viewed as tools for robust inference. Moreover, the models can be very useful for multiple imputation of ignorable missing values. Maximum likelihood estimation using the AECM algorithm (Meng & van Dyk, 1997) is presented, as is a monotone-data Gibbs sampling scheme for drawing parameters and missing values from their posterior distributions. To illustrate the techniques, a numerical example is presented.
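The baseline structure described above (multinomial cell probabilities, cell-specific normal means, a covariance matrix shared across cells) can be sketched compactly. Below is a minimal Python sketch of complete-data maximum likelihood for that baseline general location model only; it does not implement the paper's proportional-covariance or t extensions, the AECM algorithm, or the Gibbs sampler, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def fit_general_location(cells, Y):
    """Complete-data ML fit of the baseline general location model:
    categorical cell labels are multinomial, and the continuous variables
    are normal with cell-specific means and a common covariance matrix.
    cells : (n,) integer array of cell labels
    Y     : (n, p) array of continuous measurements
    """
    n, p = Y.shape
    labels = np.unique(cells)
    pi = {c: np.mean(cells == c) for c in labels}          # multinomial cell probabilities
    mu = {c: Y[cells == c].mean(axis=0) for c in labels}   # cell-specific mean vectors
    # pooled (common) covariance: deviations about each cell's own mean
    resid = np.vstack([Y[cells == c] - mu[c] for c in labels])
    sigma = resid.T @ resid / n                            # ML estimate (divide by n)
    return pi, mu, sigma

# small illustration with simulated data
rng = np.random.default_rng(0)
cells = rng.integers(0, 3, size=500)
true_mu = np.array([[0.0, 0.0], [2.0, 1.0], [-1.0, 3.0]])
Y = true_mu[cells] + rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=500)
pi, mu, sigma = fit_general_location(cells, Y)
print(pi, mu, sigma, sep="\n")
```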

54 citations

Journal Article
TL;DR: This work shows, using an extensive simulation, that the new 'multiple-imputation using two subclassification splines' method appears to be the most efficient of the methods considered and has coverage levels closest to nominal; it can also estimate finite population average causal effects as well as non-linear causal estimands.
Abstract: Estimation of causal effects in non-randomized studies comprises two distinct phases: design, without outcome data, and analysis of the outcome data according to a specified protocol. Recently, Gutman and Rubin (2013) proposed a new analysis-phase method for estimating treatment effects when the outcome is binary and there is only one covariate, which viewed causal effect estimation explicitly as a missing data problem. Here, we extend this method to situations with continuous outcomes and multiple covariates and compare it with other commonly used methods (such as matching, subclassification, weighting, and covariance adjustment). We show, using an extensive simulation, that of all methods considered, and in many of the experimental conditions examined, our new 'multiple-imputation using two subclassification splines' method appears to be the most efficient and has coverage levels that are closest to nominal. In addition, it can estimate finite population average causal effects as well as non-linear causal estimands. This type of analysis also allows the identification of subgroups of units for which the effect appears to be especially beneficial or harmful.
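For context on the comparison methods named above, here is a minimal Python sketch of plain propensity-score subclassification, one of the standard alternatives the paper compares against; it is not the 'multiple-imputation using two subclassification splines' procedure itself, and the interface (function name, scikit-learn propensity model, five subclasses) is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def subclassification_ate(X, w, y, n_subclasses=5):
    """Average treatment effect via propensity-score subclassification.
    X : (n, k) covariates, w : (n,) 0/1 treatment, y : (n,) outcome
    """
    # estimated propensity scores from a logistic regression
    e = LogisticRegression(max_iter=1000).fit(X, w).predict_proba(X)[:, 1]
    edges = np.quantile(e, np.linspace(0, 1, n_subclasses + 1))
    ate, n = 0.0, len(y)
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi == edges[-1]:
            idx = (e >= lo) & (e <= hi)   # include upper edge in last subclass
        else:
            idx = (e >= lo) & (e < hi)
        treated = idx & (w == 1)
        control = idx & (w == 0)
        if treated.sum() == 0 or control.sum() == 0:
            continue  # skip subclasses lacking either group
        diff = y[treated].mean() - y[control].mean()
        ate += diff * idx.sum() / n       # weight by subclass size
    return ate
```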

54 citations

Posted Content
TL;DR: The authors surveyed people who played the lottery in the mid-1980s and estimate the effect of lottery winnings on their subsequent earnings, labor supply, consumption, and savings.
Abstract: Knowledge of the effect of unearned income on economic behavior of individuals in general, and on labor supply in particular, is of great importance to policy makers. Estimation of income effects, however, is a difficult problem because income is not randomly assigned and exogenous changes in income are difficult to identify. Here we exploit the randomized assignment of large amounts of money over long periods of time through lotteries. We carried out a survey of people who played the lottery in the mid-eighties and estimated the effect of lottery winnings on their subsequent earnings, labor supply, consumption, and savings. We find that winning a modest prize ($15,000 per year for twenty years) does not affect labor supply or earnings substantially. Winning such a prize does not considerably reduce savings. Winning a much larger prize ($80,000 rather than $15,000 per year) reduces labor supply as measured by hours, as well as participation and social security earnings; elasticities for hours and earnings are around -0.20 and for participation around -0.14. Winning a large versus modest amount also leads to increased expenditures on cars and larger home values, although mortgage values appear to increase by approximately the same amount. Winning $80,000 increases overall savings, although savings in retirement accounts are not significantly affected. The results do not vary much by gender, age, or prior employment status. There is some evidence that for those with zero earnings prior to winning the lottery there is a positive effect of winning a small prize on subsequent labor market participation.

54 citations


Cited by
Journal Article
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
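As a rough Python analogue of the formula-driven workflow described above (and explicitly not lme4 or lmer itself), the statsmodels MixedLM interface fits a linear mixed model from a formula, a data frame, and a grouping factor; the simulated data and variable names below are illustrative.

```python
# Illustrative Python analogue of the formula-based mixed-model workflow;
# this uses statsmodels, not R's lme4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = np.repeat(np.arange(30), 10)                 # 30 groups, 10 observations each
x = rng.normal(size=300)
group_effect = rng.normal(scale=0.8, size=30)[groups]  # random intercept per group
y = 1.0 + 2.0 * x + group_effect + rng.normal(scale=0.5, size=300)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

# fixed effect for x, random intercept for each level of g; REML is the default
model = smf.mixedlm("y ~ x", data=df, groups=df["g"])
result = model.fit()
print(result.summary())
```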

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
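As a very small illustration of the 'hierarchy of concepts' idea, where each layer builds its representation out of the previous layer's simpler one, here is a forward pass of a plain deep feedforward network in numpy; the layer widths, activation, and random weights are arbitrary choices made for this sketch, not anything prescribed by the book.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, weights, biases):
    """Forward pass of a plain deep feedforward network: each layer builds
    a new representation from the previous one (many layers deep)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                   # hidden layers: linear map + nonlinearity
    return h @ weights[-1] + biases[-1]       # linear output layer

rng = np.random.default_rng(0)
sizes = [4, 16, 16, 3]                        # arbitrary layer widths for illustration
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
print(forward(rng.normal(size=(5, 4)), weights, biases).shape)   # (5, 3)
```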

38,208 citations

Journal Article
TL;DR: This paper examines eight published reviews, each reporting results from several related trials that evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
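The noniterative procedure referred to above is of the moment-based random-effects kind; the sketch below implements the widely used DerSimonian-Laird-style estimator as an illustration of that class of procedure, under the assumption that study-level effect estimates and within-study variances are available. It is an illustrative sketch, not a transcription of the paper's exact formulas.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """Noniterative moment-based random-effects summary of study-level
    treatment effects (DerSimonian-Laird-style).
    effects   : per-study effect estimates
    variances : per-study within-study variances
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

# toy example: three studies with effect estimates and within-study variances
print(random_effects_meta([0.3, 0.1, 0.5], [0.04, 0.09, 0.06]))
```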

33,234 citations

Journal Article
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
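To make the generative story concrete, here is a minimal numpy sketch that samples toy documents from the three-level process described above (document-level topic proportions from a Dirichlet, a topic drawn per word, a word drawn from that topic); it illustrates only the generative model, not the variational inference or EM procedures, and the function names are assumptions.

```python
import numpy as np

def sample_lda_corpus(n_docs, doc_len, alpha, beta, rng=None):
    """Draw documents from the LDA generative process:
    theta ~ Dirichlet(alpha) per document; for each word, a topic
    z ~ Categorical(theta), then a word ~ Categorical(beta[z]).
    alpha : (K,) Dirichlet parameter over topics
    beta  : (K, V) per-topic word distributions (rows sum to 1)
    """
    if rng is None:
        rng = np.random.default_rng()
    K, V = beta.shape
    corpus = []
    for _ in range(n_docs):
        theta = rng.dirichlet(alpha)                   # topic mixture for this document
        z = rng.choice(K, size=doc_len, p=theta)       # topic assignment per word
        words = [rng.choice(V, p=beta[k]) for k in z]  # word drawn from its topic
        corpus.append(words)
    return corpus

# toy example: 3 topics over a 10-word vocabulary
rng = np.random.default_rng(0)
beta = rng.dirichlet(np.ones(10), size=3)
docs = sample_lda_corpus(n_docs=2, doc_len=8, alpha=np.full(3, 0.5), beta=beta, rng=rng)
print(docs)
```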

30,570 citations