Author

Donald B. Rubin

Other affiliations: University of Chicago, Harvard University, Princeton University
Bio: Donald B. Rubin is an academic researcher at Tsinghua University. He has contributed to research topics including causal inference and missing data, has an h-index of 132, and has co-authored 515 publications receiving 262,632 citations. His previous affiliations include the University of Chicago and Harvard University.


Papers
Journal ArticleDOI
TL;DR: Argues for a categorical conceptualisation of temperamental characteristics, applies a finite mixture model reflecting this view to two sets of longitudinal observations of infants and young children, and provides a good description of the observed predictive relation between behavioural profiles of children at 4 months and the degree of behavioural signs of fear at 14 months.
Abstract: Temperamental characteristics can be conceptualised as continuous dimensions or qualitative categories. The continuous versus categorical question concerns the underlying temperamental characteristics and not the measured variables, which can be recorded in either continuous or categorical form. This paper argues for a categorical conceptualisation of temperamental characteristics and applies a finite mixture model appropriate to this view to two sets of longitudinal observations of infants and young children. This statistical approach provides a good description of the observed predictive relation between behavioural profiles of children at 4 months and the degree of behavioural signs of fear at 14 months. An advantage of the mixture model approach to these data, relative to more standard approaches to developmental data, is that because it takes an a priori theory into account, it can be used to address improvements and refinements to theories and experimental designs in a straightforward manner.
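The abstract does not specify the model's implementation; as a minimal sketch of the general technique, a two-component finite mixture can be fit with scikit-learn's GaussianMixture. The data below are synthetic and purely illustrative, not the paper's:

# Minimal sketch of a two-class finite mixture fit (illustrative only;
# not the paper's actual model or data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "behavioural profile" scores for two latent temperament types.
low_reactive = rng.normal(loc=-1.0, scale=0.7, size=(150, 2))
high_reactive = rng.normal(loc=1.0, scale=0.7, size=(50, 2))
X = np.vstack([low_reactive, high_reactive])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.weights_)              # estimated mixing proportions
print(gm.predict_proba(X[:5]))  # posterior class membership per child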

31 citations

Journal ArticleDOI
TL;DR: Presents a classifier that detects reported IO accounts with 96% precision, 79% recall, and 96% AUPRC, demonstrated on real social media data collected during the 2017 French presidential election and on known IO accounts disclosed by Twitter.
Abstract: The weaponization of digital communications and social media to conduct disinformation campaigns at immense scale, speed, and reach presents new challenges for identifying and countering hostile influence operations (IOs). This paper presents an end-to-end framework to automate detection of disinformation narratives, networks, and influential actors. The framework integrates natural language processing, machine learning, graph analytics, and a network causal inference approach to quantify the impact of individual actors in spreading IO narratives. We demonstrate its capability on real-world hostile IO campaigns using Twitter datasets collected during the 2017 French presidential election and known IO accounts disclosed by Twitter across a broad range of IO campaigns (May 2007 to February 2020), covering over 50,000 accounts, 17 countries, and different account types including both trolls and bots. Our system detects IO accounts with 96% precision, 79% recall, and 96% area under the precision-recall (P-R) curve; maps out salient network communities; and discovers high-impact accounts that escape the lens of traditional impact statistics based on activity counts and network centrality. Results are corroborated with independent sources of known IO accounts from US Congressional reports, investigative journalism, and IO datasets provided by Twitter.
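For reference, the reported metrics can be computed with standard library calls; a small sketch with placeholder labels and scores (not the paper's data or classifier):

# Sketch of the reported evaluation metrics (precision, recall, AUPRC).
import numpy as np
from sklearn.metrics import precision_score, recall_score, average_precision_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = known IO account
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.7, 0.3, 0.6])   # classifier scores
y_pred = (y_score >= 0.5).astype(int)                          # threshold at 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AUPRC:    ", average_precision_score(y_true, y_score))  # area under P-R curve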

31 citations

Journal ArticleDOI
TL;DR: The formulation presented here, although described for the problem of estimating excess health care expenditures due to the alleged misconduct of the tobacco industry, is more general and can be applied to any outcome, such as mortality, morbidity, or income from excise taxes, as well as to any situation in which consequences due to alleged misconduct or due to hypothetical programmes are to be estimated.
Abstract: An important application of statistics in recent years has been to address the causal effects of smoking. There is little doubt that there are health risks associated with smoking. However, more general issues concern the causal effects due to the alleged misconduct of the tobacco industry or due to programmes designed to curtail tobacco use. To address any such causal question, assumptions must be made. Although some of the issues are well known in the statistical and epidemiological literature, there does not appear to be a unified treatment that provides prescriptive guidance on the estimation of these causal effects with explication of the needed assumptions. A 'conduct attributable fraction' is derived, which allows for arbitrary changes in smoking-related and non-smoking-related health care expenditure factors in a counterfactual world without the alleged misconduct, and which therefore generalizes the traditional 'smoking attributable fraction'. The formulation presented here, although described for the problem of estimating excess health care expenditures due to the alleged misconduct of the tobacco industry, is more general. It can be applied to any outcome, such as mortality, morbidity, or income from excise taxes, as well as to any situation in which consequences due to alleged misconduct (for example, of two entities, such as the tobacco and asbestos industries) or due to hypothetical programmes (for example, extra smoking reduction initiatives) are to be estimated.
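The abstract does not give the estimator's formula; in its simplest traditional form, an attributable fraction compares an actual quantity with its counterfactual, roughly as below. The numbers are invented for illustration, and the paper's 'conduct attributable fraction' generalizes this with further assumptions:

# Generic attributable-fraction arithmetic (illustrative numbers only;
# the paper's 'conduct attributable fraction' is a generalization of this).
actual_expenditure = 120.0e9         # observed health care expenditure
counterfactual_expenditure = 95.0e9  # modelled expenditure absent the conduct

attributable_fraction = (actual_expenditure - counterfactual_expenditure) / actual_expenditure
excess_expenditure = attributable_fraction * actual_expenditure
print(f"AF = {attributable_fraction:.3f}, excess = ${excess_expenditure / 1e9:.1f}B")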

31 citations

01 May 1991
TL;DR: Proposes a new cell shape that drastically reduces the impedances of the dangerous higher order modes (HOMs), all of which propagate out of the cavity via a specially shaped beam pipe.
Abstract: To achieve luminosities of 30-100 times CESR, 1-2 A of current must be stored. A CESR B-factory parameter list calls for 50 MV for two rings, to be supplied by 16 cells operating at a 10 MV/m gradient. With a new cell shape, the impedances of the dangerous higher order modes (HOMs) are drastically reduced. All HOMs propagate out of the cavity via the beam pipe, which is specially shaped. This allows HOM power couplers to be placed completely outside the cryostat. A ferrite absorber on the beam pipe lowers all Qs to approximately 100, which is sufficient to avoid multibunch instabilities without feedback systems. A waveguide input coupler on the beam pipe provides a Qext as low as 5×10^4, with a C-slot-shaped iris that has a negligible effect on the cavity loss parameter.
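As context for the Qext figure, the standard loaded-Q relation for a cavity with an input coupler is 1/Q_L = 1/Q_0 + 1/Q_ext; a quick sketch, where the intrinsic Q_0 below is an assumed placeholder rather than a value from the paper:

# Standard loaded-Q relation for an RF cavity with an input coupler:
#   1/Q_L = 1/Q_0 + 1/Q_ext
q0 = 1.0e9     # assumed intrinsic Q of a superconducting cell (illustrative)
q_ext = 5.0e4  # external Q from the waveguide input coupler (from the abstract)

q_loaded = 1.0 / (1.0 / q0 + 1.0 / q_ext)
print(f"Q_L = {q_loaded:.3e}")  # strongly dominated by the coupler term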

30 citations

Book ChapterDOI
01 Sep 2006
TL;DR: Bill Cochran was not only a wonderfully creative and insightful statistician, with major written contributions to many areas including nonrandomized, observational studies, but also a fabulous teacher and PhD adviser whose influence on many is still strongly felt.
Abstract: Bill Cochran was not only a wonderfully creative and insightful statistician, with major written contributions to many areas, including the field of nonrandomized, observational studies, but was personally a fabulous teacher and PhD adviser whose influence on many is still strongly felt. This brief presentation describes some of the major themes of his work in this area, and how they permeate modern thinking on the design, analysis, and evaluation of observational studies.

30 citations


Cited by
Journal ArticleDOI
TL;DR: In an lmer call, a model is described by a formula, in this case including both fixed- and random-effects terms; the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of the classes or types that represent such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
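lmer itself is an R function; as a rough Python analogue, a random-intercept model can be fit by REML with statsmodels. The data below are synthetic, and the formula syntax differs from lme4's:

# Rough Python analogue of a random-intercept mixed model fit by REML.
# The comparable lmer call in R would be: lmer(y ~ x + (1 | g), data = df)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = np.repeat(np.arange(20), 10)          # 20 groups of 10 observations
x = rng.normal(size=200)
u = rng.normal(scale=0.8, size=20)[groups]     # random intercept per group
y = 2.0 + 1.5 * x + u + rng.normal(scale=0.5, size=200)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=True)
print(fit.summary())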

50,607 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
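To make the "hierarchy of concepts" idea concrete, here is a toy forward pass through a two-hidden-layer feedforward network, where each layer composes the previous layer's features into more abstract ones (random weights, purely illustrative, not from the book):

# Toy illustration of a "many layers deep" hierarchy of concepts.
# Weights are random; this is a shape/flow sketch, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # batch of 4 inputs, 8 raw features

def layer(h, n_out):
    w = rng.normal(scale=0.5, size=(h.shape[1], n_out))
    return np.maximum(0.0, h @ w)    # ReLU nonlinearity

h1 = layer(x, 16)                    # first layer: low-level features
h2 = layer(h1, 16)                   # second layer: compositions of those features
out = h2 @ rng.normal(size=(16, 1))  # linear readout
print(out.shape)                     # (4, 1)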

38,208 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews, each reporting results from several related trials that evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
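The TL;DR does not spell out the procedure; a minimal sketch of a noniterative random-effects meta-analysis of this general kind, assuming per-study effect estimates and sampling variances are available (all numbers invented):

# Sketch of a noniterative random-effects meta-analysis
# (illustrative effect sizes and variances, not from the paper).
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])  # per-study effect estimates
v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])  # per-study sampling variances

w = 1.0 / v                                    # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance, moment estimate

w_star = 1.0 / (v + tau2)                      # random-effects weights
y_random = np.sum(w_star * y) / np.sum(w_star)
print(f"tau^2 = {tau2:.4f}, pooled effect = {y_random:.3f}")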

33,234 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
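The paper's own inference code is not reproduced here; as a minimal usage sketch, scikit-learn provides an LDA implementation fit by variational Bayes (toy corpus below, purely illustrative):

# Sketch of fitting LDA on a tiny toy corpus via scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene dna genome sequencing",
    "dna protein gene expression",
    "match team season player",
    "player score team coach",
]
X = CountVectorizer().fit_transform(docs)      # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))                        # per-document topic mixtures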

30,570 citations