
Showing papers by "Duncan J. Watts" published in 2019


Journal Article
TL;DR: It is concluded that rigorously evaluating policies or treatments via pragmatic randomized trials may provoke greater objection than simply implementing those same policies or treatments untested.
Abstract: Randomized experiments have enormous potential to improve human welfare in many domains, including healthcare, education, finance, and public policy. However, such "A/B tests" are often criticized on ethical grounds even as similar, untested interventions are implemented without objection. We find robust evidence across 16 studies of 5,873 participants from three diverse populations spanning nine domains-from healthcare to autonomous vehicle design to poverty reduction-that people frequently rate A/B tests designed to establish the comparative effectiveness of two policies or treatments as inappropriate even when universally implementing either A or B, untested, is seen as appropriate. This "A/B effect" is as strong among those with higher educational attainment and science literacy and among relevant professionals. It persists even when there is no reason to prefer A to B and even when recipients are treated unequally and randomly in all conditions (A, B, and A/B). Several remaining explanations for the effect-a belief that consent is required to impose a policy on half of a population but not on the entire population; an aversion to controlled but not to uncontrolled experiments; and a proxy form of the illusion of knowledge (according to which randomized evaluations are unnecessary because experts already do or should know "what works")-appear to contribute to the effect, but none dominates or fully accounts for it. We conclude that rigorously evaluating policies or treatments via pragmatic randomized trials may provoke greater objection than simply implementing those same policies or treatments untested.

46 citations
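To make the between-subjects design described in the abstract concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption, not the study's materials or data: simulated 1-10 appropriateness ratings, 200 participants per condition, and a lower mean in the A/B condition to mimic the reported effect. Each simulated participant rates exactly one condition, and group means are compared with a one-way ANOVA.

```python
# Hedged sketch of a between-subjects A/B-effect analysis.
# All ratings are simulated for illustration; they are NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # hypothetical participants per condition

# Simulated 1-10 appropriateness ratings; the A/B condition is rated lower,
# mimicking the "A/B effect" the abstract reports.
ratings = {
    "A": rng.normal(7.0, 1.5, n).clip(1, 10),
    "B": rng.normal(7.0, 1.5, n).clip(1, 10),
    "A/B test": rng.normal(5.5, 1.5, n).clip(1, 10),
}

# One-way ANOVA across the three between-subjects groups.
f_stat, p_val = stats.f_oneway(*ratings.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")
for cond, r in ratings.items():
    print(f"{cond:>8}: mean rating = {r.mean():.2f}")
```

Because each participant sees only one condition, no scenario ever mentions a foregone alternative, which is the external-validity rationale the authors give for the between-subjects design in the rebuttal below.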


01 Apr 2019
TL;DR: In this article, the authors introduce a conceptual and methodological framework for applying machine learning prediction models to large corpora of digitized historical archives, and find that although such models can correctly identify some historically important documents, they tend to overpredict historical significance while also failing to identify many documents that will later be deemed important, where both types of error increase monotonically with the number of documents under consideration.
Abstract: Can events be accurately described as historic at the time they are happening? Claims of this sort are in effect predictions about the evaluations of future historians; that is, that they will regard the events in question as significant. Here we provide empirical evidence in support of earlier philosophical arguments that such claims are likely to be spurious and that, conversely, many events that will one day be viewed as historic attract little attention at the time. We introduce a conceptual and methodological framework for applying machine learning prediction models to large corpora of digitized historical archives. We find that although such models can correctly identify some historically important documents, they tend to overpredict historical significance while also failing to identify many documents that will later be deemed important, where both types of error increase monotonically with the number of documents under consideration. On balance, we conclude that historical significance is extremely difficult to predict, consistent with other recent work on intrinsic limits to predictability in complex social systems. However, the results also indicate the feasibility of developing ‘artificial archivists’ to identify potentially historic documents in very large digital corpora.

6 citations
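A minimal sketch of the evaluation logic the abstract describes, under stated assumptions: synthetic scores stand in for a model's predicted probability that a document will later be deemed historic, a fixed threshold flags documents as "predicted historic", and both raw error counts are tallied as the corpus grows. The base rate, score model, and threshold are all illustrative placeholders, not the authors' method.

```python
# Hedged sketch: both raw error types grow with corpus size when a
# weakly informative classifier is applied to ever-larger archives.
# Scores and labels are synthetic; this is not the paper's model or data.
import numpy as np

rng = np.random.default_rng(1)

def errors(n_docs, threshold=0.9):
    """Count both error types for a corpus of n_docs documents, given a
    hypothetical model whose scores are only weakly informative."""
    labels = rng.random(n_docs) < 0.02          # ~2% eventually deemed historic
    scores = labels * 0.3 + rng.random(n_docs)  # noisy predicted significance
    flagged = scores > threshold                # "predicted historic"
    false_pos = int((flagged & ~labels).sum())  # overpredicted significance
    false_neg = int((~flagged & labels).sum())  # missed historic documents
    return false_pos, false_neg

for n in (1_000, 10_000, 100_000):
    fp, fn = errors(n)
    print(f"n={n:>6}: false positives={fp}, false negatives={fn}")
```

Under these assumptions, both counts scale roughly linearly with corpus size, which is one way to read the abstract's claim that both types of error increase with the number of documents under consideration.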


Journal Article
TL;DR: In response to the authors' article, Mislavsky et al. claim that “experiment aversion” does not exist because they found no evidence of it in their own research on low-stakes corporate experiments, and because the original studies used between- rather than within-subjects designs.
Abstract: In response to our article (1), Mislavsky et al. (2) claim that “experiment aversion” does not exist because they found no evidence of it in their own research on low-stakes corporate experiments (3) and because our studies used between- rather than within-subjects designs. First, as we noted, we do not expect (and did not ourselves find) an A/B effect in every scenario, and we called for research on how the effect might vary across contexts. Second, we deliberately used a between-subjects design to maximize external validity: Universal implementation of policies usually occurs without mention of foregone alternatives, whereas A/B tests inherently acknowledge those alternatives. The belief that experiments deprive people of potentially beneficial interventions, but universally implemented policies do not, is not a “confound” to be avoided (2) but, rather, a key …

3 citations


Journal Article
TL;DR: This paper found that fake news comprises only about 1% of overall news consumption and 0.15% of Americans' daily media diet, while a supermajority of Americans consume little or no news online at all.
Abstract: “Fake news,” broadly defined as deliberately false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive on the web, and on social media in particular, with serious consequences for public opinion, political polarization, and ultimately democracy. Using a unique multimode data set that comprises a nationally representative sample of mobile, desktop, and television consumption across all categories of media content, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online, while a supermajority of Americans consume little or no news online at all. Third, fake news comprises only about 1% of overall news consumption and 0.15% of Americans’ daily media diet. Although consumption data alone cannot determine that online misinformation in any dose is not dangerous to democracy, our results suggest that the origins of public mis-informedness and polarization are more likely to lie in the content of ordinary news--especially on television--or in the avoidance of news altogether than in overt fakery.

2 citations
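A back-of-the-envelope check that the abstract's percentages are mutually consistent. The total daily media minutes below are a made-up placeholder; only the ratios (14.2% news share, roughly 5x TV-to-online news, fake news at ~1% of news) come from the abstract.

```python
# Hedged arithmetic sketch tying together the shares quoted in the abstract.
# The absolute minutes are arbitrary placeholders; only the ratios matter.
daily_media_minutes = 480.0                 # hypothetical total media diet
news_minutes = 0.142 * daily_media_minutes  # news share of the diet (14.2%)

tv_news = news_minutes * 5 / 6              # TV carries ~5x the online share
online_news = news_minutes * 1 / 6
fake_news = 0.01 * news_minutes             # fake news ~1% of all news

print(f"news share of diet:      {news_minutes / daily_media_minutes:.1%}")
print(f"TV vs online news ratio: {tv_news / online_news:.1f}x")
print(f"fake news share of news: {fake_news / news_minutes:.1%}")
# 1% of 14.2% is ~0.142% of the total diet, matching the reported ~0.15%
# up to rounding.
print(f"fake news share of diet: {fake_news / daily_media_minutes:.2%}")
```

The point of the exercise is that the 0.15%-of-diet figure follows directly from the other two shares: fake news is a small slice of news, and news itself is a small slice of total media consumption.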