Proceedings ArticleDOI

Detecting Network Effects: Randomizing Over Randomized Experiments

TLDR
A new experimental design is leveraged for testing whether SUTVA holds, without making any assumptions on how treatment effects may spill over between the treatment and the control group, and the proposed methodology can be applied to settings in which a network is not necessarily observed but, if available, can be used in the analysis.
Abstract
Randomized experiments, or A/B tests, are the standard approach for evaluating the causal effects of new product features, i.e., treatments. The validity of these tests rests on the "stable unit treatment value assumption" (SUTVA), which implies that the treatment only affects the behavior of treated users, and does not affect the behavior of their connections. Violations of SUTVA, common in features that exhibit network effects, result in inaccurate estimates of the causal effect of treatment. In this paper, we leverage a new experimental design for testing whether SUTVA holds, without making any assumptions on how treatment effects may spill over between the treatment and the control group. To achieve this, we simultaneously run both a completely randomized and a cluster-based randomized experiment, and then we compare the difference of the resulting estimates. We present a statistical test for measuring the significance of this difference and offer theoretical bounds on the Type I error rate. We provide practical guidelines for implementing our methodology on large-scale experimentation platforms. Importantly, the proposed methodology can be applied to settings in which a network is not necessarily observed but, if available, can be used in the analysis. Finally, we deploy this design to LinkedIn's experimentation platform and apply it to two online experiments, highlighting the presence of network effects and bias in standard A/B testing approaches in a real-world setting.
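The two-level design described in the abstract can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function names (`randomize_over_randomizations`, `difference_of_estimates`), the simple difference-in-means estimator, and all parameters are assumptions made for the example.

```python
import random
from statistics import mean

def randomize_over_randomizations(clusters, p_arm=0.5, p_treat=0.5, seed=0):
    """Two-level design sketch: first randomize each cluster into a
    completely randomized (CR) arm or a cluster-based randomized (CBR)
    arm, then randomize treatment within each arm. In the CBR arm a
    whole cluster shares one treatment; in the CR arm users are
    randomized independently."""
    rng = random.Random(seed)
    assignment = {}  # user -> (arm, treated?)
    for cluster_id, users in clusters.items():
        arm = "CR" if rng.random() < p_arm else "CBR"
        if arm == "CBR":
            treated = rng.random() < p_treat  # one draw for the whole cluster
            for u in users:
                assignment[u] = (arm, treated)
        else:
            for u in users:  # independent draw per user
                assignment[u] = (arm, rng.random() < p_treat)
    return assignment

def difference_of_estimates(assignment, outcomes):
    """Compare the difference-in-means treatment-effect estimates from
    the two arms; a large gap between them is evidence of a SUTVA
    violation (network effects)."""
    def dim(arm):
        t = [outcomes[u] for u, (a, tr) in assignment.items() if a == arm and tr]
        c = [outcomes[u] for u, (a, tr) in assignment.items() if a == arm and not tr]
        return mean(t) - mean(c)
    return dim("CR") - dim("CBR")
```

The paper's actual test statistic and Type I error bounds are more careful than this raw difference; the sketch only shows the structure of the design.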


Citations
Journal Article

Planning of experiments

TL;DR: This book is one of the most important contributions to scientific methodology of its generation, and the lessons the author has to teach are well worth learning.
Proceedings ArticleDOI

How A/B Tests Could Go Wrong: Automatic Diagnosis of Invalid Online Experiments

TL;DR: The authors mine historical A/B tests to identify the most common causes of invalid tests, ranging from biased design and self-selection bias to generalizing A/B test results beyond the experiment population and time frame, and develop scalable algorithms to automatically detect invalid A/B tests and diagnose the root cause of invalidity.
Journal ArticleDOI

Testing for arbitrary interference on experimentation platforms

TL;DR: Introduces an experimental design strategy for testing the classic assumption of no interference among users, under which the outcome of one user does not depend on the treatment assigned to other users, an assumption that is rarely tenable on such platforms.
Proceedings Article

Variance Reduction in Bipartite Experiments through Correlation Clustering

TL;DR: A novel clustering objective and a corresponding algorithm are introduced that partition a bipartite graph so as to maximize the statistical power of a bipartite experiment on that graph.
Journal Article

Limiting bias from test-control interference in online marketplace experiments

TL;DR: Using a simulation built on top of data from Airbnb, the authors consider methods from the network interference literature for online marketplace experimentation and suggest that experiment design and analysis techniques are promising tools for reducing bias due to test-control interference in marketplace experiments.
References
Journal ArticleDOI

Birds of a Feather: Homophily in Social Networks

TL;DR: The homophily principle states that similarity breeds connection: people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics.
Journal ArticleDOI

Community detection in graphs

TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Journal ArticleDOI

Multilevel k-way Partitioning Scheme for Irregular Graphs

TL;DR: This paper presents and studies a class of graph partitioning algorithms that first reduce the size of the graph by collapsing vertices and edges, then find a k-way partitioning of the smaller graph, and finally uncoarsen and refine it to construct a k-way partitioning of the original graph.
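The coarsen, partition, uncoarsen pipeline can be illustrated with a toy sketch. This is not the METIS algorithm: the greedy heavy-edge matching, the alternating 2-way split, and all function names are simplifications invented for the example, and the refinement phase is omitted entirely.

```python
def coarsen(edges, nodes):
    """One coarsening level: greedily match heavy edges and collapse each
    matched pair into a single super-node (a toy stand-in for the
    heavy-edge matching used in multilevel schemes).
    edges: {(u, v): weight}; nodes: list of node ids."""
    matched = {}
    for (u, v), w in sorted(edges.items(), key=lambda kv: -kv[1]):
        if u not in matched and v not in matched:
            matched[u] = v
            matched[v] = u
    # Map each fine node to a super-node id.
    super_of, supers = {}, []
    for n in nodes:
        if n in super_of:
            continue
        s = len(supers)
        super_of[n] = s
        if n in matched:
            super_of[matched[n]] = s
        supers.append(s)
    # Aggregate edge weights between distinct super-nodes.
    coarse_edges = {}
    for (u, v), w in edges.items():
        a, b = super_of[u], super_of[v]
        if a != b:
            key = (min(a, b), max(a, b))
            coarse_edges[key] = coarse_edges.get(key, 0) + w
    return coarse_edges, supers, super_of

def partition_2way(edges, nodes):
    """Trivial 2-way split of the (small) coarse graph by alternation;
    a real scheme would optimize the edge cut here."""
    return {n: i % 2 for i, n in enumerate(nodes)}

def uncoarsen(part_coarse, super_of):
    """Project the coarse partition back onto the original nodes
    (refinement of the projected partition is omitted)."""
    return {n: part_coarse[s] for n, s in super_of.items()}
```

The point of the multilevel idea is that the expensive partitioning step runs on the much smaller coarse graph, and the projected result is then cheaply refined level by level.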
Journal ArticleDOI

Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters

TL;DR: This paper employs approximation algorithms for the graph-partitioning problem to characterize, as a function of size, the statistical and structural properties of partitions of graphs that could plausibly be interpreted as communities, and defines the network community profile plot, which characterizes the "best" possible community, according to the conductance measure, over a wide range of size scales.
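Conductance, the quality measure behind the community profile plot, is simple to compute. A minimal sketch for an unweighted, undirected graph follows; the function name and edge-list representation are choices made for the example.

```python
def conductance(edges, community):
    """Conductance of a node set S: the number of edges crossing the cut
    between S and its complement, divided by the smaller of the two
    sides' total degree (volume). Lower values indicate better-separated
    communities. edges: iterable of (u, v) pairs."""
    community = set(community)
    cut = vol_in = vol_out = 0
    for u, v in edges:
        in_u, in_v = u in community, v in community
        vol_in += in_u + in_v            # endpoints (degree mass) inside S
        vol_out += (not in_u) + (not in_v)
        if in_u != in_v:                 # edge crosses the cut
            cut += 1
    denom = min(vol_in, vol_out)
    return cut / denom if denom else 0.0
```

For example, two triangles joined by a single bridge edge give each triangle a conductance of 1/7: one cut edge over a volume of seven edge endpoints.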
Trending Questions (1)
How to measure network effects?

The paper proposes a new experimental design to measure network effects by simultaneously running both a completely randomized and a cluster-based randomized experiment and comparing the difference in the resulting estimates.