Open Access · Posted Content

Interplay between Topology and Social Learning over Weak Graphs.

TLDR
This work studies a distributed learning problem where the agents of a network form their beliefs about certain hypotheses of interest by means of a diffusion strategy, and examines the feasibility of topology learning for two useful classes of problems.
Abstract
We consider a social learning problem, where a network of agents is interested in selecting one among a finite number of hypotheses. We focus on weakly-connected graphs, where the network is partitioned into a sending part and a receiving part. The data collected by the agents might be heterogeneous. For example, some sub-networks might intentionally generate data from a fake hypothesis in order to influence other agents. The social learning task is accomplished via a diffusion strategy where each agent: i) individually updates its belief using its private data; ii) computes a new belief by exponentiating a linear combination of the log-beliefs of its neighbors. First, we examine what agents learn over weak graphs (the social learning problem). We obtain analytical formulas for the beliefs at the different agents, which reveal how the agents' detection capability and the network topology interact to influence the beliefs. In particular, the formulas allow us to predict when a leader-follower behavior is possible, where some sending agents can control the mind of the receiving agents by forcing them to choose a particular hypothesis. Second, we consider the dual or reverse learning problem that reveals how agents learned: given a stream of beliefs collected at a receiving agent, we would like to discover the global influence that any sending component exerts on this receiving agent (the topology learning problem). A remarkable and perhaps unexpected interplay between social and topology learning is observed: given $H$ hypotheses and $S$ sending components, topology learning can be feasible only if $H \geq S$. Since this is only a necessary condition, we examine the feasibility of topology learning for two useful classes of problems. The analysis reveals that a critical element to enable faithful topology learning is the diversity in the statistical models of the sending sub-networks.
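The two-step diffusion update described in the abstract (a local Bayesian update with private data, followed by a geometric combination of neighbors' log-beliefs) can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the network size, likelihood values, and combination weights below are assumptions chosen for demonstration.

```python
import numpy as np

def diffusion_social_learning(beliefs, likelihoods, A):
    """One round of the two-step diffusion update over H hypotheses.

    beliefs:     (N, H) current beliefs; each row is a probability vector
    likelihoods: (N, H) likelihood of each agent's fresh private observation
                 under each hypothesis
    A:           (N, N) left-stochastic combination matrix; A[l, k] is the
                 weight that agent k assigns to neighbor l
    """
    # Step i): local Bayesian update using the private data
    psi = beliefs * likelihoods
    psi /= psi.sum(axis=1, keepdims=True)

    # Step ii): exponentiate a linear combination of neighbors' log-beliefs
    log_psi = np.log(psi)
    new_log = A.T @ log_psi  # agent k averages log_psi over its neighbors
    new = np.exp(new_log - new_log.max(axis=1, keepdims=True))  # stabilize
    return new / new.sum(axis=1, keepdims=True)

# Illustrative example: 2 agents, 3 hypotheses, uniform initial beliefs
beliefs = np.full((2, 3), 1 / 3)
likelihoods = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.3, 0.5]])
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])  # columns sum to 1 (left-stochastic)
updated = diffusion_social_learning(beliefs, likelihoods, A)
```

The geometric (log-linear) combination in step ii) is what makes the beliefs' evolution tractable: in the log domain the update is linear, so the network's influence is captured by powers of the combination matrix.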


Citations
Journal ArticleDOI

Graph Learning Under Partial Observability

TL;DR: In this article, the authors consider the inverse question: How much information does observing the behavior at the nodes of a graph convey about the underlying topology? For large-scale networks, the difficulty in addressing such inverse problems is compounded by the fact that usually only a limited fraction of the nodes can be probed.
Posted Content

Social Learning with Partial Information Sharing

TL;DR: In this article, the authors consider the case in which agents will only share their beliefs regarding one hypothesis of interest, with the purpose of evaluating its validity, and draw conditions under which this policy does not affect truth learning.
Posted Content

Non-Bayesian Social Learning on Random Digraphs with Aperiodically Varying Network Connectivity

TL;DR: It is shown by proof and an example that if the network of influences is balanced in a certain sense, then asymptotic learning occurs almost surely even in the absence of uniform strong connectivity.
Posted Content

Models for the Diffusion of Beliefs in Social Networks: An Overview

TL;DR: Models for social learning and opinion diffusion in the context of economics and social science are compared with those used in signal processing over networks of sensors, explaining how learning emerges or fails to emerge in both scenarios.
Posted Content

Adaptive Social Learning

TL;DR: This work proposes an Adaptive Social Learning (ASL) strategy, which relies on a small step-size parameter to tune the adaptation degree, and establishes that the ASL strategy achieves consistent learning under standard global identifiability assumptions.
References
Book

Elements of information theory

TL;DR: The author examines the role of entropy, inequality, and randomness in the design and construction of codes in a rapidly changing environment.
Book

Matrix Analysis

TL;DR: In this article, the authors present both classic and recent results of matrix analysis, using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications.
Journal ArticleDOI

Reaching a Consensus

TL;DR: In this article, the authors consider a group of individuals who must act together as a team or committee, and assume that each individual in the group has his own subjective probability distribution for the unknown value of some parameter.
Journal ArticleDOI

Distributed detection with multiple sensors Part I. Fundamentals

TL;DR: In this paper, basic results on distributed detection are reviewed. The parallel and the serial architectures are considered in some detail, and the decision rules obtained from their optimization based on the Neyman-Pearson criterion and the Bayes formulation are discussed.
Journal ArticleDOI

Distributed optimization over time-varying directed graphs

TL;DR: This work develops a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness, and which converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
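The push-sum mechanism underlying the subgradient-push summary above can be illustrated with a minimal sketch. The digraph, the objective (a sum of scalar quadratics, minimized at the mean of the local targets), and the 1/√t step-size schedule below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def subgradient_push(subgrads, out_neighbors, x0, steps=200):
    """Minimal push-sum subgradient sketch on a fixed digraph (scalar case).

    subgrads:      list of callables; subgrads[i](z) returns a subgradient
                   of node i's local objective at z
    out_neighbors: out_neighbors[i] lists the nodes that i broadcasts to
                   (including i itself, so the digraph has self-loops)
    x0:            initial value at each node
    """
    n = len(x0)
    x = np.array(x0, dtype=float)
    y = np.ones(n)  # push-sum weights correct for influence imbalance
    for t in range(1, steps + 1):
        w = np.zeros(n)
        y_new = np.zeros(n)
        for i in range(n):
            d = len(out_neighbors[i])  # split mass evenly over out-neighbors
            for j in out_neighbors[i]:
                w[j] += x[i] / d
                y_new[j] += y[i] / d
        y = y_new
        z = w / y                      # de-biased local estimates
        alpha = 1.0 / np.sqrt(t)       # diminishing step size
        x = w - alpha * np.array([subgrads[i](z[i]) for i in range(n)])
    return z

# Illustrative run: 3 nodes on a directed ring with self-loops minimize
# sum_i (z - c_i)^2, whose optimum is the mean of the c_i (here 1.0).
cs = [0.0, 1.0, 2.0]
z = subgradient_push([lambda v, c=c: 2.0 * (v - c) for c in cs],
                     [[0, 1], [1, 2], [2, 0]], cs, steps=200)
```

The division by the push-sum weights y is the key design choice: it removes the bias that the directed (non-doubly-stochastic) mixing would otherwise introduce, which is what allows convergence over digraphs.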