
Showing papers by "Ali H. Sayed published in 2018"


Journal ArticleDOI
01 Jun 2018
TL;DR: An asynchronous, decentralized algorithm for consensus optimization that involves both primal and dual variables, uses fixed step-size parameters, and provably converges to the exact solution under a random agent assumption and both bounded and unbounded delay assumptions.
Abstract: We propose an asynchronous, decentralized algorithm for consensus optimization. The algorithm runs over a network in which the agents communicate with their neighbors and perform local computation. In the proposed algorithm, each agent can compute and communicate independently at different times, for different durations, with the information it has even if the latest information from its neighbors is not yet available. Such an asynchronous algorithm reduces the time that agents would otherwise waste idle because of communication delays or because their neighbors are slower. It also eliminates the need for a global clock for synchronization. Mathematically, the algorithm involves both primal and dual variables, uses fixed step-size parameters, and provably converges to the exact solution under a random agent assumption and both bounded and unbounded delay assumptions. When running synchronously, the algorithm performs just as well as existing competitive synchronous algorithms such as PG-EXTRA, which diverges without synchronization. Numerical experiments confirm the theoretical findings and illustrate the performance of the proposed algorithm.

100 citations
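The synchronous baseline that the asynchronous algorithm above generalizes can be illustrated with a toy decentralized gradient sketch (this is not the paper's primal-dual method; the quadratic costs, ring topology, and step size are illustrative assumptions):

```python
import numpy as np

# Toy synchronous consensus optimization: agent i minimizes
# f_i(x) = 0.5*(x - b_i)^2, so the network minimizer is mean(b).
# Decentralized gradient descent: combine neighbors' iterates with a
# doubly stochastic matrix W, then take a local gradient step.

def decentralized_gradient_descent(W, b, step=0.02, iters=2000):
    x = np.zeros(len(b))                 # one scalar iterate per agent
    for _ in range(iters):
        x = W @ x - step * (x - b)       # combine, then local gradient step
    return x

# Ring of 4 agents with Metropolis-style doubly stochastic weights.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = decentralized_gradient_descent(W, b)
```

With a fixed step size this baseline only reaches an O(step) neighborhood of exact consensus, which is precisely the gap that exact methods such as PG-EXTRA, and the paper's asynchronous algorithm, close.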


Proceedings ArticleDOI
15 Apr 2018
TL;DR: The objective of this work is to propose diffusion strategies for adaptively learning from streaming graph signals, blending static graph signal processing with techniques that process dynamic or streaming data.
Abstract: Most works on graph signal processing assume static graph signals, which is a limitation even in comparison to traditional DSP techniques, where signals are modeled as sequences that evolve over time. For broader applicability, it is necessary to develop techniques that are able to process dynamic or streaming data. Many earlier works on adaptive networks have addressed problems related to this challenge by developing effective strategies that are particularly well-suited to data streaming into graphs. We are thus faced with two paradigms: one where signals are modeled as static and sitting on the graph nodes, and another where signals are modeled as dynamic and streaming into the graph nodes. The objective of this work is to blend these concepts and propose diffusion strategies for adaptively learning from streaming graph signals.

41 citations
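The diffusion strategies referred to above can be sketched with a generic adapt-then-combine (ATC) diffusion LMS loop; this toy is a standard illustration of the diffusion idea, not the paper's graph-signal algorithm, and the topology, step size, and noise level are assumptions:

```python
import numpy as np

# ATC diffusion LMS: each agent k observes streaming data
# d_k = u_k @ w_true + noise, takes a local LMS step (adapt), then
# averages its neighbors' intermediate estimates (combine).

rng = np.random.default_rng(0)
n_agents, dim, mu = 4, 3, 0.05
w_true = np.array([1.0, -2.0, 0.5])

# Ring topology with doubly stochastic combination weights.
A = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

W = np.zeros((n_agents, dim))            # per-agent estimates
for _ in range(3000):
    psi = np.zeros_like(W)
    for k in range(n_agents):
        u = rng.standard_normal(dim)     # streaming regressor
        d = u @ w_true + 0.01 * rng.standard_normal()
        psi[k] = W[k] + mu * (d - u @ W[k]) * u   # adapt
    W = A @ psi                          # combine
```

The constant step size is what makes the strategy adaptive: the agents keep learning and can track drifts in `w_true` over time.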


Proceedings ArticleDOI
04 Jun 2018
TL;DR: An online algorithm is developed that is able to learn the underlying graph structure from observations of the signal evolution; the algorithm is adaptive in nature and able to respond to changes in the graph structure and the perturbation statistics.
Abstract: Graphs provide a powerful framework to represent high-dimensional but structured data, and to make inferences about relationships between subsets of the data. In this work we consider graph signals that evolve dynamically according to a heat diffusion process and are subject to persistent perturbations. We develop an online algorithm that is able to learn the underlying graph structure from observations of the signal evolution. The algorithm is adaptive in nature and in particular able to respond to changes in the graph structure and the perturbation statistics.

28 citations


Book ChapterDOI
01 Jan 2018
TL;DR: This chapter focuses on adaptive learning solutions where agents are able to track drifts in the underlying models, and examines performance limits under both estimation and detection formulations.
Abstract: In this chapter, we review the foundations of statistical inference over adaptive networks by considering two canonical problems: distributed estimation and distributed detection. In the former setting, agents cooperate to estimate a model of interest while in the second setting, the agents cooperate to detect a state of nature. We focus on adaptive learning solutions where agents are able to track drifts in the underlying models, and examine performance limits under both estimation and detection formulations. Special attention is paid to the detailed characterization of the steady-state performance. Certain universal laws are highlighted and compared against known laws for estimation and detection in traditional (centralized or decentralized, nonadaptive) inferential systems.

26 citations


Posted Content
TL;DR: In this paper, a fully decentralized multi-agent algorithm for policy evaluation is proposed, which combines off-policy learning, eligibility traces and linear function approximation, and achieves linear convergence with $O(1)$ memory requirements.
Abstract: This work develops a fully decentralized multi-agent algorithm for policy evaluation. The proposed scheme can be applied to two distinct scenarios. In the first scenario, a collection of agents have distinct datasets gathered following different behavior policies (none of which is required to explore the full state space) in different instances of the same environment and they all collaborate to evaluate a common target policy. The network approach allows for efficient exploration of the state space and allows all agents to converge to the optimal solution even in situations where neither agent can converge on its own without cooperation. The second scenario is that of multi-agent games, in which the state is global and rewards are local. In this scenario, agents collaborate to estimate the value function of a target team policy. The proposed algorithm combines off-policy learning, eligibility traces and linear function approximation. The proposed algorithm is of the variance-reduced kind and achieves linear convergence with $O(1)$ memory requirements. The linear convergence of the algorithm is established analytically, and simulations are used to illustrate the effectiveness of the method.

22 citations


Journal ArticleDOI
TL;DR: The analysis will show that, despite the coupled dynamics that arises in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within O(μ) of the optimizer.

21 citations


Journal ArticleDOI
TL;DR: Interestingly, the results show that the steady-state performance of the learning strategy is not always degraded, while the convergence rate suffers some degradation, which provides yet another indication of the resilience and robustness of adaptive distributed strategies.
Abstract: This paper examines the mean-square error performance of diffusion stochastic algorithms under a generalized coordinate-descent scheme. In this setting, the adaptation step by each agent is limited to a random subset of the coordinates of its stochastic gradient vector. The selection of coordinates varies randomly from iteration to iteration and from agent to agent across the network. Such schemes are useful in reducing computational complexity at each iteration in power-intensive large data applications. They are also useful in modeling situations where some partial gradient information may be missing at random. Interestingly, the results show that the steady-state performance of the learning strategy is not always degraded, while the convergence rate suffers some degradation. The results provide yet another indication of the resilience and robustness of adaptive distributed strategies.

18 citations
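The randomized coordinate-descent idea described above can be sketched with a plain LMS recursion in which only a random subset of gradient coordinates is applied at each iteration (a single-agent toy for brevity; the sizes, step size, and update probability are illustrative assumptions, not the paper's):

```python
import numpy as np

# LMS with randomized partial-coordinate updates: at each iteration,
# each coordinate of the stochastic gradient is kept with probability
# p_update and dropped otherwise, modeling missing gradient entries.

rng = np.random.default_rng(1)
dim, mu, p_update = 5, 0.05, 0.5
w_true = rng.standard_normal(dim)
w = np.zeros(dim)

for _ in range(8000):
    u = rng.standard_normal(dim)
    d = u @ w_true + 0.01 * rng.standard_normal()
    grad = (d - u @ w) * u               # stochastic gradient (LMS direction)
    mask = rng.random(dim) < p_update    # keep each coordinate w.p. p_update
    w = w + mu * mask * grad             # partial-coordinate update
```

Consistent with the finding quoted above, the masked recursion still converges toward `w_true`; the main cost of dropping coordinates shows up in a slower transient.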


Proceedings ArticleDOI
01 Dec 2018
TL;DR: An effective distributed first-order algorithm is developed, which requires sharing dual variables only and takes advantage of the constraint sparsity, and is shown to converge to the exact minimizer under sufficiently small constant step sizes.
Abstract: In this work, a distributed multi-agent optimization problem is studied where different subsets of agents are coupled with each other through affine constraints. Moreover, each agent is only aware of its own contribution to the constraints and only knows which neighboring agents share constraints with it. An effective distributed first-order algorithm is developed, which requires sharing dual variables only and takes advantage of the constraint sparsity. The algorithm is shown to converge to the exact minimizer under sufficiently small constant step sizes. A simulation is given to illustrate the effect of the constraint structure and advantages of the proposed algorithm.

16 citations


Proceedings ArticleDOI
15 Apr 2018
TL;DR: This paper provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA and proposes a new amortized variance-reduced gradient (AVRG) algorithm with constant storage requirements compared to SAGA and with balanced gradient computations compared to SVRG.
Abstract: Several useful variance-reduced stochastic gradient algorithms, such as SVRG, SAGA, Finito, and SAG, have been proposed to minimize empirical risks with linear convergence properties to the exact minimizers. The existing convergence results assume uniform data sampling with replacement. However, it has been observed that random reshuffling can deliver superior performance and, yet, no formal proofs or guarantees of exact convergence exist for variance-reduced algorithms under random reshuffling. This paper makes two contributions. First, it resolves this open issue and provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA; the argument is also adaptable to other variance-reduced algorithms. Second, under random reshuffling, the paper proposes a new amortized variance-reduced gradient (AVRG) algorithm with constant storage requirements compared to SAGA and with balanced gradient computations compared to SVRG. AVRG is also shown analytically to converge linearly.

15 citations
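The setting analyzed above can be made concrete with a compact SAGA sketch run under random reshuffling, i.e., epoch-wise permutations instead of uniform sampling with replacement (the least-squares losses, sizes, and step size are illustrative assumptions):

```python
import numpy as np

# SAGA under random reshuffling on least-squares losses
# f_i(w) = 0.5*(A[i] @ w - b[i])^2. The data is noiseless, so the
# empirical-risk minimizer is exactly w_true and linear convergence
# to it can be checked directly.

rng = np.random.default_rng(2)
n, dim, mu = 20, 3, 0.01
A = rng.standard_normal((n, dim))
w_true = np.array([0.5, -1.0, 2.0])
b = A @ w_true

w = np.zeros(dim)
table = np.zeros((n, dim))               # stored per-sample gradients
avg = table.mean(axis=0)

for epoch in range(200):
    for i in rng.permutation(n):         # random reshuffling: one pass per epoch
        g_new = (A[i] @ w - b[i]) * A[i] # gradient of f_i at current w
        w = w - mu * (g_new - table[i] + avg)
        avg = avg + (g_new - table[i]) / n
        table[i] = g_new
```

The `table` of stored gradients is what AVRG avoids: it amortizes the variance-reduction correction across an epoch so that only constant storage is needed.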


Proceedings ArticleDOI
01 Jun 2018
TL;DR: This work considers a diffusion network responding to streaming data, and studies the problem of identifying the topology of a subnetwork of observable agents by tracking their output measurements.
Abstract: This work considers a diffusion network responding to streaming data, and studies the problem of identifying the topology of a subnetwork of observable agents by tracking their output measurements. Topology inference from indirect and/or incomplete datasets (network tomography) is in general an ill-posed problem. Under an appropriate Erdos-Renyi random graph model for the unobserved part, the problem of network tomography is well-posed in the thermodynamic limit: when the number of network agents grows to infinity, any arbitrary subnetwork topology associated with the observed agents can be recovered with high probability.

14 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: A graph diffusion LMS-Newton algorithm is introduced to improve transient performance, and a computationally efficient preconditioned diffusion strategy is also proposed and its performance studied.
Abstract: Graph filters, defined as polynomial functions of a graph-shift operator (GSO), play a key role in signal processing over graphs. In this work, we are interested in the adaptive and distributed estimation of graph filter coefficients from streaming graph signals. To this end, diffusion LMS strategies can be employed. However, most popular GSOs such as those based on the graph Laplacian matrix or the adjacency matrix are not energy preserving. This may result in a large eigenvalue spread and a slow convergence of the graph diffusion LMS. To address this issue and improve the transient performance, we introduce a graph diffusion LMS-Newton algorithm. We also propose a computationally efficient preconditioned diffusion strategy and we study its performance.

Proceedings ArticleDOI
01 Apr 2018
TL;DR: This work establishes, under reasonable conditions, that consistent tomography is possible, namely, that it is possible to reconstruct the interaction profile of the observable portion of the network, with negligible error as the network size increases.
Abstract: This work studies the problem of inferring from streaming data whether an agent is directly influenced by another agent over an adaptive network of interacting agents. Agent i influences agent j if they are connected, and if agent j uses the information from agent i to update its inference. The solution of this inference task is challenging for at least two reasons. First, only the output of the learning algorithm is available to the external observer and not the raw data. Second, only observations from a fraction of the network agents is available, with the total number of agents itself being also unknown. This work establishes, under reasonable conditions, that consistent tomography is possible, namely, that it is possible to reconstruct the interaction profile of the observable portion of the network, with negligible error as the network size increases. We characterize the decaying behavior of the error with the network size, and provide a set of numerical experiments to illustrate the results.

Proceedings ArticleDOI
01 Jun 2018
TL;DR: This work considers the problem of reconstructing the topology of a network of interacting agents via observations of the state-evolution of the agents, and explores the possibility of reconstructing a larger network via repeated application of the local tomography algorithm to smaller network portions.
Abstract: This work considers the problem of reconstructing the topology of a network of interacting agents via observations of the state-evolution of the agents. Observations from only a subset of the nodes are collected, and the information is used to infer their local connectivity (local tomography). Recent results establish that, under suitable conditions on the network model, local tomography is achievable with high probability as the network size scales to infinity [1], [2]. Motivated by these results, we explore the possibility of reconstructing a larger network via repeated application of the local tomography algorithm to smaller network portions. A divide-and-conquer strategy is developed and tested numerically on some illustrative examples.

Proceedings ArticleDOI
23 Jun 2018
TL;DR: This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph.
Abstract: This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness requirement softens the transition in the tasks among adjacent nodes and allows incorporating information about the graph structure into the solution of the inference problem. A diffusion strategy is devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. We show, under conditions on the step-size parameter, that the adaptive strategy induces a contraction mapping and leads to small estimation errors on the order of the small step-size. A graph spectral filtering interpretation is provided for the optimization framework.
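The smoothness-regularized multitask idea above can be sketched as a diffusion-style recursion in which each agent takes a stochastic gradient step on its own cost plus a graph-Laplacian regularization step (the path graph, step size, regularization strength, and toy models below are illustrative assumptions, not the paper's):

```python
import numpy as np

# Multitask estimation with graph smoothness: agent k estimates its own
# vector W[k], and a Laplacian penalty pulls neighboring estimates together.

rng = np.random.default_rng(3)
n_agents, dim, mu, eta = 4, 2, 0.02, 0.5

# Path graph 0-1-2-3 and its Laplacian L = D - adjacency.
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj

# Per-agent true models vary smoothly along the path.
W_true = np.array([[0.0, 1.0], [0.1, 1.1], [0.2, 1.2], [0.3, 1.3]])
W = np.zeros((n_agents, dim))

for _ in range(5000):
    for k in range(n_agents):
        u = rng.standard_normal(dim)
        d = u @ W_true[k] + 0.01 * rng.standard_normal()
        W[k] = W[k] + mu * (d - u @ W[k]) * u   # local stochastic gradient step
    W = W - mu * eta * (L @ W)                  # smoothness-promoting step
```

The regularization strength `eta` trades off individual accuracy against agreement across neighbors; with smooth true tasks the induced bias stays small, consistent with the small-error claim above.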

Posted Content
TL;DR: This article develops mechanisms by which influential agents can lead receiving agents to adopt certain beliefs and examines whether receiving agents can be driven to arbitrary beliefs and whether the network structure limits the scope of control by the influential agents.
Abstract: In diffusion social learning over weakly-connected graphs, it has been shown recently that influential agents shape the beliefs of non-influential agents. This paper analyzes this mechanism more closely and addresses two main questions. First, the article examines how much freedom influential agents have in controlling the beliefs of the receiving agents, namely, whether receiving agents can be driven to arbitrary beliefs and whether the network structure limits the scope of control by the influential agents. Second, even if there is a limit to what influential agents can accomplish, this article develops mechanisms by which they can lead receiving agents to adopt certain beliefs. These questions raise interesting possibilities about belief control over networked agents. Once addressed, one ends up with design procedures that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations or convictions. The theoretical findings are illustrated by means of examples.

Posted Content
TL;DR: A multitask optimization problem is formulated where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph; the influence of the network topology and the regularization strength on the network performance is revealed.
Abstract: This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness condition softens the transition in the tasks among adjacent nodes and allows incorporating information about the graph structure into the solution of the inference problem. A diffusion strategy is devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relies on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We show in this Part I of the work, under conditions on the step-size parameter, that the adaptive strategy induces a contraction mapping and leads to small estimation errors on the order of the small step-size. The results in the accompanying Part II will reveal explicitly the influence of the network topology and the regularization strength on the network performance and will provide insights into the design of effective multitask strategies for distributed inference over networks.

Book ChapterDOI
01 Jan 2018
TL;DR: This chapter discusses distributed Kalman and particle filtering algorithms for state estimation in decentralized multiagent networks and shows how the agents can construct local estimates of the state trajectory through a cooperative process of interactions.
Abstract: This chapter discusses distributed Kalman and particle filtering algorithms for state estimation in decentralized multiagent networks. It is assumed that the spatially distributed agents acquire local measurements with information about a time-varying state described by some underlying state-space model. The agents seek to estimate the time-varying state in a decentralized manner. They are only allowed to interact locally by sharing data or estimates with their immediate neighbors. It is shown how the agents can construct local estimates of the state trajectory through a cooperative process of interactions. Both diffusion- and consensus-based strategies are presented.
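The diffusion theme of the chapter can be illustrated with a minimal toy: agents track a common scalar random-walk state with noisy local measurements, each runs a standard Kalman update, and neighbors then average their estimates (the fully connected averaging matrix and noise levels are assumptions, and the per-agent variance bookkeeping after combining is approximate; this is not the chapter's full algorithm):

```python
import numpy as np

# Diffusion-style Kalman sketch for a scalar random-walk state.
rng = np.random.default_rng(4)
n_agents, T = 4, 300
q, r = 0.01, 0.25                         # process and measurement noise vars

A = np.full((n_agents, n_agents), 0.25)   # fully connected averaging weights

x = 0.0
est = np.zeros(n_agents)                  # per-agent state estimates
P = np.ones(n_agents)                     # per-agent error variances (approx.)
errs = []
for _ in range(T):
    x = x + np.sqrt(q) * rng.standard_normal()          # state evolves
    y = x + np.sqrt(r) * rng.standard_normal(n_agents)  # local measurements
    P = P + q                                           # time update
    K = P / (P + r)                                     # Kalman gain
    est = est + K * (y - est)                           # measurement update
    P = (1 - K) * P
    est = A @ est                                       # diffusion: combine
    errs.append(np.mean((est - x) ** 2))

mse = np.mean(errs[50:])                  # steady-state tracking error
```

Averaging over neighbors drives the tracking error well below the raw measurement noise variance, which is the cooperative gain the chapter quantifies.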

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This work shows that, even under dense connectivity, the Granger estimator ensures an identifiability gap that enables the discrimination between connected and disconnected nodes within the observable subnetwork.
Abstract: This work examines the problem of graph learning over a diffusion network when measurements can only be gathered from a limited fraction of agents (latent regime). Under this setting, most works in the literature rely on a degree of sparsity to provide guarantees of consistent graph recovery. This work moves away from this condition and shows that, even under dense connectivity, the Granger estimator ensures an identifiability gap that enables the discrimination between connected and disconnected nodes within the observable subnetwork.

Posted Content
17 Oct 2018
TL;DR: A fully decentralized algorithm for policy evaluation is developed, combining off-policy learning, eligibility traces, and linear function approximation, and achieving linear convergence with $O(1)$ memory requirements.
Abstract: This work develops a fully decentralized multi-agent algorithm for policy evaluation. The proposed scheme can be applied to two distinct scenarios. In the first scenario, a collection of agents have distinct datasets gathered following different behavior policies (none of which is required to explore the full state space) in different instances of the same environment and they all collaborate to evaluate a common target policy. The network approach allows for efficient exploration of the state space and allows all agents to converge to the optimal solution even in situations where neither agent can converge on its own without cooperation. The second scenario is that of multi-agent games, in which the state is global and rewards are local. In this scenario, agents collaborate to estimate the value function of a target team policy. The proposed algorithm combines off-policy learning, eligibility traces and linear function approximation. The proposed algorithm is of the variance-reduced kind and achieves linear convergence with $O(1)$ memory requirements. The linear convergence of the algorithm is established analytically, and simulations are used to illustrate the effectiveness of the method.

Book ChapterDOI
01 Jan 2018
TL;DR: In this paper, the diffusion LMS algorithm is extended to deal with structured criteria built upon groups of variables, leading to a flexible framework that can encode various relationships in the parameters to estimate.
Abstract: Considering groups of parameters rather than individual parameters can be beneficial for estimation accuracy if structural relationships between parameters exist (e.g., spatial, hierarchical, or related to the physics of the problem). Group sparsity-estimators are typical examples that benefit from such prior information. Building on this principle, we show that the diffusion LMS algorithm used for distributed inference over adaptive networks can be extended to deal with structured criteria built upon groups of variables, leading to a flexible framework that can encode various relationships in the parameters to estimate. We also introduce online strategies to group the parameters to estimate in an unsupervised manner, and to promote or inhibit collaborations between nodes depending on whether these groups are locally or globally applicable. Simulations illustrate the theoretical findings and the estimation strategies.

Posted Content
TL;DR: The results reveal explicitly the influence of the network topology and the regularization strength on the network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.
Abstract: Part I of this paper formulated a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relied on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We examined the first-order, the second-order, and the fourth-order stability of the multitask learning algorithm. The results identified conditions on the step-size parameter, regularization strength, and data characteristics in order to ensure stability. This Part II examines steady-state performance of the strategy. The results reveal explicitly the influence of the network topology and the regularization strength on the network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.

Posted Content
TL;DR: This work considers a set of binary random variables and proposes an efficient algorithm for learning the Kolmogorov model, which shows its first-order optimality, despite the combinatorial nature of the learning problem.
Abstract: We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model, for a collection of binary random variables. More specifically, we derive conditions that link outcomes of specific random variables, and extract valuable relations from the data. We also propose an algorithm for computing the model and show its first-order optimality, despite the combinatorial nature of the learning problem. We apply the proposed algorithm to recommendation systems, although it is applicable to other scenarios. We believe that the work is a significant step toward interpretable machine learning.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: This work develops an effective distributed algorithm for the solution of stochastic optimization problems that involve partial coupling among both local constraints and local cost functions and establishes mean-square-error convergence of the resulting strategy for sufficiently small step-sizes and large penalty factors.
Abstract: This work develops an effective distributed algorithm for the solution of stochastic optimization problems that involve partial coupling among both local constraints and local cost functions. While the collection of networked agents is interested in discovering a global model, the individual agents are sensing data that is only dependent on parts of the model. Moreover, different agents may be dependent on different subsets of the model. In this way, cooperation is justified and also necessary to enable recovery of the global information. In view of the local constraints, we show how to relax the optimization problem to a penalized form, and how to enable cooperation among neighboring agents. We establish mean-square-error convergence of the resulting strategy for sufficiently small step-sizes and large penalty factors. We also illustrate performance by means of simulations.

Posted Content
TL;DR: Walkman is a decentralized algorithm that uses a fixed step size and converges faster than existing random walk incremental algorithms; it is also communication efficient, since each iteration uses only one link to communicate the latest information from one agent to another.
Abstract: This paper addresses consensus optimization problems in a multi-agent network, where all agents collaboratively find a minimizer for the sum of their private functions. We develop a new decentralized algorithm in which each agent communicates only with its neighbors. State-of-the-art decentralized algorithms use communications between either all pairs of adjacent agents or a random subset of them at each iteration. Another class of algorithms uses a random walk incremental strategy, which sequentially activates a succession of nodes; these incremental algorithms require diminishing step sizes to converge to the solution, so their convergence is relatively slow. In this work, we propose a random walk algorithm that uses a fixed step size and converges faster than the existing random walk incremental algorithms. Our algorithm is also communication efficient. Each iteration uses only one link to communicate the latest information for an agent to another. Since this communication rule mimics a man walking around the network, we call our new algorithm Walkman. We establish convergence for convex and nonconvex objectives. For decentralized least squares, we derive a linear rate of convergence and obtain a better communication complexity than those of other decentralized algorithms. Numerical experiments verify our analysis results.
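The baseline that Walkman improves on can be sketched directly: a token performs a random walk over the graph, and the agent currently holding it applies its local gradient with a diminishing step size (this is the incremental baseline described above, not Walkman itself; the quadratic costs and ring graph are assumptions):

```python
import numpy as np

# Random walk incremental gradient baseline: f_i(x) = 0.5*(x - b_i)^2,
# so the consensus minimizer is mean(b). A diminishing step 1/t is needed
# for this baseline to converge, which is why it is slow.

rng = np.random.default_rng(5)
b = np.array([1.0, 2.0, 3.0, 4.0])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # ring graph

x, agent = 0.0, 0
for t in range(1, 20001):
    step = 1.0 / t                       # diminishing step size
    x = x - step * (x - b[agent])        # local gradient step at current agent
    agent = rng.choice(neighbors[agent]) # token moves to a random neighbor
```

Each iteration touches a single link, which is the communication pattern Walkman keeps while replacing the diminishing step with a fixed one to obtain faster convergence.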

Posted Content
21 Mar 2018
TL;DR: The analysis establishes analytically that random reshuffling outperforms uniform sampling by showing explicitly that iterates approach a smaller neighborhood of size $O(\mu^2)$ around the minimizer rather than $O(\mu)$.
Abstract: In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. This work focuses on the constant step-size case and strongly convex loss function. In this case, convergence is guaranteed to a small neighborhood of the optimizer albeit at a linear rate. The analysis establishes analytically that random reshuffling outperforms uniform sampling by showing explicitly that iterates approach a smaller neighborhood of size $O(\mu^2)$ around the minimizer rather than $O(\mu)$. Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, which helps clarify in greater detail the differences between sampling with and without replacement. We also explain the periodic behavior that is observed in random reshuffling implementations.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This work proposes an extended diffusion preconditioned LMS strategy that allows the nodes to perform automatic network clustering, and illustrates the effectiveness of the proposed unsupervised method for clustering nodes and for collaborative estimation.
Abstract: In this work, we consider the problem of estimating the coefficients of linear shift-invariant FIR graph filters. We assume hybrid node-varying graph filters where the network is decomposed into clusters of nodes and, within each cluster, all nodes have the same filter coefficients to estimate. We assume that there is no prior information on the clusters composition and that the nodes do not know which other nodes share the same estimation task. We are interested in distributed, adaptive, and collaborative solutions. In order to limit the cooperation between clustered agents sharing the same estimation task, we propose an extended diffusion preconditioned LMS strategy allowing the nodes to perform automatic network clustering. Simulation results illustrate the effectiveness of the proposed unsupervised method for clustering nodes into clusters and collaborative estimation.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors.
Abstract: This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors. In the proposed algorithm, there is no need for a central or master unit while the objective is to enable the dispersed nodes to learn the exact global model despite their limited localized interactions. The resulting algorithm is shown to have low memory requirement, guaranteed linear convergence, robustness to failure of links or nodes and scalability to the network size. Moreover, the decentralized nature of the solution makes large-scale machine learning problems more tractable and also scalable since data is stored and processed locally at the nodes.

Posted Content
TL;DR: In this paper, the authors examined how much freedom influential agents have in controlling the beliefs of the receiving agents and whether the network structure limits the scope of control by the influential agents.
Abstract: In diffusion social learning over weakly-connected graphs, it has been shown recently that influential agents end up shaping the beliefs of non-influential agents. This paper analyzes this control mechanism more closely and addresses two main questions. First, the article examines how much freedom influential agents have in controlling the beliefs of the receiving agents. That is, the derivations clarify whether receiving agents can be driven to arbitrary beliefs and whether the network structure limits the scope of control by the influential agents. Second, even if there is a limit to what influential agents can accomplish, this article develops mechanisms by which these agents can lead receiving agents to adopt certain beliefs. These questions raise interesting possibilities about belief control over networked agents. Once addressed, one ends up with design procedures that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations or convictions. The theoretical findings are illustrated by means of several examples.

Proceedings ArticleDOI
04 Jun 2018
TL;DR: This work studies the problem of learning under both large data and large feature space scenarios, and proposes new and effective distributed solutions with guaranteed convergence to the minimizer by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduced techniques.
Abstract: This work studies the problem of learning under both large data and large feature space scenarios. The feature information is assumed to be spread across agents in a network, where each agent observes some of the features. Through local cooperation, the agents are supposed to interact with each other to solve the inference problem and converge towards the global minimizer of the empirical risk. We study this problem exclusively in the primal domain, and propose new and effective distributed solutions with guaranteed convergence to the minimizer. This is achieved by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduced techniques. Simulation results illustrate the conclusions.

Posted Content
TL;DR: This work proposes a distributed decision-making algorithm that helps agents in the network reach agreement about which model to track; the agents then interact with each other in order to enhance the network performance.
Abstract: In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network. In this work we propose a distributed decision-making algorithm. The agents are assumed to observe data that may be generated by different models. Through localized interactions, the agents reach agreement about which model to track and interact with each other in order to enhance the network performance. We investigate the approach for both static and mobile networks. The simulations illustrate the performance of the proposed strategies.