
Showing papers by "John Augustine published in 2019"


Proceedings ArticleDOI
17 Jun 2019
TL;DR: This paper introduces the Node-Capacitated Clique model as an abstract communication model, which allows for the study of the effect of nodes having limited communication capacity on the complexity of distributed graph computations.
Abstract: In this paper, we study distributed graph algorithms in networks in which the nodes have a limited communication capacity. Many distributed systems are built on top of an underlying networking infrastructure, for example by using a virtual communication topology known as an overlay network. Although this underlying network might allow each node to directly communicate with a large number of other nodes, the amount of communication that a node can perform in a fixed amount of time is typically much more limited. We introduce the Node-Capacitated Clique model as an abstract communication model, which allows us to study the effect of nodes having limited communication capacity on the complexity of distributed graph computations. In this model, the n nodes of a network are connected as a clique and communicate in synchronous rounds. In each round, every node can exchange messages of $O(\log n)$ bits with at most $O(\log n)$ other nodes. When solving a graph problem, the input graph G is defined on the same set of n nodes, where each node knows which other nodes are its neighbors in G. To initiate research on the Node-Capacitated Clique model, we present distributed algorithms for the Minimum Spanning Tree (MST), BFS Tree, Maximal Independent Set, Maximal Matching, and Vertex Coloring problems. We show that even with only $O(\log n)$ concurrent interactions per node, the MST problem can still be solved in polylogarithmic time. In all other cases, the runtime of our algorithms depends linearly on the arboricity of G, which is a constant for many important graph families such as planar graphs.

24 citations
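To make the communication constraint concrete, here is a minimal Python sketch (ours, not from the paper) of one synchronous round in the Node-Capacitated Clique model: every node may send $O(\log n)$-bit messages to at most $O(\log n)$ other nodes, and we additionally assume that a node receiving more than its capacity keeps only a random subset of the incoming messages.

import math
import random

def ncc_round(n, outboxes, cap=None):
    """One synchronous round of the Node-Capacitated Clique (NCC) model.

    Each of the n nodes may send O(log n)-bit messages to at most O(log n)
    distinct nodes per round; here cap = ceil(log2 n).  Messages beyond a
    receiver's capacity are dropped at random (our assumption for modeling
    the receive bound).  outboxes[v] is a list of (target, msg) pairs
    prepared by node v.
    """
    cap = cap or max(1, math.ceil(math.log2(n)))
    inboxes = {v: [] for v in range(n)}
    for v in range(n):
        # enforce the per-node send capacity
        for target, msg in outboxes[v][:cap]:
            inboxes[target].append((v, msg))
    # enforce the per-node receive capacity by keeping a random subset
    for v in range(n):
        if len(inboxes[v]) > cap:
            inboxes[v] = random.sample(inboxes[v], cap)
    return inboxes

# Example: every node tries to talk to node 0; node 0 hears only O(log n) of them.
n = 64
outboxes = {v: [(0, f"hello from {v}")] for v in range(n)}
print(len(ncc_round(n, outboxes)[0]))  # at most ceil(log2 64) = 6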


Posted Content
TL;DR: The impact of hybrid communication on the complexity of computing shortest paths in the graph given by the local connections is studied; it is shown that an exact APSP solution can be computed in time $\tilde O(n^{2/3})$, approximate APSP solutions in time $\tilde \Theta(\sqrt{n})$, and, for every constant $\varepsilon>0$, an $O(1)$-approximate SSSP solution in time $\tilde O(n^\varepsilon)$.
Abstract: We introduce a communication model for hybrid networks, where nodes have access to two different communication modes: a local mode where communication is only possible between specific pairs of nodes, and a global mode where communication between any pair of nodes is possible. This can be motivated, for instance, by wireless networks in which we combine direct device-to-device communication (e.g., using WiFi) with communication via a shared infrastructure (like base stations, the cellular network, or satellites). Typically, communication over short-range connections is cheaper and can be done at a much higher rate. Hence, we are focusing here on the LOCAL model (in which the nodes can exchange an unbounded amount of information in each round) for the local connections while for the global communication we assume the so-called node-capacitated clique model, where in each round every node can exchange only $O(\log n)$-bit messages with just $O(\log n)$ other nodes. In order to explore the power of combining local and global communication, we study the impact of hybrid communication on the complexity of computing shortest paths in the graph given by the local connections. Specifically, for the all-pairs shortest paths problem (APSP), we show that an exact solution can be computed in time $\tilde O\big(n^{2/3}\big)$ and that approximate solutions can be computed in time $\tilde \Theta\big(\!\sqrt{n}\big)$. For the single-source shortest paths problem (SSSP), we show that an exact solution can be computed in time $\tilde O\big(\!\sqrt{\mathsf{SPD}}\big)$, where $\mathsf{SPD}$ denotes the shortest path diameter. We further show that a $\big(1\!+\!o(1)\big)$-approximate solution can be computed in time $\tilde O\big(n^{1/3}\big)$. Additionally, we show that for every constant $\varepsilon>0$, it is possible to compute an $O(1)$-approximate solution in time $\tilde O(n^\varepsilon)$.

16 citations
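As a rough illustration of the hybrid round structure described above, the Python sketch below (class and method names are ours) shows the two per-round interfaces a node has: unbounded payloads to local neighbors in the LOCAL mode, and at most $O(\log n)$ short messages to arbitrary targets in the node-capacitated global mode.

import math

class HybridNode:
    """Sketch of one node's per-round interface in the hybrid model.

    Local mode follows LOCAL: unbounded information to each neighbor of the
    input graph.  Global mode follows the node-capacitated clique: O(log n)-bit
    messages to at most O(log n) arbitrary nodes per round.
    """

    def __init__(self, node_id, local_neighbors, n):
        self.id = node_id
        self.local_neighbors = list(local_neighbors)
        self.global_cap = max(1, math.ceil(math.log2(n)))

    def prepare_round(self, local_payload, global_msgs):
        # LOCAL mode: arbitrarily large payload to every local neighbor.
        local_out = {v: local_payload for v in self.local_neighbors}
        # Global mode: keep only the first global_cap (target, short_msg) pairs.
        global_out = dict(list(global_msgs.items())[:self.global_cap])
        return local_out, global_out

# A node with 3 local neighbors in a 1024-node network may contact at most
# ceil(log2 1024) = 10 arbitrary nodes globally per round.
node = HybridNode(0, [1, 2, 3], n=1024)
local_out, global_out = node.prepare_round("entire-subgraph-payload", {5: "x", 9: "y"})
print(node.global_cap, len(local_out), len(global_out))  # 10 3 2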


Journal ArticleDOI
TL;DR: In this paper, new combinatorial insights allow the minmax regret version of the k-sink location problem on a path with uniform capacities to be solved in $O(n^3 \log n)$ time independent of k, or in $O(n k^2 \log^{k+1} n)$ time, where n is the number of vertices in the path.
Abstract: In dynamic flow networks, an edge's capacity is the amount of flow (items) that can enter it in unit time. All flow must be moved to sinks, and congestion occurs if flow has to wait at a vertex for other flow to leave. In the uniform capacity variant all edges have the same capacity. The minmax k-sink location problem is to place k sinks minimizing the maximum time before all items initially on vertices can be evacuated to a sink. The minmax regret version introduces uncertainty into the input; the amount at each source is now only specified to within a range. The problem is to find a universal solution (placement of k sinks) whose regret (difference from optimal for a given input) is minimized over all inputs consistent with the given range restrictions. The previous best algorithms for the minmax regret version of the k-sink location problem on a path with uniform capacities ran in $O(n)$ time for $k=1$, $O(n \log^4 n)$ time for $k=2$, and $\Omega(n^{k+1})$ for $k>2$. A major bottleneck to improving those solutions was that minmax regret seemed an inherently global property. This paper derives new combinatorial insights that allow decomposition into local problems. This permits designing two new algorithms. The first runs in $O(n^3 \log n)$ time independent of k and the second in $O(n k^2 \log^{k+1} n)$ time. These improve all previous algorithms for $k>1$ and, for $k>2$, are the first polynomial time algorithms known.

9 citations
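For intuition about the objective, the following brute-force Python sketch (purely illustrative, not the paper's algorithm) simulates evacuation on a small uniform-capacity path with unit-length edges and computes the regret of a single sink placement by enumerating only scenarios whose supplies sit at the endpoints of their ranges, an illustrative restriction.

from itertools import product

def evacuation_time(weights, sink, c):
    """Discrete-time toy simulation: at most c items may enter an edge per
    time step, edges have unit length, and all items move toward `sink`."""
    queue = list(weights)
    t = 0
    while any(q > 0 for i, q in enumerate(queue) if i != sink):
        t += 1
        arrivals = [0] * len(queue)
        for i in range(len(queue)):
            if i == sink or queue[i] == 0:
                continue
            step = 1 if i < sink else -1
            send = min(queue[i], c)            # edge capacity bound
            queue[i] -= send
            arrivals[i + step] += send
        for i, a in enumerate(arrivals):
            queue[i] += a
    return t

def regret_one_sink(ranges, sink, c):
    """Regret of placing one sink at `sink`: worst gap to the optimal
    placement, taken over scenarios at the extreme points of each range."""
    worst = 0
    for scenario in product(*ranges):
        cost = evacuation_time(list(scenario), sink, c)
        best = min(evacuation_time(list(scenario), s, c)
                   for s in range(len(ranges)))
        worst = max(worst, cost - best)
    return worst

# Path on 4 vertices, capacity 2, each supply somewhere in [1, 5]:
print(regret_one_sink([(1, 5)] * 4, sink=1, c=2))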


Posted Content
TL;DR: It is proved that, in general, one cannot hope to design Byzantine protocols whose communication cost is significantly smaller than the cost incurred by the Byzantine adversary.
Abstract: Motivated, in part, by the rise of permissionless systems such as Bitcoin where arbitrary nodes (whose identities are not known a priori) can join and leave at will, we extend established research in scalable Byzantine agreement to a more practical model where each node (initially) does not know the identity of other nodes. A node can send to new destinations only by sending to random (or arbitrary) nodes, or responding (if it chooses) to messages received from those destinations. We assume a synchronous and fully-connected network, with a full-information, but static Byzantine adversary. A general drawback of existing Byzantine protocols is that the communication cost incurred by the honest nodes may not be proportional to that incurred by the Byzantine nodes; in fact, it can be significantly higher. Our goal is to design Byzantine protocols for fundamental problems which are resource competitive, i.e., the number of bits sent by honest nodes is not much more than those sent by Byzantine nodes. We describe a randomized scalable algorithm to solve Byzantine agreement, leader election, and committee election in this model. Our algorithm sends an expected $O((T+n)\log n)$ bits and has latency $O(\mathrm{polylog}(n))$, where $n$ is the number of nodes, and $T$ is the minimum of $n^2$ and the number of bits sent by adversarially controlled nodes. The algorithm is resilient to $(1/4-\epsilon)n$ Byzantine nodes for any fixed $\epsilon > 0$, and succeeds with high probability. Our work can be considered as a first application of resource-competitive analysis to fundamental Byzantine problems. To complement our algorithm, we also show lower bounds for resource-competitive Byzantine agreement. We prove that, in general, one cannot hope to design Byzantine protocols whose communication cost is significantly smaller than that of the Byzantine adversary.

4 citations
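The resource-competitive guarantee quoted above can be phrased as a simple predicate; the Python sketch below (our paraphrase, with the hidden constant c chosen arbitrarily and the function name ours) just checks whether an observed honest communication cost fits the claimed $O((T+n)\log n)$ budget.

import math

def within_budget(honest_bits, byzantine_bits, n, c=1.0):
    """Resource-competitive check: honest nodes should send O((T + n) log n)
    bits in expectation, where T = min(n^2, bits sent by Byzantine nodes).
    The constant c hides the O-notation and is an assumption here."""
    T = min(n * n, byzantine_bits)
    return honest_bits <= c * (T + n) * math.log2(n)

# With n = 1024 nodes and a silent adversary (T = 0), the honest budget is
# roughly n log n = 10240 bits (up to the hidden constant).
print(within_budget(honest_bits=10_000, byzantine_bits=0, n=1024))  # True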


Posted Content
TL;DR: In this paper, the cost of distributed MST construction is studied in the setting where each edge has a latency and a capacity along with its weight, and it is shown that when edge weights correspond to the latencies, the bottleneck parameter determining the running time is the total weight of the constructed MST.
Abstract: We study the cost of distributed MST construction in the setting where each edge has a latency and a capacity, along with the weight. Edge latencies capture the delay on the links of the communication network, while capacity captures their throughput (in this case, the rate at which messages can be sent). Depending on how the edge latencies relate to the edge weights, we provide several tight bounds on the time and messages required to construct an MST. When edge weights exactly correspond with the latencies, we show that, perhaps interestingly, the bottleneck parameter in determining the running time of an algorithm is the total weight $W$ of the MST (rather than the total number of nodes $n$, as in the standard CONGEST model). That is, we show a tight bound of $\tilde{\Theta}(D + \sqrt{W/c})$ rounds, where $D$ refers to the latency diameter of the graph, $W$ refers to the total weight of the constructed MST and edges have capacity $c$. The proposed algorithm sends $\tilde{O}(m+W)$ messages, where $m$, the total number of edges in the network graph under consideration, is a known lower bound on message complexity for MST construction. We also show that $\Omega(W)$ is a lower bound for fast MST constructions. When the edge latencies and the corresponding edge weights are unrelated, and either can take arbitrary values, we show that (unlike the sub-linear time algorithms in the standard CONGEST model, on small diameter graphs), the best time complexity that can be achieved is $\tilde{\Theta}(D+n/c)$. However, if we restrict all edges to have equal latency $\ell$ and capacity $c$ while having possibly different weights (weights could deviate arbitrarily from $\ell$), we give an algorithm that constructs an MST in $\tilde{O}(D + \sqrt{n\ell/c})$ time. In each case, we provide nearly matching upper and lower bounds.

1 citation
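To see which graph parameters enter the $\tilde{\Theta}(D + \sqrt{W/c})$ bound in the regime where weights equal latencies, here is a small Python sketch (using networkx as a convenience; polylogarithmic factors and constants are ignored, and this only computes the bound's parameters, not the distributed algorithm itself).

import math
import networkx as nx

def mst_bound_parameters(G, capacity=1):
    """Compute W (total MST weight), D (latency diameter; weights double as
    latencies in this regime) and the resulting D + sqrt(W/c) term."""
    mst = nx.minimum_spanning_tree(G, weight="weight")
    W = sum(d["weight"] for _, _, d in mst.edges(data=True))
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    D = max(max(row.values()) for row in dist.values())
    return W, D, D + math.sqrt(W / capacity)

# Tiny example graph: MST weight W = 6, latency diameter D = 6 (path 0-1-2-3).
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2), (1, 2, 3), (0, 2, 10), (2, 3, 1)])
print(mst_bound_parameters(G))  # (6, 6, 6 + sqrt(6))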


Posted Content
TL;DR: An overlay design called Sparse Robust Addressable Network (Spartan) that can tolerate heavy adversarial churn and can be built efficiently in a fully distributed manner within $O(\log n)$ rounds is presented.
Abstract: A Peer-to-Peer (P2P) network is a dynamic collection of nodes that connect with each other via virtual overlay links built upon an underlying network (usually, the Internet). P2P networks are highly dynamic and can experience very heavy churn, i.e., a large number of nodes join/leave the network continuously. Thus, building and maintaining a stable overlay network is an important problem that has been studied extensively for two decades. In this paper, we present our peer-to-peer overlay network called Sparse Robust Addressable Network (Spartan). Spartan can be quickly and efficiently built in a fully distributed fashion within $O(\log n)$ rounds. Furthermore, the Spartan overlay structure can be maintained, again, in a fully distributed manner despite adversarially controlled churn (i.e., nodes joining and leaving) and significant variation in the number of nodes. Moreover, new nodes can join a committee within $O(1)$ rounds and leaving nodes can leave without any notice. The number of nodes in the network lies in $[n, fn]$ for any fixed $f\ge 1$. Up to $\epsilon n$ nodes (for some small but fixed $\epsilon > 0$) can be adversarially added/deleted within any period of $P$ rounds for some $P \in O(\log \log n)$. Despite such uncertainty in the network, Spartan maintains $\Theta(n/\log n)$ committees that are stable and addressable collections of $\Theta(\log n)$ nodes each for $O(\mathrm{polylog}(n))$ rounds with high probability. Spartan's committees are also capable of performing sustained computation and passing messages between each other. Thus, any protocol designed for static networks can be simulated on Spartan with minimal overhead. This makes Spartan an ideal platform for developing applications. We experimentally show that Spartan will remain robust as long as each committee, on average, contains 24 nodes for networks of size up to $10240$.
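A quick back-of-the-envelope check of the committee arithmetic (the constant 24 comes from the experimental threshold quoted above; everything else, including the function name, is our own illustration):

import math

def committee_estimate(n, avg_committee_size=24):
    """Rough committee accounting for a Spartan-style overlay: Theta(n / log n)
    committees of Theta(log n) nodes each; here we simply divide n by the
    reported robust average committee size."""
    committees = max(1, round(n / avg_committee_size))
    return {"n": n,
            "committees": committees,
            "avg_committee_size": n / committees,
            "log2_n": round(math.log2(n), 1)}

print(committee_estimate(10240))
# ~427 committees of ~24 nodes each; log2(10240) is about 13.3, so 24 is within
# a small constant factor of Theta(log n).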

Posted Content
TL;DR: In this paper, the authors revisited the widely researched gathering problem for two robots in a scenario which allows randomization in the asynchronous scheduling model, classified the problems based on the capability of the adversary to control parameters such as wait time, computation delay, and the speed of the robots, and checked the feasibility of gathering in terms of adversarial knowledge and capabilities.
Abstract: This paper revisits the widely researched gathering problem for two robots in a scenario which allows randomization in the asynchronous scheduling model. The scheduler is considered to be the adversary which determines the activation schedule of the robots. The adversary comes in two flavors, namely, oblivious and adaptive, based on its knowledge of the outcome of random bits. The robots follow a wait-look-compute-move cycle. In this paper, we classify the problems based on the capability of the adversary to control parameters such as the wait time, computation delay, and speed of the robots, and we check the feasibility of gathering in terms of adversarial knowledge and capabilities. The main contributions include the possibility of gathering against an oblivious adversary when (i) the computation delay is zero, or (ii) the sum of the wait time and computation delay exceeds some positive value. We complement these possibilities with an impossibility: we show that it is impossible for the robots to gather against an adaptive adversary with non-negative wait time and non-negative computation delay. Finally, we also extend our algorithm to multiple robots with merging.
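A toy Python simulation (ours; the move rule is only a placeholder and not the paper's strategy) showing where randomization and the oblivious activation schedule enter the look-compute-move cycle:

import random

def simulate_gathering(rounds=100, eps=1e-9):
    """Two point robots on a line under an oblivious scheduler: the activation
    order is fixed up front, before any coin flips.  Move rule (placeholder,
    not the paper's algorithm): on activation, jump to the other robot's
    observed position with probability 1/2, otherwise stay put."""
    pos = [0.0, 1.0]
    schedule = [random.randrange(2) for _ in range(rounds)]  # oblivious: fixed now
    for r in schedule:
        snapshot = pos[1 - r]              # look
        if random.random() < 0.5:          # compute (randomized choice)
            pos[r] = snapshot              # move
        if abs(pos[0] - pos[1]) < eps:
            return True                    # gathered
    return False

print(simulate_gathering())  # True except with probability 2^(-rounds)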