
Showing papers by "John Augustine published in 2015"


Proceedings ArticleDOI
17 Oct 2015
TL;DR: The main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant degree graph with high expansion even under continuous high adversarial churn.
Abstract: Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. This protocol is a fundamental ingredient in the design of efficient fully-distributed algorithms for core distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
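As a concrete illustration of the churn model (though not of the protocol itself, which is fully distributed and faces an adversarial rather than random churn schedule), here is a minimal centralized simulation sketch; all parameter values and the random attachment rule are our own assumptions for this sketch.

```python
import random

# Toy centralized churn simulation (illustrative only: the paper's protocol is
# fully distributed, and its adversary schedules churn with full knowledge of
# the network rather than at random). Each round CHURN nodes leave and CHURN
# fresh nodes join, each attaching to D random survivors; we then report the
# maximum degree. All parameter values are assumptions made for this sketch.
N, D, CHURN, ROUNDS = 1000, 8, 50, 100

adj = {v: set() for v in range(N)}
for v in range(N):                        # bootstrap a random D-regular-ish graph
    while len(adj[v]) < D:
        u = random.randrange(N)
        if u != v:
            adj[v].add(u); adj[u].add(v)

next_id = N
for _ in range(ROUNDS):
    leaving = set(random.sample(sorted(adj), CHURN))     # departures
    for v in leaving:
        del adj[v]
    for v in adj:
        adj[v] -= leaving
    for _ in range(CHURN):                               # arrivals
        v, next_id = next_id, next_id + 1
        nbrs = random.sample(sorted(adj), D)
        adj[v] = set(nbrs)
        for u in nbrs:
            adj[u].add(v)

print("nodes:", len(adj), "max degree:", max(len(nb) for nb in adj.values()))
```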

47 citations


Book ChapterDOI
07 Oct 2015
TL;DR: This work studies the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn, and proposes a solution that is scalable, fully-distributed, lightweight, and simple to implement.
Abstract: We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn, i.e., nodes can join and leave the network continuously over time. We assume the full information model, where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including random choices made by all the nodes), have unbounded computational power, and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may also rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an O(log^3 n) round algorithm that achieves Byzantine leader election in the presence of up to O(n^(1/2-ε)) Byzantine nodes (for a small constant ε > 0) and a churn of up to O(√n/polylog(n)) nodes per round, where n is the stable network size. The algorithm elects a leader with probability at least 1 - n^(-Ω(1)) and guarantees that it is an honest node with probability at least 1 - n^(-Ω(1)); assuming the algorithm succeeds, the leader's identity will be known to a 1 - o(1) fraction of the honest nodes. Our algorithm is fully-distributed, lightweight, and simple to implement. It is also scalable, as it runs in time polylogarithmic in n and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an almost-everywhere public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.
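As a rough illustration of what a constant-bias public coin means, the sketch below simulates the classical majority-of-bits coin; this construction is our choice for illustration, and the paper's dynamic-network implementation is necessarily more involved.

```python
import random

# Classical majority-of-bits shared coin (illustrative; not the paper's
# dynamic-network construction). Honest nodes broadcast fair random bits; the
# byz adversarial voters see them first and always vote 0, trying to fix the
# coin's outcome.
def majority_coin_bias(n, byz, trials=20000):
    ones = 0
    for _ in range(trials):
        honest_ones = sum(random.getrandbits(1) for _ in range(n - byz))
        if honest_ones > n // 2:   # coin = 1 iff a majority of all n votes is 1
            ones += 1
    return ones / trials

# With byz = Theta(sqrt(n)) adversarial voters, the coin retains constant bias:
print(majority_coin_bias(n=400, byz=20))   # roughly 0.14: bounded away from 0 and 1
```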

37 citations


Journal ArticleDOI
TL;DR: An O(n log n) time algorithm for the minimax regret 1-sink location problem in dynamic path networks with uniform capacity, where n is the number of vertices in the network.
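For context, the minimax regret objective can be stated as follows (a standard formulation; the notation below is ours, not necessarily the paper's): the chosen sink location must minimize, over all weight scenarios, the worst-case gap between its evacuation time and the best evacuation time achievable if the scenario were known in advance.

```latex
% Standard minimax-regret formulation (notation ours). S is the set of weight
% scenarios, and \Theta(x, s) is the evacuation completion time with the sink
% at location x on the path under scenario s.
\[
  x^{*} \;=\; \operatorname*{arg\,min}_{x}\; \max_{s \in S}\,
      \bigl( \Theta(x, s) - \min_{y} \Theta(y, s) \bigr)
\]
```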

32 citations


Proceedings ArticleDOI
09 Mar 2015
TL;DR: It is demonstrated that disproportionate gains are possible through a simple device for injecting inexactness or approximation into the hardware architecture of a computing system with a general-purpose template, including a complete memory hierarchy, with the resulting energy savings shown in the context of large and challenging applications.
Abstract: In this paper, we demonstrate that disproportionate gains are possible through a simple device for injecting inexactness or approximation into the hardware architecture of a computing system with a general-purpose template, including a complete memory hierarchy. The focus of the study is on the energy savings possible through this approach in the context of large and challenging applications. We choose two such applications from different ends of the computing spectrum---the IGCM model for weather and climate modeling, which embodies significant features of a high-performance computing workload, and the ubiquitous PageRank algorithm used in Internet search. In both cases, we are able to show in the affirmative that an inexact system outperforms its exact counterpart in terms of its efficiency, quantified through operations per virtual Joule (OPVJ)---a relative metric that is not tied to a particular hardware technology. As one example, the IGCM application can achieve energy savings of (almost) a factor of 3 through inexactness without noticeably compromising the quality of the forecast, as quantified through the forecast error metric. As another finding, we show that in the case of PageRank, an inexact system is able to outperform its exact counterpart by close to a factor of 1.5 using the OPVJ metric.
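To make the OPVJ comparison concrete, here is a back-of-the-envelope sketch; the absolute numbers are invented for illustration, and only the resulting ratio mirrors the kind of factor the paper reports.

```python
# Hypothetical numbers purely to illustrate operations-per-virtual-Joule (OPVJ);
# the paper reports relative gains (about 3x for IGCM, about 1.5x for PageRank),
# not these absolute operation counts or energies.
exact_run   = {"ops": 1.0e12, "virtual_joules": 300.0}
inexact_run = {"ops": 1.0e12, "virtual_joules": 100.0}

def opvj(run):
    return run["ops"] / run["virtual_joules"]

print(f"OPVJ gain: {opvj(inexact_run) / opvj(exact_run):.1f}x")   # -> 3.0x
```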

30 citations


Journal ArticleDOI
TL;DR: This paper studies the dynamics of coalition formation under bounded rationality, considers settings whereby each team’s profit is given by a submodular function and proposes three profit-sharing schemes, each of which is based on the concept of marginal utility.
Abstract: An important task in the analysis of multiagent systems is to understand how groups of selfish players can form coalitions, i.e., work together in teams. In this paper, we study the dynamics of coalition formation under bounded rationality. We consider settings whereby each team’s profit is given by a submodular function and propose three profit-sharing schemes, each of which is based on the concept of marginal utility. The agents are assumed to be myopic, i.e., they keep changing teams as long as they can increase their payoff by doing so. We study the properties (such as closeness to Nash equilibrium or total profit) of the states that result after a polynomial number of such moves, and prove bounds on the price of anarchy and the price of stability of the corresponding games.
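A minimal sketch of such myopic dynamics under marginal-utility payoffs follows; the toy instance, the coverage-style profit function, and the move order are our illustrative assumptions, and the paper's three profit-sharing schemes and convergence analysis are more refined.

```python
# Toy myopic coalition-formation dynamics (our illustrative instance, not the
# paper's exact schemes). Team profit is a coverage function, a canonical
# submodular function; each agent's payoff is its marginal utility to its
# current team, and agents greedily switch teams while that improves payoff.
skills  = {1: {"a", "b"}, 2: {"b"}, 3: {"c"}, 4: {"a", "c"}}  # hypothetical agents
team_of = {1: 0, 2: 0, 3: 1, 4: 1}                            # initial partition

def profit(members):                      # coverage: number of distinct skills
    return len(set().union(*(skills[i] for i in members))) if members else 0

def payoff(i, team):                      # marginal utility of i within `team`
    members = {j for j, t in team_of.items() if t == team} | {i}
    return profit(members) - profit(members - {i})

for _ in range(100):                      # myopic better-response moves, capped
    moved = False
    for i in skills:
        best = max({0, 1}, key=lambda t: payoff(i, t))
        if payoff(i, best) > payoff(i, team_of[i]):
            team_of[i], moved = best, True
    if not moved:                         # no agent can improve: stable state
        break
print(team_of)
```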

22 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a rigorous framework for modeling churn in a dynamic distributed network and provide a fast algorithm to reach agreement even with εn nodes churning per time step.

7 citations


Proceedings ArticleDOI
25 May 2015
TL;DR: A randomized algorithm is presented that can, in O(log n) rounds, detect and reach consensus about the health of the leader (i.e., whether it is able to maintain good communication with the rest of the network); in the event that the network decides that the leader's ability to communicate is unhealthy, a new leader is elected in a further O(log^2 n) rounds.
Abstract: We investigate the problem of electing a leader in a sparse but well-connected synchronous dynamic network in which up to a fraction of the nodes, chosen adversarially, can leave/join the network per time step. At this churn rate, all nodes in the network can be replaced by new nodes in a constant number of rounds. Moreover, the adversary can shield a fraction of the nodes (which may include the leader) by repeatedly churning their neighbourhood and thus hinder their communication with the rest of the network. However, empirical studies in peer-to-peer networks have shown that a significant fraction of the nodes are usually stable and well connected. It is, therefore, natural to take advantage of such stability and well-connectedness to establish a leader that can maintain good communication with the rest of the nodes. Since the dynamics could change over time, it is also essential to elect a new leader whenever the current leader has either left the network or is no longer well connected with the rest of the nodes. In such re-elections, care must be taken to avoid premature and spurious leader elections that result in more than one leader being present in the network at the same time. We assume a broadcast-based communication model in which each node can send up to O(log^3 n) bits per round and is unaware of its receivers a priori. We present a randomized algorithm that can, in O(log n) rounds, detect and reach consensus about the health of the leader (i.e., whether it is able to maintain good communication with the rest of the network). In the event that the network decides that the leader's ability to communicate is unhealthy, a new leader is elected in a further O(log^2 n) rounds. Our running times hold with high probability, and, furthermore, we are guaranteed with high probability that there is at most one leader at any time.
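A minimal sketch of a heartbeat-style health check on a static random graph follows; this is our simplification: the paper's network is dynamic, the votes there feed a consensus step, and the 2/3 quorum below is an assumed threshold rather than the paper's.

```python
import math, random

# Toy leader health check on a static random graph (illustrative; the paper
# handles a dynamic topology and reaches consensus on the verdict). The leader
# floods a heartbeat for O(log n) rounds; nodes that hear it vote "healthy".
n, deg = 1024, 4
adj = {v: set() for v in range(n)}
for v in range(n):                                   # sparse random graph
    while len(adj[v]) < deg:
        u = random.randrange(n)
        if u != v:
            adj[v].add(u); adj[u].add(v)

def heard_heartbeat(leader, rounds):
    seen, frontier = {leader}, [leader]
    for _ in range(rounds):
        nxt = []
        for v in frontier:
            for u in adj[v]:
                if u not in seen:
                    seen.add(u); nxt.append(u)
        frontier = nxt
    return seen

votes = heard_heartbeat(leader=0, rounds=2 * int(math.log2(n)))
print("healthy leader:", len(votes) >= 2 * n // 3)   # assumed 2/3 quorum
```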

6 citations


Proceedings ArticleDOI
09 Mar 2015
TL;DR: Surprisingly, IMAD allowed us to design entirely error-free algorithms while achieving energy gain factors of 1.5 and 5 in the context of sorting and string matching when compared to their traditional (textbook) algorithms.
Abstract: It is increasingly accepted that energy savings can be achieved by trading the accuracy of a computing system for energy gains---quite often significantly. This approach is referred to as inexact or approximate computing. Given that a significant portion of the energy in a modern general-purpose processor is spent on moving data to and from storage, and that data movement contributes increasingly to activity during the execution of applications, it is important to develop techniques and methodologies for inexact computing in this context. To accomplish this to the fullest, it is important to start with algorithmic specifications and alter their intrinsic design to take advantage of inexactness. This calls for a new approach to inexact memory-aware algorithm design (IMAD) or co-design. In this paper, we provide the theoretical foundations, which include novel models as well as technical results in the form of upper and lower bounds for IMAD, in the context of universally understood and canonical problems: variations of sorting, and string matching. Surprisingly, IMAD allowed us to design entirely error-free algorithms while achieving energy gain factors of 1.5 and 5 in the context of sorting and string matching, respectively, when compared to their traditional (textbook) counterparts. IMAD is also amenable to theoretical analysis, and we present several asymptotic bounds on energy gains.
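The kind of accounting behind such gain factors can be sketched as follows; the two-tier energy model, the per-access costs, and the 80/20 access split below are our assumptions for illustration, not the paper's model or its algorithms.

```python
# Back-of-the-envelope two-tier memory energy model (our toy assumptions, not
# the paper's model). Accesses to exact memory cost e_hi; accesses to cheaper,
# inexact memory cost e_lo. An IMAD-style algorithm keeps a small amount of
# critical bookkeeping in exact memory and the bulk of its data in inexact memory.
e_hi, e_lo = 1.0, 0.1        # assumed energy per access (arbitrary units)
accesses   = 1_000_000       # total memory accesses made by the algorithm

textbook = accesses * e_hi                                 # all-exact baseline
imad     = 0.2 * accesses * e_hi + 0.8 * accesses * e_lo   # assumed 80/20 split
print(f"energy gain: {textbook / imad:.2f}x")              # -> 3.57x here
```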

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where the connectivity requirements are posed by selfish users who have agreed to share the cost of the network to be established according to a well-defined rule.
Abstract: The efficient design of networks has been an important engineering task that involves challenging combinatorial optimization problems. Typically, a network designer has to select among several alternatives which links to establish so that the resulting network satisfies a given set of connectivity requirements and the cost of establishing the network links is as low as possible. The Minimum Spanning Tree problem, which is well-understood, is a nice example. In this paper, we consider the natural scenario in which the connectivity requirements are posed by selfish users who have agreed to share the cost of the network to be established according to a well-defined rule. The design proposed by the network designer should now be consistent not only with the connectivity requirements but also with the selfishness of the users. Essentially, the users are players in a so-called network design game, and the network designer has to propose a design that is an equilibrium for this game. As is usually the case when selfishness comes into play, such equilibria may be suboptimal. In this paper, we consider the following question: can the network designer enforce particular designs as equilibria, or guarantee that efficient designs are consistent with users' selfishness, by appropriately subsidizing some of the network links? In an attempt to understand this question, we formulate corresponding optimization problems and present positive and negative results.
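A tiny worked instance of that question follows; the game, costs, and subsidy value are hypothetical, chosen only to illustrate the phenomenon: with fair cost sharing, the cheapest design need not be an equilibrium, but subsidizing one link can make it one.

```python
from itertools import product

# Hypothetical 2-player fair cost-sharing instance (ours, not from the paper).
# A shared edge A costs 4, split equally among its users; player 0 also owns a
# private edge of cost 1, player 1 one of cost 3.5. The cheapest design (both
# on A, total 4) is not an equilibrium until the designer subsidizes edge A.
PRIVATE = {0: 1.0, 1: 3.5}

def player_cost(profile, i, subsidy):
    if profile[i] == "A":
        return (4.0 - subsidy) / profile.count("A")
    return PRIVATE[i]

def equilibria(subsidy):
    stable = []
    for prof in product(("A", "own"), repeat=2):
        if all(player_cost(prof, i, subsidy) <= 1e-9 +
               min(player_cost(prof[:i] + (d,) + prof[i + 1:], i, subsidy)
                   for d in ("A", "own"))
               for i in range(2)):
            stable.append(prof)
    return stable

print(equilibria(subsidy=0.0))   # [('own', 'own')] only: total cost 4.5
print(equilibria(subsidy=2.0))   # now includes ('A', 'A'): the efficient design
```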

3 citations


Proceedings ArticleDOI
04 Oct 2015
TL;DR: This paper presents a technique called inexact computing or approximate computing, which aims to push the envelope by achieving energy gains at the cost of an increased inexactness or error in the system.
Abstract: In the last two decades, energy has become a crucial resource whose consumption must be minimized while designing computing systems. This has affected every aspect of computing, ranging from large-scale supercomputers and data centers to small-scale (but high-volume) embedded systems comprising filters, digital signal processors (DSPs), and accelerators. Energy constraints play a particularly crucial role in battery-operated devices (cell phones, wearables) and other energy-constrained systems like unmanned aerial vehicles and sensor networks. Against this backdrop of efforts to improve energy efficiency, a very interesting technique aimed at trading error for energy gains emerged over a decade ago. Typical computing systems are engineered to be exact, and energy gains have traditionally been achieved by compromising soft constraints like running time. This newer technique, called inexact computing or approximate computing, aims to push the envelope and achieve energy gains at the cost of increased inexactness or error in the system. This radical shift has often yielded surprisingly significant energy gains with little to no side effects, because many algorithms and applications (like DSP applications, big data applications, and large-scale numerical models) are inherently tolerant to error.

1 citation

