Author

Baruch Awerbuch

Bio: Baruch Awerbuch is an academic researcher at Johns Hopkins University. The author has contributed to research on topics including distributed algorithms and competitive analysis, has an h-index of 68, and has co-authored 235 publications receiving 15,895 citations. Previous affiliations of Baruch Awerbuch include Tel Aviv University and the Technion – Israel Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: A new simulation technique, referred to as a synchronizer, provides a simple methodology for designing efficient distributed algorithms in asynchronous networks; its trade-off between communication and time complexities is proved to be within a constant factor of the lower bound.
Abstract: The problem of simulating a synchronous network by an asynchronous network is investigated. A new simulation technique, referred to as a synchronizer, which is a new, simple methodology for designing efficient distributed algorithms in asynchronous networks, is proposed. The synchronizer exhibits a trade-off between its communication and time complexities, which is proved to be within a constant factor of the lower bound.

762 citations
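As a rough illustration of the simulation idea summarized above, the sketch below simulates the simplest "alpha"-style synchronizer on a small graph: a node acknowledges every round message it receives, declares itself safe once all of its own round messages have been acknowledged, and generates the next pulse only after every neighbour is safe. The event loop, message format, and example graph are illustrative assumptions, not the paper's construction or its complexity analysis.

```python
import collections
import random


def run_alpha_synchronizer(adj, num_rounds=3, seed=0):
    """adj: dict node -> list of neighbours (undirected graph).
    Returns the order in which (node, round) pulses were generated."""
    rng = random.Random(seed)
    inbox = collections.defaultdict(list)                    # node -> undelivered messages
    rounds = {v: 0 for v in adj}                             # current pulse of each node
    acks = {v: collections.Counter() for v in adj}           # round -> acks received
    safe = {v: collections.defaultdict(set) for v in adj}    # round -> neighbours known safe

    def send(u, v, kind, rnd):
        inbox[v].append((u, kind, rnd))      # asynchronous: delivered later, in any order

    def start_round(v, rnd):
        for w in adj[v]:                     # the round's payload messages, one per neighbour
            send(v, w, "msg", rnd)

    for v in adj:
        start_round(v, 0)

    pulses = []
    while any(rounds[v] < num_rounds for v in adj):
        pending = [v for v in adj if inbox[v]]
        if not pending:
            break
        v = rng.choice(pending)                                   # deliver some pending message
        u, kind, rnd = inbox[v].pop(rng.randrange(len(inbox[v])))
        if kind == "msg":
            send(v, u, "ack", rnd)           # acknowledge every payload message
        elif kind == "ack":
            acks[v][rnd] += 1
            if acks[v][rnd] == len(adj[v]):  # all of v's round-rnd messages have arrived
                for w in adj[v]:
                    send(v, w, "safe", rnd)  # ... so v is "safe" for that round
        elif kind == "safe":
            safe[v][rnd].add(u)
        # Generate the next pulse once every neighbour is safe for the current
        # round: then every message sent to v in that round has been delivered.
        while rounds[v] < num_rounds and safe[v][rounds[v]] == set(adj[v]):
            pulses.append((v, rounds[v]))
            rounds[v] += 1
            if rounds[v] < num_rounds:
                start_round(v, rounds[v])
    return pulses


if __name__ == "__main__":
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(run_alpha_synchronizer(ring, num_rounds=2))
```

Running it on the four-node ring prints the pulse order; each node starts round r + 1 only after all round-r messages addressed to it have provably arrived.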

Proceedings ArticleDOI
21 Oct 1985
TL;DR: Verifiable secret sharing as discussed by the authors is a cryptographic protocol that allows one to break a secret in n pieces and publicly distribute them to n people so that the secret is reconstructible given only sufficiently many pieces.
Abstract: Verifiable secret sharing is a cryptographic protocol that allows one to break a secret in n pieces and publicly distribute them to n people so that the secret is reconstructible given only sufficiently many pieces. The novelty is that everyone can verify that all received a "valid" piece of the secret without having any idea of what the secret is. One application of this tool is the simulation of simultaneous-broadcast networks on semi-synchronous broadcast networks.

760 citations
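For context on the primitive described above, here is a toy sketch of secret sharing with publicly checkable shares. It uses Shamir sharing plus Feldman-style commitments, a later and different construction chosen purely to make the "everyone can verify their piece" property concrete; it is not the protocol of this paper, and the tiny parameters (p = 23, q = 11, g = 2) are insecure toy values.

```python
P, Q, G = 23, 11, 2          # g = 2 generates the order-q subgroup of Z_p*


def share(secret, n, t, coeffs):
    """Split `secret` into n shares, any t of which reconstruct it.
    `coeffs` are the t-1 polynomial coefficients (passed in to keep the sketch
    deterministic; a real dealer would choose them at random)."""
    poly = [secret % Q] + [c % Q for c in coeffs]              # f(x) of degree t-1
    shares = [(i, sum(a * pow(i, j, Q) for j, a in enumerate(poly)) % Q)
              for i in range(1, n + 1)]
    commitments = [pow(G, a, P) for a in poly]                 # published by the dealer
    return shares, commitments


def verify(i, s_i, commitments):
    """Anyone can check share (i, s_i) against the public commitments:
    g^{s_i} must equal the product of C_j^{i^j}."""
    lhs = pow(G, s_i, P)
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = rhs * pow(c, i ** j, P) % P
    return lhs == rhs


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over Z_q."""
    secret = 0
    for i, s_i in shares:
        num, den = 1, 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % Q
                den = den * (i - j) % Q
        secret = (secret + s_i * num * pow(den, Q - 2, Q)) % Q
    return secret


if __name__ == "__main__":
    shares, comms = share(secret=7, n=5, t=3, coeffs=[4, 9])
    assert all(verify(i, s, comms) for i, s in shares)   # every piece is publicly checkable
    assert reconstruct(shares[:3]) == 7                  # any 3 of the 5 pieces recover the secret
```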

Proceedings Article
01 Jan 1985
TL;DR: Verifiable secret sharing is a cryptographic protocol that allows one to break a secret in n pieces and publicly distribute them to n people so that the secret is reconstructible given only sufficiently many pieces.

710 citations

Proceedings ArticleDOI
28 Sep 2002
TL;DR: This work proposes an on-demand routing protocol for ad hoc wireless networks that provides resilience to byzantine failures caused by individual or colluding nodes and develops an adaptive probing technique that detects a malicious link after log n faults have occurred.
Abstract: An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links, where nodes not in direct range can communicate via intermediate nodes. A common technique used in routing protocols for ad hoc wireless networks is to establish the routing paths on demand, as opposed to continually maintaining a complete routing table. A significant concern in routing is the ability to function in the presence of Byzantine failures, which include nodes that drop, modify, or mis-route packets in an attempt to disrupt the routing service. We propose an on-demand routing protocol for ad hoc wireless networks that provides resilience to Byzantine failures caused by individual or colluding nodes. Our adaptive probing technique detects a malicious link after log n faults have occurred, where n is the length of the path. These links are then avoided by multiplicatively increasing their weights and by using an on-demand route discovery protocol that finds a least-weight path to the destination.

495 citations
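The "log n faults" claim above comes from a binary-search style of fault localization along the path. The toy sketch below shows only that narrowing-down idea, with a single silently dropping link and an idealized probe acknowledgment; the paper's actual protocol uses authenticated probes piggybacked on data traffic and per-link weight updates, which are not modeled here.

```python
def locate_faulty_link(path_len, bad_link):
    """Nodes 0..path_len along the route; link i joins node i-1 and node i.
    Link `bad_link` silently drops every packet.  Returns the link the probes
    identify and how many packets were lost while narrowing it down."""

    def probe_acked(node):
        # An intermediate probe at `node` returns an ack iff the packet got
        # that far, i.e. every link up to and including `node` is good.
        return bad_link > node

    lost = 0
    lo, hi = 0, path_len                 # invariant: the faulty link lies in (lo, hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        lost += 1                        # this packet never reaches the destination
        if probe_acked(mid):
            lo = mid                     # the fault is beyond the probe point
        else:
            hi = mid                     # the fault is at or before the probe point
    return hi, lost


if __name__ == "__main__":
    for bad in range(1, 9):
        found, lost = locate_faulty_link(path_len=8, bad_link=bad)
        assert found == bad and lost <= 3        # about log2(8) = 3 lost packets
    print("faulty link located after O(log n) losses in every case")
```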

Proceedings ArticleDOI
22 May 2005
TL;DR: This paper assumes the more realistic unsplittable model and investigates the "price of anarchy", or deterioration of network performance measured in total traffic latency under selfish user behavior, showing that for linear edge latency functions the price of anarchy is exactly 2.618 for weighted demand and exactly 2.5 for unweighted demand.
Abstract: The essence of the routing problem in real networks is that the traffic demand from a source to destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete version of the problem is more realistic yet is more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable. In this paper, we assume this more realistic unsplittable model, and investigate the "price of anarchy", or deterioration of network performance measured in total traffic latency under the selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly 2.618 for weighted demand and exactly 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic "congestion games", where paths are replaced by general subsets. We also show that for polynomials of degree d edge latency functions the price of anarchy is d^Θ(d). Our results hold also for mixed strategies. Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4/3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). Note that under the assumption of negligible traffic pure and mixed strategies are equivalent and also splittable and unsplittable models are equivalent.

398 citations
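For readers unfamiliar with the quantity being bounded, a standard way to write the price of anarchy for unsplittable (atomic) routing is sketched below; the notation is conventional rather than copied from the paper.

```latex
% Player i routes its whole demand w_i on a single path P_i; f_e(s) is the
% total weight of the players whose chosen path uses edge e under outcome s.
\[
  C(s) \;=\; \sum_i w_i \sum_{e \in P_i} \ell_e\bigl(f_e(s)\bigr),
  \qquad
  \mathrm{PoA} \;=\; \max_{s\ \text{a Nash equilibrium}} \frac{C(s)}{\min_{s'} C(s')} .
\]
% The bounds quoted in the abstract: with linear latencies \ell_e,
% PoA = (3+\sqrt{5})/2 \approx 2.618 for weighted demand and 5/2 for
% unweighted demand; with polynomial latencies of degree d, PoA = d^{\Theta(d)}.
```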


Cited by
Journal ArticleDOI
TL;DR: The algorithm can be used as a building block for solving other distributed graph problems, and can be slightly modified to run on a strongly connected digraph to generate an Euler trail, if one exists, or to report that none exists.

13,828 citations
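The TL;DR above mentions producing an Euler trail in a strongly connected digraph or reporting that none exists. As a point of reference only, here is the standard centralized Hierholzer construction for that subproblem; it is not the distributed algorithm of the cited paper.

```python
from collections import defaultdict


def euler_trail(edges):
    """edges: list of directed (u, v) pairs. Returns a vertex sequence that
    uses every edge exactly once, or None if no Euler trail exists."""
    out = defaultdict(list)
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        out[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1

    # Degree conditions: at most one start vertex (out - in = 1), at most one
    # end vertex (in - out = 1), and every other vertex balanced.
    nodes = set(out_deg) | set(in_deg)
    starts = [v for v in nodes if out_deg[v] - in_deg[v] == 1]
    ends = [v for v in nodes if in_deg[v] - out_deg[v] == 1]
    others_balanced = all(out_deg[v] == in_deg[v]
                          for v in nodes if v not in starts and v not in ends)
    if len(starts) > 1 or len(ends) > 1 or len(starts) != len(ends) or not others_balanced:
        return None
    start = starts[0] if starts else edges[0][0]

    # Hierholzer: walk until stuck, back up, and splice detours off the stack.
    stack, trail = [start], []
    while stack:
        v = stack[-1]
        if out[v]:
            stack.append(out[v].pop())
        else:
            trail.append(stack.pop())
    trail.reverse()
    # A short trail means some edges were unreachable (graph not connected).
    return trail if len(trail) == len(edges) + 1 else None


if __name__ == "__main__":
    print(euler_trail([(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)]))  # e.g. [0, 3, 0, 1, 2, 0]
```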

Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Book
08 Jul 2008
TL;DR: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems and focuses on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis.
Abstract: An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.

7,452 citations

Book
01 Jan 1996
TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms-to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers. Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others. The material is organized according to the system model: first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference. The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Table of Contents:
1 Introduction
2 Modelling I: Synchronous Network Model
3 Leader Election in a Synchronous Ring
4 Algorithms in General Synchronous Networks
5 Distributed Consensus with Link Failures
6 Distributed Consensus with Process Failures
7 More Consensus Problems
8 Modelling II: Asynchronous System Model
9 Modelling III: Asynchronous Shared Memory Model
10 Mutual Exclusion
11 Resource Allocation
12 Consensus
13 Atomic Objects
14 Modelling IV: Asynchronous Network Model
15 Basic Asynchronous Network Algorithms
16 Synchronizers
17 Shared Memory versus Networks
18 Logical Time
19 Global Snapshots and Stable Properties
20 Network Resource Allocation
21 Asynchronous Networks with Process Failures
22 Data Link Protocols
23 Partially Synchronous System Models
24 Mutual Exclusion with Partial Synchrony
25 Consensus with Partial Synchrony

4,340 citations

Proceedings ArticleDOI
14 Sep 2003
TL;DR: Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance.
Abstract: This paper presents the expected transmission count metric (ETX), which finds high-throughput paths on multi-hop wireless networks. ETX minimizes the expected total number of packet transmissions (including retransmissions) required to successfully deliver a packet to the ultimate destination. The ETX metric incorporates the effects of link loss ratios, asymmetry in the loss ratios between the two directions of each link, and interference among the successive links of a path. In contrast, the minimum hop-count metric chooses arbitrarily among the different paths of the same minimum length, regardless of the often large differences in throughput among those paths, and ignoring the possibility that a longer path might offer higher throughput. This paper describes the design and implementation of ETX as a metric for the DSDV and DSR routing protocols, as well as modifications to DSDV and DSR which allow them to use ETX. Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance. For long paths the throughput improvement is often a factor of two or more, suggesting that ETX will become more useful as networks grow larger and paths become longer.

3,656 citations
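The metric itself is simple to state: a link's ETX is 1/(d_f × d_r), where d_f and d_r are the measured forward and reverse delivery ratios, and a path's metric is the sum of its link ETX values. The small sketch below computes it for made-up delivery ratios to show why minimum hop count can pick the wrong path.

```python
def link_etx(d_f, d_r):
    """Expected transmissions (including retransmissions) needed to deliver a
    packet over one link and get its ACK back: 1 / (d_f * d_r)."""
    return 1.0 / (d_f * d_r)


def path_etx(links):
    """links: list of (forward, reverse) delivery-ratio pairs along the path."""
    return sum(link_etx(d_f, d_r) for d_f, d_r in links)


if __name__ == "__main__":
    two_good_hops = [(0.9, 0.9), (0.9, 0.9)]   # two short, fairly reliable links
    one_lossy_hop = [(0.4, 0.9)]               # one long, lossy link
    print(round(path_etx(two_good_hops), 2))   # ~2.47 expected transmissions
    print(round(path_etx(one_lossy_hop), 2))   # ~2.78: ETX prefers the two-hop path,
                                               # while minimum hop count picks this one
```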