Author

# Shlomo Moran

Other affiliations: University of Minnesota, IBM, Bell Labs

Bio: Shlomo Moran is an academic researcher from Technion – Israel Institute of Technology. The author has contributed to research topics including distributed algorithms and upper and lower bounds. The author has an h-index of 39, has co-authored 167 publications receiving 6526 citations. Previous affiliations of Shlomo Moran include University of Minnesota & IBM.

##### Papers published on a yearly basis

##### Papers


••

01 Apr 1988

680 citations

••

01 Jun 2000

TL;DR: SALSA, a new stochastic approach for link structure analysis, which examines random walks on graphs derived from the link structure, is presented and it is proved that SALSA is equivalent to a weighted in-degree analysis of the link-structure of World Wide Web subgraphs, making it computationally more efficient than the mutual reinforcement approach.

Abstract: Today, when searching for information on the World Wide Web, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web sites whose contents match the query. For broad topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the World Wide Web. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web sites: hubs and authorities . Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship : a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative sites. We present SALSA, a new stochastic approach for link structure analysis, which examines random walks on graphs derived from the link structure. We show that both SALSA and Kleinberg's mutual reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of World Wide Web subgraphs, making it computationally more efficient than the mutual reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect (Tightly Knit Community) which, in certain cases, prevents the mutual reinforcement approach from identifying meaningful authorities.
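The equivalence result above suggests a very direct computation: once the support graph is fixed, authority scores need no iteration at all. A minimal sketch in Python, assuming a single connected component and a hypothetical toy link graph (full SALSA additionally weights each connected component by its share of the graph's links):

```python
from collections import defaultdict

def salsa_authority_scores(links):
    """Authority score of each page, proportional to its in-degree
    (the single-component case of the SALSA equivalence result)."""
    indeg = defaultdict(int)
    for src, targets in links.items():
        for t in targets:
            indeg[t] += 1
    total = sum(indeg.values())
    return {page: d / total for page, d in indeg.items()}

# Hypothetical toy Web subgraph: hubs pointing at candidate authorities.
web = {
    "hub1": ["auth1", "auth2"],
    "hub2": ["auth1"],
    "hub3": ["auth1", "auth2"],
}
scores = salsa_authority_scores(web)
# auth1 is pointed at by three hubs, auth2 by two, so auth1 ranks higher
```

Because the scores fall out of a single pass over the edges, this is cheaper than the eigenvector iteration of the mutual reinforcement approach.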

571 citations

••

TL;DR: The Θ(m) bound on finding the maxima of wide totally monotone matrices is used to speed up several geometric algorithms by a factor of log n.

Abstract: Let A be a matrix with real entries and let j(i) be the index of the leftmost column containing the maximum value in row i of A. A is said to be *monotone* if i₁ > i₂ implies that j(i₁) ≥ j(i₂). A is *totally monotone* if all of its submatrices are monotone. We show that finding the maximum entry in each row of an arbitrary n × m monotone matrix requires Θ(m log n) time, whereas if the matrix is totally monotone the time is Θ(m) when m ≥ n and is Θ(m(1 + log(n/m))) when m < n.
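The Θ(m log n) bound for plain monotone matrices is met by a standard divide-and-conquer that exploits the fact that j(i) is nondecreasing: solve the middle row, then restrict the column range on each side. This sketch illustrates that simpler bound only, not the paper's Θ(m) totally-monotone (SMAWK-style) algorithm:

```python
def monotone_row_maxima(A):
    """Index of the leftmost maximum in each row of a monotone matrix.

    Monotonicity means j(i) is nondecreasing in i, so after locating
    the maximum of the middle row, rows above it need only columns to
    its left and rows below only columns to its right.
    Runs in O((n + m) log n) time.
    """
    n, m = len(A), len(A[0])
    j = [0] * n

    def solve(r_lo, r_hi, c_lo, c_hi):
        if r_lo > r_hi:
            return
        mid = (r_lo + r_hi) // 2
        # Leftmost maximum of row `mid` within the allowed column range.
        best = c_lo
        for c in range(c_lo, c_hi + 1):
            if A[mid][c] > A[mid][best]:
                best = c
        j[mid] = best
        solve(r_lo, mid - 1, c_lo, best)   # rows above: columns <= best
        solve(mid + 1, r_hi, best, c_hi)   # rows below: columns >= best
    solve(0, n - 1, 0, m - 1)
    return j

# A small monotone matrix (illustrative data): row maxima drift rightward.
result = monotone_row_maxima([[3, 1, 0],
                              [2, 4, 1],
                              [1, 2, 5]])
```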

506 citations

••

TL;DR: It is proved that SALSA is equivalent to a weighted in-degree analysis of the link-structure of WWW subgraphs, making it computationally more efficient than the mutual reinforcement approach, and comparisons reveal a topological phenomenon called the TKC effect which prevents the mutual reinforcement approach from identifying meaningful authorities.

Abstract: Today, when searching for information on the WWW, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web pages whose contents match the query. For broad-topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the WWW. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web pages: hubs and authorities. Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship: a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative pages. We present SALSA, a new stochastic approach for link-structure analysis, which examines random walks on graphs derived from the link-structure. We show that both SALSA and Kleinberg's mutual reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of WWW subgraphs, making it computationally more efficient than the mutual reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect which, in certain cases, prevents the mutual reinforcement approach from identifying meaningful authorities.

400 citations

••

TL;DR: Three self-stabilizing protocols for distributed systems in the shared memory model are presented, one of which is a mutual-exclusion prootocol for tree structured systems and the other two are a spanning tree protocol for systems with any connected communication graph.

Abstract: Three self-stabilizing protocols for distributed systems in the shared memory model are presented. The first protocol is a mutual-exclusion protocol for tree-structured systems. The second protocol is a spanning-tree protocol for systems with any connected communication graph. The third protocol is obtained by use of fair protocol combination, a simple technique which enables the combination of two self-stabilizing dynamic protocols. The resulting protocol is a self-stabilizing mutual-exclusion protocol for dynamic systems with a general (connected) communication graph. The presented protocols improve upon previous protocols in two ways. First, it is assumed that the only atomic operations are either read or write to the shared memory. Second, our protocols work for any connected network and even for dynamic networks, in which the topology of the network may change during the execution.
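The self-stabilization property these protocols guarantee — convergence to a legitimate configuration from an arbitrary initial state — can be illustrated with Dijkstra's classic K-state token ring, a much simpler protocol than the three in the paper (the simulation and its initial configurations are illustrative, not taken from the paper):

```python
def privileged(states, i):
    """Process i holds the privilege (a 'token') iff its guard is true."""
    if i == 0:
        return states[0] == states[-1]      # bottom machine's guard
    return states[i] != states[i - 1]       # all other machines

def step(states, i, K):
    """Fire process i's rule if it is privileged (central-daemon model)."""
    if privileged(states, i):
        if i == 0:
            states[0] = (states[0] + 1) % K
        else:
            states[i] = states[i - 1]

def stabilize(states, K, rounds=100):
    """Run a fair round-robin scheduler; requires K >= number of processes.
    Returns how many processes are privileged afterwards (1 if legitimate)."""
    for _ in range(rounds):
        for i in range(len(states)):
            step(states, i, K)
    return sum(privileged(states, i) for i in range(len(states)))

# From an arbitrary, even adversarial, initial configuration the ring
# converges to exactly one circulating privilege.
tokens = stabilize([2, 0, 3, 1], K=5)
```

Dijkstra's protocol assumes composite atomic steps; the point of the paper's protocols is to achieve the same convergence property under the weaker read/write atomicity and on general or dynamic graphs.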

353 citations

##### Cited by


•

01 Jan 1996

TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols.

Abstract: From the Publisher:
A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

•

01 Jan 1996

TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms-to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.

Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers.
Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others.
The material is organized according to the system model-first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference.
The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms-to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Table of Contents
1 Introduction
2 Modelling I: Synchronous Network Model
3 Leader Election in a Synchronous Ring
4 Algorithms in General Synchronous Networks
5 Distributed Consensus with Link Failures
6 Distributed Consensus with Process Failures
7 More Consensus Problems
8 Modelling II: Asynchronous System Model
9 Modelling III: Asynchronous Shared Memory Model
10 Mutual Exclusion
11 Resource Allocation
12 Consensus
13 Atomic Objects
14 Modelling IV: Asynchronous Network Model
15 Basic Asynchronous Network Algorithms
16 Synchronizers
17 Shared Memory versus Networks
18 Logical Time
19 Global Snapshots and Stable Properties
20 Network Resource Allocation
21 Asynchronous Networks with Process Failures
22 Data Link Protocols
23 Partially Synchronous System Models
24 Mutual Exclusion with Partial Synchrony
25 Consensus with Partial Synchrony

4,340 citations

••

20 Apr 2009

TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.

Abstract: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory. Requiring essentially no background apart from mathematical maturity, the book can be used as a reference for self-study for anyone interested in complexity, including physicists, mathematicians, and other scientists, as well as a textbook for a variety of courses and seminars. More than 300 exercises are included with a selected hint set.

2,965 citations

••

TL;DR: It is proved that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms.

Abstract: Given a collection ℱ of subsets of S = {1,…,n}, set cover is the problem of selecting as few as possible subsets from ℱ such that their union covers S, and max k-cover is the problem of selecting k subsets from ℱ such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 - o(1)) ln n), and previous results of Lund and Yannakakis, that showed hardness of approximation within a ratio of (log₂ n)/2 ≈ 0.72 ln n. For max k-cover, we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that P ≠ NP.
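The greedy algorithm whose (1 - o(1)) ln n ratio the paper proves optimal is simple to state: repeatedly pick the subset covering the most still-uncovered elements. A minimal sketch on a hypothetical toy instance:

```python
def greedy_set_cover(universe, subsets):
    """Return indices of subsets chosen greedily until `universe` is covered.
    Achieves an approximation ratio of (1 - o(1)) ln n."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset with maximum marginal coverage.
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Toy instance (illustrative data).
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, S)
# Greedy first takes {1,2,3} (3 new elements), then {4,5} (2 new elements).
```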

2,941 citations