Author

Hagit Attiya

Bio: Hagit Attiya is an academic researcher from Technion – Israel Institute of Technology. The author has contributed to research on topics including transactional memory and shared memory. The author has an h-index of 44 and has co-authored 240 publications receiving 8,322 citations. Previous affiliations of Hagit Attiya include the Hebrew University of Jerusalem and the Massachusetts Institute of Technology.


Papers
Book
01 Jan 1998
TL;DR: This book discusses, among other topics, how to improve the fault tolerance of algorithms in message-passing systems and how to simulate synchrony.
Abstract: 1. Introduction. PART I: FUNDAMENTALS. 2. Basic Algorithms in Message-Passing Systems. 3. Leader Election in Rings. 4. Mutual Exclusion in Shared Memory. 5. Fault-Tolerant Consensus. 6. Causality and Time. PART II: SIMULATIONS. 7. A Formal Model for Simulations. 8. Broadcast and Multicast. 9. Distributed Shared Memory. 10. Fault-Tolerant Simulations of Read/Write Objects. 11. Simulating Synchrony. 12. Improving the Fault Tolerance of Algorithms. 13. Fault-Tolerant Clock Synchronization. PART III: ADVANCED TOPICS. 14. Randomization. 15. Wait-Free Simulations of Arbitrary Objects. 16. Problems Solvable in Asynchronous Systems. 17. Solving Consensus in Eventually Stable Systems. References. Index.

1,132 citations

Journal ArticleDOI
TL;DR: Emulators that translate algorithms from the shared-memory model to two different message-passing models are presented, both achieved by implementing a wait-free, atomic, single-writer multi-reader register in unreliable, asynchronous networks.
Abstract: Emulators that translate algorithms from the shared-memory model to two different message-passing models are presented. Both are achieved by implementing a wait-free, atomic, single-writer multi-reader register in unreliable, asynchronous networks. The two message-passing models considered are a complete network with processor failures and an arbitrary network with dynamic link failures. These results make it possible to view the shared-memory model as a higher-level language for designing algorithms in asynchronous distributed systems. Any wait-free algorithm based on atomic, single-writer multi-reader registers can be automatically emulated in message-passing systems, provided that at least a majority of the processors are not faulty and remain connected. The overhead introduced by these emulations is polynomial in the number of processors in the system. Immediate new results are obtained by applying the emulators to known shared-memory algorithms. These include, among others, protocols to solve the following problems in the message-passing model in the presence of processor or link failures: multi-writer multi-reader registers, concurrent time-stamp systems, l-exclusion, atomic snapshots, randomized consensus, and implementation of data structures.
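To make the emulation idea concrete, here is a rough Python sketch of the majority-quorum read/write logic the abstract describes (a single-writer multi-reader register that tolerates a minority of process crashes). The messaging helper broadcast_and_collect and the class layout are hypothetical placeholders for illustration, not the paper's protocol.

```python
# Sketch of the quorum-based register emulation idea described above.
# broadcast_and_collect(msg) is a hypothetical helper that sends msg to all
# processes and returns the replies from at least a majority of them.

class SWMRRegisterEmulation:
    """Single-writer multi-reader register over n processes, < n/2 crashes."""

    def __init__(self, n, broadcast_and_collect):
        self.n = n
        self.broadcast_and_collect = broadcast_and_collect
        self.ts = 0  # the single writer's local timestamp

    def write(self, value):
        # Writer tags the value with a fresh timestamp and waits until a
        # majority of processes acknowledge storing it.
        self.ts += 1
        self.broadcast_and_collect(("store", self.ts, value))

    def read(self):
        # Phase 1: ask a majority for their latest (timestamp, value) pair.
        replies = self.broadcast_and_collect(("query",))
        ts, value = max(replies, key=lambda pair: pair[0])
        # Phase 2 (write-back): propagate that pair to a majority so that
        # later reads cannot return an older value; this write-back is what
        # makes the emulated register atomic rather than merely regular.
        self.broadcast_and_collect(("store", ts, value))
        return value
```

Because any two majorities intersect, every read sees the most recently acknowledged write, which is the source of the polynomial (per-operation, linear in message count) overhead the abstract mentions.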

509 citations

Journal ArticleDOI
TL;DR: Three wait-free implementations of atomic snapshot memory are presented, one of which uses unbounded (integer) fields in these registers, and is particularly easy to understand, while the second and third use bounded registers.
Abstract: This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. The first implementation in this paper uses unbounded (integer) fields in these registers, and is particularly easy to understand. The second implementation uses bounded registers. Its correctness proof follows the ideas of the unbounded implementation. Both constructions implement a single-writer snapshot memory, in which each word may be updated by only one process, from single-writer, n-reader registers. The third algorithm implements a multi-writer snapshot memory from atomic n-writer, n-reader registers, again echoing key ideas from the earlier constructions. All operations require Θ(n²) reads and writes to the component shared registers in the worst case. —Authors' Abstract
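The "particularly easy to understand" unbounded construction can be sketched roughly as follows. The double-collect-or-borrow structure follows the abstract's description of the single-writer case; names and details are illustrative rather than the paper's pseudocode.

```python
# Minimal sketch of the unbounded single-writer snapshot idea: each register
# carries a sequence number and the view from an embedded scan, and a scanner
# either sees two identical collects or borrows the view of a process it has
# observed moving twice.

class SnapshotMemory:
    def __init__(self, n):
        self.n = n
        # Register i holds (sequence number, value, embedded view).
        self.reg = [(0, None, [None] * n) for _ in range(n)]

    def _collect(self):
        return list(self.reg)  # read all n registers, one by one

    def scan(self):
        moved = set()
        first = self._collect()
        while True:
            second = self._collect()
            if first == second:
                # Double collect with no interleaved update: this common
                # collect is a consistent snapshot.
                return [val for (_, val, _) in second]
            for i in range(self.n):
                if first[i][0] != second[i][0]:
                    if i in moved:
                        # Process i has been seen moving twice, so it finished
                        # a complete update inside our scan interval; the view
                        # embedded in its register can be borrowed.
                        return second[i][2]
                    moved.add(i)
            first = second

    def update(self, i, value):
        # An update embeds a fresh scan so concurrent scanners can borrow it.
        view = self.scan()
        seq, _, _ = self.reg[i]
        self.reg[i] = (seq + 1, value, view)
```

The unbounded integers are the ever-growing sequence numbers; the bounded constructions in the paper replace them with handshaking bits while keeping the same overall structure.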

426 citations

Proceedings ArticleDOI
01 Aug 1990
TL;DR: A general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety, is introduced.
Abstract: This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. A preliminary version of this paper appeared in Proceedings of the 9th Annual ACM Symposium on Principles of Distributed Computing (Quebec City, Quebec, August 1990), ACM, New York, 1990, pp. 1–14.

358 citations

Journal ArticleDOI
TL;DR: This paper shows that problems of processor renaming can be solved even in the presence of up to t faulty processors, contradicting the widely held belief that no nontrivial problem can be solved in such a system.
Abstract: This paper is concerned with the solvability of the problem of processor renaming in unreliable, completely asynchronous distributed systems. Fischer et al. prove in [8] that “nontrivial consensus” cannot be attained in such systems, even when only a single, benign processor failure is possible. In contrast, this paper shows that problems of processor renaming can be solved even in the presence of up to t faulty processors.

340 citations


Cited by
Journal ArticleDOI
TL;DR: Alur et al. proposed timed automata to model the behavior of real-time systems over time, and showed that the universality and language-inclusion problems are solvable only for deterministic automata: both problems are undecidable (Π¹₁-hard) in the nondeterministic case and PSPACE-complete in the deterministic case.
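As a rough illustration of what a timed automaton constrains, here is a hypothetical sketch: a single clock x is reset on event "a", and event "b" is accepted only while x ≤ 2. The encoding is purely illustrative and not the paper's formalism.

```python
# Toy acceptor for timed words over {a, b}: "b" must follow "a" within 2 time
# units. A timed word is a list of (symbol, timestamp) pairs with
# nondecreasing timestamps.

def accepts(timed_word, deadline=2.0):
    state = "idle"
    clock_reset_at = 0.0          # global time at which clock x was last reset
    for symbol, t in timed_word:
        if state == "idle" and symbol == "a":
            clock_reset_at = t    # transition on "a" resets clock x
            state = "waiting"
        elif state == "waiting" and symbol == "b":
            if t - clock_reset_at <= deadline:   # guard: x <= 2
                state = "idle"
            else:
                return False      # clock guard violated: reject
        else:
            return False          # symbol not allowed in this state
    return state == "idle"

# accepts([("a", 0.0), ("b", 1.5)]) -> True
# accepts([("a", 0.0), ("b", 3.0)]) -> False
```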

7,096 citations

Book
01 Jan 1996
TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers. Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others. The material is organized according to the system model: first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference. The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures. Table of Contents 1 Introduction 2 Modelling I: Synchronous Network Model 3 Leader Election in a Synchronous Ring 4 Algorithms in General Synchronous Networks 5 Distributed Consensus with Link Failures 6 Distributed Consensus with Process Failures 7 More Consensus Problems 8 Modelling II: Asynchronous System Model 9 Modelling III: Asynchronous Shared Memory Model 10 Mutual Exclusion 11 Resource Allocation 12 Consensus 13 Atomic Objects 14 Modelling IV: Asynchronous Network Model 15 Basic Asynchronous Network Algorithms 16 Synchronizers 17 Shared Memory versus Networks 18 Logical Time 19 Global Snapshots and Stable Properties 20 Network Resource Allocation 21 Asynchronous Networks with Process Failures 22 Data Link Protocols 23 Partially Synchronous System Models 24 Mutual Exclusion with Partial Synchrony 25 Consensus with Partial Synchrony

4,340 citations

Journal ArticleDOI
TL;DR: It is proved that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast.
Abstract: We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
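To illustrate the completeness/accuracy vocabulary, here is a hypothetical timeout-based detector sketch: a crashed process stops sending heartbeats and stays suspected forever (completeness), while the timeout grows after each false suspicion so that, once message delays stabilize, correct processes eventually stop being suspected (a flavor of eventual accuracy). This is an illustration of the concepts, not the paper's construction.

```python
# Sketch of a heartbeat/timeout failure detector. The heartbeat delivery
# mechanism is assumed to exist elsewhere and simply call on_heartbeat(p).

import time

class EventuallyPerfectDetector:
    def __init__(self, processes, initial_timeout=1.0):
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heartbeat = {p: time.monotonic() for p in processes}
        self.suspected = set()

    def on_heartbeat(self, p):
        self.last_heartbeat[p] = time.monotonic()
        if p in self.suspected:
            # False suspicion: p is alive after all, so back off its timeout.
            self.suspected.discard(p)
            self.timeout[p] *= 2

    def check(self):
        # Suspect any process whose heartbeat is overdue.
        now = time.monotonic()
        for p, last in self.last_heartbeat.items():
            if p not in self.suspected and now - last > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)
```

Such a detector can make infinitely many mistakes in a fully asynchronous run, which is exactly why the paper's characterization in terms of completeness and accuracy, rather than perfect failure detection, is the useful abstraction.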

2,718 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown that it is impossible to achieve consistency, availability, and partition tolerance in the asynchronous network model, and then solutions to this dilemma in the partially synchronous model are discussed.
Abstract: When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.
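A toy sketch of the dilemma the note proves: during a partition, a replica that receives a read can either refuse to answer (giving up availability) or answer from its possibly stale local copy (giving up consistency). All names below are illustrative.

```python
# Two replicas of a register, partitioned from each other. A write lands on
# one side; a read on the other side must choose which property to sacrifice.

class Replica:
    def __init__(self):
        self.value = 0
        self.peer_reachable = True  # False while a partition is in effect

    def write(self, value):
        self.value = value          # cannot propagate across the partition

    def read(self, favor_consistency):
        if self.peer_reachable:
            return self.value
        if favor_consistency:
            raise TimeoutError("partitioned: refusing to answer (not available)")
        return self.value           # answers, but may be stale (not consistent)

left, right = Replica(), Replica()
left.peer_reachable = right.peer_reachable = False   # network partition
left.write(42)                                       # visible only on the left
print(right.read(favor_consistency=False))           # prints 0: stale but available
```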

1,456 citations

Book
Maurice Herlihy1
14 Mar 2008
TL;DR: Transactional memory as discussed by the authors is a computational model in which threads synchronize by optimistic, lock-free transactions, and there is a growing community of researchers working on both software and hardware support for this approach.
Abstract: Computer architecture is about to undergo, if not another revolution, then a vigorous shaking-up. The major chip manufacturers have, for the time being, simply given up trying to make processors run faster. Instead, they have recently started shipping "multicore" architectures, in which multiple processors (cores) communicate directly through shared hardware caches, providing increased concurrency instead of increased clock speed. As a result, system designers and software engineers can no longer rely on increasing clock speed to hide software bloat. Instead, they must somehow learn to make effective use of increasing parallelism. This adaptation will not be easy. Conventional synchronization techniques based on locks and conditions are unlikely to be effective in such a demanding environment. Coarse-grained locks, which protect relatively large amounts of data, do not scale, and fine-grained locks introduce substantial software engineering problems. Transactional memory is a computational model in which threads synchronize by optimistic, lock-free transactions. This synchronization model promises to alleviate many (not all) of the problems associated with locking, and there is a growing community of researchers working on both software and hardware support for this approach. This talk will survey the area, with a focus on open research problems.
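As a rough illustration of the optimistic-transaction model described above, here is a minimal, hypothetical software-transactional-memory sketch: a transaction buffers its writes, validates its read set at commit time, and retries on conflict. A single global commit lock keeps the sketch simple, so it is not lock-free and is not Herlihy's design.

```python
# Toy optimistic STM: TVar holds a value plus a version; atomically() re-runs
# the transaction until its read set is still valid at commit time.

import threading

class TVar:
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomically(transaction):
    while True:
        read_set, write_set = {}, {}

        def read(tvar):
            if tvar in write_set:
                return write_set[tvar]
            read_set.setdefault(tvar, tvar.version)  # remember version seen
            return tvar.value

        def write(tvar, value):
            write_set[tvar] = value                  # buffer, don't publish yet

        result = transaction(read, write)
        with _commit_lock:
            # Validate: abort and retry if anything we read has since changed.
            if all(tvar.version == v for tvar, v in read_set.items()):
                for tvar, value in write_set.items():
                    tvar.value = value
                    tvar.version += 1
                return result
        # Conflict detected: fall through and re-run the transaction.

# Example: a transfer between two accounts with no explicit locks in user code.
a, b = TVar(100), TVar(0)
def transfer(read, write):
    write(a, read(a) - 10)
    write(b, read(b) + 10)
atomically(transfer)
```

The user-level code only describes what must happen atomically; conflict detection and retry are handled by the runtime, which is the programmability benefit the abstract attributes to transactional memory.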

1,268 citations