Author

Shlomi Dolev

Other affiliations: Deutsche Telekom, Fisk University, Tel Aviv University
Bio: Shlomi Dolev is an academic researcher from Ben-Gurion University of the Negev. The author has contributed to research in topics including distributed algorithms and computer science. The author has an h-index of 48 and has co-authored 516 publications receiving 10,435 citations. Previous affiliations of Shlomi Dolev include Deutsche Telekom and Fisk University.


Papers
Book
01 Jan 2000
TL;DR: A formal impossibility proof shows that, in order to ensure the correct behavior of the system, fewer than one-third of the processors may be of the Byzantine type; self-stabilization complements this by designing the system as if there were no past history, so that it can be started in any possible state of its state space and still recover.
Abstract: Fault tolerance and reliability are important issues for flight vehicles such as aircraft, space shuttles, and satellites. A self-stabilizing system recovers automatically following disturbances that force the system into an arbitrary state. The self-stabilization concept is an essential property of any autonomous control/computing system. Important branches of distributed computing theory were initiated because of the need for fault tolerance in aircraft computing devices. The Byzantine fault model, for example, was a creation of the NASA SIFT project more than a couple of decades ago. The Byzantine fault model is an elegant abstraction of faults in which faulty parts are assumed to behave as adversaries. The idea is to use redundancy, for instance in the number of processors, in order to overcome the behavior of faulty processors. This line of thinking fits the common practice in engineering in which the design of critical parts is done independently by several design teams to make sure that there is no mistake in the calculations. Analogously, in the Byzantine fault model, some processors simultaneously compute the same calculations for implementing a robust flight controller; thus the flight controller can function well in spite of the faulty behavior of several of the processors. Faulty behavior cannot be anticipated, so the most severe behavior is assumed: one that is reminiscent of the behavior of an adversary in the Byzantine court, in which backstabbing was common. A formal impossibility proof shows that, in order to ensure the correct behavior of the system, fewer than one-third of the processors may be of the Byzantine type. The task examined is agreement, or consensus, for which processors need to decide on the same output, which is the input of one of the processors. The decision can be viewed as choosing the common result among the results of the design teams in the engineering example. The intuition behind the impossibility result is as follows: assume you have met two persons, Alice and Bob, one of whom is honest while the other is not. You may try to decide what to do by speaking with each of them. Because you do not know which of the two is honest, you have to find this out. You may ask Alice directly who among them is not honest; Alice will answer Bob, and Bob, if asked, will obviously answer Alice. Each of them may also describe the conversations he or she had with the other, knowing that no one listened to the communication between them. This symmetry in the weights of the answers of Alice and Bob makes it impossible to decide. It is possible to formally prove that agreement can be achieved if, and only if, fewer than one-third of the processors are Byzantine (e.g., Ref. 7). Systems that tolerate Byzantine faults are designed for flight devices, which need to be extremely robust. In such a device, the assumptions made for reaching agreement can be violated: Is it reasonable to assume that, during any period of the execution, fewer than one-third of the processors are faulty? What happens if, for a short period, more than a third are faulty, or perhaps experience a weaker fault than a Byzantine fault (say, one caused by a transient electric spark)? What happens if messages sent by non-faulty processors are lost at one instant of time? Seven years prior to the introduction of the Byzantine fault model, Edsger W. Dijkstra suggested an important fundamental fault tolerance property called self-stabilization (Ref. 3); that is, to design the system as if there were no past history: a system that can be started in any possible state of its state space. It is therefore not assumed that consistency has been maintained from a fixed initial state by always executing steps according to the program of the processors. Self-stabilizing systems thus overcome transient faults. Temporary violations of the assumptions made by the algorithm designer can be viewed as leaving the system in an arbitrary initial state from which the system resumes. Self-stabilizing systems work correctly when started in any initial system state. Thus, even if the system loses its consistency due to an unexpected temporary violation of the assumptions made, it converges to legal behavior once the assumptions start to hold again. Self-stabilization is a strong fault tolerance property for systems; it ensures automatic recovery once faults stop occurring. A self-stabilizing system is designed to start in any possible state where its components (e.g., processors, processes, communication links, communication buffers) are in an arbitrary state, i.e., with arbitrary variable values, ...
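To make the self-stabilization idea concrete, the sketch below simulates Dijkstra's classic K-state token ring for mutual exclusion in Python: started from an arbitrary (corrupted) configuration, the ring converges to a configuration in which exactly one privilege circulates. This is an illustrative simulation of the general concept, not code taken from the book, and the scheduler and parameters are arbitrary choices.

```python
# A minimal sketch of Dijkstra's K-state self-stabilizing token ring,
# illustrating self-stabilization (not code from the book itself).
import random

def has_privilege(x, i):
    """Processor i holds a privilege ('token') in configuration x."""
    if i == 0:
        return x[0] == x[-1]      # bottom machine
    return x[i] != x[i - 1]       # all other machines

def make_move(x, i, K):
    """Let privileged processor i take its step."""
    if i == 0:
        x[0] = (x[0] + 1) % K
    else:
        x[i] = x[i - 1]

def run(n=5, K=6, max_steps=200, seed=1):
    rng = random.Random(seed)
    x = [rng.randrange(K) for _ in range(n)]          # arbitrary initial state
    for t in range(max_steps):
        holders = [i for i in range(n) if has_privilege(x, i)]
        print(f"step {t:3d}  state={x}  privileges={holders}")
        if len(holders) == 1:
            print("stabilized: exactly one privilege circulates from here on")
            return
        make_move(x, rng.choice(holders), K)          # a fair scheduler picks one privileged node
    print("did not stabilize within max_steps")

if __name__ == "__main__":
    run()
```

From any starting values there is always at least one privileged processor, and once a single privilege remains the configuration stays legal, which is the recovery behavior described above.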

1,163 citations

Journal ArticleDOI
TL;DR: The benefits of energy-efficient passive components, low crosstalk, and parallel processing suggest that optical technology may indeed offer a solution to the heat generation and bandwidth limitations the computing industry is starting to face.
Abstract: Could optical technology offer a solution to the heat generation and bandwidth limitations that the computing industry is starting to face? The benefits of energy-efficient passive components, low crosstalk and parallel processing suggest that the answer may be yes.

500 citations

Journal ArticleDOI
01 Mar 2010
TL;DR: This research provides a security assessment of the Android framework, Google's software stack for mobile devices; it identifies high-risk threats to the framework and suggests several security solutions for mitigating them.
Abstract: This research provides a security assessment of the Android framework, Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.

394 citations

Journal ArticleDOI
TL;DR: Three self-stabilizing protocols for distributed systems in the shared memory model are presented: a mutual-exclusion protocol for tree-structured systems, a spanning tree protocol for systems with any connected communication graph, and, via fair protocol combination, a self-stabilizing mutual-exclusion protocol for dynamic systems with a general connected communication graph.
Abstract: Three self-stabilizing protocols for distributed systems in the shared memory model are presented. The first protocol is a mutual-exclusion protocol for tree-structured systems. The second protocol is a spanning tree protocol for systems with any connected communication graph. The third protocol is obtained by use of fair protocol combination, a simple technique which enables the combination of two self-stabilizing dynamic protocols. The resulting protocol is a self-stabilizing, mutual-exclusion protocol for dynamic systems with a general (connected) communication graph. The presented protocols improve upon previous protocols in two ways: first, it is assumed that the only atomic operations are either read or write to the shared memory; second, our protocols work for any connected network and even for dynamic networks, in which the topology of the network may change during the execution.
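As a rough illustration of the spanning-tree idea, the Python sketch below lets every node repeatedly recompute its distance and parent from its neighbours' registers until a BFS-style tree emerges from arbitrary initial values. It is a simplified simulation under a coarse round-based scheduler, not the paper's exact read/write-atomicity protocol, and the example graph is hypothetical.

```python
# A rough sketch of a self-stabilizing BFS spanning-tree computation:
# each node repeatedly recomputes its distance and parent from its
# neighbours' (possibly corrupted) registers. Illustrative only.
import random

def stabilize_tree(adj, root, rounds=None, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    dist = [rng.randrange(2 * n) for _ in range(n)]   # arbitrary initial registers
    parent = [rng.randrange(n) for _ in range(n)]
    rounds = rounds or 2 * n
    for _ in range(rounds):
        for v in rng.sample(range(n), n):             # arbitrary activation order
            if v == root:
                dist[v], parent[v] = 0, v             # the root anchors the tree
            else:
                best = min(adj[v], key=lambda u: dist[u])
                dist[v], parent[v] = dist[best] + 1, best
    return dist, parent

# Example: a small connected graph given as adjacency lists (hypothetical input).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
print(stabilize_tree(adj, root=0))
```

After enough rounds the distances settle to BFS distances from the root and the parent pointers form a spanning tree, regardless of the corrupted starting values.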

353 citations

Journal ArticleDOI
TL;DR: The imbalance problem is investigated with reference to several real-life scenarios in which malicious files are expected to be about 10% of the total inspected files, and a chronological evaluation shows a clear trend in which performance improves as the training set becomes more up to date.
Abstract: In previous studies, classification algorithms were employed successfully for the detection of unknown malicious code. Most of these studies extracted features based on byte n-gram patterns in order to represent the inspected files. In this study we represent the inspected files using OpCode n-gram patterns, which are extracted from the files after disassembly. The OpCode n-gram patterns are used as features for the classification process, whose main goal is to detect unknown malware within a set of suspected files so that it can later be included in antivirus software as signatures. A rigorous evaluation was performed using a test collection comprising more than 30,000 files, in which various OpCode n-gram sizes and eight types of classifiers were evaluated. A typical problem of this domain is the imbalance problem, in which the distribution of the classes in real life varies. We investigated the imbalance problem with reference to several real-life scenarios in which malicious files are expected to be about 10% of the total inspected files. Lastly, we present a chronological evaluation in which the frequent need for updating the training set was evaluated. Evaluation results indicate that the evaluated methodology achieves a level of accuracy higher than 96% (with a TPR above 0.95 and an FPR of approximately 0.1), which slightly improves on the results of previous studies that used byte n-gram representations. The chronological evaluation showed a clear trend in which performance improves as the training set becomes more up to date.
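The general pipeline, opcode-sequence n-grams feeding a standard classifier, can be sketched in a few lines of scikit-learn. The opcode strings, labels, and classifier choice below are placeholders for illustration and do not reproduce the study's feature-selection settings or evaluated classifiers.

```python
# Schematic sketch of OpCode n-gram malware classification with scikit-learn.
# The opcode sequences and labels are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Each file is represented by its disassembled opcode sequence as a string.
opcode_docs = [
    "push mov call pop ret",          # hypothetical benign file
    "xor mov jmp call call ret",      # hypothetical malicious file
    "push push call ret nop",
    "xor xor jmp jmp call",
]
labels = [0, 1, 0, 1]                 # 0 = benign, 1 = malicious

# 2-gram opcode features feeding a standard classifier.
pipeline = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(2, 2)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)

X_train, X_test, y_train, y_test = train_test_split(
    opcode_docs, labels, test_size=0.5, random_state=0, stratify=labels)
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))
```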

263 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
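The mail-filtering example in the last category can be made concrete with a small scikit-learn sketch that learns a user's accept/reject decisions from labelled messages; the messages and labels below are invented placeholders.

```python
# A toy version of the personalised mail-filtering example: learn a user's
# accept/reject decisions from labelled messages (invented placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "meeting moved to 3pm, agenda attached",
    "win a free prize, click this link now",
    "quarterly report draft for your review",
    "limited time offer, claim your reward",
]
rejected = [0, 1, 0, 1]               # 1 = the user rejected (filtered) the message

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, rejected)

print(model.predict(["free reward, click now", "draft agenda for the meeting"]))
```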

13,246 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
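One of the classical results alluded to here is the connectivity threshold of the Erdős–Rényi G(n, p) model at roughly p = ln(n)/n; the short Python/networkx experiment below illustrates it empirically, with arbitrarily chosen parameters.

```python
# A small experiment with the Erdos-Renyi G(n, p) model: connectivity
# appears around p = ln(n)/n, one of the classical random-graph results.
# (Illustrative sketch; n and the trial counts are arbitrary choices.)
import math
import networkx as nx

n = 500
threshold = math.log(n) / n
for factor in (0.5, 1.0, 1.5, 2.0):
    p = factor * threshold
    connected = sum(
        nx.is_connected(nx.gnp_random_graph(n, p, seed=s)) for s in range(20))
    print(f"p = {factor:.1f} * ln(n)/n -> connected in {connected}/20 trials")
```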

7,116 citations

Book
01 Jan 1996
TL;DR: This book familiarizes readers with important problems, algorithms, and impossibility results in the area, and teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Abstract: In Distributed Algorithms, Nancy Lynch provides a blueprint for designing, implementing, and analyzing distributed algorithms. She directs her book at a wide audience, including students, programmers, system designers, and researchers. Distributed Algorithms contains the most significant algorithms and impossibility results in the area, all in a simple automata-theoretic setting. The algorithms are proved correct, and their complexity is analyzed according to precisely defined complexity measures. The problems covered include resource allocation, communication, consensus among distributed processes, data consistency, deadlock detection, leader election, global snapshots, and many others. The material is organized according to the system model: first by the timing model and then by the interprocess communication mechanism. The material on system models is isolated in separate chapters for easy reference. The presentation is completely rigorous, yet is intuitive enough for immediate comprehension. This book familiarizes readers with important problems, algorithms, and impossibility results in the area: readers can then recognize the problems when they arise in practice, apply the algorithms to solve them, and use the impossibility results to determine whether problems are unsolvable. The book also provides readers with the basic mathematical tools for designing new algorithms and proving new impossibility results. In addition, it teaches readers how to reason carefully about distributed algorithms: to model them formally, devise precise specifications for their required behavior, prove their correctness, and evaluate their performance with realistic measures.
Table of Contents:
1 Introduction
2 Modelling I: Synchronous Network Model
3 Leader Election in a Synchronous Ring
4 Algorithms in General Synchronous Networks
5 Distributed Consensus with Link Failures
6 Distributed Consensus with Process Failures
7 More Consensus Problems
8 Modelling II: Asynchronous System Model
9 Modelling III: Asynchronous Shared Memory Model
10 Mutual Exclusion
11 Resource Allocation
12 Consensus
13 Atomic Objects
14 Modelling IV: Asynchronous Network Model
15 Basic Asynchronous Network Algorithms
16 Synchronizers
17 Shared Memory versus Networks
18 Logical Time
19 Global Snapshots and Stable Properties
20 Network Resource Allocation
21 Asynchronous Networks with Process Failures
22 Data Link Protocols
23 Partially Synchronous System Models
24 Mutual Exclusion with Partial Synchrony
25 Consensus with Partial Synchrony
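As a taste of the material, leader election in a synchronous ring (Chapter 3 above) has a classic solution, the LCR algorithm, in which each process circulates its UID, relays larger UIDs, swallows smaller ones, and elects itself when its own UID returns. The Python simulation below sketches that rule; the harness details are illustrative rather than the book's pseudocode.

```python
# A compact simulation of the LCR leader-election rule on a unidirectional
# synchronous ring. Only the relay/swallow/elect rule is the classic
# algorithm; the simulation harness is a sketch.
def lcr_leader(uids):
    n = len(uids)
    outgoing = list(uids)                 # message each process emits this round
    leader = None
    for _ in range(n):                    # n rounds suffice for the max UID to return
        incoming = [outgoing[(i - 1) % n] for i in range(n)]
        for i, uid in enumerate(incoming):
            if uid is None or uid < uids[i]:
                outgoing[i] = None        # swallow smaller UIDs (or no message)
            elif uid > uids[i]:
                outgoing[i] = uid         # relay larger UIDs
            else:
                leader = uids[i]          # own UID came back around the ring
                outgoing[i] = None
    return leader

print(lcr_leader([17, 3, 42, 8, 25]))     # -> 42
```

The maximum UID is the only one never swallowed, so exactly one process elects itself, using O(n^2) messages in the worst case.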

4,340 citations

Journal ArticleDOI
TL;DR: The field of cavity optomechanics explores the interaction between electromagnetic radiation and nano- or micromechanical motion; this review covers the basics of optical cavities and mechanical resonators and their mutual interaction mediated by the radiation pressure force.
Abstract: We review the field of cavity optomechanics, which explores the interaction between electromagnetic radiation and nano- or micromechanical motion. This review covers the basics of optical cavities and mechanical resonators, their mutual optomechanical interaction mediated by the radiation pressure force, the large variety of experimental systems which exhibit this interaction, optical measurements of mechanical motion, dynamical backaction amplification and cooling, nonlinear dynamics, multimode optomechanics, and proposals for future cavity quantum optomechanics experiments. In addition, we describe the perspectives for fundamental quantum physics and for possible applications of optomechanical devices.
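For context, the radiation-pressure coupling referred to here is usually captured by the standard single-mode optomechanical Hamiltonian; the LaTeX below quotes the textbook form and is not specific to this review.

```latex
% Standard single-mode optomechanical Hamiltonian (textbook form, quoted as context):
% a cavity mode of frequency \omega_c, a mechanical mode of frequency \Omega_m,
% and radiation-pressure coupling at the single-photon rate g_0.
\[
  \hat H = \hbar\,\omega_c\,\hat a^{\dagger}\hat a
         + \hbar\,\Omega_m\,\hat b^{\dagger}\hat b
         - \hbar\,g_0\,\hat a^{\dagger}\hat a\,(\hat b + \hat b^{\dagger})
\]
```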

4,031 citations