Topic

Synchronization (computer science)

About: Synchronization (computer science) is a research topic. Over the lifetime, 43,303 publications have been published within this topic, receiving 559,005 citations.


Papers
Journal ArticleDOI
TL;DR: This article deals with the execution of a simulation program on a parallel computer by decomposing the simulation application into a set of concurrently executing processes, a decomposition that introduces interesting synchronization problems at the heart of the PDES problem.
Abstract: Parallel discrete event simulation (PDES), sometimes called distributed simulation, refers to the execution of a single discrete event simulation program on a parallel computer. PDES has attracted a considerable amount of interest in recent years. From a pragmatic standpoint, this interest arises from the fact that large simulations in engineering, computer science, economics, and military applications, to mention a few, consume enormous amounts of time on sequential machines. From an academic point of view, parallel simulation is interesting because it represents a problem domain that often contains substantial amounts of parallelism (e.g., see [59]), yet paradoxically, is surprisingly difficult to parallelize in practice. A sufficiently general solution to the PDES problem may lead to new insights in parallel computation as a whole. Historically, the irregular, data-dependent nature of PDES programs has identified it as an application where vectorization techniques using supercomputer hardware provide little benefit [14].

A discrete event simulation model assumes the system being simulated only changes state at discrete points in simulated time. The simulation model jumps from one state to another upon the occurrence of an event. For example, a simulator of a store-and-forward communication network might include state variables to indicate the length of message queues, the status of communication links (busy or idle), etc. Typical events might include arrival of a message at some node in the network, forwarding a message to another network node, component failures, etc.

We are especially concerned with the simulation of asynchronous systems where events are not synchronized by a global clock but rather occur at irregular time intervals. For these systems, few simulator events occur at any single point in simulated time; therefore parallelization techniques based on lock-step execution using a global simulation clock perform poorly or require assumptions in the timing model that may compromise the fidelity of the simulation. Concurrent execution of events at different points in simulated time is required, but as we shall soon see, this introduces interesting synchronization problems that are at the heart of the PDES problem.

This article deals with the execution of a simulation program on a parallel computer by decomposing the simulation application into a set of concurrently executing processes. For completeness, we conclude this section by mentioning other approaches to exploiting parallelism in simulation problems.

Comfort and Shepard et al. have proposed using dedicated functional units to implement specific sequential simulation functions (e.g., event list manipulation and random number generation [20, 23, 47]). This method can provide only a limited amount of speedup, however. Zhang, Zeigler, and Concepcion use the hierarchical decomposition of the simulation model to allow an event consisting of several subevents to be processed concurrently [21, 98]. A third alternative is to execute independent, sequential simulation programs on different processors [11, 39]. This replicated-trials approach is useful if the simulation is largely stochastic and one is performing long simulation runs to reduce variance, or if one is attempting to simulate a specific simulation problem across a large number of different parameter settings. However, one drawback of this approach is that each processor must contain sufficient memory to hold the entire simulation. Furthermore, this approach is less suitable in a design environment where the results of one experiment are used to determine the next experiment to perform, because one must wait for a sequential execution to complete before results are obtained.
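The event-driven execution model described above is easy to picture as code. The sketch below is a minimal sequential event loop, the baseline that PDES parallelizes: state changes only when the next-scheduled event fires, and simulated time jumps directly between event timestamps. The simulator class, the store-and-forward handlers, and the delay values are illustrative assumptions, not drawn from the article.

```python
import heapq

# Minimal sequential discrete-event simulation loop (illustrative sketch).
class Simulator:
    def __init__(self):
        self.clock = 0.0   # current simulated time
        self.events = []   # priority queue ordered by event timestamp
        self.seq = 0       # tie-breaker for events at equal times

    def schedule(self, time, handler):
        heapq.heappush(self.events, (time, self.seq, handler))
        self.seq += 1

    def run(self, until):
        while self.events and self.events[0][0] <= until:
            time, _, handler = heapq.heappop(self.events)
            self.clock = time   # time jumps from one event to the next
            handler(self)       # handler mutates state, may schedule new events

# Toy store-and-forward model: a message arrives and is forwarded after a delay.
def arrival(sim):
    print(f"t={sim.clock}: message arrives at node")
    sim.schedule(sim.clock + 1.5, forward)

def forward(sim):
    print(f"t={sim.clock}: message forwarded to next node")

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=10.0)
```

Note that the loop is inherently sequential: any event may schedule new events earlier than others already in the queue, which is why naively executing queued events in parallel is unsafe and why PDES requires explicit synchronization between the decomposed processes.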

1,615 citations

Journal ArticleDOI
TL;DR: A new tool, called Eraser, is described for dynamically detecting data races in lock-based multithreaded programs; it uses binary rewriting techniques to monitor every shared-memory reference and verify that consistent locking behavior is observed.
Abstract: Multithreaded programming is difficult and error prone. It is easy to make a mistake in synchronization that produces a data race, yet it can be extremely hard to locate this mistake during debugging. This article describes a new tool, called Eraser, for dynamically detecting data races in lock-based multithreaded programs. Eraser uses binary rewriting techniques to monitor every shared-memory reference and verify that consistent locking behavior is observed. We present several case studies, including undergraduate coursework and a multithreaded Web search engine, that demonstrate the effectiveness of this approach.
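The "consistent locking" discipline Eraser checks can be sketched as a lockset refinement: for each shared variable v, keep the set C(v) of locks held on every access so far, intersect it on each new access, and warn when it becomes empty. The sketch below illustrates that idea; the function names and dictionaries are hypothetical stand-ins for the monitoring Eraser actually performs via binary rewriting.

```python
from threading import get_ident

# Illustrative lockset-refinement sketch (not Eraser's real implementation).
locks_held = {}   # thread id -> set of lock names currently held
candidates = {}   # shared variable name -> candidate lockset C(v)

def on_lock(lock):
    locks_held.setdefault(get_ident(), set()).add(lock)

def on_unlock(lock):
    locks_held.get(get_ident(), set()).discard(lock)

def on_access(var):
    held = locks_held.get(get_ident(), set())
    if var not in candidates:
        candidates[var] = set(held)   # first access: C(v) = locks held now
    else:
        candidates[var] &= held       # refine: C(v) = C(v) ∩ locks held
    if not candidates[var]:
        print(f"possible data race on {var!r}: no single lock guards every access")

# Two accesses to 'x' under different locks empty its lockset and trigger a warning.
on_lock("L1"); on_access("x"); on_unlock("L1")
on_lock("L2"); on_access("x"); on_unlock("L2")
```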

1,553 citations

Journal ArticleDOI
TL;DR: A generalization of the usual synchronization condition is explored that equates dynamical variables from one subsystem with a function of the variables of another subsystem, meaning that synchronization implies a collapse of the overall evolution onto a subspace of the system attractor in full space.
Abstract: Synchronization of chaotic systems is frequently taken to mean actual equality of the variables of the coupled systems as they evolve in time. We explore a generalization of this condition, which equates dynamical variables from one subsystem with a function of the variables of another subsystem. This means that synchronization implies a collapse of the overall evolution onto a subspace of the system attractor in full space. We explore this idea in systems where a response system y(t) is driven with the output of a driving system x(t), but there is no feedback to the driver. We lose generality but gain tractability with this restriction. To investigate the existence of the synchronization condition y(t) = φ(x(t)), we introduce the idea of mutual false nearest neighbors to determine when closeness in response space implies closeness in driving space. The synchronization condition also implies that the response dynamics is determined by the drive alone, and we provide tests for this as well. Examples are drawn from computer simulations on various known cases of synchronization and on data from nonlinear electrical circuits. Determining the presence of generalized synchronization will be quite important when one has only scalar observations from the drive and from the response systems, since the use of time delay (or other) embedding methods will produce "imperfect" coordinates in which strict equality of the synchronized variables is unlikely to transpire.
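One simple way to probe for the condition y(t) = φ(x(t)) numerically is the auxiliary-system check, which is a different test from the mutual-false-nearest-neighbors method the paper introduces: run two copies of the response from different initial conditions under the same drive and watch whether they converge, indicating the response state is determined by the drive alone. The toy drive and response maps below are illustrative assumptions.

```python
import numpy as np

# Auxiliary-system check for generalized synchronization (toy example).
rng = np.random.default_rng(0)
x = 0.3                           # drive state (fully chaotic logistic map)
y1, y2 = rng.uniform(-1, 1, 2)    # two response copies, different starting points

for n in range(151):
    x = 4.0 * x * (1.0 - x)                  # drive x(t), no feedback from response
    y1 = 0.5 * y1 + np.sin(2 * np.pi * x)    # contracting response driven by x
    y2 = 0.5 * y2 + np.sin(2 * np.pi * x)    # identical copy of the response
    if n % 50 == 0:
        print(f"n={n:3d}  |y1 - y2| = {abs(y1 - y2):.3e}")

# |y1 - y2| shrinks by a factor of 0.5 per step: the response forgets its initial
# condition, so its state is determined by the drive alone even though y(t) != x(t).
```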

1,514 citations

Journal ArticleDOI
TL;DR: This book proposes a unified mathematical treatment of a class of 'linear' discrete event systems, containing important subclasses of Petri nets and queuing networks with synchronization constraints, and the resulting theory is shown to parallel the classical linear system theory in several ways.
Abstract: This book proposes a unified mathematical treatment of a class of 'linear' discrete event systems, which contains important subclasses of Petri nets and queuing networks with synchronization constraints. The linearity has to be understood with respect to nonstandard algebraic structures, e.g. the 'max-plus algebra'. A calculus is developed based on such structures, followed by tools for computing the time behaviour of such systems. This algebraic vision lays the foundation of a bona fide 'discrete event system theory', which is shown to parallel the classical linear system theory in several ways.
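The 'linearity' here is over the max-plus algebra, where "addition" is max and "multiplication" is +, so a synchronization constraint ("fire when all inputs have arrived") becomes the linear recursion x(k+1) = A ⊗ x(k). A minimal sketch of that recursion follows, with a made-up delay matrix.

```python
import numpy as np

NEG_INF = -np.inf   # the max-plus "zero": the absence of an arc

def maxplus_matvec(A, x):
    """Max-plus product: (A ⊗ x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

# Hypothetical 2-node timed event graph: A[i, j] is the delay from node j to
# node i, and x[j] is the time of the k-th firing of node j. Values are made up.
A = np.array([[3.0, 7.0],
              [NEG_INF, 4.0]])
x = np.array([0.0, 1.0])

for k in range(6):
    x = maxplus_matvec(A, x)   # the "linear" recursion x(k+1) = A ⊗ x(k)
    print(f"k={k+1}: firing times = {x}")

# After a transient, the per-step increments settle to the max-plus eigenvalue of A
# (the cycle time), the analogue of eigenvalue analysis in classical linear systems.
```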

1,424 citations

Proceedings ArticleDOI
01 Oct 1997
TL;DR: Eraser dynamically detects data races in lock-based multi-threaded programs, using binary rewriting techniques to monitor every shared memory reference and verify that consistent locking behavior is observed.
Abstract: Multi-threaded programming is difficult and error prone. It is easy to make a mistake in synchronization that produces a data race, yet it can be extremely hard to locate this mistake during debugging. This paper describes a new tool, called Eraser, for dynamically detecting data races in lock-based multi-threaded programs. Eraser uses binary rewriting techniques to monitor every shared memory reference and verify that consistent locking behavior is observed. We present several case studies, including undergraduate coursework and a multi-threaded Web search engine, that demonstrate the effectiveness of this approach.

1,424 citations


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (82% related)
Network packet: 159.7K papers, 2.2M citations (80% related)
Node (networking): 158.3K papers, 1.7M citations (78% related)
Scheduling (computing): 78.6K papers, 1.3M citations (78% related)
Software: 130.5K papers, 2M citations (77% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    53
2021    1,823
2020    2,223
2019    2,643
2018    2,629
2017    2,539