
Showing papers on "Concurrency control published in 1998"


Proceedings ArticleDOI
26 May 1998
TL;DR: The protocols presented in this paper take advantage of the semantics of group communication and use relaxed isolation guarantees to eliminate the possibility of deadlocks, reduce the message overhead, and increase performance.
Abstract: This paper proposes a family of replication protocols based on group communication in order to address some of the concerns expressed by database designers regarding existing replication solutions. Due to these concerns, current database systems allow inconsistencies and often resort to centralized approaches, thereby reducing some of the key advantages provided by replication. The protocols presented in this paper take advantage of the semantics of group communication and use relaxed isolation guarantees to eliminate the possibility of deadlocks, reduce the message overhead, and increase performance. A simulation study shows the feasibility of the approach and the flexibility with which different types of bottlenecks can be circumvented.

155 citations


Journal ArticleDOI
Alexander Thomasian
TL;DR: A performance analysis of standard locking is provided and several two-phase processing methods are described and shown to outperform restart-oriented locking methods in high-contention environments provided adequate hardware resources are available.
Abstract: Standard locking (two-phase locking with on-demand lock requests and blocking upon lock conflict) is the primary concurrency control (CC) method for centralized databases. The main source of performance degradation with standard locking is blocking, whereas transaction (txn) restarts to resolve deadlocks have a secondary effect on performance. We provide a performance analysis of standard locking that accurately estimates its performance degradation leading to thrashing. We next introduce two sets of methods to cope with its performance limitations. Restart-oriented locking methods selectively abort txns to increase the level of concurrency for active txns with respect to standard locking in high-contention environments. For example, the running-priority method aborts blocked txns based on the essential blocking principle, which only allows blocking by active txns. The wait-depth-limited (WDL) method further minimizes wasted processing by basing abort decisions on the progress made by a txn. Restart waiting serves as a load-control mechanism by deferring the restart of an aborted txn until conflicting txns have left the system. These two methods have performance superior to other restart-oriented methods and standard locking in high-contention environments. In two-phase processing methods an aborted txn may continue its first phase of execution in "virtual" mode, that is, without requesting any locks, prefetching data for its second execution phase. The second execution phase is shorter since no disk I/O is required, resulting in a lower effective degree of txn concurrency and less data contention. This method is effective provided access invariance prevails; that is, txns access the same set of objects in the second phase as they did in the first. The optimistic die method is appropriate for the first phase and the optimistic kill method for further phases. Lock preclaiming instead of the optimistic kill method in the second phase prevents further restarts, which is a weak point of the optimistic CC method due to the quadratic effect, that is, the probability of failed validation increases as the square of txn size. Several two-phase processing methods are described and shown to outperform restart-oriented locking methods in high-contention environments provided adequate hardware resources are available. This tutorial reviews CC methods based on standard locking, restart-oriented locking methods, two-phase processing methods including optimistic CC, and hybrid methods (combining optimistic CC and locking) in centralized systems. Its main goals are as follows: (i) succinctly specify CC methods of interest; (ii) describe models for performance evaluation of CC methods, including new models that alleviate some of the shortcomings of models used in earlier studies; (iii) compare the performance of CC methods; (iv) list insights gained from analytic and simulation studies; (v) review methods to relieve the level of lock contention: special methods for indices and aggregate data; modified txn structures; and relaxed levels of consistency for queries; (vi) survey performance evaluation studies of CC methods; (vii) illustrate the applicability of basic analytic methods to …
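The standard-locking baseline that the tutorial analyzes can be made concrete with a short sketch. The following minimal Java lock table implements strict two-phase locking with on-demand exclusive lock requests and blocking on conflict; deadlock resolution by restarts is omitted, and all names are illustrative rather than taken from the paper.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal strict two-phase locking: exclusive locks only, requested on demand,
// with blocking on conflict; deadlock detection and restarts are omitted.
final class LockTable {
    private final Map<String, Long> owner = new HashMap<>(); // item -> txn id
    private final ReentrantLock latch = new ReentrantLock();
    private final Condition released = latch.newCondition();

    // Growing phase: acquire locks one at a time, waiting out conflicts.
    void lock(long txn, String item) throws InterruptedException {
        latch.lock();
        try {
            while (owner.containsKey(item) && owner.get(item) != txn)
                released.await(); // blocking: the main source of thrashing
            owner.put(item, txn);
        } finally {
            latch.unlock();
        }
    }

    // Shrinking phase: strict 2PL releases everything only at commit/abort.
    void releaseAll(long txn) {
        latch.lock();
        try {
            owner.values().removeIf(id -> id == txn);
            released.signalAll();
        } finally {
            latch.unlock();
        }
    }
}
```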

147 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: The principal objective of the paper is to present an algorithm that overcomes drawbacks in distributed and mobile collaborative environments based on the notion of user intention, and also on the construction of equivalent histories by exploiting and combining some general semantic properties such as forward/backward transposition.
Abstract: In a distributed groupware system, objects shared by users are subject to concurrency and real-time constraints. In order to satisfy these, various concurrency control algorithms have been proposed that exploit the semantic properties of operations (C.A. Ellis and S.J. Gibbs, 1989; A. Karsenty and M. Beaudouin-Lafon, 1993; C. Sun et al., 1996). By ordering concurrent operations, they generally guarantee consistency of the different copies of each object. However, in some situations they can result in inconsistent copies, violations of users' intentions, and the need to undo and redo some operations. The principal objective of the paper is to present an algorithm that overcomes these drawbacks in distributed and mobile collaborative environments. The algorithm is based on the notion of user intention, and also on the construction of equivalent histories by exploiting and combining some general semantic properties such as forward/backward transposition.
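To give a flavor of transposition, the sketch below shows the textbook forward transposition of two concurrent character insertions: the operation applied second is shifted past the edit already applied. This covers only the simplest insert/insert case; the paper's transposition properties are more general, and all names here are illustrative.

```java
// Forward transposition of two concurrent character insertions (the textbook
// insert/insert case only; ties and other operation types are not handled).
final class Transpose {
    record Insert(int pos, char ch) {}

    // Adjust 'pending' so it can be applied after 'applied' has executed.
    static Insert forward(Insert applied, Insert pending) {
        if (applied.pos() <= pending.pos())
            return new Insert(pending.pos() + 1, pending.ch());
        return pending;
    }

    public static void main(String[] args) {
        // Two users concurrently edit "abc": 'X' at 0 ("Xabc") and 'Y' at 2 ("abYc").
        Insert applied = new Insert(0, 'X');
        Insert adjusted = forward(applied, new Insert(2, 'Y'));
        System.out.println(adjusted); // Insert[pos=3, ch=Y] -> "XabYc"
    }
}
```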

125 citations


Patent
21 Dec 1998
TL;DR: In this article, a method and system for database concurrency control are provided that allow lock groups to contain columns of different tables and allow an individual column of a table to be in more than one lock group.
Abstract: A method and system for database concurrency control are provided that allow lock groups to contain columns of different tables and allow an individual column of a table to be in more than one lock group. While using optimistic concurrency control to monitor multiple transactions modifying the same database, the system allows concurrent access to a single table when individual columns of the table are accessed by separate users or applications. This, in turn, reduces the delay of waiting for a table to become free for access and decreases the delay of rolling back transactions that concurrently access a table. Reducing these delays increases the overall data processing efficiency of the system.
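A hypothetical sketch of the claimed structure follows: a lock group may span columns of different tables, and a column may belong to several groups, so two transactions need to be serialized only when the columns they update share a group. The class and method names are assumptions for illustration, not the patent's terms.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Lock groups that may span tables, with columns allowed in several groups.
final class LockGroups {
    // group name -> member columns, written as "table.column"
    private final Map<String, Set<String>> groups = new HashMap<>();

    void addColumn(String group, String tableColumn) {
        groups.computeIfAbsent(group, g -> new HashSet<>()).add(tableColumn);
    }

    // Two transactions must be serialized only if their column sets meet in
    // some common group; disjoint columns of one table can proceed in parallel.
    boolean mustSerialize(Set<String> columnsA, Set<String> columnsB) {
        for (Set<String> members : groups.values())
            if (!Collections.disjoint(members, columnsA)
                    && !Collections.disjoint(members, columnsB))
                return true;
        return false;
    }
}
```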

93 citations


Journal ArticleDOI
01 May 1998
TL;DR: An integrated methodology for fragmentation and allocation that is simple and practical and can be applied to real-life problems is developed that is theoretically sound and comprehensive enough to achieve the objectives of efficiency and effectiveness.
Abstract: Distributed database design requires decisions on closely related issues such as fragmentation, allocation, degree of replication, concurrency control, and query processing. We develop an integrated methodology for fragmentation and allocation that is simple and practical and can be applied to real-life problems. The methodology also incorporates replication and concurrency control costs. At the same time, it is theoretically sound and comprehensive enough to achieve the objectives of efficiency and effectiveness. It distributes data across multiple sites such that design objectives in terms of response time and availability for transactions, and constraints on storage space, are adequately addressed. This methodology has been used successfully in designing a distributed database system for a large geographically distributed organization.

78 citations


Proceedings ArticleDOI
26 May 1998
TL;DR: In this paper, the authors explore the use of different variants of broadcast protocols for managing replicated databases and present a protocol that employs atomic broadcast and completely eliminates the need for acknowledgements during transaction commitment.
Abstract: We explore the use of different variants of broadcast protocols for managing replicated databases. Starting with the simplest broadcast primitive, the reliable broadcast protocol, we show how it can be used to ensure correct transaction execution. The protocol is simple, and has several advantages, including prevention of deadlocks. However, it requires a two-phase commitment protocol for ensuring correctness. We then develop a second protocol that uses causal broadcast and avoids the overhead of two-phase commit by exploiting the causal delivery properties of the broadcast primitives to implicitly collect the relevant information used in two-phase commit. Finally, we present a protocol that employs atomic broadcast and completely eliminates the need for acknowledgements during transaction commitment.
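The final protocol's key property, commitment without acknowledgements, can be sketched as follows: because atomic broadcast delivers write sets to every replica in the same total order, each replica can certify transactions deterministically and reach the same commit/abort decision locally. In the sketch the queue stands in for the broadcast layer; the certification test and all names are illustrative assumptions, not the paper's exact rules.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;

// Every replica consumes the same totally ordered stream of transactions and
// certifies each one with the same deterministic test, so all replicas decide
// commit/abort identically with no acknowledgement round.
final class Replica implements Runnable {
    record Txn(Map<String, Long> readVersions, Map<String, String> writes) {}

    private final BlockingQueue<Txn> delivered;                // total order
    private final Map<String, String> db = new HashMap<>();
    private final Map<String, Long> version = new HashMap<>(); // item -> version

    Replica(BlockingQueue<Txn> delivered) { this.delivered = delivered; }

    @Override public void run() {
        try {
            while (true) {
                Txn t = delivered.take();
                // Commit only if every item read is still at the version read;
                // identical input order makes this decision identical everywhere.
                boolean commit = t.readVersions().entrySet().stream().allMatch(
                        e -> version.getOrDefault(e.getKey(), 0L).equals(e.getValue()));
                if (commit) {
                    db.putAll(t.writes());
                    t.writes().keySet().forEach(k -> version.merge(k, 1L, Long::sum));
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```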

63 citations


Book ChapterDOI
01 Jan 1998
TL;DR: In this paper, the authors introduce distributed problem solving from the human level, briefly present the accompanying research area of Computer-Supported Cooperative Work (CSCW), and then focus on the awareness information that is of special importance for supporting coordinated cooperation of groups with unstructured tasks.
Abstract: Research in distributed problem solving in the last years focused on distributed applications which cooperate to accomplish a task. Another level of distributed problem solving is that of human teams which are distributed in space and cooperate in solving a problem. In this paper we will introduce distributed problem solving from the ‘human level’, briefly present the accompanying research area of Computer-Supported Cooperative Work (CSCW) and the different basic mechanisms of computer support for workgroup computing, and then focus on the awareness information that is of special importance for supporting coordinated cooperation of groups with unstructured tasks.

60 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: It is shown how a small number of additional interfaces enable GiST to support a much larger class of operations, including nearest-neighbor and ranked search, user-defined aggregation, and index-assisted selectivity estimation, which are increasingly common in new database applications.
Abstract: The generalized search tree, or GiST, defines a framework of basic interfaces required to construct a hierarchical access method for database systems. As originally specified, GiST only supports record selection. We show how a small number of additional interfaces enable GiST to support a much larger class of operations. Members of this class, which includes nearest-neighbor and ranked search, user-defined aggregation, and index-assisted selectivity estimation, are increasingly common in new database applications. The advantages of implementing these operations in the GiST framework include reduced user development effort and the ability to use industrial-strength concurrency and recovery mechanisms provided by expert implementers.
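A sketch of the kind of additional interfaces the paper argues for appears below: beyond the original consistency callback for record selection, a user-supplied distance function lets one traversal answer nearest-neighbor and ranked queries, and an accumulator supports user-defined aggregation. Every name is an illustrative assumption, not the actual GiST API.

```java
// Illustrative extension surface for a GiST-style tree; not the real GiST API.
interface GistKey {}
interface GistEntry {}

interface GistExtensions<A> {
    boolean consistent(GistKey subtree, GistEntry query); // original: record selection
    double distance(GistKey subtree, GistEntry query);    // added: NN / ranked search
    A accumulate(A state, GistEntry entry);               // added: user-defined aggregation
    double selectivity(GistKey subtree, GistEntry query); // added: selectivity estimation
}
```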

54 citations


Journal ArticleDOI
TL;DR: An RTU-based user-space TCP/Internet protocol (TCP/IP) implementation is shown to provide bandwidth guarantees for bulk data connections even with real-time and "best-effort" load competing for CPU on the endsystem.
Abstract: Two important requirements for protocol implementations to be able to provide quality of service (QoS) guarantees within the endsystem are: (1) efficient processor scheduling for application and protocol processing and (2) efficient mechanisms for data movement. Scheduling is needed to guarantee that the application and protocol tasks involved in processing each stream execute in a timely manner and obtain their required share of the CPU. We have designed and implemented an operating system (OS) mechanism called the real-time upcall (RTU) to provide such guarantees to applications. The RTU mechanism provides a simple real-time concurrency model and has minimal overheads for concurrency control and context switching compared to thread-based approaches. To demonstrate its efficacy, we have built RTU-based transmission control protocol (TCP) and user datagram protocol (UDP) protocol implementations that combine high efficiency with guaranteed performance. For efficient data movement, we have implemented a number of techniques such as: (1) direct movement of data between the application and the network adapter; (2) batching of input-output (I/O) operations to reduce context switches; and (3) header-data splitting at the receiver to keep bulk data page aligned. Our RTU-based user-space TCP/Internet protocol (TCP/IP) implementation provides bandwidth guarantees for bulk data connections even with real-time and "best-effort" load competing for CPU on the endsystem. Maximum achievable throughput is higher than the NetBSD kernel implementation due to efficient data movement. Sporadic and small messages with low delay requirements are also supported using reactive RTUs that are scheduled with very low delay. We believe that ours is the first solution that combines good data path performance with application-level bandwidth and delay guarantees for standard protocols and OSs.

53 citations


Journal ArticleDOI
TL;DR: The main contribution of the paper is the synthesis of two algorithmic techniques, guess propagation and primary copy replication, for implementing a framework that is easy to program to and is well suited for the needs of groupware applications.
Abstract: This paper describes algorithms for implementing a high-level programming model for synchronous distributed groupware applications. In this model, several application data objects may be atomically updated, and these objects automatically maintain consistency with their replicas using an optimistic algorithm. Changes to these objects may be optimistically or pessimistically observed by view objects by taking consistent snapshots. The algorithms for both update propagation and view notification are based upon optimistic guess propagation principles adapted for fast commit by using primary copy replication techniques. The main contribution of the paper is the synthesis of these two algorithmic techniques, guess propagation and primary copy replication, for implementing a framework that is easy to program to and is well suited for the needs of groupware applications.

51 citations


Journal ArticleDOI
TL;DR: A new concurrency control protocol for main-memory real-time database systems is presented, which is based on predeclaration of data requirements at a relation granularity, and offers the possibility of determining execution times without the effects of blocking and I/O.

Proceedings ArticleDOI
02 Dec 1998
TL;DR: This paper investigates a set of code transformations which allow systematic integration of a real-time guest thread into a host thread, producing an integrated thread which meets all real-time requirements.
Abstract: This paper presents details of how to perform thread integration to provide low-cost concurrency for general-purpose microcontrollers and microprocessors. A post-pass compiler interleaves multiple threads of control at the machine instruction level for concurrent execution on a uniprocessor and provides very fine-grain multithreading without context switching overhead. Such efficient concurrency allows implementation of real-time functions in software rather than dedicated peripheral hardware. We investigate a set of code transformations which allow systematic integration of a real-time guest thread into a host thread, producing an integrated thread which meets all real-time requirements. The thread integration concept and the associated code transformations have been applied to example functions chosen from three application domains to evaluate the method's feasibility.

Book ChapterDOI
01 Jan 1998
TL;DR: Centralized workflow systems fall short of meeting the demands of distributed heterogeneous environments, which are very common in enterprises of even moderate complexity.
Abstract: Workflows are activities involving the coordinated execution of multiple tasks performed by different processing entities, mostly in distributed heterogeneous environments, which are very common in enterprises of even moderate complexity. Centralized workflow systems fall short of meeting the demands of such environments.

Proceedings Article
24 Aug 1998
TL;DR: A detailed comparison with the TSB-tree, both analytically and through experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.
Abstract: Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of "time-travel" queries, the migration of old record versions onto inexpensive archive media, and high insert and update rates. This paper introduces a new access method for transaction-time temporal data, called the Log-structured History Data Access Method (LHAM), that meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and through experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.
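LHAM's basic principle, partitioning record versions into successive components by timestamp, can be sketched briefly. The code below assigns versions to levels by age and restricts time-travel queries to components whose range overlaps the query interval; rolling migration between levels is omitted, and all names are illustrative, not the paper's implementation.

```java
import java.util.*;

// Record versions are partitioned into successive components by timestamp;
// component 0 is the newest (fastest storage). Rolling migration is omitted.
final class Lham {
    record Version(String key, long timestamp, String payload) {}

    private final List<TreeMap<Long, List<Version>>> components = new ArrayList<>();
    private final long rangePerComponent; // width of each component's time range

    Lham(int levels, long rangePerComponent) {
        this.rangePerComponent = rangePerComponent;
        for (int i = 0; i < levels; i++) components.add(new TreeMap<>());
    }

    // Place a version in the component matching its age at insertion time.
    void insert(Version v, long now) {
        int level = (int) Math.min(components.size() - 1,
                                   (now - v.timestamp()) / rangePerComponent);
        components.get(level)
                  .computeIfAbsent(v.timestamp(), t -> new ArrayList<>())
                  .add(v);
    }

    // "Time-travel" query: only components overlapping [from, to] are scanned.
    List<Version> query(String key, long from, long to) {
        List<Version> out = new ArrayList<>();
        for (TreeMap<Long, List<Version>> comp : components)
            for (List<Version> versions : comp.subMap(from, true, to, true).values())
                for (Version v : versions)
                    if (v.key().equals(key)) out.add(v);
        return out;
    }
}
```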

Proceedings ArticleDOI
01 Jan 1998
TL;DR: The paper presents a new lock-free algorithm that provides many of the advantages of non-blocking algorithms while avoiding the overhead of true non-blocking behavior, and demonstrates application performance superior to all others studied.
Abstract: Passing messages through shared memory plays an important role in symmetric multiprocessors and on Clumps. The management of concurrent access to message queues is an important aspect of design for shared memory message passing systems. Using both microbenchmarks and applications, the paper compares the performance of concurrent access algorithms for passing active messages on a Sun Enterprise 5000 server. The paper presents a new lock-free algorithm that provides many of the advantages of non-blocking algorithms while avoiding the overhead of true non-blocking behavior. The lock-free algorithm couples synchronization tightly to the data structure and demonstrates application performance superior to all others studied. The success of this algorithm implies that other practical problems might also benefit from a reexamination of the non-blocking literature.
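The paper's algorithm itself is not reproduced here, but the idea of coupling synchronization to the queue's own data structure can be illustrated with a single-producer/single-consumer ring buffer in which the head and tail indices are the only synchronization state, so the fast path takes no lock. This is a sketch of the general technique under that assumption, not the paper's multi-party algorithm.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Single-producer/single-consumer ring buffer: the head and tail indices are
// the only synchronization state, so neither side ever takes a lock. Index
// overflow after 2^31 operations is ignored for brevity in this sketch.
final class SpscQueue<T> {
    private final AtomicReferenceArray<T> slots;
    private final AtomicInteger head = new AtomicInteger(); // consumer cursor
    private final AtomicInteger tail = new AtomicInteger(); // producer cursor

    SpscQueue(int capacity) { slots = new AtomicReferenceArray<>(capacity); }

    boolean offer(T msg) {                                   // producer only
        int t = tail.get();
        if (t - head.get() == slots.length()) return false;  // queue full
        slots.set(t % slots.length(), msg);
        tail.set(t + 1);                                      // publish after write
        return true;
    }

    T poll() {                                                // consumer only
        int h = head.get();
        if (h == tail.get()) return null;                     // queue empty
        T msg = slots.get(h % slots.length());
        slots.set(h % slots.length(), null);                  // free the slot
        head.set(h + 1);
        return msg;
    }
}
```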

Journal ArticleDOI
TL;DR: A unified transaction model for databases with an arbitrary set of semantically rich operations is presented; several sufficiently rich subclasses of prefix-reducible schedules are proposed, and concurrency control protocols that guarantee both serializability and atomicity for schedules from these classes are designed.

Proceedings ArticleDOI
17 Jun 1998
TL;DR: The paper presents an approach to implementing fully asynchronous reader/writer mechanisms which addresses the problems of priority inversion and blocking among tasks within multiprocessor real-time systems, and helps to remove the priority inversion and blocking incurred by commonly used lock-based synchronization mechanisms.
Abstract: The paper presents an approach to implementing fully asynchronous reader/writer mechanisms which addresses the problems of priority inversion and blocking among tasks within multiprocessor real-time systems. The approach is conceived from the concept of process consensus: the writer and the readers come to an agreement on accessing the shared data before proceeding to carry out their respective data operations. Because neither locking operations nor repeated read-and-check actions are involved, the shared data can be accessed at any time by the writer and all the readers in a manner that is not only wait-free but also loop-free. In addition, sharing data via this approach has no impact on either the timing behaviour or the schedulability of any task in the system. Hence the approach helps to remove the priority inversion and blocking incurred by commonly used lock-based synchronization mechanisms.
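One simple way to obtain wait-free, loop-free sharing, shown below as an illustration of the goal rather than the paper's consensus-based protocol, is to publish an immutable snapshot through a single atomic reference: the writer performs one atomic store and every reader one atomic load, with no locks and no read-and-check retries.

```java
import java.util.concurrent.atomic.AtomicReference;

// The writer installs a fresh immutable snapshot with one atomic store; every
// reader takes one atomic load. No locks and no retries on either side.
final class SharedSample {
    record Sample(long timestamp, double value) {} // immutable snapshot

    private final AtomicReference<Sample> current =
            new AtomicReference<>(new Sample(0L, 0.0));

    void write(long ts, double v) { current.set(new Sample(ts, v)); } // writer
    Sample read() { return current.get(); }                           // any reader
}
```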

Journal ArticleDOI
TL;DR: In this article, the authors extend the classical two-phase locking mechanism to multilevel secure file systems and provide a set of linguistic constructs that support exception handling, partial rollback, and forward recovery.
Abstract: The concurrency control requirements for transaction processing in a multilevel secure file system are different from those in conventional transaction processing systems. In particular, there is the need to coordinate transactions at different security levels while avoiding both potential timing covert channels and the starvation of transactions at higher security levels. Suppose a transaction at a lower security level attempts to write a data item that is being read by a transaction at a higher security level. On the one hand, a timing covert channel arises if the transaction at the lower security level is either delayed or aborted by the scheduler. On the other hand, the transaction at the higher security level may be subjected to an indefinite delay if it is forced to abort repeatedly. This paper extends the classical two-phase locking mechanism to multilevel secure file systems. The scheme presented here prevents potential timing covert channels and avoids the abort of higher-level transactions while nonetheless guaranteeing serializability. The programmer is provided with a powerful set of linguistic constructs that supports exception handling, partial rollback, and forward recovery. The proper use of these constructs can prevent indefinite delay in the completion of a higher-level transaction, and allows the programmer to trade off starvation against transaction isolation.

Journal ArticleDOI
TL;DR: The objective of this paper is to illustrate how global semantic ACID properties, enforced by the transactions themselves, may be implemented on top of client/server technology to preserve high data availability.
Abstract: Global ACID properties (Atomicity, Consistency, Isolation and Durability) may be implemented by using a DDBMS (Distributed DataBase Management System). However, in this solution data availability is low. Further, data may be blocked: if some data are locked from a remote location, they cannot always be unlocked when the connection to the data fails. This is not a problem when client/server technology is used, because client/server technology only uses local locks, one reason why multidatabases and client/server technology are widely used in real-life distributed systems. However, the trouble with such systems is that they have no inherent global ACID properties. The objective of this paper is to illustrate how global semantic ACID properties, enforced by the transactions themselves, may be implemented on top of client/server technology. This is done to preserve high data availability. The global atomicity property is implemented by using retriable and compensatable subtransactions. The global consistency property must be implemented by the transactions themselves. The global isolation property is implemented by using countermeasures to isolation anomalies. The global durability property is implemented by using the durability property of the local DBMSs. The largest bank in Denmark, Den Danske Bank, has implemented all its applications using methods described in this paper. © 1998 John Wiley & Sons, Ltd.
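The atomicity mechanism can be sketched in saga style: compensatable subtransactions record a semantic undo action, and retriable subtransactions are repeated after the decision to commit until they succeed. This is a minimal sketch in the spirit of the paper; the interfaces and the retry policy are assumptions.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Semantic atomicity from compensatable and retriable subtransactions: undo
// actions are stacked for compensatable steps; retriable steps run after the
// commit decision and are repeated until they succeed.
final class SemanticTxn {
    interface Step { void run() throws Exception; }

    private final Deque<Runnable> compensations = new ArrayDeque<>(); // LIFO undo
    private final List<Step> retriable = new ArrayList<>();

    void compensatable(Step action, Runnable undo) throws Exception {
        action.run();             // e.g. an update against one local DBMS
        compensations.push(undo); // remember how to semantically undo it
    }

    void retriable(Step action) { retriable.add(action); }

    void commit() throws InterruptedException {
        for (Step s : retriable) {
            while (true) {        // retriable: repeat until it succeeds
                try { s.run(); break; }
                catch (Exception e) { Thread.sleep(100); } // back off, retry
            }
        }
    }

    void abort() { compensations.forEach(Runnable::run); } // global atomicity
}
```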

Proceedings ArticleDOI
10 Aug 1998
TL;DR: A new mechanism is proposed that prevents network saturation by dynamically adjusting message injection limitation into the network, making fully adaptive feasible and guaranteeing that the frequency of deadlock is really negligible, allowing the use of simple low-cost recovery strategies.
Abstract: Deadlock avoidance and recovery techniques are alternatives for dealing with the interconnection network deadlock problem. Both techniques allow fully adaptive routing on some set of resources while providing dedicated resources to escape from deadlock. They mainly differ in the way they supply escape paths and in when those paths are used. As the escape paths only provide limited bandwidth to escape from deadlocks, both techniques suffer from severe performance degradation when the network is close to saturation. On the other hand, deadlock recovery is based on the assumption that deadlocks are rare; several studies show that deadlocks are more likely when the network is close to or beyond saturation. In this paper we propose a new mechanism that prevents network saturation by dynamically adjusting the message injection limit into the network. As a consequence, this mechanism avoids the performance degradation problem that typically occurs in both deadlock avoidance and recovery techniques, making fully adaptive routing feasible. Also, it guarantees that the frequency of deadlock is truly negligible, allowing the use of simple low-cost recovery strategies.

Proceedings ArticleDOI
02 Dec 1998
TL;DR: The approach of using separate algorithms to process read-only transactions in real-time systems is investigated, and a weaker form of consistency, view consistency, is defined, which allows ROTs to perceive different serialization orders of update transactions.
Abstract: In this paper, we investigate the approach of using separate algorithms to process read-only transactions in real-time systems. A read-only transaction (ROT) is a transaction that only reads, but does not update, any data item. Since there is a significant proportion of ROTs in several real-time systems, it is important to investigate how to process ROTs effectively. Using an algorithm to process ROTs separately from update transactions may reduce the interference between ROTs and update transactions. This reduced interference alleviates the impact of concurrency control on real-time priority-driven scheduling and improves the timeliness of the system. Moreover, we explore the different consistency requirements of ROTs. In particular, we define a weaker form of consistency, view consistency, which allows ROTs to perceive different serialization orders of update transactions. While view consistency permits non-serializability, ROTs are still ensured to see consistent data. We propose two robust algorithms for the different consistency requirements of ROTs. The two algorithms are robust in the sense that they can be used in a compatible way, so that a real-time system can provide differently consistent data for different applications. The performance of the two algorithms was examined through a series of simulation studies. The simulation results show that the two algorithms outperform the high-priority two-phase locking protocol.
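One standard way to let ROTs read consistent data without interfering with update transactions is multiversioning, sketched below: each ROT pins one immutable committed snapshot, so all of its reads are mutually consistent and no update transaction is ever blocked or aborted on its account. This illustrates the general idea only; the paper's two algorithms and their consistency guarantees differ in detail.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Read-only transactions pin one immutable committed snapshot, so all their
// reads are mutually consistent and updates are never blocked by readers.
final class MvStore {
    private final AtomicReference<Map<String, Integer>> committed =
            new AtomicReference<>(Map.of());

    // Update transaction commit: atomically install a new snapshot.
    void commitUpdate(Map<String, Integer> writes) {
        committed.updateAndGet(old -> {
            Map<String, Integer> next = new HashMap<>(old);
            next.putAll(writes);
            return Map.copyOf(next);
        });
    }

    // ROT: every read inside the transaction uses the pinned snapshot.
    Map<String, Integer> beginReadOnly() { return committed.get(); }
}
```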

Proceedings ArticleDOI
03 Jul 1998
TL;DR: MRL provides an integrated framework for concurrency control, emergent event handling, and negotiation of distributed robotic agents in a declarative manner, and incorporates concurrent control facilities into distributed AI and agent oriented programming systems.
Abstract: The paper presents a programming language, Multiagent Robot Language (MRL), for communication with and control of robotic agents, including physical robots and sensors. While robotic agents can perform their own tasks, task level cooperation allows them to perform more complex tasks that cannot be achieved by a single robot. MRL provides an integrated framework for concurrency control, emergent event handling, and negotiation of distributed robotic agents in a declarative manner. MRL is an executable specification language for multiagent robot control, since MRL programs are transformed into a set of guarded Horn clauses (parallel logic programs running on parallel computers). This feature provides both low and semantic level distributed control to enable intelligent cooperation between physical agents; this new approach incorporates concurrent control facilities into distributed AI and agent oriented programming systems.

Journal ArticleDOI
TL;DR: The paper describes the techniques developed to support transaction and concurrency control in a temporal DBMS implemented as an additional layer on top of a commercial DBMS, and shows that the overhead introduced is negligible.

Journal ArticleDOI
TL;DR: This paper proposes a framework in which both static and dynamic costs of transactions can be taken into account, and presents a method for pre-analyzing transactions based on the notion of branch-points for data accessed up to a branch point and predicting expected data access for completing the transaction.
Abstract: Real-time databases are poised to be an important component of complex embedded real-time systems. In real-time databases (as opposed to real-time systems), transactions must satisfy the ACID properties in addition to satisfying the timing constraints specified for each transaction (or task). Although several approaches have been proposed to combine real-time scheduling and database concurrency control methods, to the best of our knowledge, none of them provide a framework for taking into account the dynamic cost associated with aborts, rollbacks, and restarts of transactions. In this paper, we propose a framework in which both static and dynamic costs of transactions can be taken into account. Specifically, we present: i) a method for pre-analyzing transactions based on the notion of branch-points, for data accessed up to a branch point, and for predicting the expected data accesses to be incurred in completing the transaction; ii) a formulation of cost that includes static and dynamic factors for prioritizing transactions; iii) a scheduling algorithm which uses the above two; and iv) a simulation of the algorithm for several operating conditions and workloads. Our dynamic priority assignment policy (termed the cost-conscious approach or CCA) adapts well to fluctuations in the system load without causing excessive numbers of transaction restarts. Our simulations indicate that: i) CCA performs better than the EDF-HP algorithm for both soft and firm deadlines; ii) CCA is fairer than EDF-HP; iii) CCA is better than EDF-CR for soft deadlines, even though CCA requires and uses less information; and iv) CCA is especially good for disk-resident data.
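Purely as a hypothetical illustration of combining static and dynamic cost factors, a priority function might weigh deadline urgency against the work a restart would waste plus the rollback cost, as below. The paper's actual CCA formulation may differ; every name and the formula itself are assumptions.

```java
// Hypothetical priority: nearer deadlines (static) and larger sunk or rollback
// costs (dynamic) both raise a transaction's priority. Illustrative only.
final class CostConsciousPriority {
    static double priority(double deadline, double now,
                           double workDone, double rollbackCost) {
        double slack = Math.max(deadline - now, 1e-9); // avoid division by zero
        return (workDone + rollbackCost) / slack;
    }
}
```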

01 Jan 1998
TL;DR: The Thread-Specific Storage pattern improves performance and simplifies multithreaded applications by allowing multiple threads to use one logically global access point to retrieve thread-specific data without incurring locking overhead for each access.
Abstract: In theory, multi-threading an application can improve performance (by executing multiple instruction streams simultaneously) and simplify program structure (by allowing each thread to execute synchronously rather than reactively or asynchronously). In practice, multi-threaded applications often perform no better, or even worse, than single-threaded applications due to the overhead of acquiring and releasing locks. In addition, multi-threaded applications are hard to program due to the complex concurrency control protocols required to avoid race conditions and deadlocks. This paper describes the Thread-Specific Storage pattern, which alleviates several problems with multi-threading performance and programming complexity. The Thread-Specific Storage pattern improves performance and simplifies multithreaded applications by allowing multiple threads to use one logically global access point to retrieve thread-specific data without incurring locking overhead for each access.
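In Java, the pattern maps directly onto java.lang.ThreadLocal: one logically global access point, a distinct value per thread, and no lock acquired on access. The errno-style holder below is a classic use of the pattern; the class itself is illustrative.

```java
// Thread-Specific Storage via ThreadLocal: a global access point returning
// per-thread data, with no locking on the access path.
final class ThreadSpecificErrno {
    private static final ThreadLocal<Integer> errno =
            ThreadLocal.withInitial(() -> 0);

    static int get() { return errno.get(); }        // no locking on this path
    static void set(int code) { errno.set(code); }  // affects this thread only
}
```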

Journal ArticleDOI
TL;DR: It is argued that a Concurrency Control approach fits indirect cooperation better than a Concurrent Programming one, and that there exist syntactic correctness criteria defining a large sphere of security in which application programmers are released from the burden of explicitly programming interactions.

Proceedings ArticleDOI
30 Mar 1998
TL;DR: A new technique for efficiently handling index region modifications is developed and extended to reduce/eliminate query blocking overheads during node-splits, and two variants of this extended scheme are examined: one that reduces the blocking overhead for queries, and another that completely eliminates it.
Abstract: Multi-dimensional index structures such as R-trees enable fast searching in high-dimensional spaces. They differ from uni-dimensional structures in the following aspects: index regions in the tree may be modified during ordinary insert and delete operations; and node splits during inserts are quite expensive. Both these characteristics may lead to reduced concurrency of update and query operations. We examine how to achieve high concurrency for multi-dimensional structures. First, we develop a new technique for efficiently handling index region modifications. Then, we extend it to reduce/eliminate query blocking overheads during node-splits. We examine two variants of this extended scheme: one that reduces the blocking overhead for queries, and another that completely eliminates it. Experiments on image data on a shared-memory multiprocessor show that these schemes achieve up to 2 times higher throughput than existing techniques, and scale well with the number of processors.

Proceedings ArticleDOI
01 Mar 1998
TL;DR: A course on Parallel and Distributed Processing that is taught at undergraduate level in the Computer Science degree of the University is described, presenting an integrated approach concerning concurrency, parallelism, and distribution issues.
Abstract: Most known teaching experiences focus on parallel computing courses only, but some teaching experiences on distributed computing courses have also been reported. In this paper we describe a course on Parallel and Distributed Processing that is taught at undergraduate level in the Computer Science degree of our University. This course presents an integrated approach concerning concurrency, parallelism, and distribution issues. It is a breadth-first course addressing a wide spectrum of abstractions: the theoretical component focuses on the fundamental abstractions to model concurrent systems, including process cooperation schemes, concurrent programming models, data and control distribution, concurrency control and recovery in transactional systems, and parallel processing models; the practical component illustrates the design and implementation issues involved in selected topics such as a data and control distribution problem, a distributed transaction-based support system, and a parallel algorithm. We also discuss how this approach has been contributing to preparing students for further work in research and development of concurrent, distributed, or parallel systems.

Proceedings ArticleDOI
28 Jul 1998
TL;DR: This work is creating a distributed infrastructure for the SCIRun computational steering system, a tightly integrated, multi-threaded framework for composing scientific applications from existing or new components.
Abstract: Building systems that alter program behavior during execution based on user-specified criteria (computational steering systems) has been a recent research topic, particularly among the high performance computing community. To enable a computational steering system with powerful visualization capabilities to run on distributed memory architectures, a distributed infrastructure (or runtime system) must first be built. This infrastructure would permit harnessing a variety of machines to collaborate on an interactive simulation. Building such an infrastructure requires strategies for coordinating execution across machines (concurrency control mechanisms), mechanisms for fast data transfer between machines, and mechanisms for user manipulation of remote execution. We are creating a distributed infrastructure for the SCIRun computational steering system. SCIRun, a scientific problem solving environment (PSE), provides the ability to interactively guide or steer a running computation. Initially designed for a shared memory multiprocessor, SCIRun is a tightly integrated, multi-threaded framework for composing scientific applications from existing or new components. High performance computing is needed to maintain interactivity for scientists and engineers running simulations. Extending such a performance-sensitive application toolkit to enable pieces of the computation to run on different machine architectures all within the same computation would prove very useful. Not only could many different machines execute this framework, but also several machines could be configured to work synergistically on computations.

Journal ArticleDOI
01 Dec 1998
TL;DR: The key idea of COO is to base software process correctness on a safe transaction model which integrates some general properties that define a very permissive core synchronisation protocol, and process specific knowledge that allows the gearing of the core protocol towards process characteristics.
Abstract: Indexing terms: Cooperation, Consistency, Concurrency control, Constraints, Software engineering environment, Transactions Abstract: The COO system proposes a framework to organise the cooperation between developers of complex software systems. The key idea of COO is to base software process correctness on a safe transaction model: COO promotes an original advanced transaction model which integrates some general properties that define a very permissive core synchronisation protocol, and process specific knowledge that allows the gearing of the core protocol towards process characteristics.