
Showing papers on "Concurrency control" published in 2005


Journal ArticleDOI
TL;DR: A theory is developed that characterizes when nonserializable executions of applications can occur under Snapshot Isolation, and it is applied to demonstrate that the TPC-C benchmark application has no serialization anomalies under SI, and how this demonstration can be generalized to other applications.
Abstract: Snapshot Isolation (SI) is a multiversion concurrency control algorithm, first described in Berenson et al. [1995]. SI is attractive because it provides an isolation level that avoids many of the common concurrency anomalies, and has been implemented by Oracle and Microsoft SQL Server (with certain minor variations). SI does not guarantee serializability in all cases, but the TPC-C benchmark application [TPC-C], for example, executes under SI without serialization anomalies. All major database system products are delivered with default nonserializable isolation levels, often ones that encounter serialization anomalies more commonly than SI, and we suspect that numerous isolation errors occur each day at many large sites because of this, leading to corrupt data sometimes noted in data warehouse applications. The classical justification for lower isolation levels is that applications can be run under such levels to improve efficiency when they can be shown not to result in serious errors, but little or no guidance has been offered to application programmers and DBAs by vendors as to how to avoid such errors. This article develops a theory that characterizes when nonserializable executions of applications can occur under SI. Near the end of the article, we apply this theory to demonstrate that the TPC-C benchmark application has no serialization anomalies under SI, and then discuss how this demonstration can be generalized to other applications. We also present a discussion on how to modify the program logic of applications that are nonserializable under SI so that serializability will be guaranteed.
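
For illustration, here is a minimal sketch (not from the article) of the classic write-skew anomaly that SI permits: two transactions read the same snapshot, check a constraint, and write disjoint items, so first-committer-wins lets both commit even though no serial order produces the result. The on-call scenario and all names are hypothetical.

```python
# A hypothetical on-call roster: a rule says at least one doctor must stay
# on call. Each transaction reads a snapshot, sees the other doctor on
# call, and signs itself off. The write sets are disjoint, so SI's
# first-committer-wins check raises no conflict, yet the combined result
# breaks the rule -- the classic write-skew anomaly.

committed = {"alice_on_call": True, "bob_on_call": True}

def sign_off(me, other, snapshot):
    """Read only the snapshot; return this transaction's write set."""
    if snapshot[other]:          # constraint looks safe in the snapshot
        return {me: False}
    return {}

snap = dict(committed)           # both transactions share one snapshot
w1 = sign_off("alice_on_call", "bob_on_call", snap)
w2 = sign_off("bob_on_call", "alice_on_call", snap)

assert not (w1.keys() & w2.keys())   # disjoint writes: both commit under SI
committed.update(w1)
committed.update(w2)
print(committed)   # both False -- an outcome no serial execution allows
```

The article's theory characterizes exactly when an application's transaction programs can produce dependency cycles like this one under SI.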

351 citations


Book
21 Nov 2005
TL;DR: This book provides an extensive survey of the R-tree's evolution, studying the applicability of the structure and its variations to efficient query processing, the accuracy of proposed cost models, and implementation issues such as concurrency control and parallelism.
Abstract: Space support in databases poses new challenges in every part of a database management system, and the capability of spatial support in the physical layer is considered very important. This has led to the design of spatial access methods to enable the effective and efficient management of spatial objects. R-trees have a simplicity of structure and, together with their resemblance to the B-tree, allow developers to incorporate them easily into existing database management systems for the support of spatial query processing. This book provides an extensive survey of the R-tree's evolution, studying the applicability of the structure and its variations to efficient query processing, the accuracy of proposed cost models, and implementation issues such as concurrency control and parallelism. Written for database researchers, designers, and programmers as well as graduate students, this comprehensive monograph will be a welcome addition to the field.

247 citations


Proceedings ArticleDOI
14 Jun 2005
TL;DR: This paper presents a middleware-based replication scheme which provides the popular snapshot isolation level at the same tuple-level granularity as database systems like PostgreSQL and Oracle, without any need to declare transaction properties in advance.
Abstract: Many cluster based replication solutions have been proposed providing scalability and fault-tolerance. Many of these solutions perform replica control in a middleware on top of the database replicas. In such a setting concurrency control is a challenge and is often performed on a table basis. Additionally, some systems put severe requirements on transaction programs (e.g., to declare all objects to be accessed in advance). This paper addresses these issues and presents a middleware-based replication scheme which provides the popular snapshot isolation level at the same tuple-level granularity as database systems like PostgreSQL and Oracle, without any need to declare transaction properties in advance. Both read-only and update transactions can be executed at any replica while providing data consistency at all times. Our approach provides what we call "1-copy-snapshot-isolation" as long as the underlying database replicas provide snapshot isolation. We have implemented our approach as a replicated middleware on top of PostgreSQL replicas. By providing a standard JDBC interface, the middleware is completely transparent to the client program. Fault-tolerance is provided by automatically reconnecting clients in case of crashes. Our middleware shows good performance in terms of response times and scalability.
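
As a rough illustration of how a middleware can enforce snapshot isolation at tuple granularity, the sketch below applies first-committer-wins validation to transaction writesets; the class and method names are invented for this example and do not come from the paper.

```python
# A sketch of tuple-level certification as a replication middleware might
# perform it. Each tuple records the logical commit timestamp of its latest
# version; a transaction validated at commit aborts if any tuple it wrote
# was committed by someone else after the transaction's snapshot was taken.

class Certifier:
    def __init__(self):
        self.last_commit = {}     # tuple id -> commit timestamp
        self.clock = 0            # logical commit counter

    def begin(self):
        return self.clock         # snapshot timestamp

    def certify_and_commit(self, snapshot_ts, writeset):
        """First-committer-wins at tuple granularity."""
        for tuple_id in writeset:
            if self.last_commit.get(tuple_id, 0) > snapshot_ts:
                return False      # conflicting committed write -> abort
        self.clock += 1
        for tuple_id in writeset:
            self.last_commit[tuple_id] = self.clock
        return True

c = Certifier()
t1 = c.begin()
t2 = c.begin()                              # concurrent with t1
print(c.certify_and_commit(t1, {"row:42"})) # True: first committer wins
print(c.certify_and_commit(t2, {"row:42"})) # False: overlapping write
t3 = c.begin()                              # starts after t1 committed
print(c.certify_and_commit(t3, {"row:42"})) # True: no concurrent conflict
```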

241 citations


Journal ArticleDOI
TL;DR: This article presents different replication protocols, argues their correctness, describes their implementation as part of a generic middleware, Middle-R, and proves their feasibility with an extensive performance evaluation.
Abstract: The widespread use of clusters and Web farms has increased the importance of data replication. In this article, we show how to implement consistent and scalable data replication at the middleware level. We do this by combining transactional concurrency control with group communication primitives. The article presents different replication protocols, argues their correctness, describes their implementation as part of a generic middleware, Middle-R, and proves their feasibility with an extensive performance evaluation. The solution proposed is well suited for a variety of applications including Web farms and distributed object platforms.

187 citations


Proceedings ArticleDOI
S. Wu, Bettina Kemme
05 Apr 2005
TL;DR: This paper presents Postgres-R(SI), an extension of PostgreSQL offering transparent replication, designed to work smoothly with PostgreSQL's concurrency control and providing snapshot isolation for the entire replicated system.
Abstract: Replicating data over a cluster of workstations is a powerful tool to increase performance and provide fault tolerance for demanding database applications. The big challenge in such systems is to combine replica control (keeping the copies consistent) with concurrency control. Most of the research so far has focused on providing the traditional correctness criterion, serializability. However, more and more database systems, e.g., Oracle and PostgreSQL, use multi-version concurrency control providing the isolation level snapshot isolation. In this paper, we present Postgres-R(SI), an extension of PostgreSQL offering transparent replication. Our replication tool is designed to work smoothly with PostgreSQL's concurrency control, providing snapshot isolation for the entire replicated system. We present a detailed description of the replica control algorithm and how it is combined with PostgreSQL's concurrency control component. Furthermore, we discuss some challenges we encountered when implementing the protocol. Our performance analysis based on the TPC-W benchmark shows that this approach exhibits excellent performance for real-life applications, even if they are update intensive.

132 citations


Proceedings ArticleDOI
07 Sep 2005
TL;DR: A flexible methodology for object-oriented programs that protects object structures against inconsistency due to race conditions is presented, based on a recent methodology for single-threaded programs where developers define aggregate object structures using an ownership system and declare invariants over them.
Abstract: Developing safe multithreaded software systems is difficult due to the potential unwanted interference among concurrent threads. This paper presents a flexible methodology for object-oriented programs that protects object structures against inconsistency due to race conditions. It is based on a recent methodology for single-threaded programs where developers define aggregate object structures using an ownership system and declare invariants over them. The methodology is supported by a set of language elements and by both a sound modular static verification method and run-time checking support. The paper reports on preliminary experience with a prototype implementation.
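
The paper describes a static verification methodology for an object-oriented language; the following Python sketch only mimics the runtime-checking flavor of the idea, with an owner object guarding its aggregate under a single lock and re-checking a declared invariant around every mutation. All names are illustrative.

```python
# An owner object protects its aggregate structure with one lock, and its
# declared invariant is checked whenever the structure is "unpacked",
# mutated, and "packed" again -- loosely echoing ownership-based invariants,
# here enforced dynamically rather than by static verification.

import threading

class Meter:
    """Owner of an aggregate; invariant: total == sum of the parts."""
    def __init__(self):
        self._lock = threading.Lock()
        self.parts = [0, 0, 0]
        self.total = 0

    def _invariant(self):
        return self.total == sum(self.parts)

    def add(self, index, amount):
        with self._lock:                 # all mutation under the owner lock
            assert self._invariant()     # holds on entry ("packed")
            self.parts[index] += amount  # temporarily "unpacked"
            self.total += amount
            assert self._invariant()     # restored before release

m = Meter()
threads = [threading.Thread(target=m.add, args=(i % 3, 1)) for i in range(30)]
for t in threads: t.start()
for t in threads: t.join()
print(m.total)  # 30, and the invariant held at every release point
```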

84 citations


Proceedings ArticleDOI
14 Jun 2005
TL;DR: Immortal DB as discussed by the authors is a transaction-time database that supports snapshot isolation concurrency control and lazy timestamping, which propagates timestamps to all updates of a transaction after commit.
Abstract: Immortal DB builds transaction-time database support into the SQL Server engine, not in middleware. Transaction-time databases retain and provide access to prior states of a database. An update "inserts" a new record while preserving the old version. The system supports "as of" queries returning records current at the specified time. It also supports snapshot isolation concurrency control. Versions are stamped with the times of their updating transactions. The timestamp order agrees with transaction serialization order. Lazy timestamping propagates timestamps to all updates of a transaction after commit. All versions are kept in an integrated storage structure, with historical versions initially stored with current data. Time-splits of pages permit large histories to be maintained, and enable time-based indexing. We demonstrate Immortal DB with a moving-objects application that tracks cars in the Seattle area.
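
To make the version-store idea concrete, here is a toy sketch (illustrative only; Immortal DB implements this inside the SQL Server engine with integrated storage and time-split pages) of append-only versioning with an "as of" lookup:

```python
# Updates never overwrite: they append a new version stamped with the
# committing transaction's timestamp, and an "as of" query returns the
# version current at the requested time.

import bisect

class VersionedTable:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), ts-ordered

    def update(self, key, value, commit_ts):
        self.versions.setdefault(key, []).append((commit_ts, value))

    def as_of(self, key, ts):
        """Return the value current at time ts, or None if absent."""
        chain = self.versions.get(key, [])
        i = bisect.bisect_right(chain, (ts, chr(0x10FFFF)))
        return chain[i - 1][1] if i else None

t = VersionedTable()
t.update("car17", "at Pine St", commit_ts=10)
t.update("car17", "at 4th Ave", commit_ts=20)
print(t.as_of("car17", 15))   # 'at Pine St'  (historical version)
print(t.as_of("car17", 25))   # 'at 4th Ave'  (current version)
print(t.as_of("car17", 5))    # None (record did not exist yet)
```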

70 citations


Journal ArticleDOI
01 Apr 2005
TL;DR: In restart recovery, the redo pass of the ARIES-based recovery protocol will always produce a structurally consistent and balanced B-link tree, on which the database updates by backward-rolling transactions can always be undone logically, when a physical (page-oriented) undo is no longer possible.
Abstract: In this paper we present new concurrent and recoverable B-link-tree algorithms. Unlike previous algorithms, ours maintain the balance of the B-link tree at all times, so that a logarithmic time bound for a search or an update operation is guaranteed under arbitrary sequences of record insertions and deletions. A database transaction can contain any number of operations of the form “fetch the first (or next) matching record”, “insert a record”, or “delete a record”, where database records are identified by their primary keys. Repeatable-read-level isolation for transactions is guaranteed by key-range locking. The algorithms apply the write-ahead logging (WAL) protocol and the steal and no-force buffering policies for index and data pages. Record inserts and deletes on leaf pages of a B-link tree are logged using physiological redo-undo log records. Each structure modification such as a page split or merge is made an atomic action by keeping the pages involved in the modification latched for the (short) duration of the modification and the logging of that modification; at most two B-link-tree pages are kept X-latched at a time. Each structure modification brings the B-link tree into a structurally consistent and balanced state whenever the tree was structurally consistent and balanced initially. Each structure modification is logged using a single physiological redo-only log record. Thus, a structure modification will never be undone even if the transaction that gave rise to it eventually aborts. In restart recovery, the redo pass of our ARIES-based recovery protocol will always produce a structurally consistent and balanced B-link tree, on which the database updates by backward-rolling transactions can always be undone logically, when a physical (page-oriented) undo is no longer possible.
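
As a simplified illustration of the key-range locking the algorithms rely on, the sketch below shows why a range scan and an insert into the scanned gap must contend for the same lock name (the next existing key); lock modes, instant-duration locks, and latching are all omitted.

```python
# Key-range locking names each half-open gap between index keys by the
# existing key that bounds it from above. A scan that has read past a gap
# and an insert into that gap therefore request a lock on the same name,
# which is how phantoms are excluded.

from bisect import bisect_right

INF = float("inf")               # stands in for "end of key space"

def next_key(keys, k):
    """Existing key bounding the half-open gap that k falls into."""
    i = bisect_right(keys, k)
    return keys[i] if i < len(keys) else INF

index_keys = [10, 20, 30]        # keys currently in the leaf pages
range_locks = set()              # range locks held, named by the upper key

# A "fetch next" past key 20 protects the gap (20, 30] by locking key 30.
range_locks.add(next_key(index_keys, 20))

# Inserting key 25 must first lock the upper bound of its gap -- also 30.
upper = next_key(index_keys, 25)
print(upper in range_locks)      # True: the insert conflicts with the scan,
                                 # so the phantom is prevented
```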

57 citations


Journal Article
TL;DR: This paper presents different replication protocols, argues their correctness, describes their implementation as part of a generic middleware tool, and proves their feasibility with an extensive performance evaluation.
Abstract: The widespread use of clusters and web farms has increased the importance of data replication. In this paper, we show how to implement consistent and scalable data replication at the middleware level. We do this by combining transactional concurrency control with group communication primitives. The paper presents different replication protocols, argues their correctness, describes their implementation as part of a generic middleware tool, and proves their feasibility with an extensive performance evaluation. The solution proposed is well suited for a variety of applications including web farms and distributed object platforms.

55 citations


Proceedings ArticleDOI
31 Oct 2005
TL;DR: A new decentralized serialization graph testing protocol ensures concurrency control and recovery in peer-to-peer environments, and exhibits a significant performance gain over traditional distributed locking-based protocols with respect to the execution of transactions encompassing Web service requests.
Abstract: Business processes executing in peer-to-peer environments usually invoke Web services on different, independent peers. Although peer-to-peer environments inherently lack global control, some business processes nevertheless require global transactional guarantees, i.e., atomicity and isolation applied at the level of processes. This paper introduces a new decentralized serialization graph testing protocol to ensure concurrency control and recovery in peer-to-peer environments. The uniqueness of the proposed protocol is that it ensures global correctness without relying on a global serialization graph. Essentially, each transactional process is equipped with partial knowledge that allows the transactional processes to coordinate. Globally correct execution is achieved by communication among dependent transactional processes and the peers they have accessed. In case of failures, a combination of partial backward and forward recovery is applied. Experimental results exhibit a significant performance gain over traditional distributed locking-based protocols with respect to the execution of transactions encompassing Web service requests.
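
The core of any serialization graph testing scheme is keeping the conflict graph acyclic; the sketch below shows that check with each transaction holding only its own outgoing edges, loosely echoing the paper's idea of partial knowledge (the decentralized messaging, recovery, and Web service machinery are not modeled, and all names are invented).

```python
# Conflicts add edges T_i -> T_j ("T_i must serialize before T_j"); the
# schedule stays correct only while this graph is acyclic. Here each
# transaction stores only its outgoing edges, and a detection walk follows
# them; a transaction that sees itself again is on a cycle and must abort.

class Txn:
    def __init__(self, tid):
        self.tid = tid
        self.succ = set()        # local, partial knowledge: outgoing edges

def add_conflict(a, b):
    a.succ.add(b)                # a must serialize before b

def cycle_from(start, node=None, seen=None):
    """Walk dependency edges; report a cycle if we reach `start` again."""
    node = node or start
    seen = seen or set()
    for nxt in node.succ:
        if nxt is start:
            return True          # our own id came back around
        if nxt not in seen:
            seen.add(nxt)
            if cycle_from(start, nxt, seen):
                return True
    return False

t1, t2, t3 = Txn(1), Txn(2), Txn(3)
add_conflict(t1, t2)             # e.g. t2 read what t1 wrote
add_conflict(t2, t3)
print(cycle_from(t1))            # False: still serializable
add_conflict(t3, t1)             # a third conflict closes the loop
print(cycle_from(t1))            # True: cycle t1 -> t2 -> t3 -> t1
```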

53 citations


Proceedings ArticleDOI
22 Feb 2005
TL;DR: Arguing that a simple and compositional proof system is paramount to allow verification of real programs, this paper introduces a reasoning system for a model of concurrent objects communicating by asynchronous method calls, focusing on simplicity and modularity.
Abstract: Current object-oriented approaches to distributed programs may be criticized in several respects. First, method calls are generally synchronous, which leads to much waiting in distributed and unstable networks. Second, the common model of thread concurrency makes reasoning about program behavior very challenging. A model based on concurrent objects communicating by means of asynchronous method calls has been proposed to combine object orientation and distribution in a more satisfactory way. This paper introduces a reasoning system for this model, focusing on simplicity and modularity. We believe that a simple and compositional proof system is paramount to allow verification of real programs. The proposed proof rules are derived from the Hoare rules of a standard sequential language by means of a semantic encoding preserving soundness and relative completeness.

Journal ArticleDOI
TL;DR: A synthesis approach for reactive software that is aimed at minimizing the overhead introduced by the operating system and the interaction among the concurrent tasks is presented.
Abstract: A reactive system must process inputs from the environment at the speed and with the delay dictated by the environment. The synthesis of reactive software from a modular concurrent specification model generates a set of concurrent tasks coordinated by an operating system. This paper presents a synthesis approach for reactive software that is aimed at minimizing the overhead introduced by the operating system and the interaction among the concurrent tasks. A formal model based on Petri nets is used to synthesize the tasks and verify the correctness of their composition. A practical application of the approach is illustrated by means of a real-life industrial example, which shows the significant impact of the approach on the performance of the system.

Proceedings ArticleDOI
Rui Li, Du Li
06 Nov 2005
TL;DR: A novel landmark-based transformation (LBT) approach is proposed whose correctness no longer depends on conditions that are very difficult to verify and is thus easy to prove, and an example algorithm is given that significantly outperforms a state-of-the-art OT algorithm.
Abstract: Operational transformation (OT) is a responsive and nonblocking concurrency control method widely accepted in group editors. Correctness and performance are the basis of the usefulness and usability of OT-based group editors. However, the correctness of previous OT algorithms depends on conditions that are very difficult to verify. In this paper we propose a novel landmark-based transformation (LBT) approach whose correctness no longer depends on those conditions and is thus easy to prove. In addition, we give an example algorithm that significantly outperforms a state-of-the-art OT algorithm. This work reveals a more practical approach to developing OT algorithms.
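
For context, a minimal example of the kind of inclusion transformation OT is built on, here for two concurrent insertions with a site-id tiebreak (this is the textbook transformation, not the paper's LBT algorithm):

```python
def transform_insert(pos, other_pos, site, other_site):
    """Shift an insert position against a concurrent insert (the site id
    breaks the tie when both inserts target the same position)."""
    if other_pos < pos or (other_pos == pos and other_site < site):
        return pos + 1           # the other insert landed before ours
    return pos

# Site 1 inserts 'X' at 1 and site 2 concurrently inserts 'Y' at 2 in "abc".
d1 = list("abc"); d1.insert(1, "X")                           # local op first
d1.insert(transform_insert(2, 1, site=2, other_site=1), "Y")  # remote op

d2 = list("abc"); d2.insert(2, "Y")
d2.insert(transform_insert(1, 2, site=1, other_site=2), "X")

print("".join(d1), "".join(d2))   # aXbYc aXbYc -- both sites converge
```

Proving that such functions converge in all interleavings is exactly the hard-to-verify obligation the paper's landmark-based approach sets out to avoid.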

Proceedings ArticleDOI
01 Jun 2005
TL;DR: Results obtained from tests executed under variable scenarios show that speedup and resource utilization gains could be obtained by adopting the proposed replication approach in addition to the pure parallel and distributed simulation.
Abstract: Parallel and distributed simulations enable the analysis of complex systems by concurrently exploiting the aggregate computation power and memory of clusters of execution units. In this paper we investigate a new direction for increasing both the speedup of a simulation process and the utilization of computation and communication resources. Many simulation-based investigations require collecting independent observations for a correct and significant statistical analysis of results. The execution of many independent parallel or distributed simulation runs may suffer speedup reduction due to rollbacks under the optimistic approach, and due to idle CPU times originated by synchronization and communication bottlenecks under the conservative approach. We present a parallel and distributed simulation framework supporting concurrent replication of parallel and distributed simulations (CR-PADS), as an alternative to the execution of a linear sequence of multiple parallel or distributed simulation runs. Results obtained from tests executed under variable scenarios show that speedup and resource-utilization gains can be obtained by adopting the proposed replication approach in addition to pure parallel and distributed simulation.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: The first local deadlock detection algorithm for PN models is presented, based on the Mitchell and Merritt algorithm, which is suitable for both parallel and distributed PN implementations.
Abstract: The process network (PN) model, which consists of concurrent processes communicating over first-in, first-out unidirectional queues, is useful for modeling and exploiting functional parallelism in streaming data applications. The PN model maps easily onto multi-processor and/or multi-threaded targets. Since the PN model is Turing complete, memory requirements cannot be predicted statically. In general, any bounded-memory scheduling algorithm for this model requires run-time deadlock detection. The few PN implementations that perform deadlock detection detect only global deadlocks. Not all local deadlocks, however, will cause a PN system to reach global deadlock. In this paper, we present the first local deadlock detection algorithm for PN models. The proposed algorithm is based on the Mitchell and Merritt algorithm and is suitable for both parallel and distributed PN implementations.
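
A centralized toy simulation of the Mitchell and Merritt labeling scheme the algorithm builds on is sketched below: blocking mints a label larger than both endpoints of the new wait edge, larger public labels propagate backward along wait edges, and a process that sees its own private label on the process it waits for is on a cycle. The distributed, PN-specific aspects of the paper are not modeled, and this rendering of the rules is our reading of the scheme.

```python
# Each process keeps a (public, private) label pair. Block mints a fresh
# label above both endpoints; Transmit lets larger public labels flow
# backward along wait edges; Detect fires when a waiter sees its own
# private label on the process it waits for.

pub, priv, waits_for = {}, {}, {}
counter = 0

def new_process(p):
    global counter
    counter += 1
    pub[p] = priv[p] = counter

def block(p, q):                       # p starts waiting for q
    global counter
    counter = max(counter, pub[p], pub[q]) + 1
    pub[p] = priv[p] = counter
    waits_for[p] = q
    propagate()

def propagate():                       # Transmit step, run to fixpoint
    changed = True
    while changed:
        changed = False
        for p, q in waits_for.items():
            if pub[q] > pub[p]:
                pub[p] = pub[q]
                changed = True

def deadlocked(p):                     # Detect step
    q = waits_for.get(p)
    return q is not None and pub[q] == priv[p]

for name in "ABC":
    new_process(name)
block("A", "B"); block("B", "C")
print(any(deadlocked(p) for p in "ABC"))   # False: just a wait chain
block("C", "A")                            # closes the cycle
print(any(deadlocked(p) for p in "ABC"))   # True: local deadlock found
```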

Book ChapterDOI
02 Dec 2005
TL;DR: It is shown that verification of real-valued properties in probabilistic distributed systems can be considerably simplified, and moreover that there is an interpretation which is susceptible to counterexample search via state exploration, despite the underlying real-number domain.
Abstract: The mechanisation of proofs for probabilistic systems is particularly challenging due to the verification of real-valued properties that probability entails: experience indicates [12,4,11] that there are many difficulties in automating real-number arithmetic in the context of other program features. In this paper we propose a framework for verification of probabilistic distributed systems based on the generalisation of Kleene algebra with tests that has been used as a basis for development of concurrency control in standard programming [7]. We show that verification of real-valued properties in these systems can be considerably simplified, and moreover that there is an interpretation which is susceptible to counterexample search via state exploration, despite the underlying real-number domain.

Proceedings ArticleDOI
14 Jun 2005
TL;DR: This paper presents an online B-tree merging method in which the merging of leaf pages in two B-trees is piggybacked lazily onto normal user transactions, making the merging I/O efficient and allowing user transactions to access only one index instead of both.
Abstract: Many scenarios involve merging of two B-tree indexes, both covering the same key range. Increasing demand for continuous availability and high performance requires that such merging be done online, with minimal interference to normal user transactions. In this paper we present an online B-tree merging method in which the merging of leaf pages in two B-trees is piggybacked lazily onto normal user transactions, thus making the merging I/O efficient and allowing user transactions to access only one index instead of both. The concurrency control mechanism is designed to interfere as little as possible with ongoing user transactions. Merging is made forward recoverable by following a conventional logging protocol, with a few extensions. Should a system failure occur, both indexes being merged can be recovered to a consistent state and no merging work is lost. Experiments and analysis show the I/O savings and the performance, and compare variations on the basic algorithm.

19 May 2005
TL;DR: A methodology is proposed that comprises a paradigm covering fundamental aspects of concurrency in software for embedded systems, with an emphasis on the development of control software.
Abstract: In this thesis, we are concerned with the development of concurrent software for embedded systems, with an emphasis on control software. Embedded systems are concurrent systems in which hardware and software communicate with a concurrent environment. Concurrency is essential and cannot be ignored; it requires proper handling to avoid pathological problems (e.g., deadlock and livelock) and performance penalties (e.g., starvation and priority conflicts). Multithreading, as such, introduces sources of complexity into concurrent software. This complexity is considered frightening because it complicates software designs and the resulting code, and the paradigm complicates the understanding of the behaviour of concurrent software. A paradigm with a precise understanding of concurrency is therefore essential, and this thesis proposes a methodology that comprises such a paradigm, covering fundamental aspects of concurrency.

Journal ArticleDOI
01 May 2005
TL;DR: This paper presents a distributed coevolutionary classifier (DCC) for extracting comprehensible rules in data mining that allows different species to be evolved cooperatively and simultaneously, while the computational workload is shared among multiple computers over the Internet.
Abstract: This paper presents a distributed coevolutionary classifier (DCC) for extracting comprehensible rules in data mining. It allows different species to be evolved cooperatively and simultaneously, while the computational workload is shared among multiple computers over the Internet. Through the intercommunication among different species of rules and rule sets in a distributed manner, the concurrent processing and computational speed of the coevolutionary classifiers are enhanced. The advantage and performance of the proposed DCC are validated on various datasets obtained from the UCI machine learning repository. It is shown that the predictive accuracy of DCC is robust and that the computation time is reduced as the number of remote engines increases. Comparison results illustrate that the DCC produces good classification rules for the datasets, competitive with existing classifiers in the literature.

Journal ArticleDOI
TL;DR: This work shows that a combination of a "value-based" latch pool and previous high-concurrency locking techniques can solve the concurrency control problem in the maintenance of materialized aggregate join views.
Abstract: The maintenance of materialized aggregate join views is a well-studied problem. However, to date the published literature has largely ignored the issue of concurrency control. Clearly, immediate materialized view maintenance with transactional consistency, if enforced by generic concurrency control mechanisms, can result in low levels of concurrency and high rates of deadlock. While this problem is superficially amenable to well-known techniques, such as fine-granularity locking and special lock modes for updates that are associative and commutative, we show that these high-concurrency locking techniques do not fully solve the problem on their own, but that combining them with a "value-based" latch pool does.
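
A minimal sketch of the "value-based" latch pool idea as we read it: updaters hash the aggregate row's group-by value into a fixed pool of short-duration latches, so associative and commutative deltas to the same group serialize briefly without long-held row locks. Pool size and all names are illustrative.

```python
# Instead of locking the (possibly not-yet-existing) aggregate row itself,
# an updater latches a pool entry chosen by hashing the row's group-by
# value, holds it just long enough to apply its delta, and releases.
# Distinct groups almost always map to distinct latches.

import threading

POOL_SIZE = 64
latch_pool = [threading.Lock() for _ in range(POOL_SIZE)]
agg_view = {}                        # group-by value -> running SUM

def apply_delta(group_value, delta):
    latch = latch_pool[hash(group_value) % POOL_SIZE]
    with latch:                      # short-duration latch, not a txn lock
        agg_view[group_value] = agg_view.get(group_value, 0) + delta

threads = [threading.Thread(target=apply_delta, args=("store-7", 1))
           for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(agg_view["store-7"])           # 100: no lost updates, no row locks
```

The latch is held only for the duration of the in-place delta, which is what keeps concurrency high compared to holding transaction-duration locks on the aggregate rows.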

Proceedings ArticleDOI
11 Jul 2005
TL;DR: An operational semantics is used to formalize and prove the type soundness result and an isolation property of tasks, and a first-order type system is presented which can verify information for the concurrency controller.
Abstract: In this paper we design language and runtime support for isolation-only, multithreaded transactions (called tasks). Tasks allow isolation to be declared instead of having to be encoded using low-level synchronization constructs. The key concept of our design is the use of a type system to support rollback-free and safe runtime execution of tasks. We present a first-order type system which can verify information for the concurrency controller. We use an operational semantics to formalize and prove the type soundness result and an isolation property of tasks. The semantics uses a specialized concurrency control algorithm based on access versioning.

Proceedings ArticleDOI
20 Jul 2005
TL;DR: This paper discusses a concurrency control based on the significance of roles assigned to transactions, and defines a significantly-dominant relation on roles.
Abstract: A role-based access control model is used to make a system secure. A role represents a job function in an enterprise. Traditional locking protocols and timestamp-ordering schedulers make multiple conflicting transactions serializable based on the principles of "first-comer-winner" and "timestamp order", respectively. In this paper, we discuss a concurrency control based on the significance of roles assigned to transactions. We define a significantly-dominant relation on roles and discuss a role ordering (RO) scheduler based on the role concept. We evaluate the RO scheduler against the two-phase locking (2PL) protocol.

Journal ArticleDOI
TL;DR: In this article, the authors consider real-time update of access control policies in a database system and propose several algorithms that not only prevent security breaches but also ensure the correctness of execution.
Abstract: Real-time update of access control policies, that is, updating policies while they are in effect and enforcing the changes immediately, is necessary for many security-critical applications. In this paper, we consider real-time update of access control policies in a database system. Updating policies while they are in effect can lead to potential security problems, such as access to database objects by unauthorized users. We propose several algorithms that not only prevent such security breaches but also ensure the correctness of execution. The algorithms differ from each other in the degree of concurrency provided and the semantic knowledge used. Of the algorithms presented, the most concurrency is achieved when transactions are decomposed into atomic steps. Once transactions are decomposed, the atomicity, consistency, and isolation properties no longer hold. Since the traditional transaction processing model can no longer be used to ensure the correctness of the execution, we use an alternate semantic-based transaction processing model. To ensure correct behavior, our model requires an application to satisfy a set of necessary properties, namely, semantic atomicity, consistent execution, sensitive transaction isolation, and policy compliance. We show how one can verify an application statically to check for the existence of these properties.

Proceedings ArticleDOI
19 Sep 2005
TL;DR: This paper proposes SESAMO, a concurrency control mechanism that does not require a single mobile host (a node in a MANET) to play the role of centralized coordinator for all global transactions executed in the mobile MDBS.
Abstract: The nodes of a mobile ad hoc network (MANET) represent mobile computers in which database systems (DBSs) may reside. In such an environment, we may have a mobile multidatabase system (mobile MDBS), i.e., a collection of autonomous, distributed, heterogeneous and mobile DBSs, where each mobile computer can access multiple DBSs of that collection by means of global transactions. In this paper we propose SESAMO, a concurrency control mechanism that does not require a single mobile host (a node in a MANET) to play the role of centralized coordinator for all global transactions executed in the mobile MDBS. Moreover, SESAMO allows subtransactions to commit independently of the commitment of their respective global transactions, since it is based on semantic serializability, which relaxes global serializability.

Proceedings ArticleDOI
19 Jul 2005
TL;DR: An extended workflow Petri net model is introduced to deal with synchronization among activities in workflows that include critical sections and in which the validity of resource tokens is subject to real-time constraints.
Abstract: To maximize throughput in workflow systems, concurrency is required. On the other hand, concurrency must be controlled, especially in systems in which a set of tasks cannot serve more than one activity at a time, constituting a critical section. This paper introduces an extended workflow Petri net model to deal with synchronization among activities in workflows that include critical sections and in which the validity of resource tokens is subject to real-time constraints. The paper also addresses the soundness property of the proposed extended workflow Petri net.

Journal ArticleDOI
TL;DR: In this article, the authors propose a new mechanism for detecting and resolving network congestion and potential deadlocks based on efficiently tracking paths of congestion and increasing the scheduling priority of packets along those paths.
Abstract: Efficient and reliable communication is essential for achieving high performance in a networked computing environment. Finite network resources bring about unavoidable competition among in-flight network packets, resulting in network congestion and, possibly, deadlock. Many techniques have been proposed to improve network performance by efficiently handling network congestion and potential deadlock. However, none of them provide an efficient way of accelerating the movement of network packets in congestion toward their destinations. In this paper, we propose a new mechanism for detecting and resolving network congestion and potential deadlocks. The proposed mechanism is based on efficiently tracking paths of congestion and increasing the scheduling priority of packets along those paths. This acts to throttle other packets trying to enter those congested regions - in effect, locking out packets from congested regions until congestion has had the opportunity to disperse. Simulation results show that the proposed technique effectively disperses network congestion and is also applicable in helping to resolve potential deadlock.

Proceedings ArticleDOI
04 Apr 2005
TL;DR: A detailed simulation model of a distributed database system is presented, and the performance price paid for maintaining security with concurrency control in a distributed database system is investigated.
Abstract: The majority of research in multilevel secure database management systems (MLS/DBMS) focuses primarily on centralized database systems. However, with the demand for higher performance and higher availability, database systems have moved from centralized to distributed architectures, and research in multilevel secure distributed database management systems (MLS/DDBMS) is gaining more and more prominence. Concurrency control is an integral part of database systems. Secure concurrency control algorithms proposed in the literature achieve correctness and security at the cost of degraded performance for transactions at high security levels, infringing fairness in processing transactions at different security levels. Though the performance of different concurrency control algorithms has been explored extensively for centralized multilevel secure database management systems, to the best of the authors' knowledge the relative performance of transactions at different security levels under different secure concurrency control algorithms for MLS/DDBMS has not yet been reported. To fill this gap, this paper presents a detailed simulation model of a distributed database system and investigates the performance price paid for maintaining security with concurrency control, comparing the relative performance of transactions at different security levels.

Proceedings ArticleDOI
18 Jan 2005
TL;DR: Because of the formality and structured nature of the fFSM model, a static analysis method can be applied to find ambiguous behavior and to synthesize software/hardware automatically, which is the main focus of this paper; the proposed technique is expected to be applicable to other compositional FSM extensions.
Abstract: To describe complex control modules, the following four features are required of extended FSM models: concurrency, compositionality, static analyzability, and automatic code synthesis capability. In our codesign environment we use a new FSM extension called the flexible FSM (fFSM) model. It extends expressive capability with concurrency, hierarchy, and state variables while maintaining formal properties. Because of the formality and structured nature of the fFSM model, we can apply a static analysis method to find ambiguous behavior and synthesize software/hardware automatically, which is the main focus of this paper. We expect that the proposed technique can be applied to other compositional FSM extensions.

Book ChapterDOI
16 Dec 2005
TL;DR: A compilation technique is presented that allows a programmer to describe synchronous modules without having to consider undefined inputs and transforms such a description into code that does as much as it can with undefined inputs, allowing modules to be compiled separately and assembled later.
Abstract: Synchronous models are useful for designing real-time embedded systems because they provide timing control and deterministic concurrency. However, the semantics of such models usually require an entire system to be compiled at once to analyze the dependencies among modules. The alternative is to write modules that can respond when the values of some of their inputs are unknown, a tedious and error-prone process. We present a compilation technique that allows a programmer to describe synchronous modules without having to consider undefined inputs. Our algorithm transforms such a description into code that does as much as it can with undefined inputs, allowing modules to be compiled separately and assembled later. We implemented our technique in a compiler for the Esterel language and present results that show the technique does not impose a substantial overhead.
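
The following toy sketch (Esterel compilation involves much more, and this code is not the paper's algorithm) shows the kind of monotone three-valued evaluation that lets a separately compiled module "do as much as it can" with undefined inputs, emitting every output already forced and leaving the rest unknown:

```python
def and3(a, b):
    """Kleene AND; None means 'not yet defined'."""
    if a == 0 or b == 0:
        return 0                 # forced regardless of the unknown input
    if a == 1 and b == 1:
        return 1
    return None

def or3(a, b):
    """Kleene OR over {0, 1, None}."""
    if a == 1 or b == 1:
        return 1
    if a == 0 and b == 0:
        return 0
    return None

def module(x, y):
    """One separately compiled module: two inputs, two outputs."""
    return {"p": and3(x, y), "q": or3(x, y)}

print(module(1, None))   # {'p': None, 'q': 1}: q is already decided
print(module(0, None))   # {'p': 0, 'q': None}: now p is decided instead
```

A later pass can rerun the module once the missing input arrives, which is what allows modules to be compiled separately and assembled afterward.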

Book ChapterDOI
12 Sep 2005
TL;DR: A new XPath-based DataGuide locking protocol is proposed, which extends and generalizes on the hierarchical data locking protocol, which provides a high degree of concurrency and produces serializable schedules.
Abstract: Concurrency control has been a hot area for quite some time. Today, when XML gains more and more attention, new concurrency control methods for accessing XML data are developed. There was proposed a number of protocols suited for XML. Grabs et al. presented DGLOCK locking protocol based on the DataGuide. This approach resulted in a major concurrency increase for XML data. In this paper, we propose a new XPath-based DataGuide locking protocol, which extends and generalizes on the hierarchical data locking protocol. Our protocol (1) may be implemented on top of any existing system, (2) provides a high degree of concurrency and (3) produces serializable schedules. The protocol suites for XPath operations very well, as it captures XPath navigational behaviour. Our method also takes into account the semantics of update operations to increase concurrency. The paper presents formal proof of correctness for the protocol.