
Showing papers on "Multiversion concurrency control published in 2007"


Proceedings ArticleDOI
10 Feb 2007
TL;DR: This paper presents the first scalable TM implementation for directory-based distributed shared memory systems that is livelock free without the need for user-level intervention and is based on transactional coherence and consistency (TCC), which supports continuous transactions and fault isolation.
Abstract: Transactional memory (TM) provides mechanisms that promise to simplify parallel programming by eliminating the need for locks and their associated problems (deadlock, livelock, priority inversion, convoying). For TM to be adopted in the long term, not only does it need to deliver on these promises, but it needs to scale to a high number of processors. To date, proposals for scalable TM have relegated livelock issues to user-level contention managers. This paper presents the first scalable TM implementation for directory-based distributed shared memory systems that is livelock free without the need for user-level intervention. The design is a scalable implementation of optimistic concurrency control that supports parallel commits with a two-phase commit protocol, uses write-back caches, and filters coherence messages. The scalable design is based on transactional coherence and consistency (TCC), which supports continuous transactions and fault isolation. A performance evaluation of the design using both scientific and enterprise benchmarks demonstrates that the directory-based TCC design scales efficiently for NUMA systems up to 64 processors.
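The two-phase commit step the design builds on can be sketched generically. This is a hedged, software-level illustration only; the paper's protocol is a hardware coherence mechanism, and the `Participant` class and function names here are hypothetical:

```python
class Participant:
    """Toy participant that votes during the prepare phase and
    records the final outcome. Hypothetical stand-in for a node."""
    def __init__(self, will_vote_yes=True):
        self.will_vote_yes = will_vote_yes
        self.state = "active"

    def prepare(self, txn):
        self.state = "prepared" if self.will_vote_yes else "abort-voted"
        return self.will_vote_yes

    def commit(self, txn):
        self.state = "committed"

    def abort(self, txn):
        self.state = "aborted"

def two_phase_commit(txn, participants):
    # Phase 1: collect votes from every participant.
    votes = [p.prepare(txn) for p in participants]
    # Phase 2: commit only on a unanimous yes, else abort everywhere.
    if all(votes):
        for p in participants:
            p.commit(txn)
        return "committed"
    for p in participants:
        p.abort(txn)
    return "aborted"

ok = [Participant(), Participant()]
assert two_phase_commit("T1", ok) == "committed"
bad = [Participant(), Participant(will_vote_yes=False)]
assert two_phase_commit("T2", bad) == "aborted"
```

The point the abstract makes is that commits by different coordinators can run this protocol in parallel against disjoint participant sets.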

148 citations


06 Aug 2007
TL;DR: The theory and practice of system call wrapper concurrency vulnerabilities are discussed, and exploit techniques against GSWTK, Systrace, and CerbNG are demonstrated.
Abstract: System call interposition allows the kernel security model to be extended. However, when combined with current operating systems, it is open to concurrency vulnerabilities leading to privilege escalation and audit bypass. We discuss the theory and practice of system call wrapper concurrency vulnerabilities, and demonstrate exploit techniques against GSWTK, Systrace, and CerbNG.

90 citations


Proceedings ArticleDOI
14 Mar 2007
TL;DR: The results show that easier-to-use long transactions can still allow programs to deliver scalable performance by simply wrapping existing data structures with transactional collection classes, without the need for custom implementations or knowledge of data structure internals.
Abstract: While parallel programmers find it easier to reason about large atomic regions, the conventional mutual exclusion-based primitives for synchronization force them to interleave many small operations to achieve performance. Transactional memory promises that programmers can use large atomic regions while achieving similar performance. However, these large transactions can conflict when operating on shared data structures, even for logically independent operations. Transactional collection classes address this problem by allowing long-running transactions to operate on shared data while eliminating unnecessary conflicts. Transactional collection classes wrap existing data structures, without the need for custom implementations or knowledge of data structure internals. Without transactional collection classes, access to shared data from within long-running transactions can suffer from data dependency conflicts that are logically unnecessary, but are artifacts of the data structure implementation, such as hash table collisions or tree-balancing rotations. Our transactional collection classes use the concept of semantic concurrency control to eliminate these unnecessary data dependencies, replacing them with conflict detection based on the operations of the abstract data type. The design and behavior of these transactional collection classes are discussed with reference to related work from the database community, such as multi-level transactions and semantic concurrency control, as well as other concurrent data structures such as java.util.concurrent. The transactional semantics needed for implementing transactional collection classes are enumerated, including open-nested transactions and commit and abort handlers. We also discuss how isolation can be reduced for greater concurrency.
Finally, we provide guidelines on the construction of classes that preserve isolation and serializability. The performance of these classes is evaluated with a number of benchmarks, including targeted micro-benchmarks and a version of SPECjbb2000 with increased contention. The results show that easier-to-use long transactions can still allow programs to deliver scalable performance by simply wrapping existing data structures with transactional collection classes.

67 citations


Proceedings ArticleDOI
30 Sep 2007
TL;DR: This paper presents an alternative approach to implement concurrency in GHC, where the runtime system is a thin substrate providing only a small set of concurrency primitives, and the remaining concurrency features are implemented in software libraries written in Haskell.
Abstract: The Glasgow Haskell Compiler (GHC) has quite sophisticated support for concurrency in its runtime system, which is written in low-level C code. As GHC evolves, the runtime system becomes increasingly complex, error-prone, difficult to maintain, and difficult to extend with new concurrency features. This paper presents an alternative approach to implementing concurrency in GHC. Rather than hard-wiring all kinds of concurrency features, the runtime system is a thin substrate providing only a small set of concurrency primitives, and the remaining concurrency features are implemented in software libraries written in Haskell. This design improves the safety of concurrency support; it also makes concurrency features more customizable, as they can be developed as Haskell library packages and deployed modularly.

37 citations


Journal ArticleDOI
TL;DR: A method for verifying concurrent Java components that includes ConAn and complements it with other static and dynamic verification tools and techniques is proposed, based on an analysis of common concurrency problems and concurrency failures in Java components.
Abstract: The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent non-determinism and a number of specific concurrency problems, such as interference and deadlock. In previous work, we have developed the ConAn testing tool for the testing of concurrent Java components. ConAn has been found to be effective at testing a large number of components, but there are certain classes of failures that are hard to detect using ConAn. Although a variety of other verification tools and techniques have been proposed for the verification of concurrent software, they each have their strengths and weaknesses. In this paper, we propose a method for verifying concurrent Java components that includes ConAn and complements it with other static and dynamic verification tools and techniques. The proposal is based on an analysis of common concurrency problems and concurrency failures in Java components. As a starting point for determining the concurrency failures in Java components, a Petri-net model of Java concurrency is used. By systematically analysing the model, we come up with a complete classification of concurrency failures. The classification and analysis are then used to determine suitable tools and techniques for detecting each of the failures. Finally, we propose to combine these tools and techniques into a method for verifying concurrent Java components. Copyright (c) 2006 John Wiley & Sons, Ltd.

32 citations


Journal ArticleDOI
01 Jan 2007
TL;DR: A correctness criterion is proposed and the Grid concurrency control protocol is proposed, which has the capability to deal with heterogeneity, autonomy, distribution and high volume of data in Grids.
Abstract: Grid architecture is a fast-evolving distributed computing architecture. The working of databases in the Grid architecture is not well understood. In view of this changing distributed architecture, we strongly feel that concurrency control issues should be revisited and reassessed for this new and evolving architecture. Implementing a global lock table and global log records may not be practically possible in the Grid architecture due to scalability issues. In this paper, we propose a correctness criterion and the Grid concurrency control protocol, which has the capability to deal with heterogeneity, autonomy, distribution and high volumes of data in Grids. We then prove the correctness of the protocol, followed by a performance evaluation of the protocol.

32 citations


Patent
John Joseph Duffy, Michael M. Magruder, Goetz Graefe, David Detlefs, Vinod Grover
03 Jul 2007
TL;DR: In this paper, a transactional memory word is provided for each piece of data, which includes a version number, a reader indicator, and an exclusive writer indicator for concurrency control.
Abstract: Various technologies and techniques are disclosed that improve implementation of concurrency control modes in a transactional memory system. A transactional memory word is provided for each piece of data. The transactional memory word includes a version number, a reader indicator, and an exclusive writer indicator. The transactional memory word is analyzed to determine if the particular concurrency control mode is proper. Using the transactional memory word to help with concurrency control allows multiple combinations of operations to be performed against the same memory location simultaneously and/or from different transactions. For example, a pessimistic read operation and an optimistic read operation can be performed against the same memory location.
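The concurrency-control role of such a word can be illustrated with a minimal sketch. This is a simplified, single-threaded toy model with hypothetical names, not the patented implementation:

```python
from dataclasses import dataclass

# Hypothetical per-datum "transactional memory word": a version
# number, a reader indicator, and an exclusive-writer indicator,
# as the abstract describes.
@dataclass
class TMWord:
    version: int = 0
    readers: int = 0        # count of pessimistic readers
    writer: bool = False    # exclusive writer held?

def optimistic_read(word):
    """Record the current version; commit-time validation re-checks it."""
    return word.version

def pessimistic_read(word):
    """Succeeds unless an exclusive writer holds the word."""
    if word.writer:
        return False
    word.readers += 1
    return True

def acquire_write(word):
    """Exclusive write excludes other writers and pessimistic readers."""
    if word.writer or word.readers:
        return False
    word.writer = True
    return True

def commit_write(word):
    word.version += 1       # version bump forces optimistic revalidation
    word.writer = False

def validate(word, seen_version):
    return word.version == seen_version

# A pessimistic read and an optimistic read of the same location
# can proceed together, as the abstract notes:
w = TMWord()
v = optimistic_read(w)
assert pessimistic_read(w) and validate(w, v)
```

Inspecting one word tells each transaction whether its chosen concurrency mode is still valid at that location.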

30 citations


Book ChapterDOI
01 Oct 2007
TL;DR: This paper presents the first interprocedural concurrency analysis that can handle OpenMP and, in general, programs with unnamed and textually unaligned barriers; the analysis is implemented for OpenMP programs written in C.
Abstract: Concurrency analysis is a static analysis technique that determines whether two statements or operations in a shared memory program may be executed by different threads concurrently. Concurrency relationships can be derived from the partial ordering among statements imposed by synchronization constructs. Thus, analyzing barrier synchronization is at the core of concurrency analyses for many parallel programming models. Previous concurrency analyses for programs with barriers commonly assumed that barriers are named or textually aligned. This assumption may not hold for popular parallel programming models, such as OpenMP, where barriers are unnamed and can be placed anywhere in a parallel region, i.e., they may be textually unaligned. We present in this paper the first interprocedural concurrency analysis that can handle OpenMP and, in general, programs with unnamed and textually unaligned barriers. We have implemented our analysis for OpenMP programs written in C and have evaluated it on programs from the NPB and SpecOMP2001 benchmark suites.

23 citations


Journal ArticleDOI
TL;DR: This comprehensive architecture supports nested transactions, transactional handlers, and two-phase commit and is a seamless integration of transactional memory with modern programming languages and runtime environments.
Abstract: As multicore chips become ubiquitous, the need to provide architectural support for practical parallel programming is becoming critical. Conventional lock-based concurrency control techniques are difficult to use, requiring the programmer to navigate through the minefield of coarse- versus fine-grained locks, deadlock, livelock, lock convoying, and priority inversion. This explicit management of concurrency is beyond the reach of the average programmer, threatening to waste the additional parallelism available with multicore architectures. This comprehensive architecture supports nested transactions, transactional handlers, and two-phase commit. The result is a seamless integration of transactional memory with modern programming languages and runtime environments.

21 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed multiversion transaction processing approach, together with a deadlock-free concurrency control mechanism based on a multiversion two-phase locking scheme integrated with a timestamp approach, provides significantly higher throughput.
Abstract: Transaction management on Mobile Database Systems (MDS) has to cope with a number of constraints such as limited bandwidth, low processing power, unreliable communication, and mobility. As a result of these constraints, traditional concurrency control mechanisms are unable to manage transactional activities to maintain availability. Innovative transaction execution schemes and concurrency control mechanisms are therefore required to exploit the full potential of MDS. In this paper, we report our investigation of a multiversion transaction processing approach and a deadlock-free concurrency control mechanism based on a multiversion two-phase locking scheme integrated with a timestamp approach. We study the behavior of the proposed model with a simulation study in an MDS environment. We compare our schemes against a reference model to demonstrate the superiority of our model over others. Experimental results demonstrate that our model provides significantly higher throughput by improving the degree of concurrency, reducing transaction wait time, and minimizing restarts and aborts.

20 citations


Book ChapterDOI
18 Dec 2007
TL;DR: A concurrency control mechanism with dynamic timer adjustment is proposed, which reduces communication overhead and enhances transaction throughput; the simulation results quantify the performance trade-offs.
Abstract: In a mobile computing environment, users can perform on-line transaction processing independent of their physical location. In a mobile environment, multiple mobile hosts may update the data simultaneously, which may result in inconsistency of the data. To solve such problems, many concurrency control techniques have been proposed. The traditional two-phase locking protocol has some inherent problems, such as deadlocks and long, unpredictable blocking. In this paper we propose a concurrency control mechanism with dynamic timer adjustment, which helps in reducing the communication overhead and enhances the transaction throughput. The simulation results quantify the performance trade-offs.

01 Jan 2007
TL;DR: This document looks at some of the problems with traditional methods of concurrency control via locking, and explains how Multi-Version Concurrency algorithms help to resolve some of these problems.
Abstract: This document looks at some of the problems with traditional methods of concurrency control via locking, and explains how Multi-Version Concurrency algorithms help to resolve some of these problems. It describes the approaches to Multi-Version concurrency and examines a couple of implementations in greater detail.
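The core multiversion idea the document surveys can be sketched minimally. This is a hedged toy store with hypothetical names, not any particular implementation: writers append timestamped versions, and a reader sees the newest version no later than its snapshot, so readers never block writers:

```python
class MVStore:
    """Toy multiversion store: each key maps to an append-only list
    of (commit_timestamp, value) versions."""
    def __init__(self):
        self.versions = {}      # key -> [(ts, value), ...] in ts order
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot(self):
        """A reader fixes its view of the store at this timestamp."""
        return self.clock

    def read(self, key, snap_ts):
        """Newest version committed at or before the snapshot."""
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= snap_ts:
                return value
        return None

store = MVStore()
store.write("x", 1)
snap = store.snapshot()
store.write("x", 2)                       # a later writer proceeds freely
assert store.read("x", snap) == 1         # the reader's view is stable
assert store.read("x", store.snapshot()) == 2
```

This is the property the document contrasts with locking: the reader holding `snap` required no lock and was never invalidated by the concurrent write.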

Proceedings Article
01 Jan 2007
TL;DR: A new transaction model is described that supports release of early results inside and outside of a transaction, relaxing the severe limitations of conventional lock mechanisms while still guaranteeing consistency and recoverability of released resources (results).
Abstract: Concurrency control mechanisms such as turn-taking, locking, serialization, transactional locking mechanisms, and operational transformation try to provide data consistency when concurrent activities are permitted in a reactive system. Locks are typically used in transactional models for assurance of data consistency and integrity in a concurrent environment. In addition, recovery management is used to preserve atomicity and durability in transaction models. Unfortunately, conventional lock mechanisms severely (and intentionally) limit concurrency in a transactional environment. Such lock mechanisms also limit recovery capabilities. Finally, existing recovery mechanisms themselves impose considerable overhead on concurrency. This paper describes a new transaction model that supports release of early results inside and outside of a transaction, relaxing the severe limitations of conventional lock mechanisms while still guaranteeing consistency and recoverability of released resources (results). This is achieved through use of a more flexible locking mechanism and two types of consistency graph. This provides an integrated solution for transaction management, recovery management and concurrency control. We argue that these are necessary features for management of long-term transactions within "digital ecosystems" of small to medium enterprises.

Proceedings ArticleDOI
14 Mar 2007
TL;DR: This work proposes to achieve the best of both worlds in parallel programming by letting programmers focus on the correctness of the application by using coarse-grained unnamed critical sections for mutual exclusion, and letting the compiler maximize the concurrency among critical sections by selecting an assignment of compiler-managed fine- grained locks to critical sections.
Abstract: One of the major performance and productivity issues in parallel programming arises from the use of lock/unlock operations or critical sections to enforce mutual exclusion. When programmers manage multiple fine-grained locks explicitly, they run the risk of introducing data races or creating deadlocks. When they use coarse-grained locks or critical sections, they run the risk of losing scalability in parallel performance. Ideally, we would like to give programmers the best of both worlds – the convenience of coarse-grained locks or critical sections combined with the scalability of fine-grained locks. We propose to achieve this ideal by (1) letting programmers focus on the correctness of the application by using coarse-grained unnamed critical sections for mutual exclusion, and (2) letting the compiler maximize the concurrency among critical sections by selecting an assignment of compiler-managed fine-grained locks to critical sections. Our approach is presented in the context of the OpenMP and POSIX threads (Pthreads) programming models. Consider the simple OpenMP program shown in Figure 1. The main program begins as a single thread. When the parallel sections construct is encountered, a team of threads is generated, each (including the initial thread) executing one section. At the end of the parallel sections, they synchronize and terminate, leaving only the initial thread to proceed. Four unnamed critical sections are used in this program. The default OpenMP implementation uses a single global lock to control all unnamed critical sections, thereby introducing unnecessary serialization. For example, CS1 and CS2 do
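The lock-assignment idea can be sketched outside OpenMP. This is a hedged Python illustration with hypothetical section names and hand-listed data sets; the paper derives the data-access information at compile time, which this toy does not attempt:

```python
import threading

# Hypothetical critical sections and the data each may touch.
# Sections with disjoint data sets can safely receive distinct locks;
# sections whose data overlaps must share one lock.
sections_data = {
    "CS1": {"a"},
    "CS2": {"b"},        # disjoint from CS1 -> may run concurrently
    "CS3": {"a", "c"},   # overlaps CS1 -> must share CS1's lock
}

def assign_locks(sections):
    """First-match greedy grouping: a section joins the first existing
    group whose data it overlaps, otherwise it gets a fresh lock.
    (A real compiler pass would also merge groups transitively.)"""
    groups = []                         # list of (data_set, lock)
    assignment = {}
    for name, data in sections.items():
        for group_data, lock in groups:
            if group_data & data:       # shared data: reuse this lock
                group_data |= data      # grow the group's footprint
                assignment[name] = lock
                break
        else:
            lock = threading.Lock()     # disjoint data: new fine lock
            groups.append((data, lock))
            assignment[name] = lock
    return assignment

locks = assign_locks(sections_data)
assert locks["CS1"] is locks["CS3"]      # overlapping data -> same lock
assert locks["CS1"] is not locks["CS2"]  # disjoint data -> independent
```

Under the default single-global-lock scheme every section would share one lock; here CS1 and CS2 get independent locks, which is exactly the serialization the paper's compiler removes.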

Book ChapterDOI
16 Jul 2007
TL;DR: An overview of the µTC language is presented, emphasizing memory synchronization and concurrent control structures, and the toolchain developed to support the model is shown, focusing on compiler strategies.
Abstract: Microthreaded C, also called µTC, is a concurrent language based on C which allows the programmer to code concurrency-oriented applications targeting chip multiprocessors. µTC source code contains fine-grained concurrent control structures, where the concurrency is explicitly written via new keywords. This language is used as an interface for defining dynamic concurrency and as an intermediate language to capture concurrency from data-parallel languages such as Single-Assignment C, or as the target for parallelizing compilers for sequential languages such as C. This paper presents an overview of the µTC language, emphasizing the aspects of memory synchronization and concurrent control structures. In order to understand the properties and scope of the language, we also present the outlines of the architectures after discussing the global concepts of the microthreading model. Finally, we show the toolchain we are currently developing to support the model, focusing on compiler strategies.

Book ChapterDOI
23 Sep 2007
TL;DR: SXDGL completely eliminates data contention between read-only and update transactions and takes into account the hierarchical structure and semantics of the XML data model when determining conflicts between concurrent XML operations of update transactions.
Abstract: Concurrency control for XML data is currently a significant research problem. A number of researchers are working on this problem, but most of the proposed methods are based on the two-phase locking protocol, which potentially leads to high blocking rates in data-intensive XML applications. In this paper we present and evaluate SXDGL, a new snapshot-based concurrency control protocol for XML data. SXDGL completely eliminates data contention between read-only and update transactions. Moreover, SXDGL takes into account the hierarchical structure and semantics of the XML data model when determining conflicts between concurrent XML operations of update transactions. The conducted evaluation shows significant benefits of SXDGL for processing concurrent transactions in data-intensive XML applications.

Proceedings Article
26 Jul 2007
TL;DR: The overall system design is presented together with some experimental results showing the system testing possibilities, and the proposed system serves as a support platform for performance evaluation of known and new algorithms of the particular processing components, including CPU scheduling, concurrency control and conflict resolution strategies.
Abstract: Previous research in real-time databases has focused primarily on the evolution and evaluation of transaction processing algorithms, priority assignment strategies, and concurrency control techniques. For the most part, however, these research efforts are based only on simulation studies with many predefined parameters. It is very difficult to achieve guaranteed real-time database services when putting a database into a real-time environment, because various components can compete for system resources. Our objective was therefore to design and implement an experimental real-time database system suitable for the study of real-time transaction processing. The experimental system was implemented as an integrated set of the most important functional parts of a genuine real-time database system. It serves as a support platform for performance evaluation of known and new algorithms for the particular processing components, including CPU scheduling, concurrency control and conflict resolution strategies. Because of the strong interactions among the processing components, the proposed system can help us to understand their effect on system performance and to identify the most influential factors. In this paper the overall system design is presented, together with some experimental results showing the system's testing possibilities.

Proceedings ArticleDOI
21 Mar 2007
TL;DR: A transactional, optimistic concurrency control framework for WSANs that enables understanding of a system execution as a single thread of control, while permitting the deployment of actual execution over multiple threads distributed on several nodes is proposed.
Abstract: Effectively managing concurrent execution is one of the biggest challenges for future wireless sensor/actor networks (WSANs): for safety reasons concurrency needs to be tamed to prevent unintentional nondeterministic executions, while for real-time guarantees concurrency needs to be boosted to achieve timeliness. We propose a transactional, optimistic concurrency control framework for WSANs that enables understanding of a system execution as a single thread of control, while permitting the deployment of actual execution over multiple threads distributed on several nodes. By exploiting the properties of wireless broadcast communication, we propose a lightweight and fault-tolerant implementation of our transactional framework.


Book ChapterDOI
25 Jun 2007
TL;DR: These case studies show that Ada concurrency features provide the adequate abstraction level both for describing and evaluating concurrency and for carrying out design decisions.
Abstract: When developing concurrent software, good engineering practice is to choose an appropriate level of abstraction for expressing concurrency control. Ideally, this level should provide platform-independent abstractions; but, as the platform's concurrency behaviour cannot be ignored, this abstraction level must also be able to cope with it and exhibit the influence of different possible behaviours. We argue that the Ada language provides such a convenient abstraction level for concurrency description and evaluation, including distributed concurrency. To demonstrate this, we present two new cooperative algorithms based on remote procedure calls which, although simply stated, contain real concurrency complexity and difficulties. They allow a distributed, symmetric, non-deterministic rendezvous. One relies on a common server and the second is fully distributed. Both realize a symmetric rendezvous using an asymmetric RPC modelled by the Ada rendezvous. These case studies show that Ada concurrency features provide an adequate abstraction level both for describing and evaluating concurrency and for carrying out design decisions.

Proceedings ArticleDOI
08 Oct 2007
TL;DR: The simulation results show that the new concurrency control protocol proposed offers better performance than other protocols.
Abstract: In this paper, the DMVOCC-MDA-2PLV protocol is proposed for processing mobile distributed real-time transactions in mobile broadcast environments. The new protocol can eliminate conflicts between mobile read-only and mobile update transactions, and resolve data conflicts flexibly using multiversion dynamic adjustment of serialization order to avoid unnecessary restarts of transactions. Mobile read-only transactions can be committed without blocking, and their response time is greatly improved. The tolerance of mobile transactions to disconnections from the broadcast channel is also increased. The simulation results show that the proposed concurrency control protocol offers better performance than other protocols.

Book ChapterDOI
03 Sep 2007
TL;DR: This paper's evaluation confirms that for low update rates, the MVGiST significantly improves scalability w.r.t. the number of concurrent accesses when compared to a traditional, locking-based concurrency control mechanism.
Abstract: Prevailing concurrency control mechanisms for multidimensional index structures, such as the Generalized Search Tree (GiST), are based on locking techniques. These approaches may cause significant overhead in settings where the indexed data is rarely updated and read access is highly concurrent. In this paper we present the Multiversion-GiST (MVGiST), which extends the GiST with Multiversion Concurrency Control. Beyond enabling lock-free read access, our approach provides readers a consistent view of the whole index structure, which is achieved through the creation of lightweight, read-only versions of the GiST that share unchanging nodes amongst themselves. Our evaluation confirms that for low update rates, the MVGiST significantly improves scalability w.r.t. the number of concurrent accesses when compared to a traditional, locking-based concurrency control mechanism.
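The node-sharing scheme behind such lightweight read-only versions can be illustrated with a simpler structure. This is a hedged sketch using a path-copying binary search tree, not the actual GiST, and all names are hypothetical: each update copies only the root-to-leaf path, so an old version stays fully readable and shares every unchanged node with the new one:

```python
from dataclasses import dataclass
from typing import Optional

# Immutable nodes: once built, a node is shared freely between versions.
@dataclass(frozen=True)
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(root, key):
    """Return a NEW root; the old root still describes the old version."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root                      # key already present: share as-is

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

v1 = insert(insert(None, 5), 2)      # version 1 of the index
v2 = insert(v1, 8)                   # version 2; readers of v1 unaffected
assert contains(v2, 8) and not contains(v1, 8)
assert v2.left is v1.left            # the unchanged subtree is shared
```

Readers pin a version and traverse it lock-free; only the handful of copied path nodes costs memory per update, which matches the low-update-rate setting the evaluation targets.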

Proceedings ArticleDOI
09 Jul 2007
TL;DR: This paper develops an efficient web service directory that is based on the Multiversion Generalised Search Tree (MVGiST), an integration of a multidimensional index structure with multiversion concurrency control.
Abstract: Web service directories are shared resources that have to accommodate a high number of concurrent read requests, whereas updates are relatively infrequent. To allow for the automatic composition of complex web services based on those contained in a directory, read requests may involve a series of queries which require a consistent view of the data. We have developed an efficient web service directory that is based on the Multiversion Generalised Search Tree (MVGiST), an integration of a multidimensional index structure with multiversion concurrency control. The MVGiST is able to index web services according to their input and output parameters, supports a high level of concurrent read requests, and guarantees consistency across multiple subsequent read queries. In this paper we evaluate the performance and scalability of the MVGiST and compare it with a traditional, locking-based concurrency control mechanism.

Book
05 Sep 2007
TL;DR: This book introduces distributed processing and concurrency, covering models of concurrency, interprocess communication, protocols, and security, and uses a networked game as its case study for building distributed systems.
Abstract: What is Distributed Processing? - Concepts of Concurrency - Models of Concurrency - Concurrency in Operating Systems - Interprocess Communication - Protocols - Security - Languages and Distributed Processing - Building Distributed Systems - Case Study: A Networked Game - The End.

Journal ArticleDOI
TL;DR: It is observed that the throughput of the system decreases as the security level of the transaction increases, i.e., there is a tradeoff between the security level and the throughput of the system.
Abstract: In distributed database systems, the global database is partitioned into a collection of local databases stored at different sites. In this era of growing technology and fast communication media, security has an important role to play. In this paper we present a secure concurrency control protocol (SCCP) based on timestamp ordering, which provides concurrency control and maintains security. We also implemented SCCP, and a comparison is presented for three cases (high, medium and low security levels). In this experiment, it is observed that the throughput of the system decreases as the security level of the transaction increases, i.e., there is a tradeoff between the security level and the throughput of the system.
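The timestamp-ordering rule underlying such a protocol can be sketched as follows. This is only the basic check, a hedged toy with hypothetical names; the paper's SCCP additionally enforces security levels, which this sketch omits:

```python
class TimestampOrdering:
    """Minimal timestamp-ordering scheduler: an operation is rejected
    (forcing its transaction to abort and restart) if a younger
    transaction has already accessed the item conflictingly."""
    def __init__(self):
        self.read_ts = {}       # item -> largest timestamp that read it
        self.write_ts = {}      # item -> largest timestamp that wrote it

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            return False        # a younger txn overwrote the item: abort
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)
        return True

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
            return False        # a younger txn already read/wrote: abort
        self.write_ts[item] = ts
        return True

cc = TimestampOrdering()
assert cc.write(1, "x")         # T1 (older) writes x
assert cc.read(2, "x")          # T2 (younger) reads x
assert not cc.write(1, "x")     # T1's late write conflicts: abort
```

Because conflicts are resolved purely by comparing timestamps, the schedule is always equivalent to the serial order of transaction start times and no deadlock can form.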

Proceedings ArticleDOI
11 Mar 2007
TL;DR: This work defines guidelines for restructuring object-oriented software in order to modularize concurrency control using aspect-oriented programming, which makes the concurrency control easier to evolve and decreases the complexity of other parts of the software, such as business and data management modules, by decoupling concurrency control from them.
Abstract: Information systems based on the World Wide Web have increased the impact of concurrent programs. This increase demands methods for obtaining safe and efficient implementations of concurrent programs, since the complexity of implementation and testing in concurrent environments is greater than in sequential environments. This work defines guidelines for restructuring object-oriented software in order to modularize concurrency control using aspect-oriented programming. Those guidelines are supported by a concurrency control implementation that guarantees system correctness without redundant concurrency control, both increasing performance and guaranteeing safety. We define abstract aspects that constitute a simple aspect framework that can be reused to implement concurrency control in other applications. The achieved modularization makes the concurrency control easy to evolve and decreases the complexity of other parts of the software, such as business and data management modules, by decoupling concurrency control code from them.

01 Jan 2007
TL;DR: A graph-based mechanism is proposed for keeping the Snapshot Isolation protocol (SI) serializable at run time, and a new model of database transaction (the segmented transaction model) is proposed in order to guarantee the effectiveness of DSISG.
Abstract: In this thesis, the concept of database concurrency control, computational models of database transactions, the correctness criteria for concurrent execution of transactions, and concurrency control algorithms such as two-phase locking, serialization graph testing, and Snapshot Isolation are reviewed. A graph-based mechanism is proposed for preserving the serializability of the Snapshot Isolation protocol (SI) at run time. Firstly, we present the Dynamic Managed Snapshot Isolation Serialization Graph (called DSISG). By using this mechanism, non-serializable transactions under the Snapshot Isolation protocol can be detected at run time. Secondly, in order to guarantee the effectiveness of DSISG, a new model of database transaction (the segmented transaction model) is proposed. Thirdly, an algorithm for managing a hierarchically structured acyclic graph is presented. Run-time characterization of non-serializable transactions under the Snapshot Isolation protocol becomes more efficient when this hierarchical graph structure is applied to DSISG. We also summarize the contributions of this thesis and formulate some open problems.
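The run-time serialization-graph testing that DSISG performs can be illustrated with a minimal dependency graph that rejects any edge closing a cycle. This is a generic sketch, not the thesis's actual data structure:

```python
class DependencyGraph:
    """Sketch of run-time serialization-graph testing: record dependency
    edges between transactions and reject an edge that would close a
    cycle, i.e. make the history non-serializable."""

    def __init__(self):
        self._edges = {}   # txn -> set of txns that must come after it

    def add_edge(self, before, after):
        # A cycle would form iff `after` can already reach `before`.
        if self._reachable(after, before):
            return False               # non-serializable: abort/reject
        self._edges.setdefault(before, set()).add(after)
        return True

    def _reachable(self, src, dst):
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self._edges.get(node, ()))
        return False

g = DependencyGraph()
print(g.add_edge("T1", "T2"))   # True
print(g.add_edge("T2", "T3"))   # True
print(g.add_edge("T3", "T1"))   # False: T1 -> T2 -> T3 -> T1 is a cycle
```

In an SI setting, the interesting edges are the read-write (anti-)dependencies that plain Snapshot Isolation does not prevent; detecting a cycle through them at run time is what lets non-serializable executions be caught.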

Proceedings Article
01 Jan 2007
TL;DR: A light reflector, preferably for at least three different color light sources, used in a cluster arrangement, which beams the rays of the input light sources to the target or area destined for illumination in closely adjacent and parallel relation.
Abstract: A light reflector, preferably for at least three different color light sources, used in a cluster arrangement, which beams the rays of the input light sources to the target or area destined for illumination in closely adjacent and parallel relation, so that the visually perceived color of the light beam is a function of the mixture of the input light source, and can be readily varied over a wide range by varying the intensity and amounts of the individual light inputs.

Proceedings Article
01 Jan 2007
TL;DR: An improved priority assignment policy named Flexible High Reward with concurrency control factor (FHR-CF) is proposed to reduce the MissRatio and WastedRatio in real-time database systems.
Abstract: A great deal of research in the field of real-time database systems has sought to optimize transaction scheduling. The findings of such studies, which examine various priority assignment policies, have been discussed widely. One drawback of these approaches is poor performance due to neglecting repeatedly missed real-time transactions. In this paper, an improved priority assignment policy named Flexible High Reward with concurrency control factor (FHR-CF) is proposed to reduce the MissRatio and WastedRatio.

Proceedings ArticleDOI
04 Dec 2007
TL;DR: This paper presents two novel, proof-of-concept prototype DSLs that combine the two approaches using concurrency control, i.e., one with optimistic concurrency control and another with pessimistic concurrency control.
Abstract: Service-oriented applications are conceptualised with the notion of efficiently acquiring and processing distributed data. Presently, accessing distributed data can account for up to 70 percent or more of the time spent developing such applications. Hence, one of the first things to be service-enabled in a service-oriented architecture is efficient access to and processing of data. To avoid hard-coding applications, we recommend the use of a data service layer (DSL) to act as a single point of access to reusable, real-time heterogeneous data. Currently, commercial integration products use two approaches: business information warehouses or virtual data federation. In this paper, we present two novel, proof-of-concept prototype DSLs that combine the two approaches using concurrency control, i.e., one with optimistic concurrency control and another with pessimistic concurrency control. Both approaches are capable of efficiently coordinating client transactions that engage multiple data sources. We also discuss the performance tests carried out and analyse the results.
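The optimistic half of the comparison can be sketched as a read/validate/commit transaction over a shared store. The names here are hypothetical; the paper's DSL prototypes are not shown:

```python
class OptimisticTxn:
    """Sketch of optimistic concurrency control: read freely while
    recording item versions, then validate at commit time that
    nothing read has changed underneath the transaction."""

    def __init__(self, store):
        self.store = store      # dict: name -> (version, value)
        self.read_set = {}      # name -> version observed
        self.write_set = {}     # name -> new value

    def read(self, name):
        version, value = self.store[name]
        self.read_set[name] = version
        return value

    def write(self, name, value):
        self.write_set[name] = value   # buffered until commit

    def commit(self):
        # Validate: every item read must still be at the version seen.
        for name, seen in self.read_set.items():
            if self.store[name][0] != seen:
                return False           # conflict: caller retries the txn
        for name, value in self.write_set.items():
            version = self.store.get(name, (0, None))[0]
            self.store[name] = (version + 1, value)
        return True

store = {"balance": (1, 100)}
t1 = OptimisticTxn(store)
t2 = OptimisticTxn(store)
t1.write("balance", t1.read("balance") - 30)
t2.write("balance", t2.read("balance") - 50)
print(t1.commit())   # True: first committer wins
print(t2.commit())   # False: t2's read of version 1 is now stale
```

A pessimistic DSL would instead lock "balance" before the first read, blocking t2 until t1 finishes; the trade-off is fewer retries at the cost of reduced concurrency, which is exactly what the paper's performance tests compare.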