Showing papers on "Serialization published in 2008"


Patent
30 Sep 2008
TL;DR: In this paper, a computer-implemented system and method for processing messages using a common interface platform supporting multiple pluggable data formats in a service-oriented pipeline architecture is disclosed.
Abstract: A computer-implemented system and method for processing messages using a common interface platform supporting multiple pluggable data formats in a service-oriented pipeline architecture is disclosed. The method in an example embodiment includes deserializing or serializing a request/response message using a pluggable serializer/deserializer mechanism and a corresponding pluggable data format parser. An example embodiment uses a common model for serialization/deserialization regardless of the data format, resulting in a consistent and efficient mechanism.
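
A minimal Java sketch of what such a pluggable binding layer might look like follows; the interface and registry names (DataBinding, BindingRegistry) are illustrative assumptions, not taken from the patent. The point is that the pipeline depends only on a common serialize/deserialize contract, while concrete wire formats are registered as plugins.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a pluggable serializer/deserializer registry.
// The pipeline works against the DataBinding interface only, so adding a
// new wire format means registering another binding, not changing the pipeline.
interface DataBinding {
    String formatName();                                   // e.g. "XML", "JSON"
    Object deserialize(InputStream in, Class<?> type) throws Exception;
    void serialize(Object message, OutputStream out) throws Exception;
}

final class BindingRegistry {
    private final Map<String, DataBinding> bindings = new ConcurrentHashMap<>();

    void register(DataBinding binding) {
        bindings.put(binding.formatName(), binding);
    }

    // A pipeline stage looks up the binding by the request's declared format
    // and uses the same call sequence regardless of which format was plugged in.
    DataBinding forFormat(String format) {
        DataBinding b = bindings.get(format);
        if (b == null) throw new IllegalArgumentException("No binding for " + format);
        return b;
    }
}
```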

65 citations


Proceedings ArticleDOI
01 Jun 2008
TL;DR: A distributed evolutionary computation system that uses the computational capabilities of the ubiquitous Web browser and obtains high and, to a certain point, reliable performance from volunteer computing based on AJAJ, with speedups equivalent to several machines.
Abstract: In a connected world, spare CPU cycles are up for grabs, if only you make obtaining them easy enough. In this paper we present a distributed evolutionary computation system that uses the computational capabilities of the ubiquitous Web browser. Asynchronous JavaScript and JSON (JavaScript Object Notation, a serialization protocol) allows anybody with a Web browser (that is, mostly everybody connected to the Internet) to participate in a genetic algorithm experiment with little effort, or none at all. Since, in this case, computing becomes a social activity and is inherently unpredictable, in this paper we explore the performance of this kind of virtual computer by solving simple problems such as the royal road function and analyzing how many machines and evaluations it yields. We also examine possible performance bottlenecks and how to solve them, and, finally, offer some advice on how to set up this kind of experiment to maximize turnout and, thus, performance. The experiments show that we can obtain high and, to a certain point, reliable performance from volunteer computing based on AJAJ, with speedups equivalent to several (averaged) machines.
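
As a rough illustration of the serialization side of such a system, the hedged Java sketch below shows the kind of JSON payload a volunteer browser might post back after evaluating an individual; the class, field names, and payload shape are invented for illustration and are not taken from the paper.

```java
// Hypothetical sketch of a result record built by hand (no JSON library assumed).
// JSON here is just the serialization protocol: the browser posts an object like
// this asynchronously (the "AJAJ" exchange) and the server merges the reported
// fitness back into its population.
final class EvaluationResult {
    final String experimentId;
    final String chromosome;   // e.g. a bit string for the Royal Road function
    final double fitness;
    final long evaluations;

    EvaluationResult(String experimentId, String chromosome,
                     double fitness, long evaluations) {
        this.experimentId = experimentId;
        this.chromosome = chromosome;
        this.fitness = fitness;
        this.evaluations = evaluations;
    }

    String toJson() {
        return "{\"experiment\":\"" + experimentId + "\","
             + "\"chromosome\":\"" + chromosome + "\","
             + "\"fitness\":" + fitness + ","
             + "\"evaluations\":" + evaluations + "}";
    }
}
```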

57 citations


Proceedings ArticleDOI
24 Oct 2008
TL;DR: A DBT-based tool for secure execution of x86 binaries using dynamic information flow tracking is implemented; it is the first such framework that correctly handles multithreaded binaries without serialization.
Abstract: Dynamic binary translation (DBT) is a runtime instrumentation technique commonly used to support profiling, optimization, secure execution, and bug detection tools for application binaries. However, DBT frameworks may incorrectly handle multithreaded programs due to races involving updates to the application data and the corresponding metadata maintained by the DBT. Existing DBT frameworks handle this issue by serializing threads, disallowing multithreaded programs, or requiring explicit use of locks. This paper presents a practical solution for correct execution of multithreaded programs within DBT frameworks. To eliminate races involving metadata, we propose the use of transactional memory (TM). The DBT uses memory transactions to encapsulate the data and metadata accesses in a trace, within one atomic block. This approach guarantees correct execution of concurrent threads of the translated program, as TM mechanisms detect and correct races. To demonstrate this approach, we implemented a DBT-based tool for secure execution of x86 binaries using dynamic information flow tracking. This is the first such framework that correctly handles multithreaded binaries without serialization. We show that the use of software transactions in the DBT leads to a runtime overhead of 40%. We also show that software optimizations in the DBT and hardware support for transactions can reduce the runtime overhead to 6%.
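
The race the paper targets can be pictured as a paired data/metadata update that must appear atomic. The hedged Java sketch below illustrates that pairing for a taint-tracking metadata bit; a lock stands in for the paper's memory transaction, and all names are illustrative assumptions rather than the authors' code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: the translated code's data write and the DBT's
// shadow-metadata (taint) update must appear atomic. A real DBT would wrap the
// whole trace in a memory transaction; here a single lock emulates that atomic
// block, purely to show the pairing.
final class TaintTrackedMemory {
    private final byte[] data;
    private final boolean[] tainted;          // one metadata bit per data byte
    private final ReentrantLock atomicBlock = new ReentrantLock();

    TaintTrackedMemory(int size) {
        this.data = new byte[size];
        this.tainted = new boolean[size];
    }

    // Without enclosing both updates, another thread could observe the new
    // data paired with stale taint metadata (or vice versa).
    void write(int addr, byte value, boolean taintOfSource) {
        atomicBlock.lock();
        try {
            data[addr] = value;               // application data access
            tainted[addr] = taintOfSource;    // corresponding DBT metadata access
        } finally {
            atomicBlock.unlock();
        }
    }

    boolean isTainted(int addr) {
        atomicBlock.lock();
        try {
            return tainted[addr];
        } finally {
            atomicBlock.unlock();
        }
    }
}
```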

53 citations


Patent
24 Dec 2008
TL;DR: In this paper, a method and apparatus for optimizing quiescence in a transactional memory system is described, where non-ordering transactions, such as read-only transactions, transactions that do not access non-transactional data, and write-buffering hardware transactions, are identified.
Abstract: A method and apparatus for optimizing quiescence in a transactional memory system is herein described. Non-ordering transactions, such as read-only transactions, transactions that do not access non-transactional data, and write-buffering hardware transactions, are identified. Quiescence in weak atomicity software transactional memory (STM) systems is optimized through selective application of quiescence. As a result, transactions may be decoupled from quiescing/waiting on previous non-ordering transactions, increasing parallelization and reducing inefficiency caused by serialization of transactions.

49 citations


Patent
12 Mar 2008
TL;DR: In this paper, a rolling context data structure is used to store multiple contexts associated with different image elements that are being processed in the software pipeline, and each context stores state data for a particular image element, and the association of each image element with a context is maintained as the image element is passed from stage to stage.
Abstract: A multithreaded rendering software pipeline architecture utilizes a rolling context data structure to store multiple contexts that are associated with different image elements that are being processed in the software pipeline. Each context stores state data for a particular image element, and the association of each image element with a context is maintained as the image element is passed from stage to stage of the software pipeline, thus ensuring that the state used by the different stages of the software pipeline when processing the image element remains coherent irrespective of state changes made for other image elements being processed by the software pipeline. Multiple image elements may therefore be processed concurrently by the software pipeline, and often without regard for synchronization or serialization of state changes that affect only certain image elements.
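
A hedged sketch of the general idea, assuming a fixed-size pool of context slots and invented class names (not the patent's terminology): each image element snapshots the state it entered with and resolves state through its own slot in every stage.

```java
// Hypothetical sketch of a rolling context pool: each image element is pinned
// to one context slot holding the state in effect when it entered the pipeline,
// so later state changes for other elements cannot disturb it.
final class RenderState {
    final int textureId;
    final float[] transform;

    RenderState(int textureId, float[] transform) {
        this.textureId = textureId;
        this.transform = transform.clone();
    }
}

final class RollingContexts {
    private final RenderState[] slots;
    private int next;                       // rolls over the fixed-size pool

    RollingContexts(int capacity) {
        this.slots = new RenderState[capacity];
    }

    // Called when an image element enters the pipeline: snapshot current state
    // and return the context index the element carries from stage to stage.
    synchronized int attach(RenderState current) {
        int id = next;
        slots[id] = current;
        next = (next + 1) % slots.length;   // assumes the reused slot has been retired
        return id;
    }

    // Every stage resolves state through the element's own context, keeping it
    // coherent regardless of state changes made for other elements.
    RenderState stateFor(int contextId) {
        return slots[contextId];
    }
}
```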

35 citations


Patent
30 Sep 2008
TL;DR: In this article, a computer-implemented system and method for processing messages using native data serialization/deserialization without any transformation, in a service-oriented pipeline architecture is disclosed.
Abstract: A computer-implemented system and method for processing messages using native data serialization/deserialization without any transformation, in a service-oriented pipeline architecture is disclosed. The method in an example embodiment includes serializing or deserializing the request/response message directly into the format (a specific on-the-wire data format or a Java object) that the recipient expects (a service implementation, a service consumer, or the framework), without first converting into an intermediate format. This provides an efficient mechanism for the same service implementation to be accessed by exchanging messages using different data formats.

35 citations



Journal ArticleDOI
TL;DR: A performance evaluation conducted on a dual-core cluster has shown experimental evidence of throughput increase on SCI, Myrinet, Gigabit Ethernet and shared memory communication, and the impact of this improvement on the overall application performance of representative parallel codes is analyzed.

33 citations


Patent
13 Mar 2008
TL;DR: In this paper, a buffered write process is provided that performs buffered writes to shadow copies of objects and writes content back to the objects after validating a respective transaction during commit.
Abstract: Various technologies and techniques are disclosed that support buffered writes and enforced serialization order in a software transactional memory system. A buffered write process is provided that performs writes to shadow copies of objects and writes content back to the objects after validating a respective transaction during commit. When a write lock is first obtained for a particular transaction, a shadow copy is made of a particular object. Writes are performed to, and reads from, the shadow copy. After validating the particular transaction during commit, content is written from the shadow copy to the particular object. A transaction ordering process is provided that ensures that the order in which the transactions are committed matches an abstract serialization order of the transactions. Transactions are not allowed to commit until their ticket number matches a global number that tracks the next transaction that should commit.
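
The commit-ordering part can be illustrated with a small, hedged Java sketch: a transaction takes a ticket, and its commit is held back until a global "now serving" counter reaches that ticket, which forces commit order to match the chosen serialization order. The class and method names are assumptions for illustration, not the patent's implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of ticket-based commit ordering.
final class TicketCommitOrder {
    private final AtomicLong nextTicket = new AtomicLong(0);
    private final AtomicLong nowServing = new AtomicLong(0);

    long takeTicket() {
        return nextTicket.getAndIncrement();   // fixes this transaction's slot
    }

    void awaitTurn(long ticket) throws InterruptedException {
        synchronized (this) {
            while (nowServing.get() != ticket) {
                wait();                         // blocked until predecessors commit
            }
        }
    }

    void commitDone() {
        synchronized (this) {
            nowServing.incrementAndGet();       // hand the turn to the next ticket
            notifyAll();
        }
    }
}

// Usage sketch: run against shadow copies, validate, wait for the ticket,
// write the shadow copies back, then release the turn:
//   long t = order.takeTicket();
//   ... transactional work on shadow copies ...
//   validate(); order.awaitTurn(t); writeBackShadowCopies(); order.commitDone();
```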

30 citations


Proceedings ArticleDOI
07 Jun 2008
TL;DR: XMem provides type-safe, transparent, shared memory support for co-located MREs, and facilitates easy integration and use by existing communication technologies and software systems, such as RMI, JNDI, JDBC, serialization/XML, and network sockets.
Abstract: Developers commonly build contemporary enterprise applications using type-safe, component-based platforms, such as J2EE, and architect them to comprise multiple tiers, such as a web container, application server, and database engine. Administrators increasingly execute each tier in its own managed runtime environment (MRE) to improve reliability and to manage system complexity through the fault containment and modularity offered by isolated MRE instances. Such isolation, however, necessitates expensive cross-tier communication based on protocols such as object serialization and remote procedure calls. Administrators commonly co-locate communicating MREs on a single host to reduce communication overhead and to better exploit increasing numbers of available processing cores. However, state-of-the-art MREs offer no support for more efficient communication between co-located MREs, while fast inter-process communication mechanisms, such as shared memory, are widely available as a standard operating system service on most modern platforms. To address this growing need, we present the design and implementation of XMem: type-safe, transparent, shared memory support for co-located MREs. XMem guarantees type-safety through coordinated, parallel, multi-process class loading and garbage collection. To avoid introducing any level of indirection, XMem manipulates virtual memory mapping. In addition, object sharing in XMem is fully transparent: shared objects are identical to local objects in terms of field access, synchronization, garbage collection, and method invocation, with the only difference being that shared-to-private pointers are disallowed. XMem facilitates easy integration and use by existing communication technologies and software systems, such as RMI, JNDI, JDBC, serialization/XML, and network sockets. We have implemented XMem in the open-source, production-quality HotSpot Java Virtual Machine. Our experimental evaluation, based on core communication technologies underlying J2EE, as well as using open-source server applications, indicates that XMem significantly improves throughput and response time by avoiding the overheads imposed by object serialization and network communication.

20 citations


Book ChapterDOI
19 Nov 2008
TL;DR: This work demonstrates ES3 provenance by generating complex data products from Earth satellite imagery; a provenance query returns an XML serialization of a provenance graph, forward or backward from a specified process or file.
Abstract: The Earth System Science Server (ES3) is a software environment for data-intensive Earth science, with unique capabilities for automatically and transparently capturing and managing the provenance of arbitrary computations. Transparent acquisition avoids the scientist having to express their computations in specific languages or schemas for provenance to be available. ES3 models provenance as relationships between processes and their input and output files. These relationships are captured by monitoring read and write accesses at various levels in the science software and asynchronously converting them to time-ordered streams of provenance events which are stored in an XML database. An ES3 provenance query returns an XML serialization of a provenance graph, forward or backwards from a specified process or file. We demonstrate ES3 provenance by generating complex data products from Earth satellite imagery.

Book ChapterDOI
01 Jun 2008
TL;DR: A novel storage index (based on partial orders), called POI, exploits the fact that RDF Knowledge Bases do not have a unique serialization and can be used for storing several (version-related or not) SW KBs.
Abstract: This paper concerns versioning services over Semantic Web (SW) repositories. We propose a novel storage index (based on partial orders), called POI, that exploits the fact that RDF Knowledge Bases (KB) do not have a unique serialization (as texts do). POI can be used for storing several (version-related or not) SW KBs. We discuss the benefits and drawbacks of this approach in terms of storage space and efficiency, both analytically and experimentally, in comparison with the existing approaches (including the change-based approach). For the latter case we report experimental results over synthetic data sets. POI offers notable space savings as well as efficiency in various cross-version operations. It is equipped with an efficient version insertion algorithm and could also be exploited in cases where the set of KBs does not fit in main memory.

Proceedings ArticleDOI
Jiangming Yang, Haixun Wang, Ning Gu, Yiming Liu, Chunsong Wang, Qiwei Zhang
21 Apr 2008
TL;DR: This paper proposes a flexible and efficient method to achieve consistency maintenance in the Web 2.0 world, and shows a good performance improvement compared with existing methods based on distributed locks.
Abstract: Online collaboration and sharing is the central theme of many web-based services that create the so-called Web 2.0 phenomena. Using the Internet as a computing platform, many Web 2.0 applications set up mirror sites to provide large-scale availability and to achieve load balance. However, in the age of Web 2.0, where every user is also a writer and publisher, the deployment of mirror sites makes consistency maintenance a Web scale problem. Traditional concurrency control methods (e.g. two phase lock, serialization, etc.) are not up to the task for several reasons. First, large network latency between mirror sites will make two phase locking a throughput bottleneck. Second, locking will block a large portion of concurrent operations, which makes it impossible to provide large-scale availability. On the other hand, most Web 2.0 operations do not need strict serializability - it is not the intention of a user who is correcting a typo in a shared document to block another who is adding a comment, as long as consistency can still be achieved. Thus, in order to enable maximal online collaboration and sharing, we need a lock-free mechanism that can maintain consistency among mirror sites on the Web. In this paper, we propose a flexible and efficient method to achieve consistency maintenance in the Web 2.0 world. Our experiments show its good performance improvement compared with existing methods based on distributed lock.

Journal ArticleDOI
TL;DR: This paper presents Mobile JikesRVM, implemented on top of the IBM Jikes Research Virtual Machine (RVM): an extension of its scheduler that allows applications to easily capture the state of a running thread and restore it elsewhere (i.e. on a different hardware architecture or operating system).

26 Jan 2008
TL;DR: This dissertation presents a complete system that improves the overall performance of XML messaging through consideration of the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages.
Abstract: In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet’s scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to try to reconcile these two developments. XML as a highly redundant text-based format is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in

Proceedings ArticleDOI
26 Sep 2008
TL;DR: A serialization functional unit, which consists of a serialization unit and a deserialization unit along with descriptors and a pool to describe and store serialized objects, can enhance the performance of Java-based mobile devices that run applications which frequently communicate with similar applications.
Abstract: This paper describes serialization support in an object-oriented Java processor like jHISC. The relevance of serializing an object is confined to situations where an object has to be sent over a network or stored as a persistent object. But these are not rare scenarios when an application in a mobile device is considered. This paper proposes a serialization functional unit which consists of a serialization unit and a deserialization unit, along with descriptors and a pool to describe and store serialized objects. This design can enhance the performance of Java-based mobile devices that run applications which frequently communicate with similar applications. This design makes use of the architectural features of processors like jHISC. This design can contribute much to Java-based mobile computing in the near future.

Proceedings ArticleDOI
23 Sep 2008
TL;DR: In this work, a DOM implementation based on a hybrid data representation that uses both literal XML and DOM objects is proposed; it stores the original literal XML representation and reuses it to avoid traversing all of the tree data during serialization.
Abstract: Distributed SOA computing environments usually use SOAP intermediaries that sit between senders and receivers to mediate SOAP messages. The intermediaries may add support services to the SOAP message exchange, such as routing, logging, and security. The typical processing by a SOAP intermediary is parsing the incoming SOAP messages, checking the data in each message, and then serializing the messages to put them back into the network. DOM is one of the popular interfaces to navigate an XML tree. Existing DOM implementations are not efficient for SOAP intermediary processing. Existing DOM implementations parse XML data to create tree data and traverse the tree data for serialization. Typically, a SOAP intermediary rarely modifies the tree data. In such situations, creating the tree data and serializing it back into XML data is computationally expensive. We propose a DOM implementation based on a hybrid data representation that uses both literal XML and DOM objects. In our implementation, a SOAP intermediary stores the original literal XML representation and reuses it to avoid traversing all of the tree data during serialization. We prototyped the DOM implementation and evaluated its performance.
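
A hedged Java sketch of the hybrid representation, using only standard JDK XML APIs and invented class names: the original bytes are kept next to the parsed DOM and re-emitted verbatim unless the intermediary actually mutated the tree.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;

// Illustrative sketch (not the paper's implementation) of the hybrid idea:
// keep the literal XML bytes beside the parsed DOM and reuse them untouched
// whenever the intermediary only inspected, but did not modify, the message.
final class HybridXmlMessage {
    private final byte[] literalXml;      // original on-the-wire representation
    private final Document dom;           // parsed view for navigation/inspection
    private boolean modified;

    HybridXmlMessage(byte[] xml) throws Exception {
        this.literalXml = xml;
        this.dom = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml));
    }

    Document readOnlyView() { return dom; }

    // Any mutation must go through here so the cached bytes get invalidated.
    Document mutableView() { modified = true; return dom; }

    byte[] serialize() throws Exception {
        if (!modified) {
            return literalXml;            // common case: skip the DOM tree walk
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(dom), new StreamResult(out));
        return out.toByteArray();
    }
}
```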

01 Jan 2008
TL;DR: A tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service and a simple workflow is developed in order to show the versatility of this service.
Abstract: Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples on large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to implement two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.

DissertationDOI
28 Apr 2008
TL;DR: In this article, a tuple space implementation for the Globus Toolkit is presented, which can be used by Grid applications as a coordination service, and compared with MPI in terms of performance.
Abstract: Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples on large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to implement two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.

Patent
24 Oct 2008
TL;DR: Messages may be buffered in the source queue until a transmission time is reached, in turn, for each buffered message, and subsequent messages associated with the serialization context are provided to the target queue for buffering therein.
Abstract: Messages may be provided to a source queue in serialized order, each message associated with a serialization context. The messages may be buffered in the source queue until a transmission time is reached, in turn, for each buffered message. Transmission-ready messages may be sent from the source queue according to the serialized order, using the serialization context, while continuing to store existing messages that are not yet transmission-ready. A queue assignment of the serialization context may be changed to a target queue. Subsequent messages may be provided with the serialization context to the target queue for buffering therein, while remaining transmission-ready messages may be continued to be sent from the source queue. All of the existing messages from the source queue associated with the serialization context may be determined to have been sent, and the subsequent messages may begin to be sent from the target queue in serialized order, using the serialization context.
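
A simplified, hedged sketch of the handoff logic in Java follows; the class and method names are invented, and the real mechanism involves transmission times and queue-assignment metadata not modeled here. The essential invariant shown is that messages for a serialization context keep FIFO order across the move from source queue to target queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified illustration: a serialization context keeps FIFO order across a
// reassignment from a source queue to a target queue by refusing to send from
// the target until the source has fully drained.
final class SerializationContextQueues {
    private final Queue<String> source = new ArrayDeque<>();
    private final Queue<String> target = new ArrayDeque<>();
    private boolean reassigned;

    synchronized void enqueue(String message) {
        // New messages go wherever the context is currently assigned.
        (reassigned ? target : source).add(message);
    }

    synchronized void reassignToTarget() {
        reassigned = true;                 // existing messages stay in the source
    }

    // Called whenever a message becomes transmission-ready.
    synchronized String nextToSend() {
        if (!source.isEmpty()) {
            return source.poll();          // drain existing messages first
        }
        if (reassigned) {
            return target.poll();          // only then start on the target queue
        }
        return null;
    }
}
```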

Proceedings ArticleDOI
25 Mar 2008
TL;DR: A multi-version concurrency control protocol called LSTP which leverages the guarantees of the replication protocol to provide transactional semantics is proposed and is designed to provide useful consistency semantics over P-Ring for read intensive workloads without sacrificing the scalability and other desirable properties inherent to the system.
Abstract: Structured P2P systems have been developed for constructing applications at internet scale in cooperative environments and exhibit a number of desirable features such as scalability and self-maintenance. We argue that such systems, when augmented with well-defined consistency semantics, provide an attractive building block for many large-scale data processing applications in cluster environments. Towards this end, we study the problem of providing transactional semantics to P-Ring, a P2P system which supports efficient range queries. We first extend a commonly used replication protocol in P2P systems to provide well-defined guarantees in the presence of concurrent updates and under well-defined failure assumptions. A multi-version concurrency control protocol called LSTP, which leverages the guarantees of the replication protocol to provide transactional semantics, is proposed. LSTP is designed to provide useful consistency semantics over P-Ring for read-intensive workloads without sacrificing the scalability and other desirable properties inherent to the system. Under LSTP, read-only transactions are abort-free and non-blocking, and the index stores no state for such transactions. We show that LSTP ensures no missed dependencies between transactions and guarantees basic consistency for read-only transactions when update transactions are serializable. The design of LSTP and its provable properties is a proof of concept that P2P systems can be augmented with transactional semantics. Results from a preliminary simulation study are also presented.

Patent
26 Jun 2008
TL;DR: In this article, a tracking component tracks information on relationships associated with an entity and further enables users to perform subsequent change processing on the entity's relationship information, so that database operations can be performed without requiring additional information from the database (e.g., foreign key information that is part of associated graphs).
Abstract: Systems and methods are disclosed that enable relationship information to be carried along with an entity when serializing/deserializing entities among application tiers. A tracking component tracks information on relationships associated with an entity, and further enables users to perform subsequent change processing on the entity's relationship information. Accordingly, relationship information can be carried along with the entity such that database operations can be performed without requiring additional information from the database (e.g., foreign key information that is part of associated graphs).

Patent
18 Apr 2008
TL;DR: In this paper, the authors present a "viral ticket book" model that provides lower latency while improving compatibility with client protocols, and optimize requests from clients which span multiple Data Volumes and which require strong serialization.
Abstract: The disclosed embodiments are directed to improving the efficiency of guaranteeing data consistency to clients, such as for one or more objects stored on a plurality of volumes configured as a Striped Volume Set. In particular, the disclosed embodiments optimize requests from clients which span multiple Data Volumes and which require strong serialization. The disclosed embodiments provide a “viral ticket book” model that provides lower latency while improving compatibility with client protocols.

Book ChapterDOI
19 Mar 2008
TL;DR: This paper proposes a scheme for similarity join over XML data based on XML data serialization and subsequent similarity matching over XML node subsequences, and uses a Bloom filter to speed up text similarity computation.
Abstract: This paper proposes a scheme for similarity join over XML data based on XML data serialization and subsequent similarity matching over XML node subsequences. With the recent explosive diffusion of XML, great volumes of electronic data are now marked up with XML. As a consequence, a growing amount of XML data represents similar contents, but with dissimilar structures. To extract as much information as possible from this heterogeneous information, similarity join has been used. Our proposed similarity join for XML data can be summarized as follows: 1) we serialize XML data as XML node sequences; 2) we extract semantically/structurally coherent subsequences; 3) we filter out dissimilar subsequences using textual information; and 4) we extract pairs of subsequences as the final result by checking structural similarity. The above process is costly to execute. To make it scalable against large document sets, we use a Bloom filter to speed up text similarity computation. We show the feasibility of the proposed scheme by experiments.

Patent
09 Dec 2008
TL;DR: In this article, a serialization construct is implemented within an environment of a number of parallel data flow graphs, where a quiesce node is added to every active data flow graph.
Abstract: A serialization construct is implemented within an environment of a number of parallel data flow graphs. A quiesce node is appended to every active data flow graph. The quiesce node prevents a token from passing to a next data flow graph within a chain before execution of the active data flow graph has finished. A serial data flow graph is implemented to provide for serial execution while no other data flow graph is active. A serialize node is appended to a starting point of the serial data flow graph. A serialize end node is appended to an endpoint of the serial data flow graph. The serialize node is activated to start a serial operation. The serialize end node is activated after the serial operation has been terminated.

Proceedings ArticleDOI
07 Apr 2008
TL;DR: This paper proposes a novel method to improve the concurrency of a particular kind of transaction, known as long-running transactions, and designs a sort of hybrid approach between optimistic and pessimistic concurrency models.
Abstract: Transaction management in different application contexts is still a challenging task. In this paper we propose a novel method to improve the concurrency of a particular kind of transaction, known as long-running transactions. Differently from other techniques presented in the literature, we design a sort of hybrid approach between optimistic and pessimistic concurrency models. On the one hand, our basic idea consists of handling frequent disconnections or inactivity periods of a generic transaction during its life-cycle and, on the other, we consider the semantics of the operations produced by transactions. First, our solution avoids indefinite or long resource locking due to disconnecting (or idle) transactions, as well as a high rate of preventive aborts; then, transaction "semantic compatibility" is exploited in order to increase the concurrency of reconcilable operations on the same resources. To these purposes, we have implemented a middleware with the aim of emulating transactional scheduling, and several experiments have been carried out.

Proceedings ArticleDOI
29 Sep 2008
TL;DR: This paper presents an efficient implementation of object marshaling for the Java platform originally used for high performance computing environments, and shows that the same technique is effective on ubiquitous resource constrained platforms, such as Java micro edition.
Abstract: Object marshaling, called serialization in Java, offers a high level of abstraction for information interchange in object oriented systems. It thus reduces the source lines of code required to transmit objects across a network. This abstraction often comes with a runtime and data penalty. In this paper we present an efficient implementation of object marshaling for the Java platform originally used for high performance computing environments. We demonstrate that the same technique is effective on ubiquitous resource constrained platforms, such as Java micro edition. We show that by adopting high performance techniques we are able to bring object marshaling to a platform where it is not otherwise possible due to the lack of runtime type inspection. We also demonstrate that performance of this system is better for array oriented data and acceptable for typical application data when compared with a hand coded protocol. This demonstrates the value of bringing techniques from high performance computing to ubiquitous resource constrained devices.
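
The general shape of reflection-free marshaling on such platforms is a per-class pair of explicit read/write routines over data streams. The Java sketch below is an assumed example of that style, with an invented class and field layout; it is not the paper's code or its optimized protocol.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of explicit, reflection-free marshaling of the kind a resource-
// constrained platform without runtime type inspection forces on you:
// the class itself writes and reads its fields in a fixed order.
final class SensorReading {
    int sensorId;
    long timestampMillis;
    double[] samples;

    void marshal(DataOutputStream out) throws IOException {
        out.writeInt(sensorId);
        out.writeLong(timestampMillis);
        out.writeInt(samples.length);          // length prefix for the array
        for (int i = 0; i < samples.length; i++) {
            out.writeDouble(samples[i]);
        }
    }

    static SensorReading unmarshal(DataInputStream in) throws IOException {
        SensorReading r = new SensorReading();
        r.sensorId = in.readInt();
        r.timestampMillis = in.readLong();
        r.samples = new double[in.readInt()];
        for (int i = 0; i < r.samples.length; i++) {
            r.samples[i] = in.readDouble();
        }
        return r;
    }
}
```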

Proceedings ArticleDOI
21 Jul 2008
TL;DR: A novel design uses a Bloom filter to hash the addresses of the read set into a compact structure; the proposed scheme is found to utilize the private cache more efficiently in a typical system configuration.
Abstract: Transactional memory systems promise to simplify parallel programming by avoiding deadlock, livelock, and serialization problems through optimistic, concurrent execution of code segments that potentially can have data conflicts with each other. Data conflict detection in proposed hardware transactional memory systems is done by associating a read bit with each cache block that is set when a block is speculatively read. However, since the set of blocks that have been speculatively read - the read set - has to be maintained until the transaction commits, one often cannot replace a block that has been speculatively read. This leads to poor utilization of the private caches in a multi-core system. We propose a new scheme for managing the read set in hardware transactional memory systems. The novel insight is that only the addresses of the speculatively read blocks are needed for conflict detection, but not the data. As a result, there is an opportunity to reduce the space needed to keep track of speculatively read blocks by B/A, where B is the block size and A is the size of the block address. Assuming that B is 32 bytes and A is 32 bits, there is an eightfold space saving due to this. This paper presents a novel design for leveraging this opportunity and evaluates a concept that uses a Bloom filter to hash the addresses of the read set into a structure. We find that the proposed scheme utilizes the private cache more efficiently in a typical system configuration.
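
A software analogue of the proposed read-set signature is sketched below in Java, with assumed sizes and hash mixing: block addresses are hashed into a fixed bit vector, so a membership test can report false positives (causing extra conflict checks) but can never miss a true conflict.

```java
// Software sketch of a read-set signature: block addresses are hashed into a
// fixed bit vector, so conflict detection needs only the addresses, never the
// data. Parameters and hash constants are illustrative, not from the paper.
final class ReadSetBloomFilter {
    private final long[] bits;
    private final int numBits;
    private final int numHashes;

    ReadSetBloomFilter(int numBits, int numHashes) {
        this.bits = new long[(numBits + 63) / 64];
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    private int indexFor(long blockAddress, int i) {
        long h = blockAddress * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3D27D4EB4FL;
        h ^= (h >>> 33);
        return (int) Long.remainderUnsigned(h, numBits);
    }

    void addSpeculativeRead(long blockAddress) {
        for (int i = 0; i < numHashes; i++) {
            int idx = indexFor(blockAddress, i);
            bits[idx >>> 6] |= 1L << (idx & 63);
        }
    }

    // A remote write is a potential conflict only if every probed bit is set;
    // a clear bit anywhere proves the block was never speculatively read.
    boolean mayConflictWith(long blockAddress) {
        for (int i = 0; i < numHashes; i++) {
            int idx = indexFor(blockAddress, i);
            if ((bits[idx >>> 6] & (1L << (idx & 63))) == 0) {
                return false;
            }
        }
        return true;
    }

    void clearOnCommitOrAbort() {
        java.util.Arrays.fill(bits, 0L);
    }
}
```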

Proceedings ArticleDOI
04 Mar 2008
TL;DR: Adya's model based on serialization graphs is extended, as a first step towards constructing replication protocols that support different isolation requirements for concurrent transactions in replicated systems with weaker isolation models.
Abstract: Replication has been proposed as a solution to provide scalability and high availability in databases. Unfortunately, the cost of ensuring isolated and consistent executions is sometimes too high. Weaker isolation models have proved to be a way to reduce this cost, but they can violate the isolation needs of some applications' transactions. In stand-alone systems, models supporting different isolation restrictions for concurrent transactions are used to avoid this dilemma. With this kind of protocol, applications can specify the isolation requirements of every transaction. However, how to extend these models to replicated systems is still an open issue. In this paper we extend Adya's model based on serialization graphs as a first step towards constructing replication protocols with such a feature.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: In this paper, a serialized multitasking code generation technique from a dataflow specification is proposed to run multitasking applications without an OS on any target processor, reducing the runtime overhead of task switching.
Abstract: This paper is concerned with multitasking embedded software development, from the system specification to the final implementation, including design space exploration (DSE). In the proposed framework, a dataflow model is used for task specification. Multitasking software is generated for the performance evaluation of architecture candidates during the DSE process. Since the same code is also used for the final implementation, it is highly desirable to make it portable and efficient. In this paper, we propose a serialized multitasking code generation technique from a dataflow specification to run the multitasking application without an OS on any target processor. The code serialization also reduces the runtime overhead of task switching, as previous works have reported. By separating run-time scheduler generation from task code generation, various scheduling policies can be explored. Experiments with a DiVX application confirm the viability of the proposed technique.
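
A hedged sketch of what the serialized, OS-free execution model amounts to: each dataflow actor becomes a run-to-completion step, and a generated static schedule loop fires them in a fixed order instead of relying on preemptive task switching. The Java below is purely illustrative of that idea; all names are invented and the paper's generated code targets embedded processors, not the JVM.

```java
// Illustrative sketch of serialized multitasking: a generated scheduler loop
// fires run-to-completion actors in a fixed order, replacing OS task switching.
final class SerializedSchedule {
    interface Actor {
        boolean fire();   // consume available tokens, produce outputs; false when idle
    }

    private final Actor[] staticOrder;

    SerializedSchedule(Actor... staticOrder) {
        this.staticOrder = staticOrder;
    }

    // One pass over the schedule corresponds to one iteration of the dataflow
    // graph; the loop replaces a preemptive scheduler and its context switches.
    void runIterations(int iterations) {
        for (int n = 0; n < iterations; n++) {
            for (Actor actor : staticOrder) {
                actor.fire();
            }
        }
    }
}

// Usage sketch: new SerializedSchedule(parse, decode, render).runIterations(1000);
```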