
Showing papers on "Rollback published in 2010"


Patent
09 Nov 2010
TL;DR: Fault tolerance is provided in a distributed system by storing state related to a requested operation on the component, persisting that stored state in a data store, asynchronously processing the operation request, and if a failure occurs, restarting the component using the stored state from the data store.
Abstract: Fault tolerance is provided in a distributed system. The complexity of replicas and rollback requests is avoided; instead, a local failure in a component of a distributed system is tolerated. The local failure is tolerated by storing state related to a requested operation on the component, persisting that stored state in a data store, such as a relational database, asynchronously processing the operation request, and, if a failure occurs, restarting the component using the stored state from the data store.

476 citations
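
A minimal sketch of the recovery pattern this abstract describes (persist the request state before processing, process asynchronously, and replay unfinished work after a crash), assuming a SQLite store; the table name and the do_work helper are illustrative, not from the patent.

```python
import sqlite3

def do_work(payload):
    """Hypothetical business logic for the requested operation."""
    print("processing", payload)

# Durable store for per-request state (the patent suggests a relational database).
db = sqlite3.connect("component_state.db")
db.execute("CREATE TABLE IF NOT EXISTS pending_ops "
           "(op_id TEXT PRIMARY KEY, payload TEXT, done INTEGER DEFAULT 0)")

def accept_request(op_id, payload):
    """Persist the operation state before acknowledging it; process it later."""
    db.execute("INSERT OR IGNORE INTO pending_ops (op_id, payload) VALUES (?, ?)",
               (op_id, payload))
    db.commit()

def process_pending():
    """Asynchronous worker: execute each persisted operation and mark it done."""
    for op_id, payload in db.execute(
            "SELECT op_id, payload FROM pending_ops WHERE done = 0").fetchall():
        do_work(payload)
        db.execute("UPDATE pending_ops SET done = 1 WHERE op_id = ?", (op_id,))
        db.commit()

def restart_component():
    """After a local failure, resume from the persisted state: no replicas,
    no rollback requests, just re-run whatever never finished."""
    process_pending()
```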


Patent
28 Oct 2010
TL;DR: In this article, a transaction can be committed via a set of hierarchical stages, which in turn can facilitate integration of an in-memory database system with one or more external database systems.
Abstract: The subject disclosure relates to a database recovery technique that implements various aspects of transaction logging to optimize database recovery performance. Transactions are logged logically with no reference to physical location, which enables logging to be performed via multiple independent log streams in parallel. A set of log streams can further be adjusted to conform to a local configuration of a mirror or secondary node in order to increase mirroring flexibility. Additionally, individual transactions or groups of transactions are recorded using a single log record, which contains timestamp information to enable database recovery without reference to physical checkpoint files. Further, techniques are provided herein for processing transactions without Write Ahead Logging or hardening of undo information. As further described herein, a transaction can be committed via a set of hierarchical stages, which in turn can facilitate integration of an in-memory database system with one or more external database systems.

94 citations


Patent
07 Jun 2010
TL;DR: In this paper, a virtual database is attached to a server database management system ("DBMS") such that the DBMS believes it needs to recover the database to a last known point of consistency, unaware that the log records are actually being sourced from the transaction log portion of the backup file.
Abstract: A virtual database is attached to a server database management system ("DBMS") such that the DBMS believes it needs to recover the database to a last known point of consistency. In order to perform this recovery, the DBMS requests the transaction log file entries to be read from what it believes is the database's transaction log file. However, the requests are intercepted and translated (unbeknownst to the DBMS) instead into requests to read the transaction log portion of the backup file. The DBMS then uses the transaction log records to bring the database to a point of transactional consistency, unaware that the log records are actually being sourced from the transaction log portion of the backup file. All changes made to the data during the recovery phase and later during the execution of any TSQL statements which insert, update, or delete data are routed into a cache file. Accordingly, a "virtual" database is created and used by the server DBMS engine as if it were a real database.

82 citations


Patent
01 Feb 2010
TL;DR: In this paper, the authors propose an architecture that eliminates the need for on-disk full backups by retaining only the changes that have occurred in a separate table, using incremental capture of changed data (e.g., in an XML format).
Abstract: Architecture that eliminates the need for on-disk full backups of data by retaining only the changes that have occurred in a separate table. Thus, the architecture provides for incremental recovery of incremental changes in a relational database (e.g., SQL). The architecture provides improved recovery time and recovery point objectives. By using the incremental capture of changed data (e.g., in an XML format), the capability is provided to capture schema changes, query the incremental change data and efficiently restore user data to an earlier point-in-time state. Changes (e.g., insert, update and delete operations) are tracked (e.g., continuously) by a set of triggers and the incrementally captured changed rows are inserted in a data capture table (a differential change “delta” table) in a human-readable format (e.g., XML). Rollback is also provided.

60 citations
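
A toy version of the trigger-based delta capture and single-step rollback, using sqlite3; the patent records captured rows as XML, while this sketch uses plain columns, and the table and trigger names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE delta (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                    op TEXT, row_id INTEGER, old_balance INTEGER, new_balance INTEGER);

-- Triggers continuously capture every change into the delta table
-- (the patent stores the captured rows as XML; plain columns are used here).
CREATE TRIGGER cap_ins AFTER INSERT ON accounts BEGIN
  INSERT INTO delta(op, row_id, old_balance, new_balance)
  VALUES ('I', NEW.id, NULL, NEW.balance);
END;
CREATE TRIGGER cap_upd AFTER UPDATE ON accounts BEGIN
  INSERT INTO delta(op, row_id, old_balance, new_balance)
  VALUES ('U', OLD.id, OLD.balance, NEW.balance);
END;
""")

conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")

def undo_last_change(conn):
    """Roll the user table back by one captured change (one step toward an
    earlier point-in-time state). A real implementation would suppress
    capture while restoring."""
    seq, op, row_id, old = conn.execute(
        "SELECT seq, op, row_id, old_balance FROM delta "
        "ORDER BY seq DESC LIMIT 1").fetchone()
    if op == 'U':
        conn.execute("UPDATE accounts SET balance = ? WHERE id = ?", (old, row_id))
    elif op == 'I':
        conn.execute("DELETE FROM accounts WHERE id = ?", (row_id,))
    conn.execute("DELETE FROM delta WHERE seq = ?", (seq,))

undo_last_change(conn)
print(conn.execute("SELECT * FROM accounts").fetchall())   # [(1, 100)]
```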


Patent
11 Jan 2010
TL;DR: In this paper, a system and method for modifying execution scripts associated with a job scheduler may include monitoring for the execution of a task to determine when the task has failed, and details of the failed task may be identified and used to attempt recovery from the task failure.
Abstract: A system and method for modifying execution scripts associated with a job scheduler may include monitoring for the execution of a task to determine when the task has failed. Details of the failed task may be identified and used to attempt recovery from the task failure. After initiating any recovery tasks, execution of the recovery tasks may be monitored, and one or more supplementary recovery tasks may be identified and executed, or the original task may be rerun at an appropriate execution point based on the initial point of failure. Thus, when a task has failed, an iterative process may begin in which the system attempts to roll back the effects of the failed task; depending on the success of the rollback, the initial task can be rerun at the point of failure, or further recovery tasks may be executed.

51 citations


Patent
11 Mar 2010
TL;DR: In this paper, an apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria.
Abstract: An apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria, including, but not limited to, one or more of the following: the number of queued transactions; the rate of growth of the number of queued transactions; the calendrical time; estimates of the time required to execute the individual transactions; a measure of importance of the individual transactions; the transaction types of the individual transactions; a measure of importance of one or more data updated by the individual transactions; a measure of availability of one or more resources; a current estimate of the time penalty associated with shadowing a page of memory; and the probability of rollback for the individual transactions, and for the merged transaction.

47 citations
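
A sketch of one way such a merge decision could be scored, weighting a few of the criteria listed in the abstract; the thresholds, weights, and function signature are illustrative assumptions, not the patent's actual policy.

```python
def should_merge(queue, growth_rate, est_exec_seconds, rollback_prob):
    """Decide whether to merge the queued transactions into one backup write.

    queue            -- the list of queued transactions
    growth_rate      -- transactions added to the queue per second
    est_exec_seconds -- estimated execution time of each queued transaction
    rollback_prob    -- estimated probability the merged transaction rolls back
    """
    if len(queue) < 2:
        return False
    score = 0.0
    score += min(len(queue) / 50.0, 1.0)            # many queued transactions
    score += min(growth_rate / 10.0, 1.0)           # queue growing quickly
    score += min(sum(est_exec_seconds) / 5.0, 1.0)  # individually slow transactions
    score -= rollback_prob                          # merging is wasted work if it rolls back
    return score > 1.5
```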


Patent
12 May 2010
TL;DR: In this paper, the authors aim to achieve scale-out of a distributed database system that requires real-time updates by dividing the database system into two or more database domains.
Abstract: The purpose of this invention is to achieve scale-out of a distributed database system that requires real-time updates, which is accomplished by dividing the database system into two or more database domains, so that even larger databases can be handled while providing even higher performance. Assuming that the large-scale database system has been distributed across two or more of the above-mentioned database domains, multi-transaction processing with real-time updates of database objects spanning those domains is achieved by executing the multi-transaction processing against the database meta-information storage management part in the database meta-information management repository device, applying partition topology or replication topology technology for high-speed exchange and synchronization of meta-information such as status information.

34 citations


Patent
29 Jan 2010
TL;DR: In this paper, a control logic device performs a local rollback in a parallel supercomputing system, checking whether an error occurs during the rollback interval and restarting the interval if an error occurs and no unrecoverable condition has occurred.
Abstract: A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.

33 citations
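
A minimal sketch of the restart logic described above, assuming the interval is a list of callables and the snapshot stands in for the hardware's cache-based checkpoint; UnrecoverableError and the retry limit are assumptions for illustration.

```python
import copy

class UnrecoverableError(Exception):
    """Marker for conditions the control logic cannot roll back past (assumed name)."""

def run_with_local_rollback(state, interval, max_retries=3):
    """Run one rollback interval; on a recoverable error, restore the snapshot
    taken at the start of the interval and restart it."""
    for _ in range(max_retries):
        snapshot = copy.deepcopy(state)       # checkpoint at the interval boundary
        try:
            for instruction in interval:
                instruction(state)
            return state                      # interval completed without error
        except UnrecoverableError:
            raise                             # cannot restart locally; escalate
        except Exception:
            state = snapshot                  # error detected: restart the interval
    raise RuntimeError("local rollback retries exhausted")
```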


Patent
16 Jul 2010
TL;DR: In this paper, the distributed storage unit can invoke a write lock for all encoded data slices having the same slice name as the slice being currently written, and a rollback timer started.
Abstract: Multiple revisions of an encoded data slice can be stored in a distributed storage unit. Before writing a new revision of an encoded data slice to storage, the distributed storage unit can invoke a write lock for all encoded data slices having the same slice name as the slice being currently written. The slice being currently written can be stored in temporary storage, and a rollback timer started. If a commit command is received before expiration of the rollback timer, the currently written slice can be permanently stored and made accessible for read requests. If the rollback timer expires prior to the storage unit receiving a commit command, however, a previously stored revision will be used.

32 citations
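
A small in-memory sketch of the write-lock / rollback-timer protocol the abstract describes; the class name, timer default, and lazy expiry check are assumptions made for illustration.

```python
import time

class SliceStore:
    """Toy model of a distributed storage unit holding encoded data slices."""
    def __init__(self, rollback_seconds=5.0):
        self.committed = {}      # slice_name -> last committed revision
        self.pending = {}        # slice_name -> (data, rollback deadline)
        self.rollback_seconds = rollback_seconds

    def write(self, slice_name, data):
        if slice_name in self.pending:
            raise RuntimeError("write lock held for this slice name")
        # Keep the new revision in temporary storage and start the rollback timer.
        self.pending[slice_name] = (data, time.monotonic() + self.rollback_seconds)

    def commit(self, slice_name):
        data, deadline = self.pending.pop(slice_name)
        if time.monotonic() <= deadline:
            self.committed[slice_name] = data   # make the new revision readable
        # else: timer already expired, the previous revision remains current

    def read(self, slice_name):
        entry = self.pending.get(slice_name)
        if entry and time.monotonic() > entry[1]:
            del self.pending[slice_name]        # lazy rollback on timer expiry
        return self.committed.get(slice_name)   # reads see the last committed revision
```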


Patent
Xiaoxin Chen1
22 Dec 2010
TL;DR: In this paper, an in-memory database management system (DBMS) in a virtual machine (VM) preserves the durability property of the ACID model for database management without significantly slowing performance due to accesses to disk.
Abstract: An in-memory database management system (DBMS) in a virtual machine (VM) preserves the durability property of the ACID model for database management without significantly slowing performance due to accesses to disk. Input data relating to a database transaction is recorded into a replay log and forwarded to the VM for processing by the DBMS. An indication of a start of processing by the DBMS of the database transaction is received after receipt of the input data by the VM and an indication of completion of processing of the database transaction by the DBMS is subsequently received, upon which outgoing output data received from the VM subsequent to the receipt of the completion indication is delayed. The delayed outgoing output data is ultimately released upon a confirmation that all input data received prior to the receipt of the start indication has been successfully stored into the replay log, thereby preserving durability for the database transaction.

29 citations


Patent
30 Sep 2010
TL;DR: In this article, a database system providing high performance database versioning is described, where a method for restoring databases to a consistent version including creating a cache view of a shared cache and performing undo or redo operations on the cache view only when a log sequence number falls within a certain range.
Abstract: A database system providing high performance database versioning is described. In a database system employing a transaction log, a method for restoring databases to a consistent version including creating a cache view of a shared cache and performing undo or redo operations on the cache view only when a log sequence number falls within a certain range.
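
A loose sketch of a cache view restored to a target log sequence number, assuming the shared cache holds the newest page images and each log record carries before and after images; the record layout and field names are illustrative, not from the patent.

```python
def versioned_view(shared_cache, log_records, target_lsn):
    """Build a *view* of the shared cache at target_lsn without modifying the cache.

    Each log record is assumed to be (lsn, page_id, before_image, after_image).
    Records newer than the target are undone on the view; records at or below it
    are (re)done, so the view reaches a consistent version at target_lsn.
    """
    view = dict(shared_cache)                       # copy-on-read cache view
    for lsn, page, before, after in sorted(log_records, reverse=True):
        if lsn > target_lsn:
            view[page] = before                     # undo: newer than the target version
    for lsn, page, before, after in sorted(log_records):
        if lsn <= target_lsn:
            view[page] = after                      # redo: bring the view up to target_lsn
    return view
```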

Patent
30 Aug 2010
TL;DR: In this paper, a method and system for replicating database data is provided, where one or more standby database replicas can be used for servicing read-only queries, and the amount of storage required is scalable in the size of the primary database storage.
Abstract: A method and system for replicating database data is provided. One or more standby database replicas can be used for servicing read-only queries, and the amount of storage required is scalable in the size of the primary database storage. One technique is described for combining physical database replication to multiple physical databases residing within a common storage system that performs de-duplication. Having multiple physical databases allows for many read-only queries to be processed, and the de-duplicating storage system provides scalability in the size of the primary database storage. Another technique uses one or more diskless standby database systems that share a read-only copy of physical standby database files. Notification messages provide consistency between each diskless system's in-memory cache and the state of the shared database files. Use of a transaction sequence number ensures that each database system only accesses versions of data blocks that are consistent with a transaction checkpoint.

Proceedings ArticleDOI
12 Aug 2010
TL;DR: The simulation results show that ART can significantly reduce the number of replications and improve scalability compared with existing mechanisms.
Abstract: In large-scale Grid computing environments, providing fault tolerance is required for both scientific computation and file-sharing to increase their reliability. In previous works, several mechanisms were proposed for Grids or distributed computing systems. However, some of them used only space redundancy (hardware replication), and others used only time redundancy (checkpointing and rollback). For this reason, the existing mechanisms are inefficient in terms of their resource utilization on Grids. The main goal of ART is to reduce the number of replications by using a checkpointing and rollback scheme for each replication. In ART, the minimum number of replications is adaptively selected based on analysis of the probability of successful execution within the given deadline and the reliability requirement of each task. Our simulation results show that ART can significantly reduce the number of replications and improve scalability compared with existing mechanisms.
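
A sketch of the replica-count selection the abstract describes, reduced to a closed form: pick the smallest replica count whose combined success probability meets the reliability requirement. The per-replica probability is assumed to already account for checkpointing and rollback; the function and parameter names are illustrative simplifications of the paper's analysis.

```python
def min_replicas(p_success_by_deadline, reliability_required, max_replicas=16):
    """Smallest r with 1 - (1 - p)^r >= reliability_required, assuming
    independent replicas that each meet the deadline with probability p."""
    for r in range(1, max_replicas + 1):
        if 1.0 - (1.0 - p_success_by_deadline) ** r >= reliability_required:
            return r
    return max_replicas

# e.g. a task with 0.9 per-replica success probability and a 0.999 requirement
print(min_replicas(0.9, 0.999))   # -> 3 replicas instead of a fixed large number
```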

Journal ArticleDOI
TL;DR: A column dependency-based approach is proposed to identify the affected transactions which need to be compensated along with the malicious transactions; to ensure durability, committed non-malicious transactions are then re-executed in a manner that retains database consistency.
Abstract: Even state-of-the-art database protection mechanisms often fail to prevent malicious attacks. Since, in a database environment, the modifications made by one transaction may affect the execution of some of the later transactions, the damage caused by malicious (bad) transactions spreads. Following traditional log-based recovery schemes, one can roll back (undo) the effect of all the transactions, both malicious and non-malicious. In such a scenario, even the unaffected transactions are rolled back. In this paper, we propose a column dependency-based approach to identify the affected transactions which need to be compensated along with the malicious transactions. To ensure durability, committed non-malicious transactions are then re-executed in a manner that retains database consistency. We present a static recovery algorithm as well as an on-line version of the same and prove their correctness. A detailed performance evaluation of the proposed scheme with the TPC-C benchmark suite is also presented.

Patent
05 Mar 2010
TL;DR: In this article, a rollback checkpoint for a step in an executable process is established, and a change request is received, and the step with the established rollback checkpoint is adjusted.
Abstract: A computer-readable medium, computer-implemented method, and system are provided. In one embodiment, a rollback checkpoint for a step in an executable process is established, and the executable process is executed. A change request is received, and the step with the established rollback checkpoint is adjusted. Any subsequent steps of the executable process are also adjusted.

Proceedings ArticleDOI
23 Dec 2010
TL;DR: A checkpoint rollback strategy with double modular redundancy is employed, using a Markov model capturing the behavior of the proposed scheme, to calculate the probability of task completion against faults that occur in a Poisson process.
Abstract: Proposed here is a novel architecture for a fault-tolerant real-time system. We employ a checkpoint rollback strategy with double modular redundancy. Main consideration is given to how to recover from both transient and permanent faults without any built-in fault-detection modules or spare processors. Besides state comparison between duplicated tasks, the system has access to the state of the previous checkpoint so that the integrity of a processor can be checked. Using a Markov model capturing the behavior of the proposed scheme, we calculate the probability of task completion against faults that occur in a Poisson process. The optimal number of checkpoints is selected so as to maximize the probability of task completion.
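
The paper selects the checkpoint count by maximizing completion probability with a Markov model; as a rough stand-in, the sketch below uses the classic expected-time approximation for checkpointing under Poisson faults with rollback to the last checkpoint. The formula and parameter names are a simplification assumed for illustration, not the paper's model.

```python
import math

def expected_time(work, n_checkpoints, fault_rate, checkpoint_cost):
    """Expected completion time of `work` split into n+1 equal segments, where a
    fault rolls the segment back to its last checkpoint (classic exponential-retry
    approximation: per-segment cost (e^(lambda*(seg+c)) - 1) / lambda)."""
    seg = work / (n_checkpoints + 1)
    per_segment = (math.exp(fault_rate * (seg + checkpoint_cost)) - 1.0) / fault_rate
    return (n_checkpoints + 1) * per_segment

def best_checkpoint_count(work, fault_rate, checkpoint_cost, n_max=50):
    """Checkpoint count that minimizes the expected completion time."""
    return min(range(n_max + 1),
               key=lambda n: expected_time(work, n, fault_rate, checkpoint_cost))

print(best_checkpoint_count(work=100.0, fault_rate=0.05, checkpoint_cost=0.5))
```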

Patent
Yoshihiro Okada1
15 Sep 2010
TL;DR: In this article, an operation management server for managing the operation of an application using Java® includes: an update processing execution unit which, when a method involving data update is called, obtains the pre-update data targeted by the update and executes update processing corresponding to the method; a first abnormal end determination unit which determines whether the update processing has ended abnormally; and a first rollback unit which returns the target data to the pre-update data.
Abstract: When a failure occurs, the state of the system is returned to its state before the failure. An operation management server for managing the operation of an application using Java® includes: an update processing execution unit which, when a method involving data update is called, obtains the pre-update data targeted by the update and executes update processing corresponding to the method; a first abnormal end determination unit which determines whether the update processing has ended abnormally; a first rollback unit which, when the update processing has ended abnormally, returns the update target data to the pre-update data; an operation processing execution unit which, when a method not involving data update is called, executes operation processing corresponding to that method; a second abnormal end determination unit which determines whether the operation processing has ended abnormally; and a second rollback unit which, when the operation processing has ended abnormally, returns the state to the state before execution of the operation processing.

Patent
20 Jan 2010
TL;DR: In this article, the authors proposed a rollback method based on circuit switching, which reduces the failure rate of the CS rollback and improves the reliability of the user experience by using a pre-acquired wireless access capability of the target cell.
Abstract: The invention discloses a rollback method, a rollback system, access network equipment and core network equipment based on circuit switching. The method comprises the steps of: receiving a CS call request based on the circuit switching; acquiring the wireless access capability of a target cell; and transmitting a CS rollback command containing the identification of a matching cell to user equipment when the matching cell matched with the wireless access capability of the user equipment exists in the target cell. When the embodiment of the invention is used for CS rollback, the wireless access capability of the target cell is pre-acquired, so the target cell matched with the wireless access capability supported by the user equipment can be aimingly transmitted to the user equipment when the rollback command is transmitted, and the user equipment can successfully roll back to the matched target cell, so the method reduces the failure rate of the CS rollback and improve the reliability of the CS rollback and user experience.

Patent
09 Feb 2010
TL;DR: In this paper, an active system and a standby system are used to limit the time needed to reflect data updated in the active system on the standby system, and to limit the time needed for rollback of the data performed in the standby system.
Abstract: Provided is a computer system including an active system and a standby system. The active system generates an after-update log when an update request is received and sends the after-update log to the standby system at a predetermined timing. The standby system generates a before-update log based on the after-update log sent from the active system and the stored data, updates the stored data based on the after-update log after the before-update log is generated, and, when a rollback request is received, rolls the data back to the pre-update data based on the generated before-update log. Accordingly, it becomes possible to limit the increase in the time needed to reflect data updated in the active system on the standby system, and to limit the increase in the time needed for rollback of the data performed in the standby system.
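
A compact sketch of the standby-side protocol described above: derive a before-update log locally from the incoming after-update log plus the stored data, apply the updates, and roll back from the local log on request. The key/value data model and class name are assumptions for illustration.

```python
class Standby:
    """Toy standby node keeping a locally generated before-update log."""
    def __init__(self, data):
        self.data = data            # key -> value, the standby's stored data
        self.before_log = []        # (key, previous value) in apply order

    def apply_after_update_log(self, after_log):
        for key, new_value in after_log:
            # Generate the before-update entry from locally stored data first,
            # then apply the after-update value.
            self.before_log.append((key, self.data.get(key)))
            self.data[key] = new_value

    def rollback(self):
        # Undo in reverse order using the locally generated before-update log,
        # so no extra log needs to be shipped from the active system.
        for key, old_value in reversed(self.before_log):
            if old_value is None:
                self.data.pop(key, None)
            else:
                self.data[key] = old_value
        self.before_log.clear()
```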

Patent
15 Sep 2010
TL;DR: A rollback-steering routing algorithm for a packet/circuit-switching on-chip router, and the router that uses it, are described; the algorithm is adaptive, performing routing arbitration according to the on-chip network congestion condition and dynamically changing the routing path according to the occupation of link resources.
Abstract: The invention discloses a rollback-steering routing algorithm for a packet/circuit-switching on-chip router and a router that uses the algorithm. The algorithm is an adaptive routing algorithm which performs routing arbitration according to the on-chip network congestion condition and dynamically changes the routing path according to the occupation of link resources. The algorithm records the output ports meeting the routing condition and reselects an output port after encountering congestion, realizing rollback routing so as to fully use network resources, effectively avoid congestion, improve average throughput and reduce average packet delay. The router comprises an input state machine, a priority encoder, an address decoder, an arbiter and an output state machine, which are sequentially connected. When selecting the routing path, the router does not retrace a route in the 180-degree direction and does not route in a direction away from the target node, so it does not cause deadlock or livelock. The algorithm and the router of the invention are low in cost and high in performance and are suitable for realizing a high-performance on-chip network system.

Proceedings ArticleDOI
16 Jul 2010
TL;DR: This paper summarizes and presents some types of security challenges based on existing viewpoints, then analyzes the similar challenges in Xen and discusses potential directions and implementations for modifying it to adapt to these challenges.
Abstract: While virtual machines provide significant flexibility for users and administrators to clone, snapshot, migrate and roll back with unprecedented ease, they also bring forth new problems and negative effects on the security of computing environments. Applications and operating systems are forced to run in a dynamic and unregulated computing environment, which introduces differences so radical that it is difficult for the administrator to maintain the security of the computing environment. This paper summarizes and presents some types of security challenges based on existing viewpoints, then analyzes the similar challenges in Xen and discusses potential directions and implementations for modifying it to adapt to these challenges.

Journal ArticleDOI
TL;DR: A new unacknowledged message list (UML) scheme is presented for efficient and accurate GVT computation; it also provides an effective solution for both the simultaneous reporting and transient message problems in the context of a synchronous algorithm.
Abstract: The Time Warp algorithm is a well-known mechanism for optimistic synchronization in a parallel discrete-event simulation (PDES) system. It offers a run-time recovery mechanism that deals with causality errors. For an efficient use of rollback, the global virtual time (GVT) computation is performed to reclaim memory, commit output, detect termination, and handle errors. This paper presents a new unacknowledged message list (UML) scheme for an efficient and accurate GVT computation. The proposed UML scheme is based on the assumption that certain variables are accessible by all processors. In addition to GVT computation, the proposed UML scheme provides an effective solution for both the simultaneous reporting and transient message problems in the context of a synchronous algorithm. To support the proposed UML approach, two algorithms are presented in detail, with a proof of correctness. Empirical evidence from an experimental study of the proposed UML scheme on the PHOLD benchmark fully confirms the theoretical outcomes of this paper.
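
A simplified illustration of the GVT lower bound the scheme maintains: the minimum over every processor's local virtual time and the send timestamps of messages that have not yet been acknowledged (which covers transient messages still in flight). The data layout is an assumption for illustration, not the paper's exact algorithm.

```python
def compute_gvt(local_virtual_times, unacknowledged_messages):
    """GVT = min over all LPs' local virtual times and the timestamps of
    unacknowledged (possibly in-transit) messages."""
    candidates = list(local_virtual_times)
    candidates += [timestamp for (_dest, timestamp) in unacknowledged_messages]
    return min(candidates)

# Events with timestamp below the GVT can never be rolled back, so their memory
# can be reclaimed (fossil collection) and their output committed.
gvt = compute_gvt([42.0, 37.5, 51.2], [("lp3", 35.0)])
print(gvt)   # 35.0
```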

Patent
15 Dec 2010
TL;DR: In this paper, an operation intercept based repentance method of a distributed critical task system, comprising the following steps of: firstly carrying out redundancy backup on important files of the system to ensure the recoverability of data information; secondly intercepting the operation information of the operating system by an operation intercepter in real time and writing into an operation log; and analyzing captured operation records by an operator analyzer and writing unrecoverable operation or operation which does not need to be recovered into redundant files by an operating system.
Abstract: The invention provides an operation intercept based repentance method of a distributed critical task system, comprising the following steps of: firstly carrying out redundancy backup on important files of the system to ensure the recoverability of data information; secondly intercepting the operation information of the system by an operation intercepter in real time and writing into an operation log; and analyzing captured operation records by an operation analyzer and writing unrecoverable operation or operation which does not need to be recovered into redundant files by an operation storage and processing the operation which needs to be recovered by a repenter. When the system is faulty, the repenter carries out repentance recovery on the system through three continuous steps of operation rollback, operation restoration, operation resetting and solves the problem of inconsistency possibly generated in the repenting process through a consistency manager according to a graded compensation strategy.

01 Jan 2010
TL;DR: In this article, the authors propose a formal model to reason about properties of long-running transactions, and about systems exploiting them, by extending name passing calculi with dedicated primitives, and one of their main concerns is the interplay between communication and transactions.
Abstract: Computing systems are becoming more and more complex, composed by a huge number of components interacting in different ways. Also, interactions are frequently loosely-coupled, in the sense that each component has scarce information on its communication partners, which may be unreliable (e.g., they may disconnect, or may not follow the expected protocol). Communication may be unreliable too, for instance in the case of wireless networks. Nevertheless, applications are expected to provide reliable services to their users. For these reasons, a main concern is the management of unexpected events. In the case of loosely-coupled distributed systems (e.g., for web services), unexpected events are managed according to the long-running transaction approach. A long-running transaction is a computation that either commits (i.e., succeeds), or it aborts and is compensated. Compensating a (long-running) transaction means executing a sequence of actions that revert the effect of the actions that lead to abortion, so as to reach a consistent state. This is a relaxation of the properties of ACID transactions from database theory [12], based on the fact that in the systems we are interested in rollback cannot always be perfect (e.g., one can not undo the sending of an e-mail, and if one tries to undo an airplane reservation, (s)he may have to pay some penalty). Recently, many proposals of formal models to reason about properties of long-running transactions, and about systems exploiting them, have been put forward. We concentrate on process calculi, since they are a good tool to experiment with different primitives and compare their relative merits and drawbacks. Later on, the results of these experiments can drive the design of real languages. Process calculi approaches to long-running transactions divide in two main categories: interaction-based calculi and flow composition approaches. Interaction-based calculi are obtained by extending name passing calculi with dedicated primitives, and one of their main concerns is the interplay between communication and transactions. We recall among them the πt-calculus [1], c-join [5], webπ [15], dcπ [18], the ATc calculus [2] and SOCK [11]. Flow composition approaches instead deal with the composition of atomic activities, studying how to derive compensations for complex activities from compensations of basic ones. We recall for instance SAGAs [10], StAC [8], cCSP [7] and the SAGAs calculi [6]. Some of the primitives for long-running transactions have been introduced in real languages such as WS-BPEL [17] and Jolie [16]. Long-running transactions have also been analyzed in a choreographic setting in [9]. However, only a

Journal ArticleDOI
TL;DR: An algorithm is presented, based on intelligent checkpointing of transactions as they proceed and, in case of conflict, rolling them back to a safe, consistent, intermediate checkpoint, thus reducing conflict costs; it can yield as much as a 17% reduction in the conflict costs originating from the need to redo all the shared-memory read operations.
Abstract: A Software Transactional Memory is a concurrency control mechanism that executes multiple concurrent, optimistic, lock-free, atomic transactions, thus alleviating many problems associated with conventional mutual exclusion primitives such as monitors and locks. With the advent of massive multi-cores, more transactions can be initiated concurrently, which however results in an increase in the percentage of conflicting transactions. Each time a transaction conflicts, it imposes a significant cost on the system, originating from the need to abort and redo all the operations, including the costly shared-memory read operations, thus making the overall system significantly heavy and impractical. We present an algorithm, Clustered Checkpointing and Partial Rollback (CCPR), for reducing the conflict costs of transactions in the face of increasing conflicts. The algorithm is based on intelligent checkpointing of transactions as they proceed and, in case of conflict, rolling them back to a safe, consistent, intermediate checkpoint, thus reducing conflict costs. The intelligence of the algorithm lies in the fact that as conflicts decrease, the checkpointing costs go down; when conflicts increase, the checkpointing costs increase but remain much less than the savings obtained by the partial rollback of the conflicting transactions. We simulated several applications in the CCPR framework and found that it can yield as much as a 17% reduction in the conflict costs originating from the need to redo all the shared-memory read operations.
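
A very small sketch of the partial-rollback idea: reads are clustered per checkpoint and validated against a shared version table, and on conflict the transaction returns to the earliest checkpoint whose reads are all still valid instead of restarting from scratch. The class layout is an illustrative assumption, not the CCPR implementation.

```python
class Transaction:
    """Toy STM transaction with clustered checkpoints of its read set."""
    def __init__(self, versions):
        self.versions = versions              # shared table: address -> version number
        self.checkpoints = [[]]               # one list of (addr, seen version) per checkpoint

    def checkpoint(self):
        self.checkpoints.append([])           # start a new cluster of reads

    def read(self, addr):
        # Record the version observed for this address in the current cluster.
        self.checkpoints[-1].append((addr, self.versions[addr]))
        return self.versions[addr]            # value itself omitted in this sketch

    def rollback_point_on_conflict(self):
        """Index of the earliest checkpoint invalidated by a concurrent commit,
        or None if every recorded read is still current (safe to commit)."""
        for i, reads in enumerate(self.checkpoints):
            if any(self.versions[addr] != seen for addr, seen in reads):
                return i                      # redo work from checkpoint i onward only
        return None
```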

Patent
13 Sep 2010
TL;DR: In this paper, a method is presented for replicating transaction data from a source database to a target database, wherein the transaction data is communicated from a change queue associated with the source database to the target database.
Abstract: A method is provided for replicating transaction data from a source database to a target database wherein the transaction data is communicated from a change queue associated with the source database to the target database. An initial path is provided between the change queue and the target database for transaction data to flow. The initial path has a maximum transaction load capacity. It is then detected whether the current transaction load is close or equal to the maximum transaction load capacity of the initial path. If so, another path is provided between the change queue and the target database. Also, a method is provided for replicating transaction data from a source database to a target database wherein an applier associated with the target database has a maximum transaction threshold limit. The applier normally posts transaction data to the target database only upon receipt of a commit step or operation associated with the respective transaction data. First, it is detected whether the maximum transaction threshold limit of the applier has been reached. If so, a commit step or operation is prematurely conducted on at least some of the transaction data in the applier, thereby causing the transaction data to become posted to the target database and deleted from the applier.

01 May 2010
TL;DR: In this article, the authors show that high pressure and ultra high pressure rocks are chiefly exhumed where subduction zones display transient behaviours, which lead to contrasted flow regimes in the subduction mantle wedge.
Abstract: The burial–exhumation cycle of crustal material in subduction zones can either be driven by the buoyancy of the material, by the surrounding flow, or by both. High pressure and ultrahigh pressure rocks are chiefly exhumed where subduction zones display transient behaviours, which lead to contrasted flow regimes in the subduction mantle wedge. Subduction zones with stationary trenches (mode I) favour the burial of rock units, whereas slab rollback (mode II) moderately induces an upward flow that contributes to the exhumation, a regime that is reinforced when slab dip decreases (mode III). Episodic regimes of subduction that involve different lithospheric units successively activate all three modes and thus greatly favour the exhumation of rock units from mantle depth to the surface without need for fast and sustained erosion.

Patent
23 Jul 2010
TL;DR: In this article, the authors present a system for rapidly rolling back and retrying a data migration between a first and a second storage system, where the data of the baseline dataset, first incremental dataset, and second incremental data set is made available to the client.
Abstract: Methods and systems for rapidly rolling back and retrying a data migration between a first and a second storage system. In one embodiment, upon receiving a request at a provisioning manager to perform a rollback of a first data migration, the first storage system merges, to a baseline dataset, a first incremental dataset received by the second storage system after the first data migration. In another embodiment, upon receiving a request at a provisioning manager to perform a retry of the data migration, the second storage system merges, to the data received by the second storage system during and immediately after the first data migration, a second incremental dataset received by the first storage system after performance of the rollback. Throughout the migration rollback and retry, the data of the baseline dataset, first incremental dataset, and second incremental dataset is made available to the client.
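
A schematic sketch of the two merge steps described above, treating each dataset as a key/value mapping so that only incremental changes need to be folded in rather than re-copying everything; the function names and data model are assumptions for illustration.

```python
def rollback_migration(baseline, incremental_after_migration):
    """Roll back: fold the changes the second system received after the first
    migration back into the first system's baseline dataset."""
    merged = dict(baseline)
    merged.update(incremental_after_migration)
    return merged                     # first storage system is authoritative again

def retry_migration(migrated, incremental_after_rollback):
    """Retry: fold the changes the first system received after the rollback into
    the data already copied to the second system during the first migration."""
    merged = dict(migrated)
    merged.update(incremental_after_rollback)
    return merged                     # second storage system catches up incrementally
```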

Patent
16 Jun 2010
TL;DR: In this paper, a hardware transactional nesting method for supporting rollback of a conditional part is presented, where a global data set is maintained by each layer of a nested transaction, and the access condition of each layer to the data is recorded in the set.
Abstract: The invention discloses a hardware transactional nesting method supporting rollback of a conditional part. According to the method, a global data set is maintained by each layer of a nested transaction, and each layer's accesses to the data are recorded in the set. The method comprises the following steps: if the data set accessed by the conflicting layer's transaction does not overlap with the data sets of the transactions preceding it, roll back to the initial position of the conflicting layer's transaction; if the conflicting layer's transaction shares a conflict variable with the data set of a preceding transaction, roll back to the initial position of the earliest transaction with the same conflict variable. The method reduces the large cost of rolling the transaction back to the outermost layer in closed-nesting mode, and effectively improves transactional nesting performance without greatly increasing the complexity of the hardware.