Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published within this topic, receiving 60,818 citations.


Papers
Book Chapter
14 Dec 2010
TL;DR: It is proved that the performance of Clairvoyant is tight, since there is no polynomial time contention management algorithm that is better than O((√s)^(1-ε))-competitive for any constant ε > 0, unless NP ⊆ ZPP.
Abstract: We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial time contention management algorithms. The first algorithm, Clairvoyant, is O(√s)-competitive, where s is the number of shared resources. This algorithm depends on explicitly knowing the conflict graph. The second algorithm, Non-Clairvoyant, is O(√s · log n)-competitive, with high probability, which is only an O(log n) factor worse, but does not require knowledge of the conflict graph, where n is the number of transactions. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is tight, since there is no polynomial time contention management algorithm that is better than O((√s)^(1-ε))-competitive for any constant ε > 0, unless NP ⊆ ZPP. To our knowledge, these results are significant improvements over the best previously known competitive ratio bound of O(s).

10 citations
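To make the notion of greedy contention management in the abstract above concrete, the sketch below shows the simplest timestamp-based greedy policy: when two transactions conflict, the older one wins and the younger one aborts. This is an illustrative assumption only, written in Java; it is not the Clairvoyant or Non-Clairvoyant algorithm from the paper, and the TxDescriptor and GreedyContentionManager names are hypothetical.

```java
// A minimal sketch of greedy, timestamp-based contention management (illustration
// only; not the Clairvoyant or Non-Clairvoyant algorithm from the paper).
final class TxDescriptor {
    final long startTime;            // logical start timestamp used as priority
    volatile boolean aborted = false;

    TxDescriptor(long startTime) {
        this.startTime = startTime;
    }
}

final class GreedyContentionManager {
    /** Called when transaction 'me' detects a conflict with 'other' on a shared resource. */
    void resolveConflict(TxDescriptor me, TxDescriptor other) {
        if (me.startTime <= other.startTime) {
            other.aborted = true;    // I started earlier: the younger transaction aborts
        } else {
            me.aborted = true;       // I started later: abort myself and retry
        }
    }
}
```

Because the oldest live transaction is never the one that aborts, some transaction always makes progress, which is the basic guarantee greedy contention managers build on.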

Book Chapter
08 Sep 2011
TL;DR: Two algorithms are presented that improve the state-of-the-art performance for TMs that support the concurrent execution of locks and transactions and demonstrate that an algorithm’s concurrent throughput potential does not always lead to realized performance gains.
Abstract: Transactional memory (TM) is a promising alternative to mutual exclusion. In spite of this, it may be unrealistic for TM programs to be devoid of locks due to their abundant use in legacy software systems. Consequently, for TMs to be practical they may need to manage the interaction of transactions and locks when they access the same shared memory. This paper presents two algorithms, one coarse-grained and one fine-grained, that improve the state-of-the-art performance for TMs that support the concurrent execution of locks and transactions. We also discuss the programming language constructs that are necessary to implement such algorithms and present analyses that compare and contrast our approach with prior work. Our analyses demonstrate that (i) in general, our proposed coarse- and fine-grained algorithms improve program concurrency, but (ii) an algorithm’s concurrent throughput potential does not always lead to realized performance gains.

9 citations
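One simple coarse-grained coexistence scheme for the problem the abstract above describes is sketched below, as an assumption for illustration rather than either algorithm from the paper: transactions and legacy lock-based critical sections never run at the same time. Transactions share the read side of a global read-write lock (the underlying TM still resolves transaction-transaction conflicts), while any lock-based critical section takes the write side and runs alone. The LockTxArbiter name is hypothetical.

```java
// A minimal sketch of a coarse-grained scheme for letting transactions and
// legacy lock-based code share memory safely (illustration only; not the
// paper's algorithm).
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class LockTxArbiter {
    private static final ReentrantReadWriteLock GLOBAL = new ReentrantReadWriteLock();

    /** Wrap the body of every memory transaction. */
    static void runTransaction(Runnable txBody) {
        GLOBAL.readLock().lock();
        try {
            txBody.run();             // transactions may overlap with each other
        } finally {
            GLOBAL.readLock().unlock();
        }
    }

    /** Wrap every legacy lock-based critical section. */
    static void runLockedRegion(Runnable criticalSection) {
        GLOBAL.writeLock().lock();
        try {
            criticalSection.run();    // excludes all transactions and other regions
        } finally {
            GLOBAL.writeLock().unlock();
        }
    }
}
```

The price of this simplicity is that a single lock-based critical section blocks every transaction in the system, which is exactly the lost concurrency a fine-grained scheme tries to recover.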

Proceedings Article
01 Jan 2010
TL;DR: The results show that TM is attractive under low contention workloads, while spinlocks can tolerate high contention workloads well, and that in some cases these approaches can beat a simple implementation of a traditional database lock manager by an order of magnitude.
Abstract: Currently, hardware trends include a move toward multicore processors, cheap and persistent variants of memory, and even sophisticated hardware support for mutual exclusion in the form of transactional memory. These trends, coupled with a growing desire for extremely high performance on short database transactions, raise the question of whether the hardware primitives developed for mutual exclusion can be exploited to run database transactions. In this paper, we present a preliminary exploration of this question. We conduct a set of experiments on both a hardware prototype and a simulator of a multi-core processor with transactional memory (TM). Our results show that TM is attractive under low contention workloads, while spinlocks can tolerate high contention workloads well, and that in some cases these approaches can beat a simple implementation of a traditional database lock manager by an order of magnitude.

9 citations
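For concreteness, the sketch below shows a minimal test-and-test-and-set spinlock of the kind such experiments typically compare against hardware TM for short transactions; it is an illustrative assumption, not code from the paper.

```java
// A minimal test-and-test-and-set spinlock sketch (illustration only).
import java.util.concurrent.atomic.AtomicBoolean;

final class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    void lock() {
        for (;;) {
            while (held.get()) {
                Thread.onSpinWait();                  // spin on a plain read first
            }
            if (held.compareAndSet(false, true)) {
                return;                               // acquired the lock
            }
        }
    }

    void unlock() {
        held.set(false);
    }
}
```

Spinning on the plain read before attempting the compare-and-set avoids hammering the lock word with failed atomic updates, which is part of why spinlocks of this form hold up reasonably well under high contention.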

Proceedings Article
15 Jun 1998
TL;DR: It is discussed how the extension structure of Rhino can solve performance problems previously unavoidable in traditional systems, and its benefits are quantified.
Abstract: This paper describes Rhino, a transactional memory service implemented on top of the SPIN operating system. Rhino is implemented as an extension that runs in the SPIN kernel's address space. We discuss how the extension structure of Rhino can solve performance problems previously unavoidable in traditional systems, and we quantify its benefits. We also introduce three alternative buffer management schemes and study their performance under various workloads.

9 citations

Book Chapter
01 Jan 2015
TL;DR: This chapter describes solutions for managing concurrency of distributed transactional memory accesses in partially replicated deployments and aims to help designers understand which execution model best fits their requirements.
Abstract: In this chapter we describe solutions for managing concurrency of distributed transactional memory accesses in partially replicated deployments. A system is classified as partially replicated if, for each shared object, there is more than one node responsible for storing the object, thus resulting in multiple copies available in the system. In contrast to full replication, where all objects are replicated on all nodes, partial replication allows storing a huge amount of data that, by nature, cannot fit in a single node, and improves scalability by (significantly) increasing the number of nodes serving transaction requests. Solutions that assume partially replicated deployments are categorized according to the mobility of shared objects. In the control-flow approach, shared objects are pinned to nodes for the entire system's lifetime, whereas in the data-flow approach, objects are allowed to change their residence node (also called the owner) whenever a transaction commits a new version of the object. Intuitively, under the data-flow model objects follow committing transactions, whereas under the control-flow model the transactions' flow is routed towards the objects. There are a number of key factors to evaluate before preferring one transaction execution model to another. This chapter surveys all of them and provides solutions suited to different deployments. The chapter aims to help designers understand which execution model best fits their requirements.

9 citations
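The distinction between the two execution models in the abstract above can be captured by how an object directory is maintained. The sketch below is only an illustration; the ObjectDirectory name and its methods are assumptions, not an API from the chapter.

```java
// A minimal sketch contrasting the control-flow and data-flow models of
// distributed TM (illustration only).
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ObjectDirectory {
    // objectId -> node currently holding the authoritative copy of the object
    private final Map<String, String> owner = new ConcurrentHashMap<>();

    ObjectDirectory(Map<String, String> initialPlacement) {
        owner.putAll(initialPlacement);
    }

    /** Control-flow model: objects never move; a transaction's read or write
     *  is shipped to the object's fixed home node. */
    String controlFlowRoute(String objectId) {
        return owner.get(objectId);           // always the original home node
    }

    /** Data-flow model: when a transaction commits a new version, ownership
     *  migrates to the committing node, so later transactions find the object
     *  where it was last written. */
    void dataFlowCommit(String objectId, String committingNode) {
        owner.put(objectId, committingNode);  // the object follows the transaction
    }
}
```

In the control-flow model the directory never changes after initial placement, whereas in the data-flow model every commit can relocate an object's owner.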


Network Information
Related Topics (5)
Compiler: 26.3K papers, 578.5K citations (87% related)
Cache: 59.1K papers, 976.6K citations (86% related)
Parallel algorithm: 23.6K papers, 452.6K citations (84% related)
Model checking: 16.9K papers, 451.6K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    16
2022    40
2021    29
2020    63
2019    70
2018    88