Topic

Transactional memory

About: Transactional memory is a research topic. Over its lifetime, 2,365 publications have been published on this topic, receiving 60,818 citations.


Papers
Patent
06 May 2009
TL;DR: In this paper, the authors propose a method for realizing transactional memory that comprises compiling a segment of program statements into byte codes, identifying and extracting the transaction-related byte codes, and marking the shared objects therein.
Abstract: The invention provides a method for realizing transactional memory. The method comprises compiling a segment of program statements into byte codes; identifying and extracting the transaction-related byte codes and marking the shared objects therein; and compiling the transaction-related byte codes into a transactional version of native code, inserting TxLoad or TxStore instructions, or calls to a software transactional memory library interface, into the compiled result according to the semantics of the byte codes. The invention adopts a TMSI protocol to accelerate read/write interception and conflict detection, effectively reducing the overhead of pure software transactional memory; compared with a pure hardware approach, hardware complexity is lower because not all functions of the transactional memory need to be realized in hardware.
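The instrumentation idea behind this patent can be illustrated with a small sketch. The Stm class, its txBegin/txCommit/txAbort methods, and the txLoad/txStore helpers below are hypothetical stand-ins for the TxLoad/TxStore instructions and the software STM library interface the abstract mentions; the sketch serializes transactions with one global lock and an undo log rather than using the TMSI protocol described in the patent.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical software-STM interface standing in for the inserted TxLoad/TxStore
// calls. A real implementation would intercept reads/writes for conflict detection;
// this sketch just serializes transactions and keeps an undo log for aborts.
final class Stm {
    private static final ReentrantLock globalLock = new ReentrantLock();
    private static final ThreadLocal<List<Runnable>> undoLog =
            ThreadLocal.withInitial(ArrayList::new);

    static void txBegin()  { globalLock.lock(); undoLog.get().clear(); }
    static void txCommit() { undoLog.get().clear(); globalLock.unlock(); }
    static void txAbort() {
        List<Runnable> log = undoLog.get();
        for (int i = log.size() - 1; i >= 0; i--) log.get(i).run(); // roll back writes
        log.clear();
        globalLock.unlock();
    }

    // TxLoad: read a marked shared field inside a transaction.
    static int txLoad(int[] cell) { return cell[0]; }

    // TxStore: write a marked shared field inside a transaction, remembering the old value.
    static void txStore(int[] cell, int value) {
        int old = cell[0];
        undoLog.get().add(() -> cell[0] = old);
        cell[0] = value;
    }
}

public class TxDemo {
    // A "shared object" marked as transactional; an int[] stands in for a field.
    static final int[] balance = {100};

    public static void main(String[] args) {
        // Original statement:  balance += 25;
        // Instrumented (transactional) version emitted after compilation:
        Stm.txBegin();
        try {
            Stm.txStore(balance, Stm.txLoad(balance) + 25);
            Stm.txCommit();
        } catch (RuntimeException e) {
            Stm.txAbort();
            throw e;
        }
        System.out.println("balance = " + balance[0]); // balance = 125
    }
}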

5 citations

Proceedings ArticleDOI
18 Dec 2007
TL;DR: This tutorial begins by describing the architectural issues in multicore architectures and then describes transactional memory, a method that brings the idea of database transactions to the programming of these massive compute engines.
Abstract: Most vendors have realized that multicore is probably the best bet for meeting increasing transistor densities with scalable performance and reasonable control of power. The community is almost unanimous in the view that effective programming of these machines requires domain knowledge in parallel architectures, parallel programming and multithreaded programming. In this tutorial we begin by describing the architectural issues in multicore architectures. Clearly, the power/performance advantage of these architectures can only be exploited if we have both parallel applications (workloads) and "efficient" parallel programming techniques. We will review current techniques and also provide pointers towards "promising" research directions. We will also describe transactional memory, a method that brings the idea of database transactions to the programming of these massive compute engines.
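A minimal sketch of the "database transaction" programming style the tutorial refers to: the programmer marks a region atomic and the runtime makes it appear indivisible. The atomic() helper below is a hypothetical stand-in that simply uses one global lock; a real TM runtime would execute such regions optimistically and in parallel.

import java.util.concurrent.locks.ReentrantLock;

public class AtomicTransferDemo {
    private static final ReentrantLock tmRuntime = new ReentrantLock();

    // Stand-in for an atomic transaction block provided by a TM runtime.
    static void atomic(Runnable body) {
        tmRuntime.lock();
        try { body.run(); } finally { tmRuntime.unlock(); }
    }

    static int[] accountA = {100};
    static int[] accountB = {0};

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> atomic(() -> { accountA[0] -= 30; accountB[0] += 30; }));
        Thread t2 = new Thread(() -> atomic(() -> { accountA[0] -= 20; accountB[0] += 20; }));
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Invariant preserved: the total is still 100, regardless of interleaving.
        System.out.println("A=" + accountA[0] + " B=" + accountB[0]
                + " total=" + (accountA[0] + accountB[0]));
    }
}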

5 citations

Patent
30 Mar 1999
TL;DR: In this paper, a multi-memory technology card belonging to an individual is used to encrypt communications regarding the individual's identification, financial, medical and other personal information onto at least one memory of the card.
Abstract: According to the present invention, a system and method for personal banking with a multi-memory technology card is disclosed. A multi-memory technology card belonging to an individual is utilized to encrypt communications regarding the individual's identification, financial, medical and other personal information onto at least one memory of the card. The card has at least a transactional memory or chip memory that stores and runs an encryption application so as to encrypt communications stored in a large capacity memory (e.g., optical memory or thin film semiconductor memory) or within the remaining limited memory of the chip. This encrypted information, when decrypted, informs the reader (e.g., financial institution representative) of, among other things, the identification of the individual as a person having high net worth, thus allowing for increased speed of service at financial institutions that are initially unfamiliar with the individual.

5 citations

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter formalizes the requirement that unrelated transactions progress independently, without interference, even if they occur at the same time; it presents impossibility results and discusses some of the disjoint-access parallel STM implementations.
Abstract: Disjoint-access parallelism captures the requirement that unrelated transactions progress independently, without interference, even if they occur at the same time. That is, an implementation should not cause two transactions, which are unrelated at the high-level, i.e. they access disjoint sets of data items, to simultaneously access the same low-level shared memory locations. This chapter will formalize this notion and will discuss if and when STM can achieve disjoint-access parallelism, by presenting impossibility results and discussing some of the disjoint-access parallel STM implementations. For example, no dynamic STM can be disjoint-access parallel, if it ensures wait-freedom for read-only transactions and a weak liveness property, known as minimal progress, for update transactions. In fact, even if transactions are static, STM implementations cannot be disjoint-access parallel, when read-only transactions must be wait-free and invisible. These impossibility results hold even when only snapshot isolation is required for the STM, and not stronger conditions like opacity or strict serializability. The second of these impossibility results holds for serializable STM as well.
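The property can be illustrated with a small sketch of what disjoint-access parallelism rules out. Transactions T1 and T2 below access disjoint data items (x and y). A disjoint-access parallel implementation may only make them touch metadata of the items they actually access (the per-item locks), whereas an implementation that routes every transaction through one global lock makes unrelated transactions contend on the same low-level location. The names and locking scheme here are illustrative assumptions, not taken from the chapter.

import java.util.concurrent.locks.ReentrantLock;

public class DapSketch {
    static final class Item {
        int value;
        final ReentrantLock meta = new ReentrantLock(); // per-item metadata
    }

    static final Item x = new Item();
    static final Item y = new Item();
    static final ReentrantLock globalMeta = new ReentrantLock(); // shared metadata (violates DAP)

    // DAP-friendly: only metadata of the items actually accessed is touched.
    static void disjointAccessUpdate(Item item, int delta) {
        item.meta.lock();
        try { item.value += delta; } finally { item.meta.unlock(); }
    }

    // Not DAP: unrelated transactions still serialize on the same shared location.
    static void globallySerializedUpdate(Item item, int delta) {
        globalMeta.lock();
        try { item.value += delta; } finally { globalMeta.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> disjointAccessUpdate(x, 1)); // T1 accesses only x
        Thread t2 = new Thread(() -> disjointAccessUpdate(y, 1)); // T2 accesses only y
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("x=" + x.value + " y=" + y.value);
    }
}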

5 citations

Book ChapterDOI
TL;DR: This chapter overviews a set of recent techniques aimed at building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value.
Abstract: The synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. If the level of concurrency is too low, parallelism is hampered; conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, which also reduces energy efficiency. In this chapter we overview a set of recent techniques aimed at building "application-specific" performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although they share some base concepts in modeling system performance versus the degree of concurrency, these techniques rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
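A minimal sketch of dynamic concurrency tuning in the spirit described above: periodically measure (or predict) throughput at the current thread count and hill-climb toward the level that maximizes it. The chapter's techniques build richer analytic or machine-learned performance models; this controller, its parameters, and the measureThroughput hook are illustrative assumptions.

import java.util.function.IntToDoubleFunction;

public class ConcurrencyTuner {
    private int threads;
    private final int maxThreads;

    ConcurrencyTuner(int initialThreads, int maxThreads) {
        this.threads = initialThreads;
        this.maxThreads = maxThreads;
    }

    // One tuning step: probe the neighboring concurrency levels and move toward
    // whichever one the measured (or model-predicted) throughput favors.
    int tune(IntToDoubleFunction measureThroughput) {
        double here = measureThroughput.applyAsDouble(threads);
        double up   = threads < maxThreads ? measureThroughput.applyAsDouble(threads + 1) : Double.NEGATIVE_INFINITY;
        double down = threads > 1          ? measureThroughput.applyAsDouble(threads - 1) : Double.NEGATIVE_INFINITY;
        if (up >= here && up >= down)      threads++; // more parallelism still pays off
        else if (down > here)              threads--; // contention/rollbacks dominate: back off
        return threads;
    }

    public static void main(String[] args) {
        // Synthetic throughput curve peaking at 8 threads, mimicking thrashing
        // beyond the contention-limited optimum.
        IntToDoubleFunction model = n -> n * Math.exp(-n / 8.0);
        ConcurrencyTuner tuner = new ConcurrencyTuner(1, 32);
        for (int step = 0; step < 20; step++) {
            System.out.println("step " + step + ": threads = " + tuner.tune(model));
        }
    }
}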

5 citations


Network Information
Related Topics (5)
Topic                  Papers   Citations   Related
Compiler               26.3K    578.5K      87%
Cache                  59.1K    976.6K      86%
Parallel algorithm     23.6K    452.6K      84%
Model checking         16.9K    451.6K      84%
Programming paradigm   18.7K    467.9K      83%
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   16
2022   40
2021   29
2020   63
2019   70
2018   88