Revocation techniques for Java concurrency

01 Oct 2006-Concurrency and Computation: Practice and Experience (John Wiley & Sons, Ltd.)-Vol. 18, Iss: 12, pp 1613-1656


CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE
Concurrency Computat.: Pract. Exper. 2005; 00:1–41 Prepared using cpeauth.cls [Version: 2002/09/19 v2.02]
Revocation techniques for Java concurrency

Adam Welc, Suresh Jagannathan, Antony L. Hosking

Department of Computer Sciences
Purdue University
250 N. University Street
West Lafayette, IN 47907-2066, U.S.A.
SUMMARY
This paper proposes two approaches to managing concurrency in Java using a guarded region abstraction.
Both approaches use revocation of such regions: the ability to undo their effects automatically
and transparently. These new techniques alleviate many of the constraints that inhibit construction
of transparently scalable and robust concurrent applications. The first solution, revocable monitors,
augments existing mutual exclusion monitors with the ability to resolve priority inversion and deadlock
dynamically, by reverting program execution to a consistent state when such situations are detected,
while preserving Java semantics. The second technique, transactional monitors, extends the functionality
of revocable monitors by implementing guarded regions as lightweight transactions that can be executed
concurrently (or in parallel on multiprocessor platforms). The presentation includes discussion of design
and implementation issues for both schemes, as well as a detailed performance study to compare their
behavior with the traditional, state-of-the-art implementation of Java monitors based on mutual exclusion.
KEY WORDS: isolation, atomicity, concurrency, synchronization, Java, speculation
1. Introduction
Managing complexity is a major challenge in constructing robust large-scale server applications
(such as database management systems, application servers, airline reservation systems, etc.). In
a typical environment, large numbers of clients may access a server application concurrently. To
provide satisfactory response time and throughput, applications are often made concurrent. Thus, many
programming languages (e.g., Smalltalk, C++, ML, Modula-3, Java) provide mechanisms that enable
concurrent programming via a thread abstraction, with threads being the smallest unit of concurrent
E-mail: welc@cs.purdue.edu
E-mail: suresh@cs.purdue.edu
E-mail: hosking@cs.purdue.edu
Contract/grant sponsor: National Science Foundation; contract/grant number: IIS-9988637, CCR-0085792, STI-0034141
Copyright © 2005 John Wiley & Sons, Ltd.

execution. Another key mechanism offered by these languages is the notion of guarded code regions in
which accesses to shared data performed by one thread are isolated from accesses performed by other
threads, and all updates performed by a thread within a guarded region become visible to the other
threads atomically, once the executing thread exits the region. Guarded regions (e.g., Java synchronized
methods and blocks, Modula-3 LOCK statements) are usually implemented using mutual-exclusion
locks.
In this paper, we explore two new approaches to concurrent programming, comparing their
performance against use of a state-of-the-art mutual exclusion implementation that uses thin locks
to minimize the overhead of locking [4]. Our discussion is grounded in the context of the Java
programming language, but is applicable to any language that offers the following mechanisms:
Multithreading: concurrent threads of control executing over objects in a shared address space.
Synchronized blocks: lexically-delimited blocks of code, guarded by dynamically-scoped
monitors (locks). Threads synchronize on a given monitor, acquiring it on entry to the block
and releasing it on exit. Only one thread may be perceived to execute within a synchronized
block at any time, ensuring exclusive access to all monitor-protected blocks.
Exception scopes: blocks of code in which an error condition can change the normal flow
of control of the active thread, by exiting active scopes, and transferring control to a handler
associated with each block.
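These three mechanisms can be illustrated together in a short Java sketch (the class and the account scenario below are ours, purely for illustration): a thread synchronizes on a monitor for the duration of a block, and an exception scope handles the abrupt exit from the guarded region.

```java
// Illustrative sketch (not from the paper): threads, a synchronized
// block guarding shared data, and an exception scope handling an error.
public class GuardedRegionDemo {
    private final Object monitor = new Object();
    private int balance = 100;

    // The synchronized block is a guarded region: the monitor is acquired
    // on entry and released on exit, even if an exception is thrown.
    public int withdraw(int amount) {
        synchronized (monitor) {
            if (amount > balance) throw new IllegalStateException("overdraft");
            balance -= amount;
            return balance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedRegionDemo account = new GuardedRegionDemo();
        // Two threads may withdraw concurrently; the monitor serializes them.
        Thread t = new Thread(() -> account.withdraw(30));
        t.start();
        t.join();
        // Exception scope: control transfers to the handler, and the
        // monitor is still released on the abrupt exit from the block.
        try {
            account.withdraw(1000);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println(account.withdraw(30)); // 100 - 30 - 30 = 40
    }
}
```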
Difficulties arising in the use of mutual exclusion locking with multiple threads are widely
recognized, such as race conditions, priority inversion and deadlock.
Race conditions are a serious issue for non-trivial concurrent programs. A race exists when two
threads can access the same object, and one of the accesses is a write. To avoid races, programmers
must carefully construct their application to trade off performance and throughput (by maximizing
concurrent access to shared data) for correctness (by limiting concurrent access when it could lead to
incorrect behavior), or rely on race detector tools that identify when races occur [7, 8, 18]. Recent work
has advocated higher-level safety properties such as atomicity for concurrent applications [19].
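A small, hypothetical Java example makes the definition concrete: two threads perform read-modify-write increments on a shared field, one access path unguarded and one guarded, so only the unguarded counter is racy and can lose updates.

```java
// Hedged sketch of a data race (class and field names are illustrative):
// an unsynchronized increment is a read-modify-write and is not atomic,
// so concurrent increments can interleave and lose writes.
public class RaceDemo {
    static int unsafeCounter = 0;
    static int safeCounter = 0;
    static final Object lock = new Object();

    static void run() {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;                       // racy access
                synchronized (lock) { safeCounter++; } // race-free access
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        run();
        // safeCounter is always 200000; unsafeCounter may be smaller.
        System.out.println(safeCounter + " " + unsafeCounter);
    }
}
```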
In languages with priority scheduling of threads, a low-priority thread may hold a lock even while
other threads, which may have higher priority, are waiting to acquire it. Priority inversion results when
a low-priority thread T_l holds a lock required by some high-priority thread T_h, forcing the high-priority
T_h to wait until T_l releases the lock. Even worse, an unbounded number of runnable medium-priority
threads T_m may exist, thus preventing T_l from running, making unbounded the time that T_l (and hence
T_h) must wait. Such situations can cause havoc in applications where high-priority threads demand
some level of guaranteed throughput.
Deadlock results when two or more threads are unable to proceed because each is waiting on a lock
held by another. Such a situation is easily constructed for two threads, T_1 and T_2: T_1 first acquires lock
L_1 while T_2 acquires L_2, then T_1 tries to acquire L_2 while T_2 tries to acquire L_1, resulting in deadlock.
Deadlocks may also result from a far more complex interaction among multiple threads and may stay
undetected until and beyond application deployment. The ability to resolve a deadlock dynamically is
much more attractive than permanently stalling some subset of concurrent threads.
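The two-thread T_1/T_2 scenario can be sketched in runnable form. Here we substitute java.util.concurrent.ReentrantLock for Java monitors so that the attempt to take the second lock can time out and report the impending deadlock instead of blocking forever (the class, latch choreography, and timeout value are illustrative, not from the paper).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the classic lock-ordering deadlock, made observable by using
// tryLock with a timeout on the second acquisition.
public class DeadlockDemo {
    static final ReentrantLock lock1 = new ReentrantLock();
    static final ReentrantLock lock2 = new ReentrantLock();

    // Acquire 'first' then 'second'; returns false if 'second' cannot be
    // acquired in time, i.e., the deadlock would have occurred.
    static boolean acquireBoth(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        first.lock();
        try {
            if (second.tryLock(100, TimeUnit.MILLISECONDS)) {
                second.unlock();
                return true;
            }
            return false;
        } finally {
            first.unlock();
        }
    }

    // T2 holds lock2 while the caller (playing T1) tries lock1 then lock2.
    static boolean demo() {
        CountDownLatch held = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread t2 = new Thread(() -> {
            lock2.lock();
            held.countDown();
            try { release.await(); } catch (InterruptedException e) { }
            lock2.unlock();
        });
        t2.start();
        try {
            held.await();                        // wait until T2 holds lock2
            boolean ok = acquireBoth(lock1, lock2);
            release.countDown();
            t2.join();
            return ok;                           // false: T1 would deadlock
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints false
    }
}
```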
For real-world concurrent programs with complex module and dependency structures, it is difficult
to perform an exhaustive exploration of the space of possible interleavings to determine statically
when races, deadlocks, or priority inversions may arise. For such applications, the ability to redress
undesirable interactions transparently among scheduling decisions and lock management is very useful.

These observations inspire the first solution we propose: revocable monitors. Our technique augments
existing mutual exclusion monitors with the ability to resolve priority inversion dynamically (and
automatically). Some instances of deadlock may be resolved by revocation. However, we note that
deadlocks inherent to a program that are independent of scheduling decisions will manifest themselves
as livelock when revocation is used.
A second difficulty with using mutual exclusion to mediate data accesses among threads is ensuring
adequate performance when running on multi-processor platforms. To manipulate a complex shared
data structure like a tree or heap, applications must either impose a global locking scheme on the
roots, or employ locks at lower-level nodes in the structure. The former strategy is simple, but reduces
realizable concurrency and may induce false exclusion: threads wishing to access a distinct piece of the
structure may nonetheless block while waiting for another thread that is accessing an unrelated piece
of the structure. The latter approach permits multiple threads to access the structure simultaneously,
but incurs implementation complexity, and requires more memory to hold the necessary lock state.
Our solution to this problem is an alternative to lock-based mutual exclusion: transactional
monitors. These extend the functionality of revocable monitors by implementing guarded regions as
lightweight transactions that can be executed concurrently (or in parallel on multiprocessor platforms).
Transactional monitors define the following data visibility property that preserves isolation and
atomicity invariants on shared data protected by the monitor: all updates to objects guarded by a
transactional monitor become visible to other threads only on successful completion of the monitor
transaction.
Because transactional monitors impose serializability invariants on the regions they
protect (i.e., preserve the appearance of serial execution), they can help reduce race conditions by
allowing programmers to more aggressively guard code regions that may access shared data without
paying a significant performance penalty. Since the system dynamically records and redresses state
violations (by revoking the effects of the transaction when a serializability violation is detected),
programmers are relieved from the burden of having to determine when mutual exclusion can safely
be relaxed. Thus, programmers can afford to over-specify code regions that must be guarded, provided
the implementation can relax such over-specification safely and efficiently whenever possible.
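The visibility property can be sketched with a toy buffered-write log (this illustrates the idea only; it is not the paper's virtual-machine-level implementation, and all names are ours): updates made inside the region are buffered, then either published on successful completion or discarded on revocation.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Toy model of the transactional-monitor visibility property: updates to
// guarded data become visible to other threads only when the monitor
// transaction completes successfully.
public class TxMonitorSketch {
    static int shared = 0;    // datum guarded by the transactional monitor

    // Execute a guarded region as a lightweight transaction.
    static void atomic(Consumer<Deque<Runnable>> body, boolean revoke) {
        Deque<Runnable> writeLog = new ArrayDeque<>();
        body.accept(writeLog);                        // writes go to the log
        if (revoke) {
            writeLog.clear();                         // revocation: drop writes
        } else {
            for (Runnable write : writeLog) write.run(); // commit: publish
        }
    }

    public static void main(String[] args) {
        atomic(log -> log.add(() -> shared += 10), true);   // revoked
        System.out.println(shared);                         // still 0
        atomic(log -> log.add(() -> shared += 10), false);  // committed
        System.out.println(shared);                         // 10
    }
}
```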
While revocable monitors and transactional monitors rely on similar mechanisms, and can exist
side-by-side in the same virtual machine, their semantics and intended utility are quite different. We
expect revocable monitors to be used primarily to resolve deadlock as well as to improve throughput for
high-priority threads by transparently averting priority inversion. In contrast, we envision transactional
monitors as an entirely new synchronization framework that addresses the performance impact of
classical mutual exclusion while simplifying concurrent programming.
We examine the performance and scalability of these different approaches in the context of a state-of-
the-art Java compiler and virtual machine, namely the Jikes Research Virtual Machine (RVM) [3] from
IBM. Jikes RVM is an ideal platform to compare our solutions with pure lock-based mutual exclusion,
since it already uses sophisticated strategies to minimize the overhead of traditional mutual-exclusion
locks [4]. A detailed evaluation in this context provides an accurate depiction of the tradeoffs embodied
and benefits obtained using the solutions we propose.
A slightly weaker visibility property is present in Java for updates performed within a synchronized block (or method);
these are guaranteed to be visible to other threads only upon exit from the block.

T_l, T_h:
    synchronized(mon) {
      o1.f++;
      o2.f++;
      bar();
    }

T_m:
    foo();

Figure 1. Priority inversion
2. Revocable monitors: Overview
There are several ways to remedy erroneous or undesirable behavior in concurrent programs. Static
techniques can sometimes identify erroneous conditions, allowing programmers to restructure their
application appropriately. When static techniques are infeasible, dynamic techniques can be used both
to identify problems and remedy them when possible. Solutions to priority inversion, such as the priority
ceiling and priority inheritance protocols [40], are good examples of such dynamic solutions.
Priority ceiling and priority inheritance solve an unbounded priority inversion problem, illustrated
using the code fragment in Figure 1 (both T_l and T_h execute the same code, and methods foo() and
bar() contain an arbitrary sequence of operations). Let us assume that thread T_l (low priority) is first
to acquire the monitor mon, modifies objects o_1 and o_2, and is then preempted by thread T_m (medium
priority). Note that thread T_h (high priority) is not permitted to enter monitor mon until it has been
released by T_l, but since method foo() executed by T_m may contain an arbitrary sequence of actions (e.g.,
synchronous communication with another thread), it may take arbitrary time before T_l is allowed to run
again (and exit the monitor). Thus thread T_h may be forced to wait for an unbounded amount of time
before it is allowed to complete its actions.
The priority ceiling technique raises the priority of any locking thread to the highest priority of
any thread that ever uses that lock (i.e., its priority ceiling). This requires the programmer to supply
the priority ceiling for each lock used throughout the execution of a program. In contrast, priority
inheritance will raise the priority of a thread only when holding a lock causes it to block a higher-priority
thread. When this happens, the low-priority thread inherits the priority of the higher-priority
thread it is blocking. Both of these solutions prevent a medium-priority thread from blocking the
execution of the low-priority thread (and thus also the high-priority thread) indefinitely. However, even
in the absence of the medium-priority thread, the high-priority thread is forced to wait until the low-priority
thread releases its lock. In the example given, the time to execute method bar() is potentially
unbounded, thus high-priority thread T_h may still be delayed indefinitely until low-priority thread T_l
finishes executing bar() and releases the monitor. Neither priority ceiling nor priority inheritance
offers a solution to this problem.
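The inheritance rule itself reduces to a simple computation over priorities. The sketch below (class name, method name, and priority values are all illustrative) shows a lock owner's effective priority as the maximum of its own priority and the priorities of the threads it currently blocks.

```java
// Sketch of the priority-inheritance rule: a thread holding a lock
// temporarily runs at the highest priority among itself and all threads
// blocked waiting for that lock.
public class PrioInheritSketch {
    // Compute the owner's effective priority given the waiters' priorities.
    static int effectivePriority(int ownerPriority, int[] waiterPriorities) {
        int p = ownerPriority;
        for (int w : waiterPriorities) p = Math.max(p, w); // inherit the max
        return p;
    }

    public static void main(String[] args) {
        // A low-priority owner (1) blocked-on by waiters of priority 5 and 10
        // inherits priority 10, so no medium-priority thread can preempt it:
        System.out.println(effectivePriority(1, new int[]{5, 10})); // 10
        // With no waiters the owner keeps its own priority:
        System.out.println(effectivePriority(1, new int[]{}));      // 1
    }
}
```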
Besides priority inversion, deadlock is another potentially unwanted consequence of using mutual-
exclusion abstractions. A typical deadlock situation is illustrated with the code fragment in Figure 2.
Let us assume the following sequence of actions: thread T_1 acquires monitor mon1 and updates object

T_1:
    synchronized(mon1) {
      o1.f++;
      synchronized(mon2) {
        bar();
      }
    }

T_2:
    synchronized(mon2) {
      o2.f++;
      synchronized(mon1) {
        bar();
      }
    }

Figure 2. Deadlock
o_1, thread T_2 acquires monitor mon2 and updates object o_2, thread T_1 attempts to acquire monitor mon2
(T_1 blocks since mon2 is already held by thread T_2) and thread T_2 attempts to acquire monitor mon1
(T_2 blocks as well since mon1 is already held by T_1). The result is that both threads are deadlocked:
they will remain blocked indefinitely and method bar() will never get executed by either of the threads.
In both of the scenarios illustrated by Figures 1 and 2, one can identify a single offending thread that
must be revoked in order to resolve either the priority inversion or the deadlock. For priority inversion
the offending thread is the low-priority thread currently executing the monitor. For deadlock, it is either
of the threads engaged in deadlock: there exist various techniques for preventing or detecting deadlock
[21], but all require that the actions of one of the threads leading to deadlock be revoked.
Revocable monitors can alleviate both these issues. Our approach to revocation combines compiler
techniques with run-time detection and resolution. When the need for revocation is encountered, the
run-time system selectively revokes the offending thread executing the monitor (i.e., synchronized
block) and its effects. All updates to shared data performed within the monitor are logged. Upon
detecting priority inversion or deadlock (either at lock acquisition, or in the background), the run-time
system interrupts the offending thread, uses the logged updates to undo that thread’s shared updates,
and transfers control of the thread back to the beginning of the block for retry. Externally, the effect of
the roll-back is to make it appear that the offending thread never entered the block.
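The logging-and-rollback cycle can be sketched with an explicit undo log (a toy model of the scheme, with illustrative names; the paper's system inserts the equivalent write barriers via the compiler): each write inside the monitor records the old value, and revocation replays the log in reverse before control returns to the start of the block for retry.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of revocation via an undo log: updates made within the
// guarded block are logged, rolled back on revocation, and the block
// is retried from its beginning.
public class RevocableSketch {
    static int o1 = 0, o2 = 0;   // shared data updated inside the monitor

    // Run the guarded block; if 'interruptFirstTry' holds, the first
    // attempt is revoked (as on priority inversion or deadlock detection).
    static void runGuarded(boolean interruptFirstTry) {
        int attempt = 0;
        boolean retry;
        do {
            Deque<Runnable> undoLog = new ArrayDeque<>();
            // Log the old value before each update, as compiler-inserted
            // write barriers would, then perform the update.
            final int old1 = o1; undoLog.push(() -> o1 = old1); o1++;
            final int old2 = o2; undoLog.push(() -> o2 = old2); o2++;
            retry = interruptFirstTry && attempt == 0;
            if (retry) {
                // Revocation: replay the undo log in reverse order, making
                // it appear the thread never entered the block.
                while (!undoLog.isEmpty()) undoLog.pop().run();
            }
            attempt++;
        } while (retry);   // control returns to the start of the block
    }

    public static void main(String[] args) {
        runGuarded(true);
        // After one revocation and one successful retry, each field was
        // incremented exactly once.
        System.out.println(o1 + " " + o2); // 1 1
    }
}
```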
The process of revoking the effects performed by a low-priority thread within a monitor is illustrated
in Figure 3, where wavy lines represent threads T_l and T_h, circles represent objects o_1 and o_2, updated
objects are marked grey, and the box represents the dynamic scope of a common monitor guarding a
synchronized block executed by the threads. This scenario is based on the code from Figure 1 (data
access operations performed within method bar() have been omitted for brevity). In Figure 3(a) low-priority
thread T_l is about to enter the synchronized block, which it does in Figure 3(b), modifying
object o_1. High-priority thread T_h tries to acquire the same monitor, but is blocked by low-priority
T_l (Figure 3(c)). Here, a priority inheritance approach [40] would raise the priority of thread T_l to
that of T_h, but T_h would still have to wait for T_l to release the lock. If a priority ceiling protocol were
used, the priority of T_l would be raised to the ceiling upon its entry to the synchronized block, but
the problem of T_h being forced to wait for T_l to release the lock would remain. Instead, our approach
preempts T_l, undoing any updates to o_1, and transfers control in T_l back to the point of entry to the
synchronized block. Here T_l must wait while T_h enters the monitor, and updates objects o_1 (Figure 3(e))



Journal ArticleDOI
TL;DR: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol, both of which solve the uncontrolled priority inversion problem.
Abstract: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol. Both protocols solve the uncontrolled priority inversion problem. The priority ceiling protocol solves this uncontrolled priority inversion problem particularly well; it reduces the worst-case task-blocking time to at most the duration of execution of a single critical section of a lower-priority task. This protocol also prevents the formation of deadlocks. Sufficient conditions under which a set of periodic tasks using this protocol may be scheduled are derived.
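The basic priority inheritance idea described above can be illustrated with a minimal Java sketch. This is not the paper's protocol, and the class and field names are purely illustrative: a thread that blocks on a lock held by a lower-priority thread temporarily boosts the holder to its own priority, so that medium-priority threads cannot preempt the holder inside the critical section; the holder's original priority is restored on release. (A real implementation would track priorities race-free inside the runtime; this single-threaded-safe sketch only shows the shape of the mechanism.)

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of basic priority inheritance (names are hypothetical,
// not from the cited paper).
final class PriorityInheritanceLock {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile Thread holder;        // current owner, if any
    private volatile int savedPriority;    // owner's priority before any boost

    void lock() {
        Thread me = Thread.currentThread();
        Thread h = holder;
        // If a lower-priority thread holds the lock, boost it to our priority
        // so it cannot be preempted by medium-priority threads (inheritance).
        if (h != null && h.getPriority() < me.getPriority()) {
            h.setPriority(me.getPriority());
        }
        lock.lock();
        holder = me;
        savedPriority = me.getPriority();
    }

    void unlock() {
        // Restore the pre-boost priority before releasing the lock.
        Thread.currentThread().setPriority(savedPriority);
        holder = null;
        lock.unlock();
    }
}
```

The priority ceiling protocol discussed in the abstract goes further: each lock is statically assigned the priority of the highest-priority task that may ever acquire it, which bounds blocking to one critical section and prevents deadlock.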

2,399 citations


Journal ArticleDOI
Hsiang-Tsung Kung, John T. Robinson
Abstract: Most current approaches to concurrency control in database systems rely on locking of data objects as a control mechanism. In this paper, two families of nonlocking concurrency controls are presented. The methods used are “optimistic” in the sense that they rely mainly on transaction backup as a control mechanism, “hoping” that conflicts between transactions will not occur. Applications for which these methods should be more efficient than locking are discussed.
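The optimistic style described in this abstract — proceed without locks, detect conflicts at commit, and back up (retry) when validation fails — can be sketched in a few lines of Java. This is a deliberately tiny illustration, not the paper's validation algorithm: a compare-and-set on an atomic variable plays the role of the validation-and-write phase, and a failed CAS is the "transaction backup".

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of optimistic, nonlocking update (illustrative only).
final class OptimisticCounter {
    private final AtomicLong value = new AtomicLong();

    long addAndGet(long delta) {
        while (true) {
            long snapshot = value.get();       // read phase: take a snapshot
            long updated = snapshot + delta;   // compute tentatively, no lock held
            // Validation + write phase: commit only if no concurrent writer
            // changed 'value' since our snapshot.
            if (value.compareAndSet(snapshot, updated)) {
                return updated;                // commit succeeded
            }
            // Conflict detected: "back up" and rerun the transaction.
        }
    }

    long get() { return value.get(); }
}
```

As the abstract notes, this pays off when conflicts are rare: readers never block, and the retry cost is incurred only on actual contention, whereas a lock would be paid on every access.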

1,478 citations


Related Papers (5)
26 Oct 2003

Tim Harris, Keir Fraser

15 Jun 2005

Tim Harris, Simon Marlow +2 more

20 Jun 2013

Peter Dinges, Minas Charalambides +1 more

01 Apr 1993, Information Systems

Pankaj Goyal, T. S. Narayanan +1 more

Performance Metrics
No. of citations received by the Paper in previous years

Year | Citations
2012 | 1
2011 | 1
2010 | 1
2007 | 2
2006 | 2
2004 | 1