
Showing papers presented at "USENIX Conference on Object-Oriented Technologies and Systems in 1999"


Proceedings Article
03 May 1999
TL;DR: A general-purpose, portable, and extensible approach for obtaining comprehensive profiling information from the Java virtual machine, which can uncover CPU usage hot spots, heavy memory allocation sites, unnecessary object retention, contended monitors, and thread deadlocks.
Abstract: Existing profilers for Java applications typically rely on custom instrumentation in the Java virtual machine, and measure only limited types of resource consumption. Garbage collection and multi-threading pose additional challenges to profiler design and implementation. In this paper we discuss a general-purpose, portable, and extensible approach for obtaining comprehensive profiling information from the Java virtual machine. Profilers based on this framework can uncover CPU usage hot spots, heavy memory allocation sites, unnecessary object retention, contended monitors, and thread deadlocks. In addition, we discuss a novel algorithm for thread-aware statistical CPU time profiling, a heap profiling technique independent of the garbage collection implementation, and support for interactive profiling with minimum overhead.

74 citations
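The thread-aware statistical CPU profiling described in the abstract can be illustrated with a deliberately simplified sketch. This is not the paper's JVM-internal framework; it is a hypothetical sampler built on the public `Thread.getAllStackTraces()` API that charges a "tick" to the topmost frame of each thread, but only for threads that are RUNNABLE, so blocked or sleeping threads do not accumulate CPU time:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of thread-aware statistical CPU profiling (an
// illustration, not the paper's JVM-internal mechanism): periodically sample
// every live thread's stack and charge one tick to the topmost frame, but
// only for threads that are RUNNABLE at the moment of the sample.
public class SamplingProfiler {
    public static Map<String, Integer> sample(int samples, long intervalMillis) {
        Map<String, Integer> hot = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                // Thread-aware: only charge threads actually using the CPU.
                if (e.getKey().getState() == Thread.State.RUNNABLE
                        && stack.length > 0) {
                    String top = stack[0].getClassName()
                               + "." + stack[0].getMethodName();
                    hot.merge(top, 1, Integer::sum);  // one statistical tick
                }
            }
            try { Thread.sleep(intervalMillis); }
            catch (InterruptedException ie) { break; }
        }
        return hot;
    }
}
```

After a run, the entries with the highest tick counts are the CPU usage hot spots; the sampling interval trades accuracy against overhead, which is one reason the paper's interactive profiling emphasizes keeping that overhead minimal.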


Proceedings Article
03 May 1999
TL;DR: A key result of this work is to demonstrate that the ability of CORBA ORBs to support real-time systems is mostly an implementation detail, and relatively few changes are required to the standard CORBA reference model and programming API to supportreal-time applications.
Abstract: First-generation CORBA middleware was reasonably successful at meeting the demands of request/response applications with best-effort quality of service (QoS) requirements. Supporting applications with more stringent QoS requirements poses new challenges for next-generation real-time CORBA middleware, however. This paper provides three contributions to the design and optimization of real-time CORBA middleware. First, we outline the challenges faced by real-time ORB implementers, focusing on optimization principle patterns that can be applied to CORBA's Object Adapter and ORB Core. Second, we describe how TAO, our real-time CORBA implementation, addresses these challenges and applies key ORB optimization principle patterns. Third, we present the results of empirical benchmarks that compare the impact of TAO's design strategies on ORB efficiency, predictability, and scalability. Our findings indicate that ORBs must be highly configurable and adaptable to meet the QoS requirements for a wide range of real-time applications. In addition, we show how TAO can be configured to perform predictably and scalably, which is essential to support real-time applications. A key result of our work is to demonstrate that the ability of CORBA ORBs to support real-time systems is mostly an implementation detail. Thus, relatively few changes are required to the standard CORBA reference model and programming API to support real-time applications.

51 citations


Proceedings Article
03 May 1999
TL;DR: A generic model for reifying dependencies in distributed component systems and how it can be used to support automatic configuration is presented and the use of this model in a new distributed operating system is discussed.
Abstract: Recent developments in Component technology enable the construction of complex software systems by assembling together off-the-shelf components. However, it is still difficult to develop efficient, reliable, and dynamically configurable component-based systems. Components are often developed by different groups with different methodologies. Unspecified dependencies and behavior lead to unexpected failures. Component-based software systems must maintain explicit representations of inter-component dependence and component requirements. This provides a common ground for supporting fault-tolerance and automating dynamic configuration. In this paper, we present a generic model for reifying dependencies in distributed component systems and discuss how it can be used to support automatic configuration. We describe our experience deploying the framework in a CORBA-compliant reflective ORB and discuss the use of this model in a new distributed operating system.

43 citations


Proceedings Article
03 May 1999
TL;DR: This paper introduces a specification language for QoS expressions but focuses on their runtime representation, showing how to dynamically create new expressions at runtime and how to use comparison of expressions as a foundation for building higher-level QoS components such as QoS-based traders.
Abstract: Computing systems deliver their functionality at a certain level of performance, reliability, and security. We refer to such non-functional aspects as quality-of-service (QoS) aspects. Delivering a satisfactory level of QoS is very challenging for systems that operate in open, resource varying environments such as the Internet or corporate intranets. A system that operates in an open environment may rely on services that are deployed under the control of a different organization, and it cannot per se make assumptions about the QoS delivered by such services. Furthermore, since resources vary, a system cannot be built to operate with a fixed level of available resources. To deliver satisfactory QoS in the context of external services and varying resources, a system must be QoS aware so that it can communicate its QoS expectations to those external services, monitor actual QoS based on currently available resources, and adapt to changes in available resources. A QoS-aware system knows which level of QoS it needs from other services and which level of QoS it can provide. To build QoS-aware systems, we need a way to express QoS requirements and properties, and we need a way to communicate such expressions. In a realistic system, such expressions can become rather complex. For example, they typically contain constraints over user-defined domains where constraint satisfaction is determined relative to a user-defined ordering on the domain elements. To cope with this complexity we are developing a specification language and accompanying runtime representation for QoS expressions. This paper introduces our language but focuses on the runtime representation of QoS expressions. We show how to dynamically create new expressions at runtime and how to use comparison of expressions as a foundation for building higher-level QoS components such as QoS-based traders.

43 citations
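The comparison of QoS expressions over user-defined domains with user-defined orderings can be sketched in miniature. This is not the paper's specification language; it is a hypothetical Java model in which each dimension carries its own ordering (larger throughput is stronger, smaller latency is stronger), and an offer satisfies a requirement when every required dimension is at least as strong as demanded:

```java
import java.util.Comparator;
import java.util.Map;

// Minimal sketch of QoS-expression comparison (names and dimensions are
// invented for illustration): each dimension has a user-defined ordering
// that says which values are "stronger".
public class QosCompare {
    // Larger is better for throughput; smaller is better for latency,
    // expressed by reversing the natural order.
    static final Map<String, Comparator<Integer>> ORDER = Map.of(
        "throughputMsgsPerSec", Comparator.naturalOrder(),
        "latencyMillis", Comparator.<Integer>naturalOrder().reversed());

    public static boolean satisfies(Map<String, Integer> offer,
                                    Map<String, Integer> required) {
        for (Map.Entry<String, Integer> req : required.entrySet()) {
            Integer offered = offer.get(req.getKey());
            if (offered == null) return false;  // required dimension missing
            Comparator<Integer> stronger = ORDER.get(req.getKey());
            if (stronger.compare(offered, req.getValue()) < 0) return false;
        }
        return true;  // at least as strong in every required dimension
    }

    public static boolean demo() {
        return satisfies(Map.of("throughputMsgsPerSec", 200, "latencyMillis", 10),
                         Map.of("throughputMsgsPerSec", 100, "latencyMillis", 20))
            && !satisfies(Map.of("latencyMillis", 50),
                          Map.of("latencyMillis", 20));
    }
}
```

A QoS-based trader of the kind the paper mentions could rank candidate services by running this satisfaction test against a client's required expression.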


Proceedings Article
03 May 1999
TL;DR: Most design decisions presented in this paper can be transported to other programming languages and MOPs, improving their flexibility, reconfigurability, security and meta-level code reuse.
Abstract: Several reflective architectures have attempted to improve meta-object reuse by supporting composition of meta-objects, but have done so using limited mechanisms such as Chains of Responsibility. We advocate the adoption of the Composite pattern to define meta-configurations. In the meta-object protocol (MOP) of Guarana, a composer meta-object can control reconfiguration of its component meta-objects and their interactions with base-level objects, resolving conflicts that may arise and establishing meta-level security policies. Guarana is currently implemented as an extension of Kaffe OpenVM™, a free implementation of the Java Virtual Machine. Nevertheless, most design decisions presented in this paper can be transported to other programming languages and MOPs, improving their flexibility, reconfigurability, security and meta-level code reuse. We present performance figures that show that it is possible to introduce run-time reflection support in a language like Java without much impact on execution speed.

37 citations


Proceedings Article
03 May 1999
TL;DR: The described system serves as the foundation for the Coign Automatic Distributed Partitioning System (ADPS), the first ADPS to automatically partition and distribute binary applications.
Abstract: Binary standard object models, such as Microsoft's Component Object Model (COM) enable the development of not just reusable components, but also an incredible variety of useful component services through run-time interception of binary standard interfaces. Interception of binary components can be used for conformance testing, debugging, profiling, transaction management, serialization and locking, cross-standard middleware interoperability, automatic distributed partitioning, security enforcement, clustering, just-in-time activation, and transparent component aggregation. We describe the implementation of an interception and instrumentation system tested on over 300 COM binary components, 700 unique COM interfaces, 2 million lines of code, and on 3 major commercial-grade applications including Microsoft PhotoDraw 2000. The described system serves as the foundation for the Coign Automatic Distributed Partitioning System (ADPS), the first ADPS to automatically partition and distribute binary applications. While the techniques described in this paper were developed specifically for COM, they have relevance to other object models with binary standards, such as individual CORBA implementations.

31 citations


Proceedings Article
03 May 1999
TL;DR: The filter approach is introduced which provides a novel, intuitive, and powerful language support for the instantiation of large program structures like design patterns.
Abstract: Scripting languages are designed for gluing software components together. Such languages provide features like dynamic extensibility and dynamic typing with automatic conversion that make them well suited for rapid application development. Although these features entail runtime penalties, modern CPUs are fast enough to execute even large applications in scripting languages efficiently. Large applications typically entail complex program structures. Object-orientation offers the means to solve some of the problems caused by this complexity, but focuses only on entities up to the size of a single class. The object-oriented design community proposes design patterns as a solution for complex interactions that are poorly supported by current object-oriented programming languages. In order to use patterns in an application, their implementation has to be scattered over several classes. This fact makes patterns hard to locate in the actual code and complicates their maintenance in an application. This paper presents a general approach to combine the ideas of scripting and object-orientation in a way that preserves the benefits of both of them. It describes the object-oriented scripting language XOTcl (Extended OTcl), which is equipped with several language functionalities that help in the implementation of design patterns. We introduce the filter approach, which provides novel, intuitive, and powerful language support for the instantiation of large program structures like design patterns.

25 citations


Proceedings Article
03 May 1999
TL;DR: Three kinds of architectural evolution in object-oriented systems are shown to be viewable as transformations applied to an evolving design, and all three are automatable with refactorings -- behavior-preserving program transformations.
Abstract: Architectural evolution is a costly yet unavoidable consequence of a successful application. One method for reducing cost is to automate aspects of the evolutionary cycle when possible. Three kinds of architectural evolution in object-oriented systems are: schema transformations, the introduction of design pattern microarchitectures, and the hot-spot-driven approach. This paper shows that all three can be viewed as transformations applied to an evolving design. Further, the transformations are automatable with refactorings -- behavior-preserving program transformations. A comprehensive list of refactorings used to evolve large applications is provided and an analysis of supported schema transformations, design patterns, and hot-spot meta patterns is presented. Refactorings enable the evolution of architectures on an if-needed basis, reducing unnecessary complexity and inefficiency.

24 citations
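The defining property of a refactoring in the sense used above is that it preserves behavior while changing structure. A small hypothetical Extract Method example (names invented for illustration) makes the property concrete: before and after the transformation, the program must compute identical results.

```java
// Sketch of one classic refactoring, Extract Method: the summation is
// pulled out into a named helper, changing the code's structure but not
// its observable behavior. All names here are illustrative.
public class ExtractMethodDemo {
    // Before: the total is computed inline.
    static int invoiceTotalBefore(int[] amounts, int taxPercent) {
        int sum = 0;
        for (int a : amounts) sum += a;
        return sum + sum * taxPercent / 100;
    }

    // After: the summation lives in its own named method.
    static int invoiceTotalAfter(int[] amounts, int taxPercent) {
        int sum = sumOf(amounts);
        return sum + sum * taxPercent / 100;
    }

    static int sumOf(int[] amounts) {
        int sum = 0;
        for (int a : amounts) sum += a;
        return sum;
    }

    // The behavior-preservation check an automated refactoring tool relies on.
    public static boolean behaviorPreserved() {
        int[] amounts = {10, 20, 70};
        return invoiceTotalBefore(amounts, 10) == invoiceTotalAfter(amounts, 10);
    }
}
```

An automated tool can apply such transformations safely precisely because this equivalence holds by construction, which is what lets the paper treat schema transformations, pattern introduction, and hot-spot restructuring uniformly.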


Proceedings Article
03 May 1999
TL;DR: The Performance Pattern Language and the Performance Measurement Object are introduced, addressing the lack of standard, user-extensible performance benchmark suites that exercise all aspects of the ORB endsystem under realistic application scenarios by providing an automated, script-based framework.
Abstract: The performance of CORBA (Common Object Request Broker Architecture) objects is greatly influenced by the application context and by the performance of the ORB endsystem, which consists of the middleware, the operating system and the underlying network. Application developers need to evaluate how candidate application object architectures will perform within heterogeneous computing environments, but a lack of standard, user-extensible performance benchmark suites exercising all aspects of the ORB endsystem under realistic application scenarios makes this difficult. This paper introduces the Performance Pattern Language and the Performance Measurement Object, which address these problems by providing an automated, script-based framework within which extensive ORB endsystem performance benchmarks may be efficiently described and automatically executed.

20 citations


Proceedings Article
03 May 1999
TL;DR: A path-history-based predictor is investigated to accurately determine the targets of virtual method calls; results from this study show that the execution of Java code will benefit from more sophisticated branch predictors.
Abstract: Java's object-oriented nature along with its distributed nature make it a good choice for network computing. The use of virtual methods associated with Java's object-oriented behavior requires accurate target prediction for indirect branches. This is critical to the performance of Java applications executed on deeply pipelined, wide-issue processors. In this paper, we investigate the use of a path-history-based predictor to accurately determine the target of these virtual methods. The effect of varying the predictor's parameters on misprediction rates is studied using various Java benchmarks. Results from this study show that the execution of Java code will benefit from more sophisticated branch predictors.

12 citations
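The idea behind a path-history-based indirect-branch target predictor can be shown with a toy software simulation. This is an illustration only: the table organization, hash, and history width here are invented, not the configurations evaluated in the paper. Recent branch targets are folded into a history register, and the (branch PC, history) pair indexes a table of last-seen targets, letting the predictor distinguish call sites whose target depends on the path taken to reach them:

```java
import java.util.HashMap;
import java.util.Map;

// Toy path-history-based indirect-branch target predictor: the history
// register encodes the last few targets, and (PC, history) selects the
// predicted target. Sizes and the hash are illustrative choices.
public class PathPredictor {
    private final int historyBits;
    private long history = 0;
    private final Map<Long, Long> targetTable = new HashMap<>();

    public PathPredictor(int historyBits) { this.historyBits = historyBits; }

    // Returns true on a correct prediction, then trains the table and
    // shifts the actual target into the path history.
    public boolean predictAndUpdate(long branchPc, long actualTarget) {
        long key = (history << 16) ^ branchPc;      // combine path and PC
        Long predicted = targetTable.get(key);
        boolean hit = predicted != null && predicted == actualTarget;
        targetTable.put(key, actualTarget);         // train on actual target
        long mask = (1L << historyBits) - 1;
        history = ((history << 4) ^ actualTarget) & mask;
        return hit;
    }

    // A virtual call site alternating between two receiver types: a
    // last-target predictor always misses, but path history locks on.
    public static boolean demo() {
        PathPredictor p = new PathPredictor(8);
        int lateHits = 0;
        for (int i = 0; i < 20; i++) {
            boolean hit = p.predictAndUpdate(100, (i % 2 == 0) ? 1 : 2);
            if (i >= 10 && hit) lateHits++;
        }
        return lateHits == 10;  // perfect prediction once warmed up
    }
}
```

The alternating-target case in `demo` is exactly the pattern a polymorphic Java call site produces when receiver types correlate with the execution path, which is why path history helps where per-site prediction fails.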


Proceedings Article
03 May 1999
TL;DR: This paper describes the highest level of abstraction in CO2P3S, using two example programs to demonstrate the programming model and the supported patterns, and introduces phased parallel design patterns, a new class of patterns that allow temporal phase relationships in a parallel program to be specified.
Abstract: The CO2P3S parallel programming system uses design patterns and object-oriented programming to reduce the complexities of parallel programming. The system generates correct frameworks from pattern template specifications and provides a layered programming model to address both the problems of correctness and openness. This paper describes the highest level of abstraction in CO2P3S, using two example programs to demonstrate the programming model and the supported patterns. Further, we introduce phased parallel design patterns, a new class of patterns that allow temporal phase relationships in a parallel program to be specified, and provide two patterns in this class. Our results show that the frameworks can be used to quickly implement parallel programs, reusing sequential code where possible. The resulting parallel programs provide substantial performance gains over their sequential counterparts.

Proceedings Article
03 May 1999
TL;DR: The specific results in this paper demonstrate the beneficial effects of agent adaptation both for a single mobile agent and for several cooperating agents, using the adaptation techniques of agent morphing and agent fusion.
Abstract: Mobile agents as a new design paradigm for distributed computing potentially permit network applications to operate across dynamic and heterogeneous systems and networks. Agent computing, however, is subject to inefficiencies. Namely, due to the heterogeneous nature of the environments in which agents are executed, agent-based programs must rely on underlying agent systems to mask some of those complexities by using system-wide, uniform representations of agent code and data and by 'hiding' the volatility in agents' 'spatial' relationships. This paper explores runtime adaptation and agent specialization for improving the performance of agent-based programs. Our general aim is to enable programmers to employ these techniques to improve program performance without sacrificing the fundamental advantages promised by mobile agent programming. The specific results in this paper demonstrate the beneficial effects of agent adaptation both for a single mobile agent and for several cooperating agents, using the adaptation techniques of agent morphing and agent fusion. Experimental results are attained with two sample high-performance distributed applications, derived from the scientific domain and from sensor-based codes, respectively.

Proceedings Article
03 May 1999
TL;DR: JMAS is a prototype network computing infrastructure based on mobile actors using Java technology that allows a programmer to create mobile actors, initialize their behaviors, and send them messages using constructs provided by the JMAS Mobile Actor API.
Abstract: JMAS is a prototype network computing infrastructure based on mobile actors [10] using Java technology. JMAS requires a programming style different from commonly used approaches to distributed computing. JMAS allows a programmer to create mobile actors, initialize their behaviors, and send them messages using constructs provided by the JMAS Mobile Actor API. Applications are decomposed by the programmer into small, self-contained sub-computations and distributed among a virtual network of Distributed Run-Time Managers (D-RTM), which execute and manage all mobile computations. This system is well suited for coarse-grained computations on network computing clusters. Performance evaluation is done using two benchmarks: a Mersenne Prime Application, and the Traveling Salesman Problem.

Proceedings Article
03 May 1999
TL;DR: This work argues for a mixed-granularity approach where a coarse-grained mechanism is used as the primary address translation scheme, and a fine-Grained approach is used for specialized data structures that are less suitable for the coarse- grained approach.
Abstract: Texas is a highly portable, high-performance persistent object store that can be used with conventional compilers and operating systems, without the need for a preprocessor or special operating system privileges. Texas uses pointer swizzling at page fault time as its primary address translation mechanism, translating addresses from a persistent format into conventional virtual addresses for an entire page at a time as it is loaded into memory. Existing classifications of persistent systems typically focus only on address translation taxonomies based on semantics that we consider to be confusing and ambiguous. Instead, we contend that the granularity choices for design issues are much more important because they facilitate classification of different systems in an unambiguous manner, unlike the taxonomies based only on address translation. We have identified five primary design issues that we believe are relevant in this context. We describe these design issues in detail and present a new general classification for persistence based on the granularity choices for these issues. Although the coarse granularity of pointer swizzling at page fault time is efficient in most cases, it is sometimes desirable to use finer-grained techniques. We examine different issues related to fine-grained address translation mechanisms, and discuss why these are not suitable as general-purpose address translation techniques. Instead, we argue for a mixed-granularity approach where a coarse-grained mechanism is used as the primary address translation scheme, and a fine-grained approach is used for specialized data structures that are less suitable for the coarse-grained approach. We have incorporated fine-grained address translation in Texas using the C++ smart pointer idiom, allowing programmers to choose the kind of pointer used for any data member in a particular class definition.
This approach maintains the important features of the system: persistence that is orthogonal to type, high performance with standard compilers and operating systems, suitability for huge shared address spaces across heterogeneous platforms, and the ability to optimize away pointer swizzling costs when the persistent store is smaller than the hardware-supported virtual address size.

Proceedings Article
03 May 1999
TL;DR: A CORBA service called COPE, implemented using causal logging, is described, along with its implementation in OrbixWeb and the problems encountered along the way.
Abstract: Some form of replicated data management is a basic service of nearly all distributed systems. Replicated data management maintains the consistency of replicated data. In wide-area distributed systems, causal consistency is often used, because it is strong enough to allow one to easily solve many problems while still keeping the cost low even with the large variance in latency that one finds in a wide-area network. Causal logging is a useful technique for implementing causal consistency because it greatly reduces the latency in reading causally consistent data by piggybacking updates on existing network traffic. We have implemented a CORBA service, called COPE, that is built using causal logging. COPE also shares features with some CORBA security services and is naturally implemented using the OrbixWeb interception facilities. In implementing COPE in OrbixWeb, we encountered several problems. In this paper we discuss COPE, its implementation in OrbixWeb, and the problems we encountered. We hope that this discussion will be of interest both to those who are implementing and to those who are planning to use CORBA interception facilities.
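The causal consistency the abstract refers to rests on the happened-before ordering between events. The paper's causal-logging implementation piggybacks updates on existing traffic rather than shipping full vector clocks, but the underlying ordering test can be sketched with vector clocks as a hypothetical illustration:

```java
import java.util.Arrays;

// Sketch of the happened-before test underlying causal consistency.
// Each replica keeps a vector of per-process event counts; comparing two
// vectors tells us whether one update causally precedes the other.
public class VectorClock {
    // a happened-before b iff a <= b componentwise and a != b.
    public static boolean happensBefore(int[] a, int[] b) {
        boolean strictlyLess = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;   // a has an event b never saw
            if (a[i] < b[i]) strictlyLess = true;
        }
        return strictlyLess;
    }

    // Concurrent events: neither precedes the other. A causally consistent
    // store may apply concurrent updates in either order, which is what
    // keeps the protocol cheap over high-latency wide-area links.
    public static boolean concurrent(int[] a, int[] b) {
        return !happensBefore(a, b) && !happensBefore(b, a)
            && !Arrays.equals(a, b);
    }
}
```

A read is causally consistent when every update it reflects also reflects all updates that happened before it under this ordering; causal logging achieves that while only piggybacking the necessary update records.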

Proceedings Article
03 May 1999
TL;DR: This case study reveals the many pitfalls that can derail a software re-engineering effort, but also shows promising initial results from continued perseverance in this effort.
Abstract: Object Oriented Analysis and Design (OOAD) is increasingly popular as a set of techniques that can be used to initially analyze and design software. Unfortunately, OOAD is a relatively new concept and many large legacy systems predate it. This paper presents the approach one company followed in applying OOAD techniques to an existing 2.5 million line code base. We present an iterative process that provides an avenue for the software to evolve while balancing the needs of business and software engineering. Our case study reveals the many pitfalls that can derail a software re-engineering effort, but also shows promising initial results from continued perseverance in this effort.

Proceedings Article
03 May 1999
TL;DR: It is concluded that, minimally, a database system supporting extensions should have a built-in resource monitoring and controlling mechanism that enhances the security of the database server in the presence of extensions.
Abstract: While object-relational database servers can be extended with user-defined functions (UDFs), the security of the server may be compromised by these extensions. The use of Java to implement the UDFs is promising because it addresses some security concerns. However, it still permits interference between different users through the uncontrolled consumption of resources. In this paper, we explore the use of a Java resource management mechanism (JRes) to monitor resource consumption and enforce usage constraints. JRes enhances the security of the database server in the presence of extensions allowing for (i) detection and neutralization of denial-of-service attacks aimed at resource monopolization, (ii) monitoring resource consumption which enables precise billing of users relying on UDFs, and (iii) obtaining feedback that can be used for adaptive query optimization. The feedback can be utilized either by the UDFs themselves or by the database system to dynamically modify the query execution plan. Both models have been prototyped in the Cornell Predator database system. We describe the implementation techniques, and present experiments that demonstrate the effects of the adaptive behavior facilitated by JRes. We conclude that, minimally, a database system supporting extensions should have a built-in resource monitoring and controlling mechanism. Moreover, in order to fully exploit information provided by the resource control mechanisms, both the query optimizer and the UDFs themselves should have access to this information.