
Showing papers presented at "Virtual Execution Environments" in 2006


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This work presents a runtime framework with a goal of collecting a complete, machine- and task-independent, user-mode trace of a program's execution that can be re-simulated deterministically with full fidelity down to the instruction level.
Abstract: Program execution traces provide the most intimate details of a program's dynamic behavior. They can be used for program optimization, failure diagnosis, collecting software metrics like coverage, test prioritization, etc. Two major obstacles to exploiting the full potential of the information they provide are: (i) performance overhead while collecting traces, and (ii) the significant size of traces even for short execution scenarios. Reducing the information output in an execution trace can reduce both the performance overhead and the size of traces; however, the applicability of such traces is then limited to a particular task. We present a runtime framework with the goal of collecting a complete, machine- and task-independent, user-mode trace of a program's execution that can be re-simulated deterministically with full fidelity down to the instruction level. The framework has reasonable runtime overhead, and a novel compression scheme significantly reduces the size of traces. Our framework enables building a wide variety of tools for understanding program behavior. As examples of its applicability, we present a program analysis tool and a data locality profiling tool. Our program analysis tool is a time travel debugger that enables a developer to debug in both the forward and backward directions over an execution trace, with nearly all of the information available in a regular debugging session. Our profiling tool has been used to improve data locality and reduce the dynamic working sets of real-world applications.

286 citations
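To make the record/replay idea concrete, here is a minimal C sketch of the core mechanism: every nondeterministic input is logged during recording and fed back from the log during replay, so the same computation re-executes deterministically. The function names and the single input source are hypothetical simplifications; the paper's tracer captures a complete user-mode instruction-level trace with compression, which this toy omits.

```c
/* Minimal record/replay sketch, assuming (unlike the paper's
 * full-fidelity tracer) that all nondeterminism flows through
 * read_input(). Hypothetical names, not the paper's API. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum mode { RECORD, REPLAY };
static enum mode g_mode = RECORD;
static FILE *g_log;

/* The single nondeterministic input in this toy program. */
static int read_input(void) {
    int v;
    if (g_mode == RECORD) {
        v = rand();                         /* nondeterministic value */
        fwrite(&v, sizeof v, 1, g_log);     /* log it */
    } else {
        if (fread(&v, sizeof v, 1, g_log) != 1)
            exit(1);                        /* log exhausted */
    }
    return v;
}

int main(int argc, char **argv) {
    g_mode = (argc > 1) ? REPLAY : RECORD;  /* any argument => replay */
    g_log = fopen("trace.bin", g_mode == RECORD ? "wb" : "rb");
    if (!g_log) return 1;
    srand((unsigned)time(NULL));
    /* The computation re-executes identically under REPLAY because
     * every nondeterministic value is supplied from the log. */
    long sum = 0;
    for (int i = 0; i < 8; i++) sum += read_input() % 100;
    printf("sum = %ld\n", sum);
    fclose(g_log);
    return 0;
}
```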


Proceedings ArticleDOI
14 Jun 2006
TL;DR: The design and implementation of the Squawk VM is described as applied to the Sun™ Small Programmable Object Technology (SPOT) wireless device; a device developed at Sun Microsystems Laboratories for experimentation with wireless sensor and actuator applications.
Abstract: The Squawk virtual machine is a small Java™ virtual machine (VM) written mostly in Java that runs without an operating system on a wireless sensor platform. Squawk translates standard class files into an internal pre-linked, position-independent format that is compact and allows for efficient execution of bytecodes that have been placed into read-only memory. In addition, Squawk implements an application isolation mechanism whereby applications are represented as objects and are therefore treated as first-class objects (i.e., they can be reified). Application isolation also enables Squawk to run multiple applications at once, with all immutable state being shared between the applications; mutable state is not shared. The combination of these features reduces the memory footprint of the VM, making it ideal for deployment on small devices.

Squawk provides a wireless API that allows developers to write applications for wireless sensor networks (WSNs); this API is an extension of the generic connection framework (GCF). Authentication of deployed files on the wireless device and migration of applications between devices are also performed by the VM.

This paper describes the design and implementation of the Squawk VM as applied to the Sun™ Small Programmable Object Technology (SPOT) wireless device, a device developed at Sun Microsystems Laboratories for experimentation with wireless sensor and actuator applications.

238 citations
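A toy C rendering of the isolation model the abstract describes: the pre-linked program (immutable) is shared across applications, while each application's mutable state lives in its own isolate structure. This is a hypothetical sketch of the idea, not Squawk's Java implementation.

```c
/* Sketch of Squawk-style isolation: immutable state is shared,
 * mutable state is per-isolate. All names are hypothetical. */
#include <stdio.h>

/* Shared, read-only: analogous to pre-linked classes in ROM. */
static const int program[] = { 1, 2, 3 };

/* Per-application mutable state; never shared between isolates. */
typedef struct { int pc; int acc; } Isolate;

static int isolate_step(Isolate *iso) {
    if (iso->pc >= 3) return 0;           /* program finished */
    iso->acc += program[iso->pc++];       /* reads shared immutable code */
    return 1;
}

int main(void) {
    Isolate a = {0, 0}, b = {0, 100};     /* two apps, one shared program */
    while (isolate_step(&a))              /* run a to completion */
        ;
    isolate_step(&b);                     /* b advances independently */
    printf("a: acc=%d  b: acc=%d\n", a.acc, b.acc);
    return 0;
}
```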


Proceedings ArticleDOI
14 Jun 2006
TL;DR: A just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet is surprisingly effective, and benchmarks show a speedup that in some cases rivals heavy-weight just-in-time compilers.
Abstract: We present a just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet is surprisingly effective. Our system dynamically identifies traces of frequently executed bytecode instructions (which may span several basic blocks across several methods) and compiles them via Static Single Assignment (SSA) construction. Our novel use of SSA form in this context allows instructions to be hoisted across trace side-exits without requiring expensive compensation code on off-trace paths. The overall memory consumption (code and data) of our system is only 150 kBytes, yet benchmarks show a speedup that in some cases rivals heavy-weight just-in-time compilers.

155 citations
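The following toy C sketch illustrates the trace idea (not the paper's SSA-based compiler): a linear trace is specialized for the hot path and guarded, and when a guard fails, execution side-exits back to the interpreter with no compensation code. The two-branch "bytecode" and all names are hypothetical.

```c
/* Trace execution with guarded side exits, on a toy Collatz loop.
 * The trace is specialized for the even (hot) path. */
#include <stdio.h>
#include <stdbool.h>

/* Interpreter: handles the full program semantics. */
static int interpret_step(int x) {
    return (x % 2 == 0) ? x / 2 : 3 * x + 1;
}

/* "Compiled" trace: straight-line code covering 4 iterations of the
 * even path; the odd path triggers a side exit to the interpreter. */
static int run_trace(int x, bool *exited) {
    for (int i = 0; i < 4; i++) {
        if (x % 2 != 0) {             /* guard: trace assumed even */
            *exited = true;           /* side exit, no compensation */
            return x;
        }
        x = x / 2;                    /* fast, branch-free body */
    }
    *exited = false;
    return x;
}

int main(void) {
    int x = 96;
    while (x != 1) {
        bool exited;
        x = run_trace(x, &exited);    /* try the hot trace first */
        if (exited && x != 1)
            x = interpret_step(x);    /* resume in the interpreter */
        printf("x = %d\n", x);
    }
    return 0;
}
```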


Proceedings ArticleDOI
14 Jun 2006
TL;DR: A working prototype, LUCOS, is presented, which supports live update of Linux running on the Xen virtual machine monitor; the proposed approach allows a broad range of patches and upgrades to be applied at any time without requiring a quiescence state.
Abstract: Many critical IT infrastructures require non-disruptive operation. However, the operating systems they run are far from perfect: patches and upgrades are frequently applied to close vulnerabilities, add new features, and enhance performance. To mitigate the loss of availability, such operating systems need to provide features such as live update, through which patches and upgrades can be applied without having to stop and reboot the operating system. Unfortunately, most current live-updating approaches cannot be easily applied to existing operating systems: some are tightly bound to specific design approaches (e.g., object-oriented); others can only be used under particular circumstances (e.g., quiescence states).

In this paper, we propose using virtualization to provide the live update capability. The proposed approach allows a broad range of patches and upgrades to be applied at any time without requiring a quiescence state. Moreover, the approach is OS-transparent, which makes it portable and suitable for inclusion in general virtualization systems. We present a working prototype, LUCOS, which supports live update of Linux running on the Xen virtual machine monitor. To demonstrate the applicability of our approach, we take real-life kernel patches from Linux kernel 2.6.10 to 2.6.11 and apply some of them on the fly. Performance measurements show that our implementation incurs negligible overhead: less than 1% performance degradation compared to XenLinux. The time to apply a patch is also minimal.

121 citations
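As a rough illustration of the redirect-and-continue idea, here is a user-space C sketch in which calls go through an indirection pointer that can be swapped while the program runs. LUCOS itself patches a running kernel from beneath, via the VMM, with state synchronization; none of that is modeled here, and all names are hypothetical.

```c
/* Minimal sketch of function-grained live update via an indirection
 * table. Toy example only; not how LUCOS patches kernel code. */
#include <stdio.h>

static int checksum_v1(const char *s) {     /* original version */
    int sum = 0;
    while (*s) sum += *s++;
    return sum;
}

static int checksum_v2(const char *s) {     /* patched version */
    int sum = 0;
    while (*s) sum = sum * 31 + *s++;       /* stronger mixing */
    return sum;
}

/* Calls go through the pointer, so swapping it "live-updates" all
 * future calls without stopping the program. */
static int (*checksum)(const char *) = checksum_v1;

int main(void) {
    printf("before patch: %d\n", checksum("vee06"));
    checksum = checksum_v2;                 /* apply patch on the fly */
    printf("after patch:  %d\n", checksum("vee06"));
    return 0;
}
```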


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation and describes an implementation that uses a strong cipher algorithm--the Advanced Encryption Standard (AES), to perform randomization.
Abstract: One of the most common forms of security attack involves exploiting a vulnerability to inject malicious code into an executing application and then cause the injected code to be executed. A theoretically strong approach to defending against any type of code-injection attack is to create and use a process-specific instruction set produced by a randomization algorithm. Code injected by an attacker who does not know the randomization key will be invalid for the randomized processor, effectively thwarting the attack. This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation. The paper makes three contributions beyond previous work on ISR. First, we describe an implementation that uses a strong cipher algorithm, the Advanced Encryption Standard (AES), to perform randomization; AES is generally believed to be impervious to known attack methodologies. Second, we demonstrate that ISR using AES can be implemented practically and efficiently (considering both execution time and code size overheads) without requiring special hardware support. Third, our approach detects malicious code before it is executed; previous approaches relied on probabilistic arguments that execution of non-randomized foreign code would eventually cause a fault or runtime exception.

118 citations
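A toy C sketch of the ISR principle on a made-up 4-opcode "ISA": code is encrypted under a per-process key at load time and decrypted at fetch, so unkeyed injected code decodes to invalid opcodes and is rejected before it executes. A XOR keystream stands in for the AES cipher the paper uses, and all names are hypothetical.

```c
/* Instruction-set randomization sketch. XOR is a stand-in for AES;
 * a real deployment would use a strong cipher as the paper does. */
#include <stdio.h>
#include <stdint.h>

enum { OP_PUSH = 1, OP_ADD = 2, OP_PRINT = 3, OP_HALT = 4 };

static void xcrypt(uint8_t *code, size_t n, uint8_t key) {
    for (size_t i = 0; i < n; i++) code[i] ^= key;  /* toy cipher */
}

static int run(const uint8_t *code, size_t n, uint8_t key) {
    int stack[16], sp = 0;
    for (size_t pc = 0; pc < n; ) {
        uint8_t op = code[pc++] ^ key;       /* decrypt at fetch */
        switch (op) {
        case OP_PUSH:  stack[sp++] = code[pc++] ^ key; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return 0;
        default:                             /* detected before running */
            fprintf(stderr, "invalid opcode: injected code?\n");
            return 1;
        }
    }
    return 1;
}

int main(void) {
    uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    uint8_t key = 0x5A;
    xcrypt(prog, sizeof prog, key);          /* randomize at load time */
    run(prog, sizeof prog, key);             /* prints 5 */

    uint8_t injected[] = { OP_PUSH, 9, OP_PRINT, OP_HALT }; /* not keyed */
    return run(injected, sizeof injected, key);  /* rejected */
}
```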


Proceedings ArticleDOI
Yang Yu, Fanglu Guo, Susanta Nanda, Lap-Chung Lam, Tzi-cker Chiueh
14 Jun 2006
TL;DR: Experimental results demonstrate that FVM is more flexible and scalable, requires fewer system resources, and incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.
Abstract: Many fault-tolerant and intrusion-tolerant systems require the ability to execute unsafe programs in a realistic environment without leaving permanent damage. Virtual machine technology meets this requirement perfectly because it provides an execution environment that is both realistic and isolated. In this paper, we introduce an OS-level virtual machine architecture for Windows applications called the Feather-weight Virtual Machine (FVM), under which virtual machines share as many resources of the host machine as possible while remaining isolated from one another and from the host machine. The key technique behind FVM is namespace virtualization, which isolates virtual machines by renaming resources at the OS system call interface. Through a copy-on-write scheme, FVM allows multiple virtual machines to physically share resources while logically isolating their resources from one another. A main technical challenge in FVM is achieving strong isolation among different virtual machines and the host machine, owing to the numerous namespaces and interprocess communication mechanisms on Windows. Experimental results demonstrate that FVM is more flexible and scalable, requires fewer system resources, and incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.

113 citations
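Here is a small C sketch of namespace virtualization with copy-on-write, in the spirit of FVM's resource renaming at the system call interface: each VM's file names are remapped into a private namespace, reads fall through to the shared host copy, and the first write makes a private copy. Helper names are hypothetical and error handling is trimmed; FVM operates on Windows kernel objects, not stdio files.

```c
/* Namespace virtualization with copy-on-write, as a toy sketch. */
#include <stdio.h>

/* Redirect a logical name into vm_id's private namespace. */
static void vm_path(char *out, size_t n, int vm_id, const char *name) {
    snprintf(out, n, "vm%d_%s", vm_id, name);
}

/* Reads fall through to the shared host file until the VM writes,
 * at which point it gets (and keeps) a private copy. */
static FILE *vm_open(int vm_id, const char *name, const char *mode) {
    char priv[256];
    vm_path(priv, sizeof priv, vm_id, name);
    if (mode[0] == 'r') {
        FILE *f = fopen(priv, mode);          /* private copy wins */
        return f ? f : fopen(name, mode);     /* else shared file */
    }
    FILE *src = fopen(name, "rb");            /* copy shared content */
    FILE *dst = fopen(priv, "wb");
    if (src && dst) { int c; while ((c = getc(src)) != EOF) putc(c, dst); }
    if (src) fclose(src);
    if (dst) fclose(dst);
    return fopen(priv, mode);                 /* write the private copy */
}

int main(void) {
    FILE *h = fopen("shared.txt", "w"); fputs("host\n", h); fclose(h);
    FILE *a = vm_open(1, "shared.txt", "a"); fputs("vm1\n", a); fclose(a);
    FILE *b = vm_open(2, "shared.txt", "r");  /* vm2 still sees host copy */
    char line[64];
    if (b && fgets(line, sizeof line, b)) printf("vm2 reads: %s", line);
    if (b) fclose(b);
    return 0;
}
```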


Proceedings ArticleDOI
14 Jun 2006
TL;DR: An evaluation of translation overhead under both benchmark and less idealized conditions is presented, showing that conventional benchmarks do not provide a good prediction of translation overhead when translation is used pervasively, and that static pre-translation is effective only when expensive instrumentation or optimization is performed.
Abstract: Dynamic translation is a general-purpose tool used for instrumenting programs at run time. Performance of translated execution relies on balancing the cost of translation against the benefits of any optimizations achieved, and many current translators perform substantial rewriting during translation in an attempt to reduce execution time. Our results show that these optimizations offer no significant benefit even when the translated program has a small, hot working set. When used in a broader range of applications, such as ubiquitous policy enforcement or penetration detection, translator performance cannot rely on the presence of a hot working set to amortize the cost of translation. A simpler, more maintainable, adaptable, and smaller translator appears preferable to more complicated designs in most cases.

HDTrans is a light-weight dynamic instrumentation system for the IA-32 architecture that uses some simple and effective translation techniques in combination with established trace linearization and code caching optimizations. We present an evaluation of translation overhead under both benchmark and less idealized conditions, showing that conventional benchmarks do not provide a good prediction of translation overhead when translation is used pervasively.

A further contribution of this paper is an analysis of the effectiveness of post-link static pre-translation techniques for overhead reduction. Our results indicate that static pre-translation is effective only when expensive instrumentation or optimization is performed.

74 citations
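The translate-once, cache, re-execute pattern behind such translators can be sketched in a few lines of C. "Translation" below is a stand-in that just marks a toy block and counts the work; the point is that translation cost is paid once per block, so it amortizes only when there is a hot working set, which is exactly the property the paper shows pervasive uses cannot rely on. Hypothetical structure, not HDTrans code.

```c
/* Translation cache sketch: cost is per-block, amortized by reuse. */
#include <stdio.h>

#define NBLOCKS 4
static int translations = 0;          /* translation-overhead counter */

typedef struct { int translated; int next; } Block;
static Block cache[NBLOCKS];

/* Expensive step: performed at most once per source block. */
static void translate(int pc) {
    translations++;
    cache[pc].translated = 1;
    cache[pc].next = (pc + 1) % NBLOCKS;   /* toy control flow */
}

static int execute(int pc) {          /* run one cached block */
    if (!cache[pc].translated) translate(pc);
    return cache[pc].next;
}

int main(void) {
    int pc = 0;
    for (int i = 0; i < 20; i++)      /* hot loop over 4 blocks */
        pc = execute(pc);
    /* 20 block executions, only 4 translations: a hot working set
     * amortizes translation; pervasive, cold code would not. */
    printf("executions=20 translations=%d\n", translations);
    return 0;
}
```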


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper examines the implementation of a VMM-based intrusion detection and monitoring system for collecting information about attacks on honeypots, and documents and evaluates three designs implemented on two open-source virtualization platforms: User-Mode Linux and Xen.
Abstract: Virtual Machine Monitors (VMMs) are a common tool for implementing honeypots. In this paper we examine the implementation of a VMM-based intrusion detection and monitoring system for collecting information about attacks on honeypots. We document and evaluate three designs we have implemented on two open-source virtualization platforms: User-Mode Linux and Xen. Our results show that our designs give the monitor good visibility into the system, and thus a small number of monitoring sensors can detect a large number of intrusions. In a three-month period, we were able to detect five different attacks, as well as collect and try 46 more exploits on our honeypots. All attacks were detected with only two monitoring sensors. We found that the performance overhead for monitoring such intrusions is independent of which events are being monitored, but depends entirely on the number of monitoring events and the underlying monitoring implementation. The performance overhead can be significantly improved by implementing the monitor directly in the privileged code of the VMM, though at the cost of increasing the size of the trusted computing base of the system.

65 citations
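As a loose illustration, a monitoring sensor can be thought of as a callback the monitor attaches to a guest event stream; the sketch below flags a suspicious toy "file open" event. The event model and names are hypothetical; the paper's sensors hook real guest OS events through User-Mode Linux and Xen.

```c
/* Toy VMM monitoring sensor: fired on every delivered guest event.
 * Per the paper, monitoring cost scales with the number of events
 * delivered, not with which events are selected. */
#include <stdio.h>
#include <string.h>

typedef struct { const char *path; } OpenEvent;

/* Sensor: one check per monitored event. */
static int sensor_on_open(const OpenEvent *e) {
    return strncmp(e->path, "/etc/", 5) == 0;   /* suspicious target */
}

int main(void) {
    const OpenEvent guest_events[] = {
        { "/home/user/notes.txt" },
        { "/etc/passwd" },            /* honeypot intrusion attempt */
    };
    for (int i = 0; i < 2; i++)
        if (sensor_on_open(&guest_events[i]))
            printf("ALERT: guest opened %s\n", guest_events[i].path);
    return 0;
}
```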


Proceedings ArticleDOI
Leendert van Doorn
14 Jun 2006
TL;DR: This talk discusses the new virtualization technologies that will be introduced over the next few years, how they help virtualization, what challenges they pose, and how these virtualization technologies will likely consolidate.
Abstract: As Intel rolls out its Vanderpool processor virtualization technology and AMD its Secure Virtual Machine technology, we are only seeing the first wave of processor virtualization assists. Over the next few years the x86 space will change dramatically: we will see the introduction of massive multi-core, 64-bit, second-generation processor virtualization capabilities, I/O isolation capabilities, and hardware security assists.

Both Intel and AMD are differentiating their processors by providing enhancements that enable you to run multiple virtual machines in such a way that the guest is unaware that it is being virtualized. Ironically, largely because these technologies have been unavailable for so long, Linux and Windows are going in a different direction: paravirtualization. With paravirtualization, the guest operating system collaborates closely with the virtual machine monitor through a set of well-defined software interfaces. This approach does not require any new hardware features at all and has the potential to perform much better. This raises an interesting dilemma: some of the new virtualization capabilities may already be obsolete before they are brought to market.

In this talk I will discuss the new virtualization technologies that will be introduced over the next few years, how they help virtualization, what challenges they pose, and how these virtualization technologies will likely consolidate.

38 citations


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This work evaluates many alternative policies for the creation of fragments within the Strata SDT framework, finding that effective translation strategies are vital to program performance, improving performance from as much as 28% overhead to as little as 3% overhead on average for the SPEC CPU2000 benchmark suite.
Abstract: Software Dynamic Translation (SDT) systems have been used for program instrumentation, dynamic optimization, security policy enforcement, intrusion detection, and many other purposes. To be widely applicable, the overhead (runtime, memory usage, and power consumption) should be as low as possible. For instance, if an SDT system protecting a web server against possible attacks causes a 30% slowdown, a company may need 30% more machines to handle the web traffic it expects. Consequently, the causes of SDT overhead should be studied rigorously. This work evaluates many alternative policies for the creation of fragments within the Strata SDT framework. In particular, we examine the effects of ending translation at conditional branches; ending translation at unconditional branches; whether to use partial inlining for call instructions; whether to build the target of calls immediately or lazily; whether to align branch targets; and how to place code to transition back to the dynamic translator. We find that effective translation strategies are vital to program performance, improving performance from as much as 28% overhead to as little as 3% overhead on average for the SPEC CPU2000 benchmark suite. We further demonstrate that these translation strategies are effective across several platforms, including Sun SPARC UltraSparc IIi, AMD Athlon Opteron, and Intel Pentium IV processors.

36 citations
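One of the measured policy choices, whether to end a fragment at a conditional branch or translate through it, can be illustrated with a toy fragment builder in C. The instruction stream and names are hypothetical; Strata's real builder operates on machine code, and the trade-off is shorter fragments with more translator exits versus longer fragments with more code duplication.

```c
/* Fragment-formation policy sketch over a toy instruction stream. */
#include <stdio.h>

enum kind { K_NORMAL, K_COND_BRANCH, K_RETURN };
static const enum kind stream[] = {
    K_NORMAL, K_NORMAL, K_COND_BRANCH, K_NORMAL, K_NORMAL, K_RETURN
};

/* Build one fragment starting at 'pc'; returns its length. */
static int build_fragment(int pc, int end_at_cond_branch) {
    int len = 0;
    for (;;) {
        len++;
        enum kind k = stream[pc++];
        if (k == K_RETURN) break;
        if (k == K_COND_BRANCH && end_at_cond_branch) break;
    }
    return len;
}

int main(void) {
    /* Ending at conditional branches yields shorter fragments (more
     * returns to the translator); translating through them yields
     * longer fragments (fewer exits). */
    printf("end at cond branch:   len=%d\n", build_fragment(0, 1));
    printf("translate through it: len=%d\n", build_fragment(0, 0));
    return 0;
}
```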


Proceedings ArticleDOI
14 Jun 2006
TL;DR: This work examines the specific and overall performance changes due to a new garbage collection optimization in two different Java virtual machines and shows how unintended side-effects can contribute to, and distort, the final assessment.
Abstract: Many new Java runtime optimizations report relatively small, single-digit performance improvements. On modern virtual and actual hardware, however, the performance impact of an optimization can be influenced by a variety of factors in the underlying systems. Using a case study of a new garbage collection optimization in two different Java virtual machines, we show the relative effects of issues that must be taken into consideration when claiming an improvement. We examine the specific and overall performance changes due to our optimization and show how unintended side-effects can contribute to, and distort, the final assessment. Our experience shows that VM and hardware concerns can generate variances of up to 9.5% in whole-program execution time. Consideration of these confounding effects is critical to a good, objective understanding of Java performance and optimization.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: snBench provides execution environments and a run-time support infrastructure that give each user a Virtual Sensor Network characterized by efficient automated program deployment, resource management, and a truly extensible architecture.
Abstract: We envision future Sensor Networks (SNs) composed of a hybrid collection of sensing devices embedded into shared environments. In such environments, it follows that the embedded SN infrastructure would also be shared by the various users, occupants, or administrators of these shared spaces. As such, a clear need emerges to virtualize the SN, sharing its resources across various tasks executing simultaneously. To achieve this goal, we present the snBench (SN Workbench). The snBench abstracts a collection of dissimilar and disjoint resources into a shared virtual SN. It provides an accessible high-level programming language that enables users to write "macro-level" programs for their own virtual SN (i.e., programs are written at the scope of the SN rather than its individual components, and the developer need not specify details of the components or deployment). To this end, snBench provides execution environments and a run-time support infrastructure that give each user a Virtual Sensor Network characterized by efficient automated program deployment, resource management, and a truly extensible architecture. In this paper we present an overview of the snBench, detailing the salient functionality that supports the entire life-cycle of an SN application.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: These experiments show that the V-ISA design captures vector parallelism for two quite different classes of architectures and provides virtual object code portability within the class of subword SIMD architectures.
Abstract: We present Vector LLVA, a virtual instruction set architecture (V-ISA) that exposes extensive static information about vector parallelism while avoiding the use of hardware-specific parameters. We provide both arbitrary-length vectors (for targets that allow vectors of arbitrary length, or where the target length is not known) and fixed-length vectors (for targets that have a fixed vector length, such as subword SIMD extensions), together with a rich set of operations on both vector types. We have implemented translators that compile (1) Vector LLVA written with arbitrary-length vectors to the Motorola RSVP architecture and (2) Vector LLVA written with fixed-length vectors to both AltiVec and Intel SSE2. Our translator-generated code achieves speedups competitive with handwritten native code versions of several benchmarks on all three architectures. These experiments show that our V-ISA design captures vector parallelism for two quite different classes of architectures and provides virtual object code portability within the class of subword SIMD architectures.
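The portability idea can be sketched as follows: the operation is written once over arbitrary-length vectors, and a translator strip-mines it to a target's fixed vector width. The C below uses width 4 as a stand-in for a 128-bit subword-SIMD unit and plain loops in place of real SIMD instructions; it is a hypothetical illustration, not the V-ISA itself.

```c
/* Arbitrary-length "virtual ISA" form vs. its fixed-width lowering. */
#include <stdio.h>

#define TARGET_VLEN 4                 /* fixed width of the target */

/* Written once, over a vector of any length n. */
static void vadd_any(const int *a, const int *b, int *c, int n) {
    for (int i = 0; i < n; i++) c[i] = a[i] + b[i];
}

/* Translator output shape: full-width chunks plus a scalar
 * remainder, as a subword-SIMD backend would emit. */
static void vadd_lowered(const int *a, const int *b, int *c, int n) {
    int i = 0;
    for (; i + TARGET_VLEN <= n; i += TARGET_VLEN)
        for (int j = 0; j < TARGET_VLEN; j++)   /* one "SIMD" op */
            c[i + j] = a[i + j] + b[i + j];
    for (; i < n; i++) c[i] = a[i] + b[i];      /* remainder loop */
}

int main(void) {
    int a[6] = {1,2,3,4,5,6}, b[6] = {10,20,30,40,50,60}, c[6], d[6];
    vadd_any(a, b, c, 6);
    vadd_lowered(a, b, d, 6);
    for (int i = 0; i < 6; i++) printf("%d %d\n", c[i], d[i]);
    return 0;
}
```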

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper describes a Dynamic Code Management system (DCM) in a managed runtime that performs whole-program code layout optimizations to improve code locality, and proposes three new code placement algorithms that target ITLB misses, which typically have the greatest impact on performance.
Abstract: Poor code locality degrades application performance by increasing memory stalls due to instruction cache and TLB misses. This problem is a particular issue for large server applications written in languages such as Java and C# that provide just-in-time (JIT) compilation, dynamic class loading, and dynamic recompilation. However, managed runtimes also offer an opportunity to dynamically profile applications and adapt them to improve their performance. This paper describes a Dynamic Code Management system (DCM) in a managed runtime that performs whole-program code layout optimizations to improve instruction locality.

We begin by implementing the widely used Pettis-Hansen algorithm for method layout to improve code locality. Unfortunately, this algorithm is too costly for a dynamic optimization system: it is O(n^3) in the size of the call graph. For example, Pettis-Hansen requires a prohibitively expensive 35 minutes to lay out MiniBean, which has 15,586 methods. We propose three new code placement algorithms that target ITLB misses, which typically have the greatest impact on performance. The best of these algorithms, Code Tiling, groups methods into page-sized tiles by performing a depth-first traversal of the call graph based on call frequency. Excluding overhead, experimental results show that DCM with Code Tiling improves performance by 6% on the large MiniBean benchmark over a baseline that orders methods by invocation order, whereas Pettis-Hansen placement offers less improvement, 2%, over the same base. Furthermore, Code Tiling lays out MiniBean in just 0.35 seconds for 15,586 methods (6000 times faster than Pettis-Hansen), which makes it suitable for high-performance managed runtimes.
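A toy C sketch of the Code Tiling idea: walk the call graph depth-first, visiting the hottest unvisited callee next, and start a new page-sized tile whenever the current one is full, so callers and their frequent callees land on the same page. The graph, sizes, and tile capacity are made up; the real algorithm lays out actual method bodies into pages.

```c
/* Code Tiling sketch: frequency-ordered DFS into page-sized tiles. */
#include <stdio.h>

#define NMETH 6
#define TILE_BYTES 100                /* stand-in for a page */

static int size[NMETH] = { 40, 30, 50, 20, 60, 30 };
/* calls[i][j] = call frequency from method i to method j */
static int calls[NMETH][NMETH] = {
    {0, 9, 2, 0, 0, 0},
    {0, 0, 0, 8, 0, 0},
    {0, 0, 0, 0, 1, 7},
    {0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 4, 0},
};
static int visited[NMETH], tile_used, tile_id;

static void place(int m) {
    if (size[m] > TILE_BYTES - tile_used) {   /* tile full: open a new one */
        tile_id++;
        tile_used = 0;
    }
    tile_used += size[m];
    printf("method %d -> tile %d\n", m, tile_id);
}

static void dfs(int m) {
    visited[m] = 1;
    place(m);
    for (;;) {                        /* hottest unvisited callee next */
        int best = -1, best_f = 0;
        for (int j = 0; j < NMETH; j++)
            if (!visited[j] && calls[m][j] > best_f) {
                best = j;
                best_f = calls[m][j];
            }
        if (best < 0) break;
        dfs(best);
    }
}

int main(void) {
    for (int m = 0; m < NMETH; m++)   /* cover unreached methods too */
        if (!visited[m]) dfs(m);
    return 0;
}
```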

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper discusses the behavior of traditional copy-on-write implementations of checkpointing in the context of real-time systems, and shows how such implementations may, in pathological cases, seriously impair the ability of the user code to meet its deadlines.
Abstract: The progress towards programming methodologies that simplify the work of the programmer involves automating, whenever possible, activities that are secondary to the main task of designing algorithms and developing applications. Automatic memory management, using garbage collection, and automatic persistence, using checkpointing, are both examples of mechanisms that operate behind the scenes, simplifying the work of the programmer. Implementing such mechanisms in the presence of real-time constraints, however, is particularly difficult.

In this paper we review the behavior of traditional copy-on-write implementations of checkpointing in the context of real-time systems, and we show how such implementations may, in pathological cases, seriously impair the ability of the user code to meet its deadlines. We discuss the source of the problem, supply benchmarks, and discuss possible remedies. We subsequently propose a novel approach that does not rely on copy-on-write and that, while more expensive in terms of CPU time overhead, is unaffected by pathological user code. We also describe our implementation of the proposed solution, based on the Ovm RTSJ Java Virtual Machine, and we discuss our experimental results.
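The copy-on-write behavior under discussion can be sketched with a simple write barrier in C: the first mutation of each object after a checkpoint copies its old state, so a mutator that touches many objects right after a checkpoint absorbs all of the copying cost on its own critical path, which is the pathological case for deadlines. Hypothetical structure, not the Ovm implementation.

```c
/* Copy-on-write checkpoint barrier sketch. */
#include <stdio.h>
#include <string.h>

#define NOBJ 4

typedef struct { int data[8]; int dirty; } Obj;
static Obj heap[NOBJ], shadow[NOBJ];
static int copies_charged_to_mutator = 0;

static void checkpoint_begin(void) {    /* cheap: just reset flags */
    for (int i = 0; i < NOBJ; i++) heap[i].dirty = 0;
}

/* Write barrier: copy-on-first-write after a checkpoint. */
static void obj_write(int i, int slot, int v) {
    if (!heap[i].dirty) {
        memcpy(&shadow[i], &heap[i], sizeof(Obj));  /* save old state */
        heap[i].dirty = 1;
        copies_charged_to_mutator++;    /* the mutator pays, not the VM */
    }
    heap[i].data[slot] = v;
}

int main(void) {
    checkpoint_begin();
    for (int i = 0; i < NOBJ; i++)      /* touch everything at once */
        obj_write(i, 0, i);
    printf("copies on mutator's critical path: %d\n",
           copies_charged_to_mutator);
    return 0;
}
```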

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper presents the CubeVM, an interpreter architecture for an applied variant of the Pi-calculus, focusing on its operational semantics, and shows formally that the resource management model inside the VM may be greatly simplified without the need for nested stack frames.
Abstract: The Pi-calculus is a formalism for modeling and reasoning about highly concurrent and dynamic systems. Most of the expressive power of the language comes from the ability to pass communication channels among concurrent processes, like any other value. We present in this paper the CubeVM, an interpreter architecture for an applied variant of the Pi-calculus, focusing on its operational semantics. The main characteristic of the CubeVM comes from its stackless architecture. We show, in a formal way, that the resource management model inside the VM may be greatly simplified without the need for nested stack frames. This is particularly true for the garbage collection of processes and channels. The proposed GC, based on a reference counting scheme, is highly concurrent and, most interestingly, automatically detects and reclaims cycles of disabled processes. We also address the main performance issues raised by the fine-grained concurrency model of the Pi-calculus. We introduce a reactive variant of the semantics that, when applicable, increases performance drastically by bypassing the scheduler. We define the language subset of processes in so-called chain-reaction forms, for which the sequential semantics can be proved statically. We illustrate the expressive power and performance gains of such chain reactions with examples of functional, dataflow, and object-oriented systems. Encodings for the pure Pi-calculus are also demonstrated.
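The reference-counting scheme at the core of the proposed GC can be sketched in C as follows; the CubeVM's concurrent operation and its cycle detection for disabled processes are elided, and all names are hypothetical.

```c
/* Reference-counted channel reclamation sketch. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Channel {
    int refcount;
    int id;
} Channel;

static Channel *chan_new(int id) {
    Channel *c = malloc(sizeof *c);
    c->refcount = 1;                     /* creator holds a reference */
    c->id = id;
    return c;
}

/* A process acquiring the channel (e.g., received over another
 * channel, as the Pi-calculus allows) bumps the count. */
static Channel *chan_acquire(Channel *c) { c->refcount++; return c; }

static void chan_release(Channel *c) {
    if (--c->refcount == 0) {            /* last user: reclaim now */
        printf("channel %d reclaimed\n", c->id);
        free(c);
    }
}

int main(void) {
    Channel *c = chan_new(7);
    Channel *passed = chan_acquire(c);   /* sent to a second process */
    chan_release(c);                     /* first process drops it */
    chan_release(passed);                /* second drops it: freed */
    return 0;
}
```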

Proceedings ArticleDOI
14 Jun 2006
TL;DR: Both JIT optimization and garbage collection alter a program's behavior and runtime requirements, which considerably affects the adaptation of configurable hardware units and influences overall energy consumption.
Abstract: During recent years, microprocessor energy consumption has been surging, and efforts to reduce power and energy have received a lot of attention. At the same time, virtual execution environments (VEEs), such as Java virtual machines, have grown in popularity. Hence, it is important to evaluate the impact of virtual execution environments on microprocessor energy consumption. This paper characterizes the energy and power impact of two important components of VEEs: just-in-time (JIT) optimization and garbage collection. We find that, by reducing instruction counts, JIT optimization significantly reduces energy consumption, while garbage collection incurs runtime overhead that consumes more energy. Importantly, both JIT optimization and garbage collection decrease the average power dissipated by a program. Detailed analysis reveals that both the JIT optimizer and JIT-optimized code dissipate less power than un-optimized code. On the other hand, being memory-bound and exhibiting low ILP, the garbage collector dissipates less power than the application code, but rarely affects the average power of the latter.

Adaptive microarchitectures are another recent trend for energy reduction, in which microarchitectural resources can be dynamically tuned to match program runtime requirements. This research reveals that both JIT optimization and garbage collection alter a program's behavior and runtime requirements, which considerably affects the adaptation of configurable hardware units and influences the overall energy consumption. This work also demonstrates that the adaptation preferences of the two VEE services differ substantially from those of the application code. Both VEE services prefer a simple core for high energy reduction. On the other hand, the JIT optimizer usually requires larger data caches, while the garbage collector rarely benefits from large data caches. The insights gained in this paper point to novel techniques that can further reduce microprocessor energy consumption.

Proceedings Article
14 Jun 2006
TL;DR: The steering committee believes that the decision to move the submission deadline significantly earlier (and before the PLDI notification date) was the primary cause of the decline in the number of submissions, and is reconsidering this decision for future conferences.
Abstract: It is our great pleasure to welcome you to the Second International Conference on Virtual Execution Environments -- VEE'06. Interest in virtual execution environments spans the programming language, operating system, and computer architecture communities. The goal of the VEE conference is to provide a single venue that brings together researchers and practitioners from all three of these communities to share their perspectives with others interested in the various aspects of virtual execution environments. This year's VEE is co-located with PLDI 2006 in Ottawa, Ontario and, for the first time, is equally co-sponsored by ACM SIGOPS and ACM SIGPLAN. VEE'07 will be held as part of FCRC in June 2007, and we hope VEE'08 will be co-located with a leading operating systems conference.

The call for papers attracted 44 submissions, a significant decline from the 65 papers submitted to VEE'05. However, the average quality of the submissions was quite high. We believe that the decision to move the submission deadline significantly earlier (and before the PLDI notification date) was the primary cause of the decline in the number of submissions. The steering committee is reconsidering this decision for future conferences.

Every submitted paper was reviewed by three program committee members. Program committee members had the option of asking for input from additional external reviewers when preparing their reviews; 20% of the PC reviews were prepared with the help of external reviewers. The program committee met at IBM Research in Hawthorne, NY on February 14, 2006, two days after the largest snowfall in the history of New York City. As a result, only half of the committee was able to travel to New York and attend the meeting in person; however, all of the committee members who were not physically present participated in the meeting via an all-day teleconference. The committee accepted 17 excellent papers that cover a wide spectrum of topics related to virtual execution environments. In addition to the technical papers, the program includes keynote talks by James Larus and Leendert van Doorn.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper presents Dimension, a flexible tool that provides instrumentation services for a variety of VEEs; to the authors' knowledge, it is the first stand-alone instrumentation tool specially designed for use in VEEs.
Abstract: Translation-based virtual execution environments (VEEs) are becoming increasingly popular because of their usefulness. With dynamic translation, a program in a VEE has two binaries: an input source binary and a dynamically generated target binary. Program analysis is important for these binaries, and both the developers and users of VEEs need an instrumentation system to customize program analysis tools. However, existing instrumentation systems for use in VEEs have two drawbacks. First, they are tightly bound to a specific VEE and thus are difficult to reuse without significant effort. Second, most of them cannot support instrumentation on both the source and target binaries.

This paper presents Dimension, a flexible tool that provides instrumentation services for a variety of VEEs. To our knowledge, it is the first stand-alone instrumentation tool specially designed for use in VEEs. Given an instrumentation specification, Dimension can be used by a VEE to provide customized instrumentation, enabling analyses on both the source and target binaries.

We present two case studies demonstrating that Dimension can be reused easily by different VEEs. We experiment with the two cases and show that the same instrumentation provided by Dimension does not lose efficiency compared to a manual implementation for that particular VEE (the average performance difference is within 2%). We also illustrate that, by interfacing with a special VEE that has the same source and target binary formats, Dimension can be used to build an efficient dynamic instrumentation system for traditional execution environments.
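The decoupling Dimension argues for can be pictured as the VEE exposing instrumentation hooks for both binary views, with a tool registering callbacks instead of being hard-wired into one translator. The C sketch below uses hypothetical hook names; Dimension's actual interface is driven by an instrumentation specification.

```c
/* Instrumentation-hook sketch: the VEE fires hooks, a tool plugs in. */
#include <stdio.h>

typedef void (*hook_fn)(int pc);
static hook_fn on_source_insn, on_target_insn;

/* A toy translating VEE: "translates" each source instruction to one
 * target instruction and fires both hooks if a tool installed them. */
static void vee_run(int n) {
    for (int pc = 0; pc < n; pc++) {
        if (on_source_insn) on_source_insn(pc);   /* source-binary view */
        int tpc = pc;                             /* stand-in translation */
        if (on_target_insn) on_target_insn(tpc);  /* target-binary view */
    }
}

/* The instrumentation "tool": counts instructions in both views. */
static int src_count, tgt_count;
static void count_src(int pc) { (void)pc; src_count++; }
static void count_tgt(int pc) { (void)pc; tgt_count++; }

int main(void) {
    on_source_insn = count_src;      /* plug the tool into the VEE */
    on_target_insn = count_tgt;
    vee_run(5);
    printf("source insns: %d, target insns: %d\n", src_count, tgt_count);
    return 0;
}
```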

Proceedings ArticleDOI
James R. Larus
14 Jun 2006
TL;DR: This talk will describe Singularity and then explain why conventional runtime systems, such as the JVM and CLR, should go away, like punch cards, teletypes, time sharing, etc.
Abstract: Singularity [1] is a research project in Microsoft Research that started with a question: what would a software platform look like if it were designed from scratch with the primary goal of dependability? Singularity is working to answer this question by building on advances in programming languages and tools to develop a new system architecture and operating system (named Singularity), with the aim of producing a more robust and dependable software platform.

Singularity made some design decisions that distinguish it from other systems. First, Singularity is written, for the most part, in safe, managed code, and it will only run verifiably safe programs. Second, the system is the runtime; there is no separate JVM or CLR. Third, each process's execution environment is independent, with its own distinct runtime, garbage collector, and libraries. As a consequence, Singularity uses control of the execution environment as a mechanism to enforce system policy and enhance system dependability.

This talk will describe Singularity and then explain why conventional runtime systems, such as the JVM and CLR, should go away, like punch cards, teletypes, time sharing, etc.