Proceedings ArticleDOI

Dhara: A Service Abstraction-Based OS Kernel Design Model

TL;DR: A new kernel design model, Dhara, is presented that raises the level of abstraction from objects and procedures to services and paves the way for building a new distributed OS kernel.
Abstract: Traditional procedural operating system (OS) kernels sacrifice maintainability and understandability for optimum performance. Though object-oriented (OO) kernels can address these problems to a certain extent, they lack a layered approach of services and service compositions. We present a new kernel design model, Dhara, that raises the level of abstraction from objects and procedures to services. The service model of Dhara is richer in abstractions than the current web service model and paves the way for building a new distributed OS kernel. Dhara conceives an OS as being constructed from multiple stacks of services, each containing several layers of abstracted services. A key research challenge we envisage in building such a model is the automatic composition of kernel services that can provide the desired QoS. A kernel built using Dhara can easily be customized with composed services to derive optimal performance for different applications such as databases. As a case study, we developed a prototype by applying the design concepts of Dhara to the Linux kernel. We show that the overhead of implementing Dhara is 5% to 15%, which is reasonable considering the advantages of the new design and the increased capacity of recent hardware.
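
To make the layered-service idea concrete, the following minimal C sketch shows what a composable kernel-service descriptor might look like. All names here (kservice, qos_req, file_read, and so on) are hypothetical illustrations under our reading of the abstract, not the paper's actual interface.

/* Hypothetical sketch of a layered kernel-service model in the spirit of
 * Dhara: every service exposes a uniform descriptor, and higher-level
 * services are compositions of lower-level ones, with a QoS requirement
 * propagated down the stack.  All names and fields are illustrative and
 * not taken from the paper. */
#include <stdio.h>
#include <stddef.h>

struct qos_req { unsigned max_latency_us; };     /* desired quality of service */

struct kservice {
    const char       *name;
    int             (*invoke)(struct kservice *self, struct qos_req *qos);
    struct kservice **deps;                      /* lower-layer services        */
    size_t            ndeps;
};

/* A leaf service would do real work; here it just reports itself. */
static int leaf_invoke(struct kservice *self, struct qos_req *qos)
{
    printf("  %s (<= %u us)\n", self->name, qos->max_latency_us);
    return 0;
}

/* A composed service delegates to its dependencies in order. */
static int composed_invoke(struct kservice *self, struct qos_req *qos)
{
    for (size_t i = 0; i < self->ndeps; i++) {
        int rc = self->deps[i]->invoke(self->deps[i], qos);
        if (rc != 0)
            return rc;                           /* abort composition on error */
    }
    return 0;
}

int main(void)
{
    struct kservice block  = { "block_io",   leaf_invoke };
    struct kservice cache  = { "page_cache", leaf_invoke };
    struct kservice *layers[] = { &cache, &block };
    struct kservice file_read = { "file_read", composed_invoke, layers, 2 };

    struct qos_req qos = { .max_latency_us = 500 };
    printf("invoking %s:\n", file_read.name);
    return file_read.invoke(&file_read, &qos);
}

The point of this shape is that a database-tuned kernel could swap the page_cache dependency for a variant with a different replacement policy without changing any caller above it, which is the kind of customization the abstract alludes to.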
References
Proceedings ArticleDOI
03 Dec 1995
TL;DR: The prototype exokernel system implemented here is at least five times faster on operations such as exception dispatching and interprocess communication, and allows applications to control machine resources in ways not possible in traditional operating systems.
Abstract: Traditional operating systems limit the performance, flexibility, and functionality of applications by fixing the interface and implementation of operating system abstractions such as interprocess communication and virtual memory. The exokernel operating system architecture addresses this problem by providing application-level management of physical resources. In the exokernel architecture, a small kernel securely exports all hardware resources through a low-level interface to untrusted library operating systems. Library operating systems use this interface to implement system objects and policies. This separation of resource protection from management allows application-specific customization of traditional operating system abstractions by extending, specializing, or even replacing libraries. We have implemented a prototype exokernel operating system. Measurements show that most primitive kernel operations (such as exception handling and protected control transfer) are ten to 100 times faster than in Ultrix, a mature monolithic UNIX operating system. In addition, we demonstrate that an exokernel allows applications to control machine resources in ways not possible in traditional operating systems. For instance, virtual memory and interprocess communication abstractions are implemented entirely within an application-level library. Measurements show that application-level virtual memory and interprocess communication primitives are five to 40 times faster than Ultrix's kernel primitives. Compared to state-of-the-art implementations from the literature, the prototype exokernel system is at least five times faster on operations such as exception dispatching and interprocess communication.
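
As a rough illustration of the protection/management split described above, the following user-space toy casts the "kernel" as something that only grants and tracks ownership of raw pages, while a "library OS" layers its own allocation policy on top. The exo_* and libos_* names are invented for this sketch and are not the exokernel's real interface.

/* User-space sketch of the exokernel idea: the "kernel" only hands out raw
 * pages and tracks ownership; the "library OS" layered on top decides policy
 * (here, a trivial allocator standing in for application-level VM).
 * All names are illustrative; this is not the paper's actual interface. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define NPAGES    16

static char phys_mem[NPAGES][PAGE_SIZE];   /* "physical" pages        */
static int  owner[NPAGES];                 /* protection state only   */

/* kernel: secure export of a raw resource, no management policy */
void *exo_alloc_page(int pid)
{
    for (int i = 0; i < NPAGES; i++)
        if (owner[i] == 0) { owner[i] = pid; return phys_mem[i]; }
    return NULL;
}

/* library OS: application-level "virtual memory" choosing its own policy */
void *libos_alloc(int pid, size_t n)
{
    return n <= PAGE_SIZE ? exo_alloc_page(pid) : NULL; /* simplistic policy */
}

int main(void)
{
    char *p = libos_alloc(/*pid=*/1, 100);
    if (p) { p[0] = 'x'; printf("library OS got a page at %p\n", (void *)p); }
    return 0;
}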

1,309 citations


"Dhara: A Service Abstraction-Based ..." refers background in this paper

  • ...Exokernel[4] and K42[3] have previously tried to provide such extendability to the operating system, but such approaches require the programmer to be aware of the services present or to use an existing library....


Proceedings Article
22 Jan 1996
TL;DR: lmbench is a micro-benchmark suite designed to focus attention on the basic building blocks of many common system applications, such as databases, simulations, software development, and networking.
Abstract: lmbench is a micro-benchmark suite designed to focus attention on the basic building blocks of many common system applications, such as databases, simulations, software development, and networking. In almost all cases, the individual tests are the result of analysis and isolation of a customer's actual performance problem. These tools can be, and currently are, used to compare different system implementations from different vendors. In several cases, the benchmarks have uncovered previously unknown bugs and design flaws. The results have shown a strong correlation between memory system performance and overall performance. lmbench includes an extensible database of results from systems current as of late 1995.
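
For flavor, a latency micro-benchmark in the spirit of lmbench can be as small as the C sketch below, which times getppid() in a tight loop with clock_gettime(). This is far cruder than lmbench's actual methodology (warm-up, loop-overhead accounting, statistical treatment) and is shown only to illustrate the style of measurement.

/* Minimal micro-benchmark in the spirit of lmbench's latency tests:
 * time a cheap system call in a tight loop and report the mean
 * per-call latency.  A rough sketch, not lmbench's methodology. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const long iters = 1000000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        (void)getppid();                     /* the operation being measured */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("getppid: %.1f ns per call\n", ns / iters);
    return 0;
}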

990 citations

Journal ArticleDOI
01 Sep 1992
TL;DR: This work describes the design, implementation, and evaluation of a virtual memory system that provides application control of physical memory using external page-cache management, and claims that this approach can significantly improve performance for many memory-bound applications while reducing kernel complexity, without complicating other applications or reducing their performance.
Abstract: Next generation computer systems will have gigabytes of physical memory and processors in the 100 MIPS range or higher. Contrary to some conjectures, this trend requires more sophisticated memory management support for memory-bound computations such as scientific simulations and systems such as large-scale database systems, even though memory management for most programs will be less of a concern. We describe the design, implementation and evaluation of a virtual memory system that provides application control of physical memory using external page-cache management. In this approach, a sophisticated application is able to monitor and control the amount of physical memory it has available for execution, the exact contents of this memory, and the scheduling and nature of page-in and page-out using the abstraction of a physical page cache provided by the kernel. We claim that this approach can significantly improve performance for many memory-bound applications while reducing kernel complexity, yet does not complicate other applications or reduce their performance.
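
On a stock Linux kernel, the closest an ordinary application gets to this kind of control is advisory. The sketch below uses madvise(2) to tell the kernel about a sequential scan; it is a much weaker analogue of the paper's external page-cache management (which lets the application drive page-in and page-out directly), but it shows the kind of application knowledge that can reduce page faults, as the Dhara paper argues.

/* Sketch of application paging hints on Linux via madvise(2): the program
 * maps a file, declares a sequential access pattern, and scans it once.
 * This is only an advisory analogue of external page-cache management. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The application knows it will scan the file once, sequentially,
     * so it passes that knowledge down to the kernel's VM system. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    printf("checksum %ld\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}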

178 citations


"Dhara: A Service Abstraction-Based ..." refers background in this paper

  • ...For example, if the virtual memory is managed according to the recommendation of the applications[7], page faults can be reduced....


  • ...The benefits of multiple kernel policies for resource management are well known[5], [7]....


Book
02 Jan 1986
TL;DR: Several operating system services are examined with a view toward their applicability to support of database management functions.
Abstract: Several operating system services are examined with a view toward their applicability to support of database management functions. These services include buffer pool management; the file system; scheduling, process management, and interprocess communication; and consistency control.
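
A modern way to see the mismatch this survey discusses is a DBMS bypassing the kernel's page cache so it can run its own buffer pool and replacement policy. The sketch below uses Linux's O_DIRECT flag, a mechanism that postdates the survey and is used here only to illustrate the point.

/* Sketch of why databases often bypass the kernel's buffer management:
 * with O_DIRECT the DBMS reads aligned blocks straight into its own buffer
 * pool and applies its own replacement policy, instead of relying on the
 * general-purpose page cache the survey argues serves databases poorly. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK 4096                      /* alignment required by O_DIRECT */

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s dbfile\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;                          /* one frame of the DBMS buffer pool */
    if (posix_memalign(&buf, BLOCK, BLOCK) != 0) return 1;

    ssize_t n = read(fd, buf, BLOCK);   /* block lands in our pool, not the kernel's cache */
    if (n < 0) perror("read");
    else printf("read %zd bytes, bypassing the kernel page cache\n", n);

    free(buf);
    close(fd);
    return 0;
}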

172 citations


"Dhara: A Service Abstraction-Based ..." refers background in this paper

  • ...The applications may perform suboptimally under this assumption[20]....


Proceedings ArticleDOI
28 Oct 1996
TL;DR: This paper presents CPU inheritance scheduling, a novel processor scheduling framework in which arbitrary threads can act as schedulers for other threads; the framework supports processor management techniques such as processor affinity and scheduler activations.
Abstract: Traditional processor scheduling mechanisms in operating systems are fairly rigid, often supporting only one fixed scheduling policy, or, at most, a few “scheduling classes” whose implementations are closely tied together in the OS kernel. This paper presents CPU inheritance scheduling, a novel processor scheduling framework in which arbitrary threads can act as schedulers for other threads. Widely different scheduling policies can be implemented under the framework, and many different policies can coexist in a single system, providing much greater scheduling flexibility. Modular, hierarchical control can be provided over the processor utilization of arbitrary administrative domains, such as processes, jobs, users, and groups, and the CPU resources consumed can be accounted for and attributed accurately. Applications, as well as the OS, can implement customized local scheduling policies; the framework ensures that all the different policies work together logically and predictably. As a side effect, the framework also cleanly addresses priority inversion by providing a generalized form of priority inheritance that automatically works within and among diverse scheduling policies. CPU inheritance scheduling extends naturally to multiprocessors, and supports processor management techniques such as processor affinity[29] and scheduler activations[3]. We show that this flexibility can be provided with acceptable overhead in typical environments, depending on factors such as context switch speed and frequency.
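
The core idea, arbitrary threads acting as schedulers for other threads, can be pictured as a dispatch walk down a donation hierarchy. The toy C code below is an invented illustration of that structure (a root scheduler whose clients include another scheduler), not the paper's implementation.

/* Toy sketch of CPU inheritance scheduling's core idea: a scheduler is just
 * a thread that donates the CPU to one of its clients, and clients may
 * themselves be schedulers, forming a hierarchy.  dispatch() walks the tree
 * from the root scheduler to a runnable leaf.  Names and the round-robin
 * choice are illustrative only. */
#include <stdio.h>
#include <stddef.h>

struct thread {
    const char     *name;
    struct thread **clients;   /* non-NULL => this thread acts as a scheduler */
    size_t          nclients;
    size_t          next;      /* round-robin cursor for this scheduler       */
};

/* Follow donations down the hierarchy until we reach an ordinary thread. */
static struct thread *dispatch(struct thread *t)
{
    while (t->nclients > 0) {
        struct thread *chosen = t->clients[t->next];
        t->next = (t->next + 1) % t->nclients;   /* this scheduler's policy  */
        t = chosen;                              /* donate CPU to the client */
    }
    return t;
}

int main(void)
{
    struct thread a = { "editor" }, b = { "compiler" }, c = { "daemon" };
    struct thread *user_jobs[] = { &a, &b };
    struct thread user_sched = { "user-RR", user_jobs, 2 };
    struct thread *root_jobs[] = { &user_sched, &c };
    struct thread root = { "root-RR", root_jobs, 2 };

    for (int i = 0; i < 4; i++)
        printf("run: %s\n", dispatch(&root)->name);
    return 0;
}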

148 citations


"Dhara: A Service Abstraction-Based ..." refers background in this paper

  • ...Loadable Kernel Modules (LKM), Virtual File System (VFS), hierarchical scheduler[6], etc....
