Author

Adam H. Leventhal

Other affiliations: Oracle Corporation, Sun Microsystems
Bio: Adam H. Leventhal is an academic researcher from Business International Corporation. The author has contributed to research in topics: Computer data storage & Flash memory. The author has an h-index of 7 and has co-authored 12 publications receiving 625 citations. Previous affiliations of Adam H. Leventhal include Oracle Corporation & Sun Microsystems.

Papers
Proceedings Article
27 Jun 2004
TL;DR: DTrace can dynamically instrument both user-level and kernel-level software in a unified and absolutely safe fashion, and provides a C-like high-level control language to describe the predicates and actions at a given point of instrumentation.
Abstract: This paper presents DTrace, a new facility for dynamic instrumentation of production systems. DTrace features the ability to dynamically instrument both user-level and kernel-level software in a unified and absolutely safe fashion. When not explicitly enabled, DTrace has zero probe effect--the system operates exactly as if DTrace were not present at all. DTrace allows for many tens of thousands of instrumentation points, with even the smallest of systems offering on the order of 30,000 such points in the kernel alone. We have developed a C-like high-level control language to describe the predicates and actions at a given point of instrumentation. The language features user-defined variables, including thread-local variables and associative arrays. To eliminate the need for most postprocessing, the facility features a scalable mechanism for aggregating data and a mechanism for speculative tracing. DTrace has been integrated into the Solaris operating system and has been used to find serious systemic performance problems on production systems--problems that could not be found using pre-existing facilities.
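
To make the language concrete, here is a minimal D sketch (illustrative, not from the paper; the probe names follow the Solaris syscall provider) that uses a predicate, a thread-local variable, and an aggregation to total the time each program spends in write(2):

    /* Total time spent in write(2), per program. */
    syscall::write:entry
    /execname != "dtrace"/                  /* predicate: ignore dtrace itself */
    {
        self->ts = timestamp;               /* thread-local variable */
    }

    syscall::write:return
    /self->ts/
    {
        @t[execname] = sum(timestamp - self->ts);   /* in-kernel aggregation */
        self->ts = 0;
    }

Run with dtrace -s write.d; because the data is aggregated in the kernel, no postprocessing pass is needed.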

512 citations

Journal ArticleDOI
TL;DR: The past few years have been an exciting time for flash memory: the cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications.
Abstract: The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded and continues to expand, especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.

36 citations

Patent
25 Jun 2010
TL;DR: In this paper, the authors present a computer-readable storage medium comprising software instructions that obtain a first non-optional Input/Output (I/O) request from an I/O queue, determine that a second non-optional I/O request and an optional I/O request are adjacent to it, generate a new data payload in which the optional payload is interposed between the two non-optional payloads, and issue the combined request so that it is written to a contiguous location in the storage pool.
Abstract: A computer readable storage medium comprising software instructions, which when executed by a processor, perform a method, the method including obtaining a first non-optional Input/Output (I/O) request from an I/O queue, determining that a second non-optional I/O request and an optional I/O request are adjacent to the first non-optional I/O request, generating a new data payload using a first data payload from the first non-optional I/O request, a second data payload for the second non-optional I/O request, and a third data payload corresponding to the optional I/O request, wherein the third data payload is interposed between the first data payload and the second data payload, generating a new non-optional I/O request comprising the new data payload, and issuing the new non-optional I/O request to a storage pool, wherein the new data payload is written to a contiguous storage location in the storage pool.
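
In outline, the claim describes write coalescing: an optional (deferrable) write that happens to sit between two required writes is folded in, so a single contiguous I/O replaces two required writes separated by a gap. A rough C++ sketch of the idea follows; the request layout is hypothetical, as the patent does not prescribe one:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct IoRequest {
        uint64_t offset;                 // target offset in the storage pool
        std::vector<uint8_t> payload;    // data to write
        bool optional;                   // deferrable write, e.g. a prefetch
    };

    // Merge [first][opt][second] into one non-optional request when the three
    // payloads are adjacent on disk.
    IoRequest coalesce(const IoRequest& first, const IoRequest& opt,
                       const IoRequest& second) {
        assert(first.offset + first.payload.size() == opt.offset);
        assert(opt.offset + opt.payload.size() == second.offset);
        IoRequest merged{first.offset, first.payload, /*optional=*/false};
        merged.payload.insert(merged.payload.end(),
                              opt.payload.begin(), opt.payload.end());
        merged.payload.insert(merged.payload.end(),
                              second.payload.begin(), second.payload.end());
        return merged;
    }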

17 citations

Journal ArticleDOI
TL;DR: Recent trends in hard drives show that triple-parity RAID must soon become pervasive, and the incredible growth of hard-drive capacities could impose serious limitations on the reliability even of RAID-6 systems.
Abstract: How much longer will current RAID techniques persevere? The RAID levels were codified in the late 1980s; double-parity RAID, known as RAID-6, is the current standard for high-availability, space-efficient storage. The incredible growth of hard-drive capacities, however, could impose serious limitations on the reliability even of RAID-6 systems. Recent trends in hard drives show that triple-parity RAID must soon become pervasive. In 2005, Scientific American reported on Kryder’s law, which predicts that hard-drive density will double annually. While the rate of doubling has not quite maintained that pace, it has been close.
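
The reliability argument is easiest to see from how parity reconstruction works. The sketch below (illustrative, not from the article) shows RAID-5-style single parity, where P is the XOR of the data blocks and any one lost block is recoverable; RAID-6 adds a second, independent Reed-Solomon syndrome, and triple-parity RAID a third, which is what survives an extra failure during an ever-longer rebuild:

    #include <array>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t kDisks = 4;    // data disks in the stripe
    constexpr std::size_t kBlock = 512;  // bytes per block
    using Block = std::array<uint8_t, kBlock>;

    // Single parity: P = d0 ^ d1 ^ ... ^ d(n-1).
    Block parity(const std::array<Block, kDisks>& data) {
        Block p{};                       // zero-initialized
        for (const Block& d : data)
            for (std::size_t i = 0; i < kBlock; ++i)
                p[i] ^= d[i];
        return p;
    }

    // Rebuild one lost block by XORing the parity with the survivors.
    Block rebuild(const std::array<Block, kDisks>& data, std::size_t lost,
                  const Block& p) {
        Block out = p;
        for (std::size_t d = 0; d < kDisks; ++d)
            if (d != lost)
                for (std::size_t i = 0; i < kBlock; ++i)
                    out[i] ^= data[d][i];
        return out;
    }

Because capacity has grown much faster than disk throughput, rebuilding a failed drive takes ever longer, widening the window in which one more failure is fatal; that is the case the article makes for a third parity device.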

17 citations

Patent
30 Nov 2006
TL;DR: In this article, a method for emulating a system call is described: a first process in a first operating system (OS), where the first OS is emulated in a second OS, makes the system call to interact with a second process and spawns an agent process that is a child of the first process.
Abstract: A method for emulating a system call includes making the system call by a first process in a first operating system (OS) for interacting with a second process, wherein the first OS is emulated in a second OS, spawning an agent process, wherein the agent process is a child process of the first process, implementing a functionality of the system call using a general mechanism in the second OS between the agent process and the second process, passing a result associated with the system call from the second process to the agent process using the general mechanism, and relaying the result from the agent process to the first process using a system call in the second OS, wherein the result is stored by the first process.
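
A rough C++ sketch of the agent pattern (hypothetical; a pipe stands in here for whatever "general mechanism" the second OS provides):

    #include <sys/wait.h>
    #include <unistd.h>

    // The first process cannot perform the emulated call directly, so it
    // spawns an agent (a child process), which interacts with the second
    // process via a mechanism native to the host OS and relays the result.
    int emulate_syscall(int peer_fd /* channel to the second process */) {
        int relay[2];
        if (pipe(relay) != 0)
            return -1;
        pid_t agent = fork();
        if (agent == 0) {                // agent: child of the first process
            close(relay[0]);
            int result = 0;
            (void)peer_fd;               // ... real interaction happens here ...
            write(relay[1], &result, sizeof result);   // pass result upstream
            _exit(0);
        }
        close(relay[1]);
        int result = -1;
        read(relay[0], &result, sizeof result);        // relayed result
        close(relay[0]);
        waitpid(agent, nullptr, 0);
        return result;                   // stored by the first process
    }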

14 citations


Cited by
Journal ArticleDOI
12 Jun 2005
TL;DR: Pin's goals are to provide easy-to-use, portable, transparent, and efficient instrumentation; to illustrate Pin's versatility, two Pintools in daily use to analyze production software are described.
Abstract: Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C/C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set. The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures. However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website.
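
For flavor, here is a minimal Pintool in the style of Pin's stock examples (a sketch; it must be built against the Pin kit) that counts executed basic blocks, the same workload used in the Valgrind and DynamoRIO comparison above:

    #include <iostream>
    #include "pin.H"

    static UINT64 bblCount = 0;

    static VOID CountBbl() { bblCount++; }        // analysis routine

    // Instrumentation routine: insert a call before every basic block.
    static VOID Trace(TRACE trace, VOID*) {
        for (BBL bbl = TRACE_BblHead(trace); BBL_Valid(bbl); bbl = BBL_Next(bbl))
            BBL_InsertCall(bbl, IPOINT_BEFORE, (AFUNPTR)CountBbl, IARG_END);
    }

    static VOID Fini(INT32, VOID*) {
        std::cerr << "basic blocks executed: " << bblCount << std::endl;
    }

    int main(int argc, char* argv[]) {
        if (PIN_Init(argc, argv)) return 1;       // parse Pin's command line
        TRACE_AddInstrumentFunction(Trace, nullptr);
        PIN_AddFiniFunction(Fini, nullptr);
        PIN_StartProgram();                       // never returns
        return 0;
    }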

4,019 citations

Journal Article
TL;DR: AspectJ as mentioned in this paper is a simple and practical aspect-oriented extension to Java; with just a few new constructs, it provides support for modular implementation of a range of crosscutting concerns.
Abstract: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand.
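
To ground the terminology, a small illustrative aspect (the Account class is hypothetical): the pointcut picks out a set of join points, and the before() advice runs at each of them:

    // Crosscutting logging, kept out of Account itself.
    aspect WithdrawLogging {
        pointcut withdrawal(double amount):
            call(void Account.withdraw(double)) && args(amount);

        before(double amount): withdrawal(amount) {
            System.out.println("withdrawing " + amount);
        }
    }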

2,947 citations

Proceedings Article
01 Jan 2011
TL;DR: Starfish is introduced, a self-tuning system for big data analytics that builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoop.
Abstract: Timely and cost-effective analytics over “Big Data” is now a key ingredient for success in many businesses, scientific and engineering disciplines, and government endeavors. The Hadoop software stack—which consists of an extensible MapReduce execution engine, pluggable distributed storage engines, and a range of procedural to declarative interfaces—is a popular choice for big data analytics. Most practitioners of big data analytics—like computational scientists, systems researchers, and business analysts—lack the expertise to tune the system to get good performance. Unfortunately, Hadoop’s performance out of the box leaves much to be desired, leading to suboptimal use of resources, time, and money (in pay-as-you-go clouds). We introduce Starfish, a self-tuning system for big data analytics. Starfish builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoop. While Starfish’s system architecture is guided by work on self-tuning database systems, we discuss how new analysis practices over big data pose new challenges, leading us to different design choices in Starfish.

663 citations

Proceedings Article
06 Dec 2004
TL;DR: This paper describes and evaluates the capability of Magpie to accurately extract requests and construct representative models of system behaviour, yielding concise workload models suitable for performance prediction and change detection.
Abstract: Tools to understand complex system behaviour are essential for many performance analysis and debugging tasks, yet there are many open research problems in their development. Magpie is a toolchain for automatically extracting a system's workload under realistic operating conditions. Using low-overhead instrumentation, we monitor the system to record fine-grained events generated by kernel, middleware and application components. The Magpie request extraction tool uses an application-specific event schema to correlate these events, and hence precisely capture the control flow and resource consumption of each and every request. By removing scheduling artefacts, whilst preserving causal dependencies, we obtain canonical request descriptions from which we can construct concise workload models suitable for performance prediction and change detection. In this paper we describe and evaluate the capability of Magpie to accurately extract requests and construct representative models of system behaviour.

639 citations

Proceedings ArticleDOI
17 May 2009
TL;DR: The Native Client project as mentioned in this paper is a sandbox for untrusted x86 native code that uses software fault isolation and a secure runtime to direct system interaction and side effects through interfaces managed by Native Client.
Abstract: This paper describes the design, implementation and evaluation of Native Client, a sandbox for untrusted x86 native code. Native Client aims to give browser-based applications the computational performance of native applications without compromising safety. Native Client uses software fault isolation and a secure runtime to direct system interaction and side effects through interfaces managed by Native Client. Native Client provides operating system portability for binary code while supporting performance-oriented features generally absent from web application programming environments, such as thread support, instruction set extensions such as SSE, and use of compiler intrinsics and hand-coded assembler. We combine these properties in an open architecture that encourages community review and 3rd-party tools.
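
For a sense of the underlying technique, the textbook software-fault-isolation idea can be sketched in a few lines of C++; this is the classic address-masking form, not Native Client's actual mechanism, which pairs a static code validator with x86 segmented memory:

    #include <cstdint>

    constexpr uintptr_t kSandboxBase = 0x10000000;  // hypothetical, assumed mapped
    constexpr uintptr_t kSandboxMask = 0x0FFFFFFF;  // 256 MiB region

    // Force every untrusted store into the sandbox: even a corrupted pointer
    // cannot address memory outside [kSandboxBase, kSandboxBase + 256 MiB).
    inline void sandboxed_store(uintptr_t addr, uint32_t value) {
        uintptr_t safe = kSandboxBase | (addr & kSandboxMask);
        *reinterpret_cast<uint32_t*>(safe) = value;
    }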

560 citations