Author

Bryan M. Cantrill

Other affiliations: Oracle Corporation
Bio: Bryan M. Cantrill is an academic researcher from Sun Microsystems. The author has contributed to research in the topics Object (computer science) and Tracing. The author has an h-index of 14 and has co-authored 41 publications receiving 1,216 citations. Previous affiliations of Bryan M. Cantrill include Oracle Corporation.

Papers
Proceedings Article
27 Jun 2004
TL;DR: DTrace can dynamically instrument both user-level and kernel-level software in a unified and absolutely safe fashion, and provides a C-like high-level control language for describing the predicates and actions at a given point of instrumentation.
Abstract: This paper presents DTrace, a new facility for dynamic instrumentation of production systems. DTrace features the ability to dynamically instrument both user-level and kernel-level software in a unified and absolutely safe fashion. When not explicitly enabled, DTrace has zero probe effect--the system operates exactly as if DTrace were not present at all. DTrace allows for many tens of thousands of instrumentation points, with even the smallest of systems offering on the order of 30,000 such points in the kernel alone. We have developed a C-like high-level control language to describe the predicates and actions at a given point of instrumentation. The language features user-defined variables, including thread-local variables and associative arrays. To eliminate the need for most postprocessing, the facility features a scalable mechanism for aggregating data and a mechanism for speculative tracing. DTrace has been integrated into the Solaris operating system and has been used to find serious systemic performance problems on production systems--problems that could not be found using pre-existing facilities.

512 citations
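
A note on the mechanism: a D clause pairs a probe specification with an optional predicate that gates an action, and in-kernel aggregations summarize data so that little postprocessing is needed. The sketch below is a loose conceptual model in Python only, not D and not any DTrace interface; the event records and field names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical event stream standing in for firing probes; in DTrace these
# would be real kernel/user instrumentation points, not a Python list.
events = [
    {"probe": "syscall::read:entry", "execname": "bash", "size": 512},
    {"probe": "syscall::read:entry", "execname": "cat",  "size": 4096},
    {"probe": "syscall::read:entry", "execname": "bash", "size": 128},
]

# Aggregation keyed by process name, analogous in spirit to a D aggregation
# such as @reads[execname] = count().
reads = defaultdict(int)

for e in events:
    # Predicate: only act when the probe fires for a process we care about,
    # analogous to a D predicate clause like /execname != "cat"/.
    if e["execname"] != "cat":
        # Action: update the aggregation rather than logging raw records,
        # which is what lets DTrace skip most postprocessing.
        reads[e["execname"]] += 1

print(dict(reads))  # {'bash': 2}
```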

Patent
29 Dec 2011
TL;DR: In this patent, the authors present methods for generating heat maps of event data: instances are gathered according to a performance characteristic, discretely decomposed under at least one constraint, assigned a hue tied to that constraint, and rendered as a heat map.
Abstract: Systems, methods, and media for generating heat maps of event data are provided herein. Methods may include gathering instances of event data according to a performance characteristic, discretely decomposing the instances by applying at least one constraint to the instances, assigning a hue to each instance, the hue being associated with the at least one constraint, and generating a heat map that includes representations of the instances, wherein each representation includes the hue associated with the at least one constraint to which the instance has been assigned.

72 citations
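
The claimed flow, concretely: gather event instances for one performance characteristic, discretely decompose them under one or more constraints, tie a hue to each constraint, and emit the colored representations. A minimal sketch of that flow, with the latency bands and hue values invented for illustration:

```python
from collections import Counter

# Hypothetical latency samples (ms) for one performance characteristic.
samples = [3, 7, 12, 48, 51, 5, 110, 9]

# One constraint per latency band, each paired with an invented hue (degrees).
constraints = [
    ("fast",   lambda ms: ms < 10,        120),  # green
    ("medium", lambda ms: 10 <= ms < 50,   60),  # yellow
    ("slow",   lambda ms: ms >= 50,         0),  # red
]

# Discretely decompose the instances: count how many fall under each constraint.
counts = Counter()
for ms in samples:
    for name, matches, hue in constraints:
        if matches(ms):
            counts[(name, hue)] += 1
            break

# A "heat map" row: each cell carries its constraint's hue and an intensity.
for (name, hue), n in counts.items():
    print(f"{name:6s} hue={hue:3d} intensity={n}")
```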

Journal ArticleDOI
TL;DR: What does the proliferation of concurrency mean for the software you develop?
Abstract: In this look at how concurrency affects practitioners in the real world, Cantrill and Bonwick argue that much of the anxiety over concurrency is unwarranted.

66 citations

Patent
15 Mar 2013
TL;DR: In this patent, the authors propose a versioning scheme for compute-centric object stores: the metadata of a first object is stored in the object store on a first path, and a copy-on-write link between the first path and a second path is established for the metadata clone.
Abstract: Versioning schemes for compute-centric object stores are provided herein. An exemplary method may include creating a metadata clone of a first object within an object store via a versioning scheme module, the metadata of the first object being stored in the object store on a first path, establishing a copy-on-write link between the first path and a second path for the first object via the versioning scheme module, and storing the cloned metadata on the second path via the versioning scheme module.

62 citations
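
The key idea is that creating a version costs one copy-on-write link rather than a full copy: the clone's metadata path falls through to the source path on reads until the clone is first written. A rough Python sketch of that behavior, with all names invented (this is not the patented implementation):

```python
class MetadataStore:
    """Toy object-store metadata with copy-on-write clone links."""

    def __init__(self):
        self.metadata = {}   # path -> metadata dict
        self.cow_links = {}  # clone path -> source path

    def clone(self, src_path, dst_path):
        # Creating a version costs one link, not a full metadata copy.
        self.cow_links[dst_path] = src_path

    def read(self, path):
        # Reads on a clone fall through to the source until its first write.
        while path in self.cow_links and path not in self.metadata:
            path = self.cow_links[path]
        return self.metadata[path]

    def write(self, path, meta):
        # First write to a clone materializes its own copy, breaking sharing.
        self.metadata[path] = meta


store = MetadataStore()
store.write("/v1/obj", {"size": 42})
store.clone("/v1/obj", "/v2/obj")
print(store.read("/v2/obj"))          # {'size': 42}, shared with /v1/obj
store.write("/v2/obj", {"size": 99})  # copy on write: /v2 diverges
print(store.read("/v1/obj"))          # {'size': 42}, unchanged
```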

Patent
25 Oct 2013
TL;DR: In this patent, the authors propose a method for providing a compute-centric object store: a request to perform a compute operation on at least a portion of the object store is received from a first user, the request identifying parameters of the compute operation, and virtual operating system containers from a pool are assigned to the objects of the object store.
Abstract: Systems and methods for providing a compute-centric object store. An exemplary method may include receiving a request to perform a compute operation on at least a portion of an object store from a first user, the request identifying parameters of the compute operation, and assigning virtual operating system containers to the objects of the object store from a pool of virtual operating system containers. The virtual operating system containers may perform the compute operation on the objects according to the identified parameters of the request. The method may also include clearing the virtual operating system containers and returning the virtual operating system containers to the pool.

55 citations
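
In outline: containers are drawn from a pool, attached to the objects named in the request, run the requested operation next to the data, then are cleared and recycled. A toy sketch of that lifecycle, with hypothetical names throughout:

```python
class ContainerPool:
    """Toy pool of reusable 'virtual OS containers' (names hypothetical)."""

    def __init__(self, size):
        self.free = [f"container-{i}" for i in range(size)]

    def run(self, objects, operation):
        results = {}
        for obj in objects:
            container = self.free.pop()      # assign a container to the object
            results[obj] = operation(obj)    # compute runs next to the data
            # Clear per-request state, then return the container to the pool.
            self.free.append(container)
        return results


pool = ContainerPool(size=4)
# The request's parameters identify the operation; here, a trivial one.
print(pool.run(["obj/a.txt", "obj/b.txt"], operation=lambda o: len(o.split("/"))))
```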


Cited by
Journal ArticleDOI
12 Jun 2005
TL;DR: The goals are to provide easy-to-use, portable, transparent, and efficient instrumentation, and to illustrate Pin's versatility, two Pintools in daily use to analyze production software are described.
Abstract: Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C/C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set. The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures. However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website.

4,019 citations
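
Real Pintools are C/C++ plugins built against Pin's API, which is not shown here. As a language-agnostic illustration of the instrumentation model only (attach analysis code at program points while leaving observable behavior unchanged), a toy Python wrapper can count calls much as a basic-block-counting Pintool counts blocks:

```python
import functools

call_counts = {}

def instrument(fn):
    """Attach analysis code (a counter) to an existing routine while leaving
    the routine's observable behavior unchanged -- the transparency Pin aims for."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] = call_counts.get(fn.__name__, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

@instrument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(call_counts)  # {'fib': 177}
```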

Journal Article
TL;DR: AspectJ, as described in this paper, is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns.
Abstract: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand.

2,947 citations
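
AspectJ's model: pointcuts select join points, advice runs at them, and aspects package both. Python has no join point model, but as a rough analogy only (this is not AspectJ), a decorator can weave before-advice across every public method of a class, mimicking a crosscutting logging aspect:

```python
def logging_aspect(cls):
    """Crudely mimic a pointcut ('all public methods of cls') plus
    before-advice (log the call). In AspectJ this would live in a
    separate aspect and be woven in at compile time."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def make_advised(method, mname):
                def advised(self, *args, **kwargs):
                    print(f"before: {cls.__name__}.{mname}{args}")  # before-advice
                    return method(self, *args, **kwargs)
                return advised
            setattr(cls, name, make_advised(attr, name))
    return cls

@logging_aspect
class Account:
    def deposit(self, amount):
        return f"deposited {amount}"

print(Account().deposit(100))
# before: Account.deposit(100,)
# deposited 100
```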

01 Jan 2004

2,223 citations

Proceedings Article
01 Jan 2011
TL;DR: Starfish is introduced, a self-tuning system for big data analytics that builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoops.
Abstract: Timely and cost-effective analytics over “Big Data” is now a key ingredient for success in many businesses, scientific and engineering disciplines, and government endeavors. The Hadoop software stack—which consists of an extensible MapReduce execution engine, pluggable distributed storage engines, and a range of procedural to declarative interfaces—is a popular choice for big data analytics. Most practitioners of big data analytics—like computational scientists, systems researchers, and business analysts—lack the expertise to tune the system to get good performance. Unfortunately, Hadoop’s performance out of the box leaves much to be desired, leading to suboptimal use of resources, time, and money (in pay-as-you-go clouds). We introduce Starfish, a self-tuning system for big data analytics. Starfish builds on Hadoop while adapting to user needs and system workloads to provide good performance automatically, without any need for users to understand and manipulate the many tuning knobs in Hadoop. While Starfish’s system architecture is guided by work on self-tuning database systems, we discuss how new analysis practices over big data pose new challenges, leading us to different design choices in Starfish.

663 citations
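
"Self-tuning" here means the system, not the user, searches the configuration space against a cost model fit to the observed workload. A toy sketch of that idea; the knobs and the cost model below are entirely invented and merely stand in for Starfish's job profiles:

```python
# Invented knob space standing in for Hadoop's many tuning parameters.
knob_space = [
    {"reducers": r, "io_sort_mb": m}
    for r in (4, 8, 16)
    for m in (100, 200)
]

def predicted_runtime(knobs, workload_gb):
    """Hypothetical cost model: more reducers cut shuffle time but add
    scheduling overhead; larger sort buffers reduce spill cost."""
    shuffle = workload_gb * 60 / knobs["reducers"]
    spill = workload_gb * 512 / knobs["io_sort_mb"]
    overhead = 2 * knobs["reducers"]
    return shuffle + spill + overhead

# The tuner, not the user, picks the configuration for this workload.
best = min(knob_space, key=lambda k: predicted_runtime(k, workload_gb=50))
print(best, round(predicted_runtime(best, 50), 1))
```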

Proceedings Article
06 Dec 2004
TL;DR: This paper describes and evaluates the capability of Magpie to accurately extract requests and construct representative models of system behaviour, and constructs concise workload models suitable for performance prediction and change detection.
Abstract: Tools to understand complex system behaviour are essential for many performance analysis and debugging tasks, yet there are many open research problems in their development. Magpie is a toolchain for automatically extracting a system's workload under realistic operating conditions. Using low-overhead instrumentation, we monitor the system to record fine-grained events generated by kernel, middleware and application components. The Magpie request extraction tool uses an application-specific event schema to correlate these events, and hence precisely capture the control flow and resource consumption of each and every request. By removing scheduling artefacts, whilst preserving causal dependencies, we obtain canonical request descriptions from which we can construct concise workload models suitable for performance prediction and change detection. In this paper we describe and evaluate the capability of Magpie to accurately extract requests and construct representative models of system behaviour.

639 citations
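
The core move is schema-driven correlation: fine-grained events from kernel, middleware, and application components are joined into per-request traces on keys the event schema names, and resource consumption is summed per request. A toy sketch with invented event fields:

```python
from collections import defaultdict

# Hypothetical fine-grained events from kernel, middleware, and application,
# each carrying a correlation key and a resource cost, as a schema might name.
events = [
    {"component": "kernel",     "conn_id": 1, "cpu_ms": 2},
    {"component": "middleware", "conn_id": 1, "cpu_ms": 5},
    {"component": "app",        "conn_id": 2, "cpu_ms": 9},
    {"component": "kernel",     "conn_id": 2, "cpu_ms": 1},
]

# Correlate events into requests on the schema's join key (here conn_id),
# then sum per-request resource consumption for workload modelling.
requests = defaultdict(lambda: {"events": [], "cpu_ms": 0})
for e in events:
    req = requests[e["conn_id"]]
    req["events"].append(e["component"])
    req["cpu_ms"] += e["cpu_ms"]

for rid, req in sorted(requests.items()):
    print(rid, req["events"], f'{req["cpu_ms"]}ms')
# 1 ['kernel', 'middleware'] 7ms
# 2 ['app', 'kernel'] 10ms
```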