
Showing papers by "Martin Rinard" published in 2010


Proceedings ArticleDOI
01 May 2010
TL;DR: The experimental results from applying the implemented quality of service profiler to a challenging set of benchmark applications show that it can enable developers to identify promising optimization opportunities and deliver successful optimizations that substantially increase the performance with only small quality of service losses.
Abstract: Many computations exhibit a trade-off between execution time and quality of service. A video encoder, for example, can often encode frames more quickly if it is given the freedom to produce slightly lower quality video. A developer attempting to optimize such computations must navigate a complex trade-off space to find optimizations that appropriately balance quality of service and performance. We present a new quality of service profiler that is designed to help developers identify promising optimization opportunities in such computations. In contrast to standard profilers, which simply identify time-consuming parts of the computation, a quality of service profiler is designed to identify subcomputations that can be replaced with new (and potentially less accurate) subcomputations that deliver significantly increased performance in return for acceptably small quality of service losses. Our quality of service profiler uses loop perforation (which transforms loops to perform fewer iterations than the original loop) to obtain implementations that occupy different points in the performance/quality of service trade-off space. The rationale is that optimizable computations often contain loops that perform extra iterations, and that removing iterations, then observing the resulting effect on the quality of service, is an effective way to identify such optimizable subcomputations. Our experimental results from applying our implemented quality of service profiler to a challenging set of benchmark applications show that it can enable developers to identify promising optimization opportunities and deliver successful optimizations that substantially increase the performance with only small quality of service losses.
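To make the loop perforation idea concrete, here is a minimal sketch (illustrative Python, not the paper's implementation, which operates on compiled benchmark applications): a perforated loop executes only every k-th iteration, and a profiler-style harness measures the resulting distortion against the exact result.

```python
# A minimal sketch of loop perforation: run a loop with only every
# k-th iteration and compare the (rescaled) result against the exact one.
# All names here are illustrative.

def exact_sum(data):
    total = 0.0
    for x in data:                         # original loop: every iteration executes
        total += x
    return total

def perforated_sum(data, k):
    total = 0.0
    for i in range(0, len(data), k):       # perforated loop: every k-th iteration
        total += data[i]
    return total * k                       # rescale to compensate for skipped work

if __name__ == "__main__":
    data = [float(i % 17) for i in range(100_000)]
    exact = exact_sum(data)
    for k in (2, 4, 8):
        approx = perforated_sum(data, k)
        distortion = abs(approx - exact) / abs(exact)
        print(f"perforation factor {k}: ~{k}x less loop work, "
              f"relative distortion {distortion:.4f}")
```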

202 citations


Proceedings Article
17 Oct 2010
TL;DR: It is my pleasure to welcome you to SPLASH, the next step in the evolution of the well-known OOPSLA conference, which includes workshops, panels, tutorials, co-located conferences, posters, and a doctoral symposium.
Abstract: It is my pleasure to welcome you to SPLASH, the next step in the evolution of the well-known OOPSLA conference. SPLASH is the premier forum for practitioners, researchers, educators, and students who are passionate about improving the state of the art and practice in the development of software systems and applications through improved programming tools and languages. SPLASH is a new name for the overall OOPSLA conference, which includes workshops, panels, tutorials, co-located conferences, posters, and a doctoral symposium. The OOPSLA name is being retained for the technical research track that is the core of SPLASH. It would have been easier to just rename OOPSLA to be SPLASH, but that would lose continuity with the strong OOPSLA brand. As a result, we have adopted a phased approach where both names will be used for the foreseeable future. Although SPLASH/OOPSLA has its origin in object technologies, SPLASH is no longer explicitly tied to object-oriented programming. There is an implicit connection, however, since most modern software development incorporates or builds on ideas from object-oriented programming. From its inception, OOPSLA has incubated new technologies and practices. Dynamic compilation and optimization, software patterns, refactoring, aspect-oriented software development, agile methods, service-oriented architectures, and model-driven development (to name just a few) all have roots in OOPSLA. SPLASH 2010 continues and strengthens this tradition. SPLASH has as its foundation the most successful software development theories and practices, yet is always striving to find new and better techniques which will define the future of software development. SPLASH is pleased to host a range of co-located conferences. Onward! is more radical, more visionary, and more open to new ideas, allowing it to accept papers that present strong arguments even though the ideas in the paper may not be fully proven. The Dynamic Languages Symposium (DLS) discusses dynamic languages, including scripting languages. The Pattern Languages of Programming (PLoP) conference explores patterns of software and effective ways to present them. The International Lisp Conference (ILC) is focused on Lisp, a language with a great history and future. The Educators' and Trainers' Symposium and the Doctoral Symposium focus on the essential task of educating the next generation of software developers and researchers. In the end, SPLASH is about people, not technology. While SPLASH inherits SPLA from OOPSLA, it adds a new twist on the end: Software for Humanity. While this may seem an afterthought, I have come to realize over the last year that it is the most important idea in the new name. Our community is strong and diverse. Even as we promote diverse technologies, we share deep values that enable us to work together. One is the simple idea that software can improve the daily lives of humans all around the planet. Like any technology, software has the potential for great benefit and also great harm. Let's try to use it for good.

80 citations


Proceedings ArticleDOI
12 Jul 2010
TL;DR: A system, Snap, is presented that, given an application and one or more inputs, automatically groups related input bytes into fields and classifies each field and the corresponding regions of code as critical or forgiving.
Abstract: Applications that process complex inputs often react in different ways to changes in different regions of the input. Small changes to forgiving regions induce correspondingly small changes in the behavior and output. Small changes to critical regions, on the other hand, can induce disproportionately large changes in the behavior or output. Identifying the critical and forgiving regions in the input and the corresponding critical and forgiving regions of code is directly relevant to many software engineering tasks. We present a system, Snap, for automatically grouping related input bytes into fields and classifying each field and corresponding regions of code as critical or forgiving. Given an application and one or more inputs, Snap uses targeted input fuzzing in combination with dynamic execution and influence tracing to classify regions of input fields and code as critical or forgiving. Our experimental evaluation shows that Snap makes classifications with close to perfect precision (99%) and very good recall (between 99% and 73%, depending on the application).
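As a rough illustration of the classification idea (not Snap itself, which uses dynamic influence tracing on real applications), the sketch below fuzzes bytes within each input region, re-runs a stand-in computation, and labels a region critical when small input changes cause disproportionate output changes; all names and thresholds are invented.

```python
# Illustrative sketch: flip bits in each input region, re-run the
# computation, and classify the region as critical if small input
# changes cause large output changes.

import random

def app(data: bytes) -> float:
    # Stand-in "application": the header byte scales everything (critical),
    # payload bytes are merely averaged (forgiving).
    scale = data[0]
    payload = data[1:]
    return scale * (sum(payload) / max(len(payload), 1))

def output_distance(a: float, b: float) -> float:
    return abs(a - b) / (abs(a) + 1e-9)

def classify_regions(data: bytes, regions, trials=20, threshold=0.05):
    baseline = app(data)
    labels = {}
    for name, (lo, hi) in regions.items():
        distortions = []
        for _ in range(trials):
            fuzzed = bytearray(data)
            pos = random.randrange(lo, hi)
            fuzzed[pos] ^= 1 << random.randrange(8)   # flip one bit
            distortions.append(output_distance(app(bytes(fuzzed)), baseline))
        avg = sum(distortions) / len(distortions)
        labels[name] = "critical" if avg > threshold else "forgiving"
    return labels

if __name__ == "__main__":
    data = bytes([10] + [100] * 256)
    regions = {"header": (0, 1), "payload": (1, 257)}
    print(classify_regions(data, regions))   # header: critical, payload: forgiving
```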

66 citations


Proceedings ArticleDOI
17 Oct 2010
TL;DR: Several general, broadly applicable mechanisms that enable computations to execute with reduced resources, typically at the cost of some loss in the accuracy of the result they produce, are presented.
Abstract: We present several general, broadly applicable mechanisms that enable computations to execute with reduced resources, typically at the cost of some loss in the accuracy of the result they produce. We identify several general computational patterns that interact well with these resource reduction mechanisms, present a concrete manifestation of these patterns in the form of simple model programs, perform simulation-based explorations of the quantitative consequences of applying these mechanisms to our model programs, and relate the model computations (and their interaction with the resource reduction mechanisms) to more complex benchmark applications drawn from a variety of fields.
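As one hypothetical instance of such a mechanism, the sketch below applies sampling (processing only a fraction of the inputs) to a simple model program and simulates how accuracy degrades as the resource budget shrinks; the model program and parameters are invented for illustration.

```python
# Hypothetical model program in the spirit of the abstract: a sampling
# resource-reduction mechanism processes only a fraction of the inputs,
# and a small simulation explores how accuracy degrades as resources shrink.

import random

def model_mean(data, fraction):
    # Process only `fraction` of the inputs, chosen at random.
    n = max(1, int(len(data) * fraction))
    sample = random.sample(data, n)
    return sum(sample) / n

if __name__ == "__main__":
    random.seed(0)
    data = [random.gauss(50.0, 10.0) for _ in range(10_000)]
    exact = sum(data) / len(data)
    for fraction in (1.0, 0.5, 0.25, 0.1, 0.01):
        errors = [abs(model_mean(data, fraction) - exact) / exact
                  for _ in range(100)]
        print(f"resources {fraction:5.0%}: mean relative error "
              f"{sum(errors)/len(errors):.5f}")
```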

44 citations


Patent
01 Oct 2010
TL;DR: In this paper, a system and method that enables a plurality of lay users to collaborate on automating computer tasks, rather than just documenting how to perform them, is described; a classifier is used to predict which steps are likely to be misinterpreted and to request human intervention to properly perform them.
Abstract: A system and method that enables a plurality of lay users to collaborate on automating computer tasks is disclosed. In one embodiment, the system automatically performs these tasks, rather than just documenting how to perform them. The system allows a database of solutions to be built for every important computer task. A key characteristic of this system is that users contribute to this database by simply performing the task. The system records the graphical user interface (GUI) actions as the user performs the task. It aggregates GUI traces from multiple users into a canonical sequence of GUI actions, parameterized by the user environment, that will successfully accomplish the task on a variety of different configurations. A classifier is used to predict which steps are likely to be misinterpreted and requests human intervention to properly perform them. This process can be done iteratively until the translation is believed to be correct.
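The following sketch (invented for illustration; it is not the patented system) shows one simple way the aggregation step could work: aligned GUI traces from several users are combined by majority vote, and low-agreement steps are flagged as candidates for human intervention.

```python
# Illustrative sketch: aggregate GUI traces from several users into a
# canonical action sequence by majority vote, and flag low-agreement
# steps as candidates for human review.

from collections import Counter

def aggregate_traces(traces, agreement_threshold=0.75):
    canonical, flagged = [], []
    for step, actions in enumerate(zip(*traces)):   # assumes aligned, equal-length traces
        action, count = Counter(actions).most_common(1)[0]
        canonical.append(action)
        if count / len(actions) < agreement_threshold:
            flagged.append(step)                    # likely to need human intervention
    return canonical, flagged

if __name__ == "__main__":
    traces = [
        ["click:Start", "click:Settings", "click:Network", "toggle:WiFi"],
        ["click:Start", "click:Settings", "click:Internet", "toggle:WiFi"],
        ["click:Start", "click:Settings", "click:Network", "toggle:WiFi"],
    ]
    canonical, flagged = aggregate_traces(traces)
    print("canonical:", canonical)
    print("steps needing review:", flagged)         # -> [2]
```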

43 citations


Journal ArticleDOI
TL;DR: This work presents Hapi, a new dynamic programming algorithm that ignores uninformative states and state transitions in order to efficiently compute minimum-recombinant and maximum-likelihood haplotypes.
Abstract: Hapi is a new dynamic programming algorithm that ignores uninformative states and state transitions in order to efficiently compute minimum-recombinant and maximum-likelihood haplotypes. When applied to a dataset containing 103 families, Hapi performs 3.8 and 320 times faster than state-of-the-art algorithms. Because Hapi infers both minimum-recombinant and maximum-likelihood haplotypes and applies to related individuals, the haplotypes it infers are highly accurate over extended genomic distances.
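The genetics model is beyond a short example, but the algorithmic idea of skipping uninformative states can be sketched abstractly (hypothetical code, not Hapi): a minimum-cost dynamic program over a sequence of sites simply skips sites that cannot distinguish states, rather than expanding a full table column for them.

```python
# Schematic sketch of the pruning idea: a minimum-cost dynamic program
# over a sequence of sites, where "uninformative" sites cannot change the
# ranking of states and are skipped instead of expanded.

def min_cost_path(num_states, sites, transition_cost):
    # sites: list of (informative?, per-state emission costs or None)
    best = [0.0] * num_states
    for informative, emit in sites:
        if not informative:
            continue            # skip: no state distinction at this site
        new_best = []
        for s in range(num_states):
            new_best.append(emit[s] + min(best[t] + transition_cost(t, s)
                                          for t in range(num_states)))
        best = new_best
    return min(best)

if __name__ == "__main__":
    # Two states, recombination-like transition cost of 1 per switch.
    cost = lambda t, s: 0.0 if t == s else 1.0
    sites = [
        (True,  [0.0, 2.0]),
        (False, None),          # uninformative site: skipped entirely
        (False, None),
        (True,  [3.0, 0.0]),
    ]
    print("minimum total cost:", min_cost_path(2, sites, cost))   # -> 1.0
```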

35 citations


14 May 2010
TL;DR: The experimental results show that PowerDial can enable benchmark applications to execute responsively in the face of power caps that would otherwise significantly impair the delivered performance and reduce the number of machines required to meet peak load.
Abstract: We present PowerDial, a system for dynamically adapting application behavior to execute successfully in the face of load and power fluctuations. PowerDial transforms static configuration parameters into dynamic knobs that the PowerDial control system can manipulate to dynamically trade off the accuracy of the computation in return for reductions in the computational resources that the application requires to produce its results. These reductions translate into power savings. Our experimental results show that PowerDial can enable our benchmark applications to execute responsively in the face of power caps (imposed, for example, in response to cooling system failures) that would otherwise significantly impair the delivered performance. They also show that PowerDial can reduce the number of machines required to meet peak load, in our experiments enabling up to a 75% reduction in direct power and capital costs.
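A minimal sketch of the dynamic-knob idea, assuming an invented power model (PowerDial itself instruments real applications and measures real power): a feedback loop lowers an accuracy knob when measured power exceeds the cap and raises it when there is headroom.

```python
# Hypothetical sketch of a dynamic knob under a power cap. The power
# model below is invented for illustration.

def simulated_power(knob):
    # Invented model: higher accuracy settings draw more power.
    return 40.0 + 20.0 * knob

def control_loop(power_cap, steps=10):
    knob = 5                               # settings 0 (coarse) .. 5 (most accurate)
    for t in range(steps):
        power = simulated_power(knob)
        if power > power_cap and knob > 0:
            knob -= 1                      # trade accuracy for power
        elif simulated_power(knob + 1) <= power_cap and knob < 5:
            knob += 1                      # reclaim accuracy when there is headroom
        print(f"step {t}: knob={knob}, power={simulated_power(knob):.0f}W "
              f"(cap {power_cap:.0f}W)")

if __name__ == "__main__":
    control_loop(power_cap=100.0)          # converges to the highest feasible knob
```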

17 citations


Book ChapterDOI
Peter Hawkins, Alex Aiken, Kathleen Fisher, Martin Rinard, Mooly Sagiv
28 Nov 2010
TL;DR: This work permits the user to specify different concrete shared representations for relations, and shows that the semantics of the relational specification are preserved.
Abstract: We consider the problem of specifying data structures with complex sharing in a manner that is both declarative and results in provably correct code. In our approach, abstract data types are specified using relational algebra and functional dependencies; a novel fuse operation on relational indexes specifies where the underlying physical data structure representation has sharing. We permit the user to specify different concrete shared representations for relations, and show that the semantics of the relational specification are preserved.
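A loose illustration of the shared-representation idea (informal Python, not the paper's relational-algebra formalism): two indexes point at the same record objects, so they act as one fused physical representation of a single relation, and an update made through one index is visible through the other.

```python
# Sketch of sharing between indexes: one set of record objects is shared
# by two indexes over the same relation.

class Process:
    def __init__(self, pid, state):
        self.pid, self.state = pid, state

class ProcessRelation:
    # Relation (pid, state) with functional dependency pid -> state.
    def __init__(self):
        self.by_pid = {}                     # index 1
        self.by_state = {}                   # index 2, shares the same objects

    def insert(self, pid, state):
        rec = Process(pid, state)            # single shared record
        self.by_pid[pid] = rec
        self.by_state.setdefault(state, []).append(rec)

    def set_state(self, pid, new_state):
        rec = self.by_pid[pid]
        self.by_state[rec.state].remove(rec)
        rec.state = new_state                # one update, visible via both indexes
        self.by_state.setdefault(new_state, []).append(rec)

if __name__ == "__main__":
    procs = ProcessRelation()
    procs.insert(1, "running")
    procs.insert(2, "sleeping")
    procs.set_state(1, "sleeping")
    print([p.pid for p in procs.by_state["sleeping"]])   # -> [2, 1]
```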

11 citations


10 Feb 2010
TL;DR: Results from the benchmark set of applications show that QuickStep can automatically generate parallel programs with good performance and statistically accurate outputs and the simplicity of the compilation strategy and the performance and statistical acceptability of the generated parallel programs demonstrate the advantages of the QuickStep approach.
Abstract: Traditional parallelizing compilers are designed to generate parallel programs that produce outputs identical to those of the original sequential program. The difficulty of performing the program analysis required to satisfy this goal and the restricted space of possible target parallel programs have both posed significant obstacles to the development of effective parallelizing compilers. The QuickStep compiler is instead designed to generate parallel programs that satisfy statistical accuracy guarantees. The freedom to generate parallel programs whose output may differ (within statistical accuracy bounds) from the output of the sequential program enables a dramatic simplification of the compiler and a significant expansion in the range of parallel programs that it can legally generate. QuickStep exploits this flexibility to take a fundamentally different approach from traditional parallelizing compilers. It applies a collection of transformations (loop parallelization, loop scheduling, synchronization introduction, and replication introduction) to generate a search space of parallel versions of the original sequential program. It then searches this space (prioritizing the parallelization of the most time-consuming loops in the application) to find a final parallelization that exhibits good parallel performance and satisfies the statistical accuracy guarantee. At each step in the search it performs a sequence of trial runs on representative inputs to examine the performance, accuracy, and memory accessing characteristics of the current generated parallel program. An analysis of these characteristics guides the steps the compiler takes as it explores the search space of parallel programs. Results from our benchmark set of applications show that QuickStep can automatically generate parallel programs with good performance and statistically accurate outputs. For two of the applications, the parallelization introduces noise into the output, but the noise remains within acceptable statistical bounds. The simplicity of the compilation strategy and the performance and statistical acceptability of the generated parallel programs demonstrate the advantages of the QuickStep approach.
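The statistical acceptance step can be sketched as follows (hypothetical code; QuickStep works on compiled applications and uses a more careful statistical test): run a candidate parallelization several times, measure the output distortion against the sequential version, and accept only if the mean distortion stays within the bound. The noisy "parallel" version here merely stands in for the run-to-run output variation an acceptable parallelization might exhibit.

```python
# Sketch of a trial-run acceptance test for a candidate parallelization.

import random

def sequential_version(data):
    return sum(data)

def candidate_parallel_version(data):
    # Stand-in: the result differs slightly from run to run.
    return sum(data) * (1.0 + random.gauss(0.0, 0.001))

def acceptable(data, bound=0.01, trials=20):
    reference = sequential_version(data)
    distortions = [abs(candidate_parallel_version(data) - reference) / abs(reference)
                   for _ in range(trials)]
    mean = sum(distortions) / trials
    return mean <= bound, mean

if __name__ == "__main__":
    random.seed(1)
    data = list(range(1, 10_001))
    ok, mean = acceptable(data)
    print(f"mean distortion {mean:.5f} -> {'accept' if ok else 'reject'}")
```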

8 citations


Patent
07 Dec 2010
TL;DR: An approach to detection and repair of application-level semantic errors in deployed software is described, inferring aspects of correct operation of a program from a suite of examples of operations that are known or assumed to be correct.
Abstract: An approach to detection and repair of application-level semantic errors in deployed software includes inferring aspects of correct operation of a program. For instance, a suite of examples of operations that are known or assumed to be correct is used to infer correct operation. Further operation of the program can be compared to results found during correct operation, and the logic of the program can be augmented to ensure that aspects of further examples of operation of the program are sufficiently similar to the examples in the correct suite. In some examples, the similarity is based on identifying invariants that are satisfied at certain points in the program execution, and augmenting (e.g., “patching”) the logic includes adding tests to confirm that the invariants are satisfied in the new examples. In some examples, the logic invokes an automatic or semi-automatic error handling procedure if the test is not satisfied. Augmenting the logic in this way may prevent malicious parties from exploiting the semantic errors, and may prevent otherwise-avoidable failures in execution of the programs.
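The inference-and-check step might look roughly like the following sketch (invented for illustration, not the patented mechanism): simple range invariants are learned from runs known to be correct, and later executions are guarded by a test that invokes an error-handling procedure on violation.

```python
# Sketch of inferring range invariants from known-correct runs and
# guarding later executions with a check.

class RangeInvariant:
    def __init__(self):
        self.lo, self.hi = float("inf"), float("-inf")

    def observe(self, value):               # training on known-correct runs
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def check(self, value):                 # the added "patch" test
        return self.lo <= value <= self.hi

def handle_violation(value):
    # Stand-in error-handling procedure; the patent describes automatic or
    # semi-automatic handling.
    print(f"invariant violated by {value}: rejecting input / invoking repair")

if __name__ == "__main__":
    inv = RangeInvariant()
    for v in [3, 7, 5, 9, 4]:               # correct-suite observations
        inv.observe(v)
    for v in [6, 42]:                       # later executions
        if not inv.check(v):
            handle_violation(v)
        else:
            print(f"{v}: within inferred invariant [{inv.lo}, {inv.hi}]")
```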

Proceedings ArticleDOI
23 Aug 2010
TL;DR: In this article, the authors introduce a new subclass of tasks, called urgent tasks, and prove that the scheduling problem for asynchronous and preemptive urgent tasks can be solved in polynomial time for uniprocessor platforms.
Abstract: Task scheduling has always been a central problem in the embedded real-time systems community. As the scheduling problem is NP-hard in general, researchers have been looking for efficient heuristics that solve the scheduling problem in polynomial time. One of the most important scheduling strategies is Earliest Deadline First (EDF). It is known that EDF is optimal for uniprocessor platforms in many cases, such as non-preemptive synchronous tasks (i.e., all tasks have the same starting time and cannot be interrupted) and preemptive asynchronous tasks (i.e., tasks that may be interrupted and may have arbitrary starting times). However, Mok showed that EDF is not optimal for multiprocessor platforms. In fact, for multiprocessor platforms, the scheduling problem is NP-complete in most of the cases where the corresponding uniprocessor problem can be solved by a polynomial-time algorithm. Coffman and Graham identified a class of tasks for which the scheduling problem can be solved by a polynomial-time algorithm: a two-processor platform, no resources, arbitrary partial-order relations, and every task non-preemptive with a unit computation time. Our paper introduces a new non-trivial and practical subclass of tasks, called urgent tasks. Briefly, a task is urgent if it is executed right after it is ready or can wait at most one unit of time after it is ready. Practical examples of embedded real-time systems dealing with urgent tasks are all modern building alarm systems, as these include urgent tasks such as 'checking for intruders', 'sending a warning signal to the security office', 'informing the building's owner about a potential intrusion', and so on. By using propositional logic, we prove a new result in schedulability theory, namely that the scheduling problem for asynchronous and preemptive urgent tasks can be solved in polynomial time.
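For reference, a minimal preemptive EDF simulation on a uniprocessor looks like the sketch below (illustrative only; the paper's polynomial-time result for urgent tasks is not implemented here).

```python
# Minimal preemptive Earliest Deadline First simulation on one processor,
# in unit time steps.

def edf(tasks, horizon):
    # tasks: list of dicts with arrival time, remaining computation, deadline.
    schedule = []
    for t in range(horizon):
        ready = [task for task in tasks
                 if task["arrival"] <= t and task["remaining"] > 0]
        if not ready:
            schedule.append(None)
            continue
        job = min(ready, key=lambda task: task["deadline"])  # earliest deadline first
        job["remaining"] -= 1                                # may preempt at next step
        schedule.append(job["name"])
    return schedule

if __name__ == "__main__":
    tasks = [
        {"name": "A", "arrival": 0, "remaining": 3, "deadline": 7},
        {"name": "B", "arrival": 1, "remaining": 2, "deadline": 4},
    ]
    print(edf(tasks, horizon=6))   # -> ['A', 'B', 'B', 'A', 'A', None]
```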