
Showing papers by "Martin Rinard published in 2011"


Proceedings ArticleDOI
09 Sep 2011
TL;DR: The results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
Abstract: Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
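
As a minimal sketch of the transformation (illustrative only: the function, data, and perforation rate below are hypothetical, not taken from the paper), loop perforation can be expressed as rewriting a loop to execute only every rate-th of its iterations:

```python
# Minimal sketch of loop perforation (hypothetical example, not the
# paper's implementation): execute only every `rate`-th loop iteration.

def mean_value(pixels):
    """Original loop: visits every element."""
    total = 0.0
    for p in pixels:
        total += p
    return total / len(pixels)

def mean_value_perforated(pixels, rate=4):
    """Perforated loop: visits every `rate`-th element only.

    Runs roughly `rate` times faster while producing an
    approximate (sampled) result.
    """
    total = 0.0
    count = 0
    for i in range(0, len(pixels), rate):  # skip rate-1 of every rate iterations
        total += pixels[i]
        count += 1
    return total / count

if __name__ == "__main__":
    data = [float(i % 17) for i in range(1_000_000)]
    exact = mean_value(data)
    approx = mean_value_perforated(data, rate=4)
    print(f"exact={exact:.4f} approx={approx:.4f} "
          f"relative error={(abs(approx - exact) / exact):.2%}")
```

Here the perforated loop does roughly a quarter of the work while the sampled mean stays close to the exact one, which is the kind of trade the perforation space exploration algorithm searches over.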

490 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: The experimental results show that PowerDial can enable benchmark applications to execute responsively in the face of power caps that would otherwise significantly impair responsiveness, and can significantly reduce the number of machines required to service intermittent load spikes, enabling reductions in power and capital costs.
Abstract: We present PowerDial, a system for dynamically adapting application behavior to execute successfully in the face of load and power fluctuations. PowerDial transforms static configuration parameters into dynamic knobs that the PowerDial control system can manipulate to dynamically trade off the accuracy of the computation in return for reductions in the computational resources that the application requires to produce its results. These reductions translate directly into performance improvements and power savings. Our experimental results show that PowerDial can enable our benchmark applications to execute responsively in the face of power caps that would otherwise significantly impair responsiveness. They also show that PowerDial can significantly reduce the number of machines required to service intermittent load spikes, enabling reductions in power and capital costs.
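
The following sketch illustrates the dynamic-knob idea under stated assumptions (the workload, the quality levels, and the simple control law are invented for illustration; PowerDial's actual control system is more sophisticated):

```python
import time

# Hypothetical sketch of a "dynamic knob" (names, levels, and the
# control law are invented; this is not PowerDial's actual control
# system). A formerly static configuration parameter, `quality`, is
# adjusted at run time to keep per-request latency under a target.

QUALITY_LEVELS = [1, 2, 4, 8]   # higher = more accurate, slower

def process_request(quality):
    # Stand-in for real work whose cost grows with the quality setting.
    total = 0.0
    for _ in range(quality * 100_000):
        total += 1.0
    return total

def serve(requests, target_latency=0.05):
    level = len(QUALITY_LEVELS) - 1       # start at full accuracy
    for r in range(requests):
        start = time.perf_counter()
        process_request(QUALITY_LEVELS[level])
        elapsed = time.perf_counter() - start
        # Turn the knob: trade accuracy for responsiveness and back.
        if elapsed > target_latency and level > 0:
            level -= 1                    # too slow: reduce accuracy
        elif elapsed < target_latency / 2 and level < len(QUALITY_LEVELS) - 1:
            level += 1                    # headroom: restore accuracy
        print(f"request {r}: quality={QUALITY_LEVELS[level]} "
              f"latency={elapsed * 1000:.1f} ms")

if __name__ == "__main__":
    serve(requests=10)
```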

320 citations


Book ChapterDOI
14 Sep 2011
TL;DR: The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program, but this work proposes a new approach that uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces.
Abstract: The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form P(|D| ≥ B) ≤ e, e ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on the absolute value of D, and e is the maximum acceptable probability of observing large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
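
To make the form of the guarantee concrete, this hypothetical sketch estimates P(|D| ≥ B) empirically for a perforated mean computation (a Monte Carlo check of my own, not the paper's analytical reasoning):

```python
import random

# Illustrative Monte Carlo check (not the paper's proof technique):
# estimate P(|D| >= B), where D is the difference between the results
# of the original and the perforated computation on random inputs.

def original(xs):
    return sum(xs) / len(xs)

def perforated(xs, rate=2):
    sampled = xs[::rate]               # loop perforation: skip iterations
    return sum(sampled) / len(sampled)

def estimate_violation_probability(trials=2_000, n=1_000, B=0.05):
    violations = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        d = perforated(xs) - original(xs)
        if abs(d) >= B:
            violations += 1
    return violations / trials

if __name__ == "__main__":
    p = estimate_violation_probability()
    print(f"estimated P(|D| >= 0.05) = {p:.4f}")  # compare against e
```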

94 citations


Proceedings ArticleDOI
25 Jul 2011
TL;DR: This paper presents Jolt, a novel system for dynamically detecting and escaping infinite loops; Jolt attaches to an application to monitor its progress and reports to the user when the application is in an infinite loop.
Abstract: Infinite loops can make applications unresponsive. Potential problems include lost work or output, denied access to application functionality, and a lack of responses to urgent events. We present Jolt, a novel system for dynamically detecting and escaping infinite loops. At the user's request, Jolt attaches to an application to monitor its progress. Specifically, Jolt records the program state at the start of each loop iteration. If two consecutive loop iterations produce the same state, Jolt reports to the user that the application is in an infinite loop. At the user's option, Jolt can then transfer control to a statement following the loop, thereby allowing the application to escape the infinite loop and ideally continue its productive execution. The immediate goal is to enable the application to execute long enough to save any pending work, finish any in-progress computations, or respond to any urgent events. We evaluated Jolt by applying it to detect and escape eight infinite loops in five benchmark applications. Jolt was able to detect seven of the eight infinite loops (the eighth changes the state on every iteration). We also evaluated the effect of escaping an infinite loop as an alternative to terminating the application. In all of our benchmark applications, escaping an infinite loop produced a more useful output than terminating the application. Finally, we evaluated how well escaping from an infinite loop approximated the correction that the developers later made to the application. For two out of our eight loops, escaping the infinite loop produced the same output as the corrected version of the application.
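
A minimal sketch of the detection idea, assuming the relevant loop state can be captured with repr (the real Jolt instruments compiled applications and records the memory a loop iteration can read or write):

```python
# Minimal sketch of Jolt-style detection (illustrative, not Jolt's
# implementation): snapshot the loop state at the top of each
# iteration and flag an infinite loop when two consecutive snapshots
# are identical.

def run_with_loop_escape(state, step, max_iters=1_000_000):
    """Run `step` repeatedly; escape if the state stops changing."""
    prev_snapshot = None
    for _ in range(max_iters):
        snapshot = repr(state)      # stand-in for capturing loop state
        if snapshot == prev_snapshot:
            print("infinite loop detected: escaping past the loop")
            break                   # transfer control past the loop
        prev_snapshot = snapshot
        step(state)
    return state

if __name__ == "__main__":
    # A buggy loop body that stops making progress once x reaches 10.
    def step(s):
        if s["x"] < 10:
            s["x"] += 1             # progress
        # else: the state never changes again -> infinite loop

    print(run_with_loop_escape({"x": 0}, step))
```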

77 citations


Proceedings ArticleDOI
17 Oct 2011
TL;DR: A new abstraction-refinement technique for automatically finding errors in Administrative Role-Based Access Control (ARBAC) security policies is presented, which complements conventional state-space exploration techniques such as model checking.
Abstract: Verifying that access-control systems maintain desired security properties is recognized as an important problem in security. Enterprise access-control systems have grown to protect tens of thousands of resources, and there is a need for verification to scale commensurately. We present a new abstraction-refinement technique for automatically finding errors in Administrative Role-Based Access Control (ARBAC) security policies. ARBAC is the first and most comprehensive administrative scheme for Role-Based Access Control (RBAC) systems. Underlying our approach is a change in mindset: we propose that error finding complements verification, can be more scalable, and allows for the use of a wider variety of techniques. In our approach, we use an abstraction-refinement technique to first identify and discard roles that are unlikely to be relevant to the verification question (the abstraction step), and then restore such abstracted roles incrementally (the refinement steps). Errors are one-sided: if there is an error in the abstracted policy, then there is an error in the original policy. If there is an error in a policy whose role-dependency graph diameter is smaller than a certain bound, then we find the error. Our abstraction-refinement technique complements conventional state-space exploration techniques such as model checking. We have implemented our technique in an access-control policy analysis tool. We show empirically that our tool scales well to realistic policies, and is orders of magnitude faster than prior tools.
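
A schematic sketch of the abstraction-refinement loop, under simplifying assumptions (the role-dependency graph is given explicitly, and find_error stands in for the underlying checker; none of this is the tool's actual API):

```python
from collections import deque

# Schematic sketch of abstraction-refinement over roles (hypothetical,
# simplified model; the paper works on full ARBAC policies). We first
# analyze only roles near the query and restore roles incrementally.

def roles_within(dependency_graph, seed_roles, radius):
    """BFS: roles reachable from the seeds within `radius` hops."""
    seen = set(seed_roles)
    frontier = deque((r, 0) for r in seed_roles)
    while frontier:
        role, dist = frontier.popleft()
        if dist == radius:
            continue
        for nxt in dependency_graph.get(role, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen

def find_error(policy_roles, restricted_to):
    """Stand-in for the underlying checker (e.g., state-space search):
    here we just flag a hypothetical known-bad role combination."""
    bad = {"ContractorAdmin", "PayrollApprover"}
    return bad if bad <= (policy_roles & restricted_to) else None

def check(dependency_graph, all_roles, query_roles, max_radius=4):
    for radius in range(1, max_radius + 1):
        # Abstraction: discard roles unlikely to matter for the query.
        kept = roles_within(dependency_graph, query_roles, radius)
        error = find_error(all_roles, kept)
        if error:
            # One-sided: an error in the abstracted policy is an error
            # in the original policy.
            return error, radius
        # Refinement: widen the radius, restoring abstracted roles.
    return None, max_radius

if __name__ == "__main__":
    graph = {"Teller": ["ContractorAdmin"],
             "ContractorAdmin": ["PayrollApprover"]}
    roles = {"Teller", "ContractorAdmin", "PayrollApprover", "Auditor"}
    print(check(graph, roles, {"Teller"}))
```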

74 citations


Journal ArticleDOI
04 Jun 2011
TL;DR: This paper considers the problem of specifying combinations of data structures with complex sharing in a manner that is both declarative and results in provably correct code; abstract data types are specified using relational algebra and functional dependencies.
Abstract: We consider the problem of specifying combinations of data structures with complex sharing in a manner that is both declarative and results in provably correct code. In our approach, abstract data types are specified using relational algebra and functional dependencies. We describe a language of decompositions that permit the user to specify different concrete representations for relations, and show that operations on concrete representations soundly implement their relational specification. It is easy to incorporate data representations synthesized by our compiler into existing systems, leading to code that is simpler, correct by construction, and comparable in performance to the code it replaces.
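
A hypothetical sketch of the underlying idea (not the paper's specification language or its synthesized code): a relation over (pid, state, cpu) with the functional dependency pid -> state, cpu is decomposed into two concrete indices that the insert operation keeps consistent:

```python
# Hypothetical sketch (example mine): one relation, two concrete
# representations, with operations that keep both consistent so that
# each query can use the representation that serves it best.

class ProcessRelation:
    def __init__(self):
        self.by_pid = {}      # decomposition 1: pid -> (state, cpu)
        self.by_state = {}    # decomposition 2: state -> set of pids

    def insert(self, pid, state, cpu):
        # Enforce the functional dependency pid -> state, cpu.
        if pid in self.by_pid:
            raise ValueError(f"pid {pid} already present")
        self.by_pid[pid] = (state, cpu)
        self.by_state.setdefault(state, set()).add(pid)

    def lookup(self, pid):
        return self.by_pid.get(pid)

    def pids_in_state(self, state):
        return self.by_state.get(state, set())

if __name__ == "__main__":
    r = ProcessRelation()
    r.insert(1, "running", 0)
    r.insert(2, "sleeping", 1)
    print(r.lookup(1), r.pids_in_state("running"))
```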

68 citations


Journal ArticleDOI
04 Jun 2011
TL;DR: Together, the commutativity conditions and inverse operations provide a key resource that language designers, developers of program analysis systems, and implementors of software systems can draw on to build languages, program analyses, and systems with strong correctness guarantees.
Abstract: We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not necessarily identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers, developers of program analysis systems, and implementors of software systems can draw on to build languages, program analyses, and systems with strong correctness guarantees.
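
As a small illustration of what a commutativity condition looks like (an example of mine over Python sets, not one of the paper's verified linked data structure implementations): add(x) and remove(y) commute exactly when x != y, where commuting is judged on the abstract set state rather than on identical concrete representations:

```python
import random

# Illustrative commutativity condition: Set.add(x) and Set.remove(y)
# commute iff x != y. The check compares the abstract set states
# produced by the two execution orders, mirroring the paper's point
# that commuting operations need only produce semantically equivalent
# (not necessarily identical) states.

def states_commute(x, y, initial):
    s1 = set(initial); s1.add(x); s1.discard(y)   # add(x) then remove(y)
    s2 = set(initial); s2.discard(y); s2.add(x)   # remove(y) then add(x)
    return s1 == s2

if __name__ == "__main__":
    for _ in range(10):
        x = random.randint(0, 3)
        y = random.randint(0, 3)
        initial = {random.randint(0, 3) for _ in range(2)}
        assert states_commute(x, y, initial) == (x != y), (x, y, initial)
    print("commutativity condition x != y validated on random states")
```

In the same spirit, an inverse operation for add(x) is remove(x) when x was absent beforehand (and a no-op otherwise).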

39 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This work presents several mechanisms that can either excise or change system functionality in ways that may eliminate security vulnerabilities while enabling the system to continue to deliver acceptable service.
Abstract: Security vulnerabilities can be seen as excess undesirable functionality present in a software system. We present several mechanisms that can either excise or change system functionality in ways that may 1) eliminate security vulnerabilities while 2) enabling the system to continue to deliver acceptable service.

18 citations


01 Jun 2011
TL;DR: A language of decompositions permits the user to specify different concrete representations for relations, and operations on concrete representations are shown to soundly implement their relational specification.
Abstract: We consider the problem of specifying combinations of data structures with complex sharing in a manner that is both declarative and results in provably correct code. In our approach, abstract data types are specified using relational algebra and functional dependencies. We describe a language of decompositions that permit the user to specify different concrete representations for relations, and show that operations on concrete representations soundly implement their relational specification. It is easy to incorporate data representations synthesized by our compiler into existing systems, leading to code that is simpler, correct by construction, and comparable in performance to the code it replaces.

8 citations


Proceedings ArticleDOI
24 Jan 2011
TL;DR: Traditional program transformations operate under the onerous constraint that they must preserve the exact behavior of the transformed program, but many programs are designed to produce approximate results, and preserving the exact semantics simply misses the point.
Abstract: Traditional program transformations operate under the onerous constraint that they must preserve the exact behavior of the transformed program. But many programs are designed to produce approximate results. Lossy video encoders, for example, are designed to give up perfect fidelity in return for faster encoding and smaller encoded videos [10]. Machine learning algorithms usually work with probabilistic models that capture some, but not all, aspects of phenomena that are difficult (if not impossible) to model with complete accuracy [2]. Monte-Carlo computations use random simulation to deliver inherently approximate solutions to complex systems of equations that are, in many cases, computationally infeasible to solve exactly [5]. For programs that perform such computations, preserving the exact semantics simply misses the point. The underlying problems that these computations solve typically exhibit an inherent performance versus accuracy trade-off — the more computational resources (such as time or energy) one is willing to spend, the more accurate the result one may be able to obtain. Conversely, the less accurate a result one is willing to accept, the less resources one may need to expend to obtain that result. Any specific program occupies but a single point in this rich trade-off space. Preserving the exact semantics of this program abandons the other points, many of which may be, depending on the context, more desirable than the original point that the program happens to implement.

8 citations


DOI
01 Jan 2011
TL;DR: Dagstuhl seminar 11062 ``Self-Repairing Programs'' included 23 participants and organizers from research and industrial communities found common ground in discussions of concerns, challenges, and the state of the art.
Abstract: Dagstuhl seminar 11062 ``Self-Repairing Programs'' included 23 participants and organizers from research and industrial communities. Self-Repairing Programs are a new and emerging area, and many participants reported that they initially felt their first research home to be in another area, such as testing, program synthesis, debugging, self-healing systems, or security. Over the course of the seminar, the participants found common ground in discussions of concerns, challenges, and the state of the art.

15 Nov 2011
TL;DR: The dynamic semantics of the target programming language and the proof rules are formalized in Coq, the proof rules are verified to be sound with respect to the dynamic semantics, and the Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs.
Abstract: Approximate program transformations such as task skipping [27, 28], loop perforation [20, 21, 32], multiple selectable implementations [3, 4, 15], approximate function memoization [10], and approximate data types [31] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. Namely, these transformed programs trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control their execution. We call such transformed programs relaxed programs — they have been extended with additional nondeterminism to relax their semantics and enable greater flexibility in their execution. We present programming language constructs for developing and specifying relaxed programs. We also present proof rules for reasoning about properties of relaxed programs. Our proof rules enable programmers to directly specify and verify acceptability properties that characterize the desired correctness relationships between the values of variables in a program’s original semantics (before the transformation) and its relaxed semantics. Our proof rules also support the verification of safety properties (which characterize desirable properties involving values in only the current execution). The rules are designed to support a reasoning approach in which the majority of the reasoning effort uses the original semantics. This effort is then reused to establish the desired properties of the program under the relaxed semantics. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq, and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine checked verifications of their relaxed programs.
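
A dynamic sketch of the vocabulary, with invented names (the paper's contribution is static, machine-checked verification in Coq, not runtime checking): a relaxed program draws its knob setting nondeterministically, and an acceptability property relates its result to the result under the original semantics:

```python
import random

# Illustrative sketch (example mine): execute both the original and
# the relaxed semantics and check the acceptability property
# dynamically. The paper instead verifies such properties statically.

def original(xs):
    return sum(xs) / len(xs)

def relaxed(xs):
    rate = random.choice([1, 2, 4])        # added nondeterminism ("knob")
    sampled = xs[::rate]                   # loop perforation
    return sum(sampled) / len(sampled)

def acceptable(r_orig, r_relaxed, bound=0.05):
    # Acceptability property: the relaxed result stays within `bound`
    # (relative) of the original result.
    return abs(r_relaxed - r_orig) <= bound * max(abs(r_orig), 1e-9)

if __name__ == "__main__":
    xs = [random.random() for _ in range(100_000)]
    ro, rr = original(xs), relaxed(xs)
    print(f"original={ro:.4f} relaxed={rr:.4f} "
          f"acceptable={acceptable(ro, rr)}")
```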

Posted Content
TL;DR: A case study of designing an ARBAC policy for a bank comprising 18 branches is presented, and the case study is used to provide an assessment of the ARBAC features that are likely to be used in realistic policies.
Abstract: Administrative role-based access control (ARBAC) is the first comprehensive administrative model proposed for role-based access control (RBAC). ARBAC has several features for designing highly expressive policies, but current work has not highlighted the utility of these expressive policies. In this report, we present a case study of designing an ARBAC policy for a bank comprising 18 branches. Using this case study, we provide an assessment of the features of ARBAC that are likely to be used in realistic policies.
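
For readers unfamiliar with ARBAC, the following hypothetical fragment (role names invented, not taken from the case study) shows the shape of can_assign rules, in which an administrative role may grant a target role to users whose current roles satisfy a precondition:

```python
# Hypothetical ARBAC-style fragment (illustrative only): each
# can_assign rule lets an administrative role grant a target role to
# users satisfying a precondition over their current roles.

CAN_ASSIGN = [
    # (admin role, (required roles, forbidden roles), target role)
    ("BranchManager", ({"Employee"}, {"Contractor"}), "Teller"),
    ("BranchManager", ({"Teller"}, set()), "LoanOfficer"),
    ("HRManager", ({"Employee"}, set()), "Auditor"),
]

def can_assign(admin_roles, user_roles, target):
    for admin, (required, forbidden), tgt in CAN_ASSIGN:
        if (tgt == target and admin in admin_roles
                and required <= user_roles
                and not (forbidden & user_roles)):
            return True
    return False

if __name__ == "__main__":
    print(can_assign({"BranchManager"}, {"Employee"}, "Teller"))   # True
    print(can_assign({"BranchManager"},
                     {"Employee", "Contractor"}, "Teller"))        # False
```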

19 Jan 2011
TL;DR: This work presents a new foundation for the analysis and transformation of computer programs that uses probabilistic and statistical reasoning to justify the application of transformations that may change, within probabilistic bounds, the result that the program produces.
Abstract: We present a new foundation for the analysis and transformation of computer programs. Standard approaches involve the use of logical reasoning to prove that the applied transformation does not change the observable semantics of the program. Our approach, in contrast, uses probabilistic and statistical reasoning to justify the application of transformations that may change, within probabilistic bounds, the result that the program produces. Loop perforation transforms loops to execute fewer iterations. We show how to use our basic approach to justify the application of loop perforation to a set of computational patterns. Empirical results from computations drawn from the PARSEC benchmark suite demonstrate that these computational patterns occur in practice. We also outline a specification methodology that enables the transformation of subcomputations and discuss how to automate the approach.
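
One computational pattern amenable to this reasoning is a summation loop, sketched below under simplifying assumptions (example mine): perforating the loop and extrapolating the partial sum by the perforation factor approximates the original sum when the executed iterations are representative of the skipped ones:

```python
import random

# Illustrative sketch of a perforated "sum" pattern (simplified,
# example mine): extrapolate the partial sum by the perforation
# factor to compensate for the skipped iterations.

def full_sum(xs):
    return sum(xs)

def perforated_sum(xs, rate=4):
    partial = sum(xs[::rate])   # execute every rate-th iteration
    return partial * rate       # extrapolate for the skipped work

if __name__ == "__main__":
    xs = [random.random() for _ in range(1_000_000)]
    exact, approx = full_sum(xs), perforated_sum(xs)
    print(f"exact={exact:.1f} approx={approx:.1f} "
          f"relative error={abs(approx - exact) / exact:.3%}")
```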

01 Feb 2011
TL;DR: Several decidability and undecidability results are proved for the satisfiability/validity problem of formulas over a language of finite-length strings and integers (interpreted as lengths of strings).
Abstract: We prove several decidability and undecidability results for the satisfiability/validity problem of formulas over a language of finite-length strings and integers (interpreted as lengths of strings). The atomic formulas over this language are equality over string terms (word equations), linear inequalities over the length function (length constraints), and membership predicates over regular expressions (r.e.). These decidability questions are important in logic, program analysis, and formal verification. Logicians have been attempting to resolve some of these questions for many decades, while practical satisfiability procedures for these formulas are increasingly important in the analysis of string-manipulating programs such as web applications and scripts. We prove three main theorems. First, we consider Boolean combinations of quantifier-free formulas constructed out of word equations and length constraints. We show that if word equations can be converted to a solved form, a form relevant in practice, then the satisfiability problem for Boolean combinations of word equations and length constraints is decidable. Second, we show that the satisfiability problem for word equations in solved form that are regular, length constraints, and r.e. membership predicates is also decidable. Third, we show that the validity problem for the set of sentences written as a ∀∃ quantifier alternation applied to positive word equations is undecidable. A corollary of this undecidability result is that this set is undecidable even for sentences with at most two occurrences of a string variable.
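
As a concrete illustration of the three kinds of atomic formulas (example mine, not from the text), consider the quantifier-free conjunction below; it is satisfiable, for instance, by x = ε and y = b:

```latex
% A word equation, a length constraint, and an r.e. membership
% predicate combined in one quantifier-free formula (illustrative).
\underbrace{x \cdot \mathtt{ab} = \mathtt{a} \cdot y}_{\text{word equation}}
\;\wedge\;
\underbrace{|x| + 1 = |y|}_{\text{length constraint}}
\;\wedge\;
\underbrace{x \in \mathtt{a}^{*}}_{\text{r.e. membership}}
```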

ReportDOI
01 Oct 2011
TL;DR: This report investigates techniques that enable a system to learn where it is vulnerable to an attack or programming error, then automatically generate and evaluate ways that it can thwart the attack or recover from the error to continue to execute successfully.
Abstract: We investigated techniques that enable a system to learn where it is vulnerable to an attack or programming error, then automatically generate and evaluate ways that it can thwart the attack or recover from the error to continue to execute successfully. The approach is designed to work for systems, such as existing standard information technology installations, that have large monocultures of identical applications. By sharing information about attacks, errors, and response and recovery strategies, the system can quickly learn which strategies work best. The end result is a system whose robustness and resilience automatically grow over time as it learns how to best adapt and respond to the attacks and errors that its components inevitably encounter.