Author

Stephen P. Masticola

Other affiliations: Rutgers University
Bio: Stephen P. Masticola is an academic researcher from Princeton University. The author has contributed to research on topics including deadlock prevention algorithms and anomaly detection, has an h-index of 8, and has co-authored 16 publications receiving 363 citations. Previous affiliations of Stephen P. Masticola include Rutgers University.

Papers
Proceedings ArticleDOI
01 Jul 1993
TL;DR: This work presents a framework for non-concurrency analysis, capable of incorporating previous analysis algorithms [CS88, DS92] and improving upon them; it exhibits dramatic accuracy improvements over [DS91] when the latter is used as a stand-alone algorithm.
Abstract: Non-concurrency analysis is a set of techniques for statically identifying pairs (or sets) of statements in a concurrent program which can never happen together. This information aids programmers in debugging and manually optimizing programs, improves the precision of data flow analysis, enables optimized translation of rendezvous, facilitates dead code elimination and other automatic optimizations, and allows anomaly detection in explicitly parallel programs. We present a framework for non-concurrency analysis, capable of incorporating previous analysis algorithms [CS88, DS92] and improving upon them. We show general theoretical results which are useful in estimating non-concurrency, and examples of non-concurrency analysis frameworks for two synchronization primitives: the Ada rendezvous and binary semaphores. Both of these frameworks have a low-order polynomial bound on worst-case solution time. We provide experimental evidence that static non-concurrency analysis of Ada programs can be accomplished in a reasonable time, and is generally quite accurate. Our framework, and the set of refinement components we develop, also exhibits dramatic accuracy improvements over [DS91], when the latter is used as a stand-alone algorithm, as demonstrated by our experiments.
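One simple source of "can't happen together" facts the abstract alludes to can be sketched as follows. This is a hypothetical illustration, not the paper's framework: it only encodes the observation that statements inside two distinct critical sections guarded by the same binary semaphore can never execute concurrently. All names and the input representation are illustrative.

```python
from itertools import product

def non_concurrent_pairs(critical_sections):
    """critical_sections maps a semaphore name to a list of regions,
    each region being a list of statement labels it protects.
    Returns pairs of statements certified never to happen together:
    statements drawn from two distinct critical sections on the
    same binary semaphore."""
    pairs = set()
    for regions in critical_sections.values():
        for i, r1 in enumerate(regions):
            for r2 in regions[i + 1:]:
                for s1, s2 in product(r1, r2):
                    pairs.add(frozenset((s1, s2)))
    return pairs

# Two critical sections on the same semaphore "mutex":
sections = {"mutex": [["a1", "a2"], ["b1", "b2"]]}
print(sorted(sorted(p) for p in non_concurrent_pairs(sections)))
# → [['a1', 'b1'], ['a1', 'b2'], ['a2', 'b1'], ['a2', 'b2']]
```

Note the check is conservative in the safe direction: it may miss non-concurrent pairs, but every pair it reports truly cannot happen together, which is the property data-flow clients need.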

113 citations

Proceedings ArticleDOI
01 Dec 1991
TL;DR: A safe, polynomial time approximation algorithm for static deadlock detection in a subset of the Ada language [MR90b] is designed and an automatic facility to accurately certify deadlock freedom for a large class of Ada programs is developed.
Abstract: We have designed a safe, polynomial time approximation algorithm for static deadlock detection in a subset of the Ada language [MR90b]. We extend the program representation to include nearly all of the Ada rendezvous primitives, and present preliminary experimental results for an implementation of our algorithm. Our goal is to develop an automatic facility to accurately certify deadlock freedom for a large class of Ada programs.

75 citations

Proceedings ArticleDOI
01 Sep 1990
TL;DR: Two polynomial time algorithms which operate on a statically derivable program representation, the sync graph, are presented, to certify a useful class of programs free of deadlocks.
Abstract: Infinite wait anomalies associated with a barrier rendezvous model (e.g., Ada) can be divided into two classes: stalls and deadlocks. Although precise static deadlock detection is NP-hard, we present two polynomial time algorithms which operate on a statically derivable program representation, the sync graph, to certify a useful class of programs free of deadlocks. We identify three conditions local to any deadlocked tasks, and a fourth global condition on all tasks, which must occur in the sync graph of any program which can deadlock. Again, exact checking of the local conditions is NP-hard; the algorithms check them using conservative approximations. Certifying stall freedom is intractable for programs with conditional branching, including loops. We give program transforms which may help alleviate this difficulty.

40 citations

DOI
02 Jan 1993
TL;DR: Preliminary algorithms for detecting deadlock in two very different synchronization paradigms: binary semaphores, and the dynamic, pointer-based process control of Concurrent C.
Abstract: Parallel and distributed programming languages often include explicit synchronization primitives, such as rendezvous and semaphores. Such programs are subject to synchronization anomalies; the program behaves incorrectly because it has a faulty synchronization structure. A deadlock is an anomaly in which some subset of the active tasks of the program mutually wait on each other to advance; thus, the program cannot complete execution. In static anomaly detection, the source code of a program is automatically analyzed to determine if the program can ever exhibit a specific anomaly. Static anomaly detection has the unique advantage that it can certify programs to be free of the tested anomaly; dynamic testing cannot generally do this. Though exact static detection of deadlocks is NP-hard (Tay83a), many researchers have tried to detect deadlock by exhaustive enumeration of synchronization states, using Petri nets or other program representations. In practice, programs often have large enough state spaces to render this approach impractical. Our approach, rather, is to make an approximate analysis of the program in time polynomial in the size of the source code. Our approximation is safe: if we certify a program free of deadlock, it will never deadlock. We do this using iterative flow analysis techniques to detect (but not enumerate) "deadlock cycles" in the program's control and synchronization structure. We identify four constraints on deadlock cycles which we use to prune invalid cycles, thus avoiding false alarms. One pruning method uses a "can't happen together" relation between statements; we show how such information can be found, and how it may be of value in other analyses. We have implemented our analysis for the rendezvous synchronization of the Ada language, and have tested it on over 100 programs obtained from government and industrial sources. 
We demonstrate that our technique is quite accurate compared to exhaustive state generation; few false alarms are seen in practice. Our technique is also well behaved in execution time, running faster than the exhaustive technique for all programs except those with the simplest state spaces. To demonstrate the generality of our methods, we show preliminary algorithms for detecting deadlock in two very different synchronization paradigms: binary semaphores, and the dynamic, pointer-based process control of Concurrent C.
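The "deadlock cycle" idea in the abstract can be sketched with a standard graph check. This is an illustrative simplification, not the thesis's sync-graph algorithm: deadlock requires a cycle of tasks waiting on each other, so if a statically over-approximated wait-for graph is acyclic, the program can be certified deadlock-free under that model. Task names and the graph representation below are assumptions for the example.

```python
def has_wait_cycle(wait_for):
    """wait_for maps each task to the set of tasks it may wait on.
    Returns True if a cycle exists; False certifies deadlock freedom
    under this (over-approximate) model."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def dfs(t):
        color[t] = GRAY                      # on the current DFS path
        for u in wait_for.get(t, ()):
            c = color.get(u, WHITE)
            if c == GRAY:                    # back edge: cycle found
                return True
            if c == WHITE and dfs(u):
                return True
        color[t] = BLACK                     # fully explored
        return False

    return any(color[t] == WHITE and dfs(t) for t in list(wait_for))

print(has_wait_cycle({"A": {"B"}, "B": {"A"}}))   # → True
print(has_wait_cycle({"A": {"B"}, "B": set()}))   # → False
```

Because the wait-for edges are over-approximated, a reported cycle may still be a false alarm; the thesis's four pruning constraints exist precisely to discard such invalid cycles.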

38 citations

Proceedings ArticleDOI
15 Jun 1992
TL;DR: The authors show that optimization of hard real time programs cannot be separated from code generation, register allocation, and scheduling, even under a very simple model of program execution; it is therefore difficult.
Abstract: Classical compiler optimizations are designed to reduce the expected execution time or memory use of programs. Optimizations for hard real time programs must meet more stringent constraints: all transformations applied to the program must be safe, in that they will never cause a deadline to be missed in any execution of the program. The authors show that optimization of hard real time programs cannot be separated from code generation, register allocation, and scheduling, even under a very simple model of program execution; it is therefore difficult. Optimization is also necessary, in that it may be needed to ensure that the program meets its deadlines. They examine the classical source code transformations for both sequential optimization and parallel programming (vectorization and concurrentization), presenting brief examples showing when each transformation may be unsafe. They classify each of these transformations in a system of five categories of safety, and describe what additional information (if any) is required to ensure that each transformation is safe.

28 citations


Cited by
01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science" and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Book
01 Nov 2002
TL;DR: Drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: “Clean code that works” is Ron Jeffries’ pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:
- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here’s what we do: drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short). In Test-Driven Development, you:
- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:
- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can’t wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:
1. Red: write a little test that doesn’t work, perhaps doesn’t even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor. The TDD mantra.
Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:
- If the defect density can be reduced enough, QA can shift from reactive to pro-active work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what’s my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Courage. Test-driven development is a way of managing fear during programming. I don’t mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can’t-see-the-end-from-the-beginning sense. If pain is nature’s way of saying “Stop!”, fear is nature’s way of saying “Be careful.” Being careful is good, but fear has a host of other effects:
- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:
- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You’ll have to work on grumpiness on your own.)
Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you’re going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test. Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn’t an absolute like Extreme Programming. XP says, “Here are things you must be able to do to be prepared to evolve further.” TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. “What if I do a paper design for a week, then test-drive the code? Is that TDD?” Sure, it’s TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. That said, most people who learn TDD find their programming practice changed for good. “Test Infected” is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn’t making progress. There are certainly programming tasks that can’t be driven solely by tests (or at least, not yet). 
Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can’t be reliably duplicated by running the code. Once you are finished reading this book, you should be ready to:
- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.

This book is organized into three sections:
- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you’ve been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference.
Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read. A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of “reality.” However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
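The red/green cycle described above can be made concrete with a tiny sketch, loosely in the spirit of the book's multi-currency example. The class and test names below are illustrative, not the book's actual code: written test-first, the test would fail ("red") until Money.times and Money.__eq__ exist, then pass ("green").

```python
import unittest

class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

    def times(self, factor):
        # "Green": the simplest code that makes the test pass.
        return Money(self.amount * factor, self.currency)

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

class TestMoney(unittest.TestCase):
    def test_multiplication(self):
        # Written first; drives the design of Money.times.
        five = Money(5, "USD")
        self.assertEqual(Money(10, "USD"), five.times(2))

if __name__ == "__main__":
    unittest.main()
```

The refactor step would then remove any duplication introduced while getting to green, keeping the test passing throughout.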

1,864 citations

Proceedings ArticleDOI
Patrice Godefroid1
01 Jan 1997
TL;DR: This paper discusses how model checking can be extended to deal directly with "actual" descriptions of concurrent systems, e.g., implementations of communication protocols written in programming languages such as C or C++, and introduces a new search technique that is suitable for exploring the state spaces of such systems.
Abstract: Verification by state-space exploration, also often referred to as "model checking", is an effective method for analyzing the correctness of concurrent reactive systems (e.g., communication protocols). Unfortunately, existing model-checking techniques are restricted to the verification of properties of models, i.e., abstractions, of concurrent systems. In this paper, we discuss how model checking can be extended to deal directly with "actual" descriptions of concurrent systems, e.g., implementations of communication protocols written in programming languages such as C or C++. We then introduce a new search technique that is suitable for exploring the state spaces of such systems. This algorithm has been implemented in VeriSoft, a tool for systematically exploring the state spaces of systems composed of several concurrent processes executing arbitrary C code. As an example of application, we describe how VeriSoft successfully discovered an error in a 2500-line C program controlling robots operating in an unpredictable environment.
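The core idea behind state-space exploration can be sketched in miniature. This is a hypothetical illustration, not VeriSoft's search technique: it simply enumerates every interleaving of two processes' steps, each of which is one possible execution to check. The step names and representation are assumptions for the example.

```python
def interleavings(a, b):
    """Yield all merges of step sequences a and b that preserve each
    process's own internal order; each merge is one possible execution."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):   # schedule a's next step first
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):   # or b's next step first
        yield [b[0]] + rest

# Two steps in one process, one in the other: C(3,1) = 3 interleavings.
print(sum(1 for _ in interleavings(["p1", "p2"], ["q1"])))  # → 3
```

The count grows as the binomial coefficient C(m+n, m), which is why naive enumeration blows up and why techniques like VeriSoft's pruned search matter in practice.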

867 citations

Journal ArticleDOI
Ganesan Ramalingam1
TL;DR: The article shows that an analysis that is simultaneously both context-sensitive and synchronization-sensitive (that is, a context-sensitive analysis that precisely takes into account the constraints on execution order imposed by the synchronization statements in the program) is impossible even for the simplest of analysis problems.
Abstract: Static program analysis is concerned with the computation of approximations of the runtime behavior of programs. Precise information about a program's runtime behavior is, in general, uncomputable for various different reasons, and each reason may necessitate making certain approximations in the information computed. This article illustrates one source of difficulty in static analysis of concurrent programs. Specifically, the article shows that an analysis that is simultaneously both context-sensitive and synchronization-sensitive (that is, a context-sensitive analysis that precisely takes into account the constraints on execution order imposed by the synchronization statements in the program) is impossible even for the simplest of analysis problems.

296 citations

Book ChapterDOI
TL;DR: This chapter briefly describes the concepts of software architecture and software architectural evaluations, describes a new process for software architectural evaluation, provides results from two case studies where this process was applied, and presents areas for future work.
Abstract: As software systems become increasingly complex, the need to investigate and evaluate them at high levels of abstraction becomes more important. When systems are very complex, evaluating the system from an architectural level is necessary in order to understand the structure and interrelationships among the components of the system. There are several existing approaches available for software architecture evaluation. Some of these techniques, pre-implementation software architectural evaluations, are performed before the system is implemented. Others, implementation-oriented software architectural evaluations, are performed after a version of the system has been implemented. This chapter briefly describes the concepts of software architecture and software architectural evaluations, describes a new process for software architectural evaluation, provides results from two case studies where this process was applied, and presents areas for future work.

266 citations