Author

James A. Jones

Bio: James A. Jones is an academic researcher from University of California, Irvine. The author has contributed to research in topics: Software system & Software. The author has an h-index of 24 and has co-authored 55 publications receiving 4,615 citations. Previous affiliations of James A. Jones include Georgia Institute of Technology & University of California.


Papers
Proceedings ArticleDOI
07 Nov 2005
TL;DR: The studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.
Abstract: The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.
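For concreteness, the suspiciousness score that Tarantula assigns to each statement can be sketched as below. The formula (failing coverage ratio divided by the sum of the passing and failing coverage ratios) follows the standard description in the fault-localization literature; the data structures, function names, and example data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the Tarantula suspiciousness metric (illustrative names/data).
# coverage: maps each statement id to the set of test ids that execute it.
# passed / failed: sets of passing and failing test ids.

def tarantula_suspiciousness(coverage, passed, failed):
    scores = {}
    for stmt, tests in coverage.items():
        passed_s = len(tests & passed) / len(passed) if passed else 0.0
        failed_s = len(tests & failed) / len(failed) if failed else 0.0
        if passed_s + failed_s == 0:
            scores[stmt] = 0.0  # statement never executed by any test
        else:
            scores[stmt] = failed_s / (passed_s + failed_s)
    return scores

# Example: statement 3 is executed only by the failing test, so it ranks highest.
coverage = {1: {"t1", "t2", "t3"}, 2: {"t1", "t2"}, 3: {"t3"}}
passed, failed = {"t1", "t2"}, {"t3"}
ranking = sorted(tarantula_suspiciousness(coverage, passed, failed).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)  # [(3, 1.0), (1, 0.5), (2, 0.0)]
```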

1,142 citations

Proceedings ArticleDOI
19 May 2002
TL;DR: A new technique that uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases is presented.
Abstract: One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. This paper presents a new technique that uses visualization to assist with these tasks. The technique uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases. Based on this visual mapping, a user can inspect the statements in the program, identify statements involved in failures, and locate potentially faulty statements. The paper also describes a prototype tool that implements our technique along with a set of empirical studies that use the tool for evaluation of the technique. The empirical studies show that, for the subject we studied, the technique can be effective in helping a user locate faults in a program.
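A rough sketch of the kind of red-to-green color mapping the technique describes follows, assuming hue encodes the mix of passed and failed executions and brightness encodes how much of the test suite exercised the statement; the specific scale and function names are illustrative assumptions, not the prototype tool's exact implementation.

```python
# Sketch of a red-to-green color mapping for one statement (illustrative constants).

def statement_color(passed_ratio, failed_ratio):
    """passed_ratio / failed_ratio: fraction of passed / failed tests executing the statement."""
    total = passed_ratio + failed_ratio
    if total == 0:
        return None  # statement not executed by any test; typically rendered in gray
    hue = 120.0 * (passed_ratio / total)          # 0 = red (suspicious), 120 = green (safe)
    brightness = max(passed_ratio, failed_ratio)  # more executions -> brighter color
    return hue, brightness

print(statement_color(0.0, 1.0))  # (0.0, 1.0): executed only by failing tests -> bright red
print(statement_color(0.5, 0.5))  # (60.0, 0.5): mixed evidence -> yellow
```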

1,063 citations

Journal ArticleDOI
TL;DR: New algorithms for test-suite reduction and prioritization that can be tailored effectively for use with modified condition/decision coverage (MC/DC) are presented.
Abstract: Software testing is particularly expensive for developers of high-assurance software, such as software that is produced for commercial airborne systems. One reason for this expense is the Federal Aviation Administration's requirement that test suites be modified condition/decision coverage (MC/DC) adequate. Despite its cost, there is evidence that MC/DC is an effective verification technique and can help to uncover safety faults. As the software is modified and new test cases are added to the test suite, the test suite grows and the cost of regression testing increases. To address the test-suite size problem, researchers have investigated the use of test-suite reduction algorithms, which identify a reduced test suite that provides the same coverage of the software according to some criterion as the original test suite, and test-suite prioritization algorithms, which identify an ordering of the test cases in the test suite according to some criteria or goals. Existing test-suite reduction and prioritization techniques, however, may not be effective in reducing or prioritizing MC/DC-adequate test suites because they do not consider the complexity of the criterion. This paper presents new algorithms for test-suite reduction and prioritization that can be tailored effectively for use with MC/DC. The paper also presents the results of empirical studies of these algorithms.
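As background for the reduction problem the abstract describes, a generic greedy reduction over a coverage relation is sketched below; this is the standard set-cover heuristic for illustration only, not the MC/DC-tailored algorithms the paper presents, and the test and requirement names are made up.

```python
# Generic greedy test-suite reduction over a coverage relation (classic set-cover
# heuristic). Background illustration only, not the paper's MC/DC-tailored algorithm.

def greedy_reduce(coverage):
    """coverage: maps test id -> set of requirements (e.g., conditions) it covers."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Pick the test that covers the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

suite = {"t1": {"c1", "c2"}, "t2": {"c2", "c3"}, "t3": {"c1", "c2", "c3"}}
print(greedy_reduce(suite))  # ['t3'] covers everything the full suite covers
```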

395 citations

Proceedings ArticleDOI
01 Oct 2001
TL;DR: A safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language and also handles incomplete programs.
Abstract: Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression-test-selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection tool that implements the technique, and reports empirical results showing that the technique can be effective in reducing the size of the test suite.
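The core idea behind safe selection, keeping every test whose execution may traverse a change, can be illustrated with a simple coverage-based sketch; the paper's technique relies on a suitable program representation for Java, which this illustration does not reproduce, and the method and test names below are made up.

```python
# Illustrative coverage-based regression test selection: keep every test whose
# recorded coverage overlaps the set of changed entities (e.g., changed methods).
# This only sketches the "select everything that may reveal a fault" idea.

def select_tests(test_coverage, changed):
    """test_coverage: maps test id -> set of entities it executed on the old version."""
    return [t for t, covered in test_coverage.items() if covered & changed]

test_coverage = {
    "t1": {"A.foo", "B.bar"},
    "t2": {"B.bar"},
    "t3": {"C.baz"},
}
changed = {"A.foo"}
print(select_tests(test_coverage, changed))  # ['t1']
```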

344 citations

Proceedings ArticleDOI
16 May 2009
TL;DR: The empirical results show that the cost of fault localization using combinations of coverage is less than using any individual coverage type and closer to the best case (without knowing in advance which kinds of faults are present), and using inferred data-dependence coverage retains most of the benefits of combinations.
Abstract: Lightweight fault-localization techniques use program coverage to isolate the parts of the code that are most suspicious of being faulty. In this paper, we present the results of a study of three types of program coverage—statements, branches, and data dependencies—to compare their effectiveness in localizing faults. The study shows that no single coverage type performs best for all faults—different kinds of faults are best localized by different coverage types. Based on these results, we present a new coverage-based approach to fault localization that leverages the unique qualities of each coverage type by combining them. Because data dependencies are noticeably more expensive to monitor than branches, we also investigate the effects of replacing data-dependence coverage with an approximation inferred from branch coverage. Our empirical results show that (1) the cost of fault localization using combinations of coverage is less than using any individual coverage type and closer to the best case (without knowing in advance which kinds of faults are present), and (2) using inferred data-dependence coverage retains most of the benefits of combinations.
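One simple way to combine per-coverage-type suspiciousness scores is to average them per statement, as sketched below; the exact combination scheme evaluated in the paper may differ, and the scores and statement names shown are made up.

```python
# Illustrative combination of per-coverage-type suspiciousness scores by averaging.
# Each dict maps a statement to a score computed from one coverage type.

def combine_scores(*score_maps):
    statements = set().union(*score_maps)
    combined = {}
    for stmt in sorted(statements):
        values = [m[stmt] for m in score_maps if stmt in m]
        combined[stmt] = sum(values) / len(values)
    return combined

stmt_scores   = {"s1": 0.9, "s2": 0.4}
branch_scores = {"s1": 0.7, "s2": 0.8}
dua_scores    = {"s1": 0.6}  # data-dependence coverage, possibly inferred from branches
print(combine_scores(stmt_scores, branch_scores, dua_scores))
# {'s1': 0.733..., 's2': 0.600...}
```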

263 citations


Cited by
Book
01 Nov 2002
TL;DR: Drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: “Clean code that works” is Ron Jeffries’ pithy phrase. The goal is clean code that works, and for a whole bunch of reasons: Clean code that works is a predictable way to develop; you know when you are finished, without having to worry about a long bug trail. Clean code that works gives you a chance to learn all the lessons that the code has to teach you; if you only ever slap together the first thing you think of, you never have time to think of a second, better, thing. Clean code that works improves the lives of users of our software. Clean code that works lets your teammates count on you, and you on them. Writing clean code that works feels good. But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here’s what we do—drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short). In Test-Driven Development, you write new code only if you first have a failing automated test, and you eliminate duplication. Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are: you must design organically, with running code providing feedback between decisions; you must write your own tests, since you can’t wait twenty times a day for someone else to write a test; your development environment must provide rapid response to small changes; and your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy. The two rules imply an order to the tasks of programming: 1. Red—write a little test that doesn’t work, perhaps doesn’t even compile at first. 2. Green—make the test work quickly, committing whatever sins necessary in the process. 3. Refactor—eliminate all the duplication created in just getting the test to work. Red/green/refactor: the TDD mantra. Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications: if the defect density can be reduced enough, QA can shift from reactive to pro-active work; if the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development; if the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration; and again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers. So, the concept is simple, but what’s my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage. Test-driven development is a way of managing fear during programming. I don’t mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can’t-see-the-end-from-the-beginning sense.
If pain is nature’s way of saying “Stop!”, fear is nature’s way of saying “Be careful.” Being careful is good, but fear has a host of other effects: it makes you tentative, makes you want to communicate less, makes you shy from feedback, and makes you grumpy. None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and, instead of being tentative, begin learning concretely as quickly as possible; instead of clamming up, communicate more clearly; and instead of avoiding feedback, search out helpful, concrete feedback? (You’ll have to work on grumpiness on your own.) Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you’re going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test. Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn’t an absolute like Extreme Programming. XP says, “Here are things you must be able to do to be prepared to evolve further.” TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. “What if I do a paper design for a week, then test-drive the code? Is that TDD?” Sure, it’s TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. That said, most people who learn TDD find their programming practice changed for good. “Test Infected” is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn’t making progress. There are certainly programming tasks that can’t be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can’t be reliably duplicated by running the code. Once you are finished reading this book, you should be ready to: start simply, write automated tests, and refactor to add design decisions one at a time. This book is organized into three sections. An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically. An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing.
This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists. Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples. I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you’ve been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read. A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of “reality.” However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
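The red/green cycle described above is easiest to see in a tiny test-first step. The sketch below transposes the spirit of the book's multi-currency example into Python's unittest (the book itself works in Java with JUnit), so the class and test names are illustrative assumptions rather than the book's code.

```python
# Hypothetical first red/green step, in the spirit of the book's multi-currency
# example, using Python's unittest instead of the book's Java/JUnit.
import unittest

class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Dollar(self.amount * multiplier)

class TestMoney(unittest.TestCase):
    def test_multiplication(self):
        # Red: this test is written first and fails until Dollar.times exists;
        # Green: the minimal Dollar class above makes it pass.
        five = Dollar(5)
        self.assertEqual(10, five.times(2).amount)

if __name__ == "__main__":
    unittest.main()
```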

1,864 citations

Journal ArticleDOI
TL;DR: This paper surveys each area of minimization, selection and prioritization technique and discusses open problems and potential directions for future research.
Abstract: Regression testing is a testing activity that is performed to provide confidence that changes do not harm the existing behaviour of the software. Test suites tend to grow in size as software evolves, often making it too costly to execute entire test suites. A number of different approaches have been studied to maximize the value of the accrued test suite: minimization, selection and prioritization. Test suite minimization seeks to eliminate redundant test cases in order to reduce the number of tests to run. Test case selection seeks to identify the test cases that are relevant to some set of recent changes. Test case prioritization seeks to order test cases in such a way that early fault detection is maximized. This paper surveys each area of minimization, selection and prioritization technique and discusses open problems and potential directions for future research. Copyright © 2010 John Wiley & Sons, Ltd.
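To make the prioritization side of this taxonomy concrete, the classic "additional" greedy strategy discussed in this literature repeatedly picks the test that adds the most not-yet-covered requirements; the sketch below is a generic illustration with made-up coverage data, not any specific technique from the survey.

```python
# Illustrative "additional greedy" test-case prioritization: repeatedly pick the
# test that covers the most requirements not yet covered by the tests already
# ordered, resetting once full coverage is reached. Data is made up.

def additional_greedy_prioritize(coverage):
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
        if not uncovered and remaining:  # everything covered; reset for the rest
            uncovered = set().union(*coverage.values())
    return order

coverage = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}, "t4": {"r1", "r2", "r3"}}
print(additional_greedy_prioritize(coverage))  # ['t4', 't1', 't3', 't2']
```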

1,276 citations

Proceedings ArticleDOI
07 Nov 2005
TL;DR: The studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.
Abstract: The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.

1,142 citations

Book
01 Jan 2009
TL;DR: In this book, the authors define testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software, and present an innovative approach to explaining the testing process.
Abstract: Extensively class tested, this text takes an innovative approach to explaining the process of software testing: it defines testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software. The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications, and embedded software.

1,126 citations

MonographDOI
01 Jan 2008
TL;DR: The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications and embedded software.
Abstract: Extensively class tested, this text takes an innovative approach to explaining the process of software testing: it defines testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software. The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications, and embedded software.

1,079 citations