scispace - formally typeset

Showing papers on "Test suite" published in 2001


Journal ArticleDOI
TL;DR: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal, such as rate of fault detection, a measure of how quickly faults are detected within the testing process.
Abstract: Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.

1,200 citations
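The "total coverage" and "additional coverage" orderings described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the test cases and coverage sets are hypothetical.

```python
# Sketch of the "total" vs "additional" coverage prioritization
# heuristics. Each test case is a (name, set-of-covered-statements) pair.

def prioritize_total(tests):
    """Order tests by total coverage, descending (ties keep input order)."""
    return sorted(tests, key=lambda t: len(t[1]), reverse=True)

def prioritize_additional(tests):
    """Greedily pick the test covering the most not-yet-covered code."""
    remaining, covered, order = list(tests), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(t[1] - covered))
        remaining.remove(best)
        covered |= best[1]
        order.append(best)
    return order

suite = [("t1", {1, 2, 3}), ("t2", {3, 4}), ("t3", {5})]
print([n for n, _ in prioritize_total(suite)])       # largest coverage first
print([n for n, _ in prioritize_additional(suite)])
```

The two heuristics diverge on larger suites: "additional" de-prioritizes tests whose coverage overlaps with tests already scheduled, which is why the paper treats them as distinct techniques.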


Proceedings ArticleDOI
01 Oct 2001
TL;DR: A safe regression-test-selection technique is presented that, based on the use of a suitable representation, handles the features of the Java language and also handles incomplete programs.
Abstract: Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression-test-selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection tool that implements the technique; results obtained with RETEST indicate that the algorithm can be effective in reducing the size of the test suite.

344 citations


Journal ArticleDOI
TL;DR: In this paper, the authors conducted an experiment to examine the relative costs and benefits of several regression test selection techniques, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs.
Abstract: Regression testing is the process of validating modified software to detect whether new errors have been introduced into previously tested code and to provide confidence that modifications are correct. Since regression testing is an expensive process, researchers have proposed regression test selection techniques as a way to reduce some of this expense. These techniques attempt to reduce costs by selecting and running only a subset of the test cases in a program's existing test suite. Although there have been some analytical and empirical evaluations of individual techniques, to our knowledge only one comparative study, focusing on one aspect of two of these techniques, has been reported in the literature. We conducted an experiment to examine the relative costs and benefits of several regression test selection techniques. The experiment examined five techniques for reusing test cases, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs. Our results highlight several differences between the techniques, and expose essential trade-offs that should be considered when choosing a technique for practical application.

334 citations


Journal ArticleDOI
TL;DR: A new method for generating motion fields from real sequences containing polyhedral objects is presented, together with a test suite for benchmarking optical flow algorithms that consists of complex synthetic sequences and real scenes with ground truth.

294 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a test suite derivation algorithm for black-box conformance testing of timed I/O automata, inspired by the timed automaton model of Alur and Dill, together with a notion of test sequence for this model.

292 citations


Proceedings ArticleDOI
01 Sep 2001
TL;DR: This paper presents new coverage criteria to help determine whether a GUI has been adequately tested, and presents an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code.
Abstract: A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to GUIs. This paper's focus is on coverage criteria for GUIs, important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report to guide further testing and an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code.

260 citations
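The event-sequence coverage idea above can be illustrated with a small sketch: treat the event-flow graph as a set of edges (event pairs) and measure what fraction of them a test suite exercises. The graph and test cases below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of length-2 event-sequence (edge) coverage over an
# event-flow graph. An edge (a, b) means event b may follow event a;
# a test case is a sequence of events.

def edge_coverage(efg_edges, test_cases):
    """Fraction of event-flow-graph edges exercised by the test cases."""
    exercised = set()
    for case in test_cases:
        exercised.update(zip(case, case[1:]))  # consecutive event pairs
    return len(exercised & efg_edges) / len(efg_edges)

edges = {("open", "edit"), ("edit", "save"), ("edit", "undo"), ("save", "close")}
tests = [["open", "edit", "save", "close"], ["open", "edit", "undo"]]
print(edge_coverage(edges, tests))  # all 4 edges covered -> 1.0
```

Longer event sequences (length 3 and beyond) blow up combinatorially, which is why the paper falls back on the GUI's component hierarchy to pick which sequences matter.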


01 Jan 2001
TL;DR: This thesis develops a unified solution to the GUI testing problem with the particular goals of automation and integration of tools and techniques used in various phases of GUI testing by developing a GUI testing framework with a GUI model as its central component.
Abstract: The widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. Although the use of GUIs continues to grow, GUI testing has remained a neglected research area. Since GUIs have characteristics that are different from those of conventional software, such as user events for input and graphical output, techniques developed to test conventional software cannot be directly applied to test GUIs. This thesis develops a unified solution to the GUI testing problem with the particular goals of automation and integration of tools and techniques used in various phases of GUI testing. These goals are accomplished by developing a GUI testing framework with a GUI model as its central component. For efficiency and scalability, a GUI is represented as a hierarchy of components, each used as a basic unit of testing. The framework also includes a test coverage evaluator, test case generator, test oracle, test executor, and regression tester. The test coverage evaluator employs hierarchical, event-based coverage criteria to automatically specify what to test in a GUI and to determine whether the test suite has adequately tested the GUI. The test case generator employs plan generation techniques from artificial intelligence to automatically generate a test suite. A test executor automatically executes all the test cases on the GUI. As test cases are being executed, a test oracle automatically determines the correctness of the GUI. The test oracle employs a model of the expected state of the GUI in terms of its constituent objects and their properties. After changes are made to a GUI, a regression tester partitions the original GUI test suite into valid test cases that represent correct input/output for the modified GUI and invalid test cases that no longer represent correct input/output. The regression tester employs a new technique to reuse some of the invalid test cases by repairing them. A cursory exploration of extending the framework to handle the new testing requirements of web-user interfaces (WUIs) is also done. The framework has been implemented and experiments have demonstrated that the developed techniques are both practical and useful.

193 citations


Proceedings ArticleDOI
07 Nov 2001
TL;DR: New algorithms for test-suite reduction and prioritization that can be tailored effectively for use with modified condition/decision coverage (MC/DC)-adequate test suites are presented.
Abstract: Software testing is particularly expensive for developers of high-assurance software, such as software that is produced for commercial airborne systems. One reason for this expense is the Federal Aviation Administration's requirement that test suites be modified condition/decision coverage (MC/DC) adequate. Despite its cost, there is evidence that MC/DC is an effective verification technique, and can help to uncover safety faults. As the software is modified and new test cases are added to the test suite, the test suite grows, and the cost of regression testing increases. To address the test-suite size problem, researchers have investigated the use of test-suite reduction algorithms, which identify a reduced test suite that provides the same coverage of the software, according to some criterion, as the original test suite, and test-suite prioritization algorithms, which identify an ordering of the test cases in the test suite according to some criteria or goals. Existing test-suite reduction and prioritization techniques, however, may not be effective in reducing or prioritizing MC/DC-adequate test suites because they do not consider the complexity of the criterion. The paper presents new algorithms for test-suite reduction and prioritization that can be tailored effectively for use with MC/DC. The paper also presents the results of a case study of the test-suite reduction algorithm.

151 citations
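The test-suite reduction problem this abstract addresses is, at its core, a set-cover problem, and the standard baseline is a greedy heuristic. The sketch below shows that baseline only; the MC/DC-aware tailoring the paper contributes is not reproduced here, and the requirement/test data is hypothetical.

```python
# Minimal greedy test-suite reduction sketch (classic set-cover
# heuristic). "Requirements" could be statements, branches, or MC/DC
# obligations, depending on the adequacy criterion in use.

def reduce_suite(tests, requirements):
    """Select a subset of tests covering every coverable requirement."""
    # Only chase requirements some test can actually satisfy.
    uncovered = set(requirements) & set().union(*tests.values())
    selected = []
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        selected.append(best)
        uncovered -= tests[best]
    return selected

tests = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r3"}}
print(reduce_suite(tests, {"r1", "r2", "r3"}))
```

As the abstract notes, a reduction that only counts covered requirements can be misleading for a complex criterion like MC/DC, where individual obligations are not equally easy to satisfy.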


Journal ArticleDOI
TL;DR: The experimental results suggest that a hybrid regression test-selection tool that combines features of TestTube and DejaVu may be an answer to these complications; an initial case study is presented that demonstrates the potential benefit of such a tool.
Abstract: Regression test-selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. Over the past two decades, numerous regression test-selection techniques have been described in the literature. Initial empirical studies of some of these techniques have suggested that they can indeed benefit testers, but so far, few studies have empirically compared different techniques. In this paper, we present the results of a comparative empirical study of two safe regression test-selection techniques. The techniques we studied have been implemented as the tools DejaVu and TestTube; we compared these tools in terms of a cost model incorporating precision (ability to eliminate unnecessary test cases), analysis cost, and test execution cost. Our results indicate that, in many instances, despite its relative lack of precision, TestTube can reduce the time required for regression testing as much as the more precise DejaVu. In other instances, particularly where the time required to execute test cases is long, DejaVu's superior precision gives it a clear advantage over TestTube. Such variations in relative performance can complicate a tester's choice of which tool to use. Our experimental results suggest that a hybrid regression test-selection tool that combines features of TestTube and DejaVu may be an answer to these complications; we present an initial case study that demonstrates the potential benefit of such a tool.

122 citations


Proceedings ArticleDOI
08 Oct 2001
TL;DR: The initial experience shows that this approach of requirement-based test generation may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of the test suite.
Abstract: Testing large software systems is very laborious and expensive. Model-based test generation techniques are used to automatically generate tests for large software systems. However, these techniques require manually created system models that are used for test generation. In addition, generated test cases are not associated with individual requirements. In this paper, we present a novel approach of requirement-based test generation. The approach accepts a software specification as a set of individual requirements expressed in textual and SDL formats (a common practice in the industry). From these requirements, a system model is automatically created with requirement information mapped to the model. The system model is used to automatically generate test cases related to individual requirements. Several test generation strategies are presented. The approach is extended to requirement-based regression test generation related to changes on the requirement level. Our initial experience shows that this approach may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of the test suite.

114 citations


Book ChapterDOI
02 Apr 2001
TL;DR: A new technique for automatic generation of real-time black-box conformance tests for non-deterministic systems from a determinizable class of timed automata specifications with a dense time interpretation is presented.
Abstract: Testing is the most dominating validation activity used by industry today, and there is an urgent need for improving its effectiveness, both with respect to the time and resources for test generation and execution, and obtained test coverage. We present a new technique for automatic generation of real-time black-box conformance tests for non-deterministic systems from a determinizable class of timed automata specifications with a dense time interpretation. In contrast to other attempts, our tests are generated using a coarse equivalence class partitioning of the specification. To analyze the specification, to synthesize the timed tests, and to guarantee coverage with respect to a coverage criterion, we use the efficient symbolic techniques recently developed for model checking of real-time systems. Application of our prototype tool to a realistic specification shows promising results in terms of both the test suite size, and the time and space used for test generation.

Patent
29 Nov 2001
TL;DR: In this paper, the authors present a method for distributed automated software GUI testing that maintains a centralized queue, which stores test instances to be executed by distributed test execution computers (test computers).
Abstract: A method for distributed automated software GUI testing includes maintaining a centralized queue, which stores test instances to be executed by distributed test execution computers (“test computers”). Each test computer includes a client platform and is connected to one or more server platforms, the client and server platforms collectively providing client-server combinations against which the tests may be executed. For each test computer: (1) a request for a test instance is received from a test computer in response to completion of a preceding test by the test computer; (2) in response, a test instance is retrieved from the queue and communicated to the test computer for execution using a testing component supported by the test computer; (3) the component performs automated software GUI testing and produces test results; and (4) in response to execution of the instance, a test result for the executed instance is received and stored.

Journal ArticleDOI
TL;DR: A method to automatically generate from ASM specifications test sequences which accomplish a desired coverage is introduced, which exploits the counter example generation of the model checker SMV.
Abstract: This paper tackles some aspects concerning the exploitation of Abstract State Machines (ASMs) for testing purposes. We define for ASM specifications a set of adequacy criteria measuring the coverage achieved by a test suite, and determining whether sufficient testing has been performed. We introduce a method to automatically generate from ASM specifications test sequences which accomplish a desired coverage. This method exploits the counterexample generation of the model checker SMV. We use ASMs as test oracles to predict the expected outputs of units under test.
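The counterexample-based generation mentioned above rests on a well-known "trap property" trick: to obtain a test that reaches a coverage goal, the model checker is asked to prove the goal unreachable, and its counterexample trace serves as the test sequence. The sketch below only builds such a property as a string; the goal condition is hypothetical, and actually obtaining the trace requires running SMV on the specification.

```python
# Sketch of the "trap property" idea: negate a coverage goal as a CTL
# specification, so the model checker's counterexample (if any) is a
# trace reaching the goal -- i.e., a test sequence.

def trap_property(goal_condition):
    """Render a coverage goal as an SMV trap property."""
    return f"SPEC AG !({goal_condition})"

print(trap_property("state = Error & input = reset"))
# -> SPEC AG !(state = Error & input = reset)
```

One such property is emitted per uncovered coverage item; goals that are genuinely unreachable simply yield a proof instead of a trace.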

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of simulation problem analysis and research kernel (SPARK) and the HVACSIM+ programs by means of benchmark testing and showed that the graph-theoretic techniques employed in SPARK offer significant speed advantages over the other methods for significantly reducible problems and that even problem portions with little reduction potential can be solved efficiently.

Journal ArticleDOI
TL;DR: It is shown how the Rosenblum-Weyuker (RW) prediction model can be improved to provide an accounting of the distribution of modifications, and it is suggested that prediction results can be improved by incorporating this information.
Abstract: Regression testing is an important activity that can account for a large proportion of the cost of software maintenance. One approach to reducing the cost of regression testing is to employ a selective regression testing technique that: chooses a subset of a test suite that was used to test the software before the modifications; then uses this subset to test the modified software. Selective regression testing techniques reduce the cost of regression testing if the cost of selecting the subset from the test suite together with the cost of running the selected subset of test cases is less than the cost of rerunning the entire test suite. Rosenblum and Weyuker (1997) proposed coverage-based predictors for use in predicting the effectiveness of regression test selection strategies. Using the regression testing cost model of Leung and White (1989; 1990), Rosenblum and Weyuker demonstrated the applicability of these predictors by performing a case study involving 31 versions of the KornShell. To further investigate the applicability of the Rosenblum-Weyuker (RW) predictor, additional empirical studies have been performed. The RW predictor was applied to a number of subjects, using two different selective regression testing tools, DejaVu and TestTube. These studies support two conclusions. First, they show that there is some variability in the success with which the predictors work and second, they suggest that these results can be improved by incorporating information about the distribution of modifications. It is shown how the RW prediction model can be improved to provide such an accounting.
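The break-even condition this abstract states (selection plus selected execution must cost less than rerunning everything) can be written down directly. The sketch below is a deliberately simplified version of that comparison, with a single per-test cost; the actual Leung-White model and the RW predictor are richer than this.

```python
# Simplified cost comparison for selective regression testing:
# worthwhile only when analysis cost plus the cost of running the
# selected subset is below the cost of rerunning the full suite.

def selection_worthwhile(analysis_cost, per_test_cost, n_selected, n_total):
    cost_selective = analysis_cost + n_selected * per_test_cost
    cost_retest_all = n_total * per_test_cost
    return cost_selective < cost_retest_all

# With cheap tests, even modest analysis overhead can dominate:
print(selection_worthwhile(analysis_cost=50, per_test_cost=1,
                           n_selected=40, n_total=100))  # 90 < 100 -> True
print(selection_worthwhile(analysis_cost=50, per_test_cost=1,
                           n_selected=60, n_total=100))  # 110 < 100 -> False
```

This is also why the DejaVu/TestTube comparison above turns on test execution time: long-running tests make precision (a smaller selected subset) worth paying for.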

Journal Article
TL;DR: The annotated bibliography highlights work in the area of algorithmic test generation from formal specifications with guaranteed fault coverage, i.e., fault model-driven test derivation as a triple, comprising a finite state specification, conformance relation and fault domain that is the set of possible implementations.
Abstract: The annotated bibliography highlights work in the area of algorithmic test generation from formal specifications with guaranteed fault coverage, i.e., fault model-driven test derivation. A fault model is understood as a triple, comprising a finite state specification, conformance relation and fault domain that is the set of possible implementations. The fault model can be specialized to Input/Output FSM, Labeled Transition System, or Input/Output Automaton and to a number of conformance relations such as FSM equivalence, reduction or quasi-equivalence, trace inclusion or trace equivalence and others. The fault domain usually reflects test assumptions, as an example, it can be the universe of all possible I/O FSMs with a given number of states, a classical fault domain in FSM-based testing. A test suite is complete with respect to a given fault model when each implementation from the fault domain passes it if and only if the postulated conformance relation holds between the implementation and its specification. A complete test suite is said to provide fault coverage guarantee for a given fault model.

Patent
Joseph T. Apuzzo, John Paul Marino, Curtis L. Hoskins, Timothy L. Race, Hemant R. Suri
23 Oct 2001
TL;DR: In this article, a functional testing and evaluation technique is provided employing an abstraction matrix that describes a complex software component to be tested, and test cases are derived from the at least one test case scenario and used to test the software component.
Abstract: A functional testing and evaluation technique is provided employing an abstraction matrix that describes a complex software component to be tested. The abstraction matrix includes at least one test case scenario and mapped expected results therefor. Test cases are derived from the at least one test case scenario and used to test the software component, thereby generating test results. The test results are automatically evaluated using the abstraction matrix. The evaluating includes comparing a test case to the at least one test case scenario of the abstraction matrix and if a match is found, comparing the test result for that test case with the mapped expected result therefor in the abstraction matrix.

Journal ArticleDOI
TL;DR: The semismooth algorithm has the potential to meet both reliability and scalability requirements; a sophisticated implementation is described in detail and shown to scale well to very large problems.
Abstract: Complementarity solvers are continually being challenged by modelers demanding improved reliability and scalability. Building upon a strong theoretical background, the semismooth algorithm has the potential to meet both of these requirements. We discuss relevant theory associated with the algorithm and then describe a sophisticated implementation in detail. Particular emphasis is given to the use of preconditioned iterative methods to solve the (nonsymmetric) systems of linear equations generated at each iteration and robust methods for dealing with singularity. Results on the MCPLIB test suite indicate that the code is reliable and efficient and scales well to very large problems.

Patent
05 Sep 2001
TL;DR: In this article, a test coverage tool provides output that identifies differences between the actual coverage provided by a test suite run on a program under test and the coverage criteria required by the test/development team management.
Abstract: A test coverage tool provides output that identifies differences between the actual coverage provided by a test suite run on a program under test and the coverage criteria (e.g., the coverage criteria required by the test/development team management). The output from the test coverage tool is generated in the same language that was used to write the coverage criteria that are input to an automated test generator to create the test cases which form the test suite. As a result, the output from the coverage tool can be input back into the automated test generator to cause the generator to revise the test cases to correct the inadequacies. This allows iterative refinement of the test suite automatically, enabling automated test generation to be more effectively and efficiently used with more complex software and more complex test generation inputs. In preferred embodiments, test coverage analysis results of several different test suites, some manually generated and others automatically generated, are used to produce a streamlined automatically-generated test suite and/or to add missing elements to an automatically generated test-suite.

Proceedings ArticleDOI
01 Mar 2001
TL;DR: This work addresses difficulties of regression testing that follows modifications to database applications, and proposes a two-phase regression testing methodology that is based on dependencies that exist among the components of database applications.
Abstract: Database application features such as SQL, exception programming, integrity constraints, and table triggers pose some difficulties for maintenance activities, especially for regression testing that follows modifications to database applications. In this work, we address these difficulties and propose a two-phase regression testing methodology. In Phase 1, we explore control flow and data flow analysis issues of database applications. Then, we propose an impact analysis technique that is based on dependencies that exist among the components of database applications. This analysis leads to selecting test cases from the initial test suite for regression testing the modified application. In Phase 2, further reduction in the regression test cases is performed by using reduction algorithms. We present two such algorithms. Finally, a maintenance environment for database applications is described. Our experience with the environment prototype shows promising results.

Proceedings ArticleDOI
04 Apr 2001
TL;DR: A series of experiments exploring three factors: program structure, test suite composition and change characteristics showed that the rate of fault detection of test suites could be significantly improved by using more powerful prioritization techniques.
Abstract: Test case prioritization techniques let testers order their test cases so that those with higher priority, according to some criterion, are executed earlier than those with lower priority. In previous work (1999, 2000), we examined a variety of prioritization techniques to determine their ability to improve the rate of fault detection of test suites. Our studies showed that the rate of fault detection of test suites could be significantly improved by using more powerful prioritization techniques. In addition, they indicated that rate of fault detection was closely associated with the target program. We also observed a large quantity of unexplained variance, indicating that other factors must be affecting prioritization effectiveness. These observations motivate the following research questions. (1) Are there factors other than the target program and the prioritization technique that consistently affect the rate of fault detection of test suites? (2) What metrics are most representative of each factor? (3) Can the consideration of additional factors lead to more efficient prioritization techniques? To address these questions, we performed a series of experiments exploring three factors: program structure, test suite composition and change characteristics. This paper reports the results and implications of those experiments.

Proceedings ArticleDOI
22 Oct 2001
TL;DR: A task model development environment centered around a machine learning engine that infers task models from examples is presented; a novel aspect is support for a domain expert to refine past examples as he or she develops a clearer understanding of how to model the domain.
Abstract: Task models are used in many areas of computer science including planning, intelligent tutoring, plan recognition, interface design, and decision theory. However, developing task models is a significant practical challenge. We present a task model development environment centered around a machine learning engine that infers task models from examples. A novel aspect of the environment is support for a domain expert to refine past examples as he or she develops a clearer understanding of how to model the domain. Collectively, these examples constitute a "test suite" that the development environment manages in order to verify that changes to the evolving task model do not have unintended consequences.

Book ChapterDOI
02 Apr 2001
TL;DR: A new coarse grain approach to automated integrated (functional) testing is presented, which combines three paradigms: library-based test design, meaning construction of test graphs by combination of test case components on a coarse granular level; incremental formalization, through successive enrichment of a special-purpose environment for application-specific test development and execution; and library-based consistency checking, allowing continuous verification of application- and aspect-specific properties by means of model checking.
Abstract: In this paper we present a new coarse grain approach to automated integrated (functional) testing, which combines three paradigms: library-based test design, meaning construction of test graphs by combination of test case components on a coarse granular level, incremental formalization, through successive enrichment of a special-purpose environment for application-specific test development and execution, and library-based consistency checking, allowing continuous verification of application- and aspect-specific properties by means of model checking. These features and their impact for the test process and the test engineers are illustrated along an industrial application: an automated integrated testing environment for CTI-Systems.

Patent
Michael Dean Dallin
10 Oct 2001
TL;DR: In this article, a test data is processed by a test generation system based upon the type template and the output template to automatically generate a test script file having at least one test case.
Abstract: A method and system for testing a software product is provided. Specifically, a type template, an output template, and a table of test data pertaining to the software product are provided. The test data is processed by a test generation system based upon the type template and the output template to automatically generate a test script file having at least one test case. The test script file is then used to test the software product.

Patent
20 Jul 2001
TL;DR: In this article, a test plan tool and database are provided to capture test item definitions and identifications and a monitor generator is provided to generate monitors for detecting functional coverage during verification.
Abstract: A system and method for automatic verification of a test plan for a semiconductor device. The device is specified by a hardware description language (HDL) model or a formal description language model derived from the HDL model. A test plan tool and database are provided to capture test item definitions and identifications and a monitor generator is provided to generate monitors for detecting functional coverage during verification. An evaluator is provided to compare verification-events with test-plan items, to determine test item completeness and to update the test plan database.

Proceedings ArticleDOI
15 Jan 2001
TL;DR: This paper introduces the first real single-FPGA concurrent multi-user operating system for reconfigurable computers and describes the implementation details for the first limited multi-user operating system.
Abstract: Traditional reconfigurable computing platforms are designed to be single-user and have been acknowledged to be difficult to design applications for. The design tools are still primitive, and as reconfigurable computing becomes mainstream, the development of new design tools and run-time environments is essential. As the number of system gates on current FPGAs reaches 10 million, there is increasing demand to share a single FPGA amongst multiple applications. A third party must be introduced to handle the sharing of the FPGA, and we therefore introduce the first real single-FPGA concurrent multi-user operating system for reconfigurable computers. In this paper we describe the complete operating system for a reconfigurable architecture and the implementation details for the first limited multi-user operating system. The first OS is a loader: it allocates FPGA area and can dynamically partition, place, and route applications at run time. As operating systems for reconfigurable computing are a new area of research, we also had to develop techniques for regression testing and performance comparison, which involved the development of a test suite.

Journal ArticleDOI
TL;DR: Techniques are discussed for test case generation from Specification and Description Language (SDL) specifications and from SDL's underlying behavioral model, extended finite state machines (EFSMs) and their variants.

Proceedings ArticleDOI
26 Nov 2001
TL;DR: This paper shows how test purposes are exploited today by several tools that automate the generation of test cases, and presents the major relations that link test purposes, test cases, and the reference specification.
Abstract: Nowadays, test cases may correspond to elaborate programs. It is therefore sensible to try to specify test cases in order to get a more abstract view of them. This paper explores the notion of a test purpose as a way to specify a set of test cases. It shows how test purposes are exploited today by several tools that automate the generation of test cases. It presents the major relations that link test purposes, test cases, and the reference specification. It also explores the similarities and differences between the specification of test cases and the specification of programs. This opens perspectives for the synthesis and verification of test cases, and for other activities like test case retrieval.
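The paper's formal framework is not reproduced in this abstract, but the core idea, a test purpose acting as a selector over behaviours of the reference specification, can be illustrated with a toy sketch. The transition system, action names, and depth bound below are invented for illustration.

```python
# Hedged sketch: a test purpose as a predicate over (bounded) traces of a
# reference specification, used to select a set of test cases.

spec = {  # tiny labelled transition system: state -> [(action, next_state)]
    "s0": [("connect", "s1")],
    "s1": [("send", "s1"), ("close", "s2")],
    "s2": [],
}

def traces(state, depth):
    """Enumerate action sequences of the spec up to a bounded depth."""
    if depth == 0 or not spec[state]:
        yield []
        return
    for action, nxt in spec[state]:
        for rest in traces(nxt, depth - 1):
            yield [action] + rest

# A test purpose: "exercise behaviours that eventually close the session".
def purpose(trace):
    return "close" in trace

test_cases = [t for t in traces("s0", 4) if purpose(t)]
```

Tools in this area typically express purposes as automata rather than predicates, but the selection relation has the same shape: the test cases are the specification behaviours accepted by the purpose.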

Journal ArticleDOI
TL;DR: A method is presented to determine the repetition numbers of test sequences, assuming that each transition is executed with a fixed probability when a nondeterministic choice is made.
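The paper's own derivation is not given in the TL;DR; the following is the standard calculation for one such repetition number under the stated assumption of a fixed per-attempt probability: repeating a sequence n times exercises a transition of probability p at least once with confidence 1 - (1 - p)^n.

```python
# Hedged sketch, not necessarily the paper's exact method: the smallest
# repetition number guaranteeing a target confidence of exercising one
# probabilistically chosen transition.
import math

def repetitions_needed(p, confidence=0.99):
    """Smallest n with 1 - (1 - p)**n >= confidence, i.e. how often a test
    sequence must be repeated so a transition taken with probability p at a
    nondeterministic choice is exercised at least once with that confidence."""
    if not 0 < p < 1:
        raise ValueError("p must be in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))
```

For example, a transition chosen with probability 0.5 needs 7 repetitions for 99% confidence, since 1 - 0.5^7 ≈ 0.992.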

Journal ArticleDOI
TL;DR: This investigation presents a strategy to construct a compact mathematical model of the path-restoration version of the spare capacity allocation problem that uses a node-arc formulation and combines constraints whenever multiple working paths affected by an edge failure have identical origins or destinations.
Abstract: This investigation presents a strategy to construct a compact mathematical model of the path-restoration version of the spare capacity allocation problem. The strategy uses a node-arc formulation and combines constraints whenever multiple working paths affected by an edge failure have identical origins or destinations. Another unique feature of this model is the inclusion of modularity restrictions corresponding to the discrete capacities of the equipment used in telecommunication networks. The new model can be solved using a classical branch-and-bound algorithm with a linear-programming relaxation. A preprocessing module is developed, which generates a set of cuts that strengthens this linear-programming relaxation. The overhead associated with the cuts is offset by the improved bounds produced. A new branch-and-bound algorithm is developed that exploits the modularity restrictions. In an extensive empirical analysis, a software implementation of this algorithm was found to be substantially faster than CPLEX 6.5.3. For a test suite of 50 problems, each having 50 nodes and 200 demands from a uniform distribution with a small variance, our new software obtained solutions guaranteed to be within 4% of optimality in five minutes of CPU time on a DEC AlphaStation.
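The full node-arc model is beyond an abstract-sized sketch, but the modularity restriction it adds, namely that installed spare capacity comes in discrete equipment modules, can be illustrated in isolation. The module sizes and costs below are invented; the sketch merely picks a cheapest module combination covering a required spare capacity on a single edge.

```python
# Hedged sketch of the modularity aspect only (not the full formulation):
# an unbounded-knapsack style DP choosing a minimum-cost set of discrete
# modules whose total capacity covers a given demand.
import math

MODULES = [(4, 5.0), (12, 12.0), (48, 40.0)]  # (capacity, cost), assumed

def cheapest_modules(demand):
    """Minimum cost to install modules with total capacity >= demand."""
    best = [0.0] + [math.inf] * demand
    for need in range(1, demand + 1):
        for cap, cost in MODULES:
            # A module of capacity `cap` leaves max(0, need - cap) to cover.
            best[need] = min(best[need], cost + best[max(0, need - cap)])
    return best[demand]
```

For instance, a required spare capacity of 10 is covered most cheaply by one 12-unit module (cost 12.0) rather than three 4-unit modules (cost 15.0), which is exactly the kind of rounding the model's modularity constraints capture.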