Showing papers on "White-box testing published in 2001"


Book ChapterDOI
01 May 2001
TL;DR: This paper outlines a design for a system that approximates mutation testing in a way that is accessible to everyday programmers, and argues that such a system could be efficient enough to be adopted by leading-edge software developers.
Abstract: Mutation testing is a powerful, but computationally expensive, technique for unit testing software. This expense has prevented mutation from becoming widely used in practical situations, but recent engineering advances have given us techniques and algorithms for significantly reducing the cost of mutation testing. These techniques include a new algorithmic execution technique called schema-based mutation, a reduction technique called selective mutation, heuristics for detecting equivalent mutants, and algorithms for automatic test data generation. This paper reviews experimentation with these advances and outlines a design for a system that will approximate mutation, but in a way that will be accessible to everyday programmers. We envision a system to which a programmer can submit a program unit and get back a set of input/output pairs that are guaranteed to form an effective test of the unit by being close to mutation adequate. We believe this system could be efficient enough to be adopted by leading-edge software developers. Full automation in unit testing has the potential to dramatically change the economic balance between testing and development, by reducing the cost of testing from the major part of the total development cost to a small fraction.
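To make the core idea concrete, here is a minimal sketch of mutation analysis in Python, not the authors' schema-based system: it flips arithmetic operators in a unit, runs a test suite against each mutant, and reports how many mutants were killed. The `price_with_tax` function, the operator table, and the test data are all hypothetical.

```python
# Minimal sketch of mutation analysis (not the paper's schema-based system).
# The subject function, the operator swaps, and the test suite are illustrative.
import ast

SUBJECT = """
def price_with_tax(amount, rate):
    return amount + amount * rate
"""

SWAPS = {ast.Add: ast.Sub, ast.Mult: ast.Div}  # simple operator mutations

def mutants(source):
    """Yield program variants, each with one arithmetic operator flipped."""
    n_ops = sum(isinstance(n, ast.BinOp) for n in ast.walk(ast.parse(source)))
    for i in range(n_ops):
        clone = ast.parse(source)
        target = [n for n in ast.walk(clone) if isinstance(n, ast.BinOp)][i]
        swap = SWAPS.get(type(target.op))
        if swap is not None:
            target.op = swap()
            yield ast.unparse(clone)  # requires Python 3.9+

def kills(mutant_src, tests):
    """A mutant is killed if any test case observes a different output."""
    env = {}
    exec(mutant_src, env)
    return any(env["price_with_tax"](*args) != expected for args, expected in tests)

tests = [((100, 0.2), 120.0), ((0, 0.2), 0.0)]
muts = list(mutants(SUBJECT))
killed = sum(kills(m, tests) for m in muts)
print(f"killed {killed} of {len(muts)} mutants")  # all killed -> close to mutation adequate
```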

369 citations


Book ChapterDOI
TL;DR: This work addresses the derivation of test requirements, which will be transformed into test cases, test oracles, and test drivers once detailed design information is available, as well as the issue of testability.
Abstract: System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly the use of the Object Constraint Language across all these artifacts. Our goal is to support the derivation of test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. Another important issue we address is that of testability. Testability requirements (or rules) need to be imposed on UML artifacts so as to be able to support system testing efficiently. Those testability requirements result from a trade-off between analysis and design overhead and improved testability. The potential for automation is also an overriding concern throughout our work, as the ultimate goal is to fully support testing activities with high-capability tools.

247 citations


Journal ArticleDOI
TL;DR: This work derives the optimal testing strategy as a function of testing cost, prior knowledge, and testing lead time and shows that in the case of imperfect testing, the attractiveness of parallel strategies decreases.
Abstract: An important managerial problem in product design is the extent to which testing activities are carried out in parallel or in series. Parallel testing has the advantage of proceeding more rapidly than serial testing but does not take advantage of the potential for learning between tests, thus resulting in a larger number of tests. We model this trade-off in the form of a dynamic program and derive the optimal testing strategy (or mix of parallel and serial testing) that minimizes both the total cost and time of testing. We derive the optimal testing strategy as a function of testing cost, prior knowledge, and testing lead time. Using information theory to measure the test efficiency, we further show that in the case of imperfect testing (due to noise or simulated test conditions), the attractiveness of parallel strategies decreases. Finally, we analyze the relationship between testing strategies and the structure of design hierarchy. We show that a key benefit of modular product architecture lies in the reduction of testing cost.
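The abstract does not reproduce the dynamic program itself; purely as a toy illustration of the parallel-versus-serial tension it describes, the sketch below compares the expected cost of testing all candidate designs at once against testing them one at a time with learning. Every parameter (number of candidates, test cost, test time, delay cost) is invented.

```python
# Toy illustration of the parallel-vs-serial trade-off described above.
# This is NOT the paper's dynamic program; all parameters are made up.

def parallel_cost(n, test_cost, test_time, delay_cost):
    """Test all n candidates at once: many tests, but only one test period."""
    return n * test_cost + delay_cost * test_time

def serial_cost(n, test_cost, test_time, delay_cost):
    """Test one candidate at a time, stopping at the first success; with this
    idealised 'learning', (n + 1) / 2 tests are needed on average."""
    expected_tests = (n + 1) / 2
    return expected_tests * (test_cost + delay_cost * test_time)

for delay in (0.1, 5.0):
    for n in (4, 64):
        p = parallel_cost(n, test_cost=1.0, test_time=1.0, delay_cost=delay)
        s = serial_cost(n, test_cost=1.0, test_time=1.0, delay_cost=delay)
        print(f"delay={delay:3.1f} n={n:2d}  parallel={p:6.1f}  serial={s:6.1f}")
# Cheap delay favours the smaller expected number of serial tests; expensive
# delay (long lead times) favours running the tests in parallel.
```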

245 citations


Book
15 Mar 2001
TL;DR: The authors reveal how object-oriented software development allows testing to be integrated into each stage of the process--from defining requirements to system integration--resulting in a smoother development process and a higher end quality.
Abstract: A Practical Guide to Testing Object-Oriented Software focuses on the real-world issues that arise in planning and implementing effective testing for object-oriented and component-based software development. It shows how testing object-oriented software differs from testing procedural software and highlights the unique challenges and opportunities inherent in object-oriented software testing. The authors reveal how object-oriented software development allows testing to be integrated into each stage of the process--from defining requirements to system integration--resulting in a smoother development process and a higher end quality. As they follow this process, they describe what to test at each stage as well as offer experience-based testing techniques. You will find information on such important topics as:
- Testing analysis and design models, including selecting test cases to guide design inspections
- Testing components, frameworks, and product lines
- The testing challenges of inheritance and polymorphism
- How to devise an effective testing strategy
- Testing classes, including constructing a test driver and test suites
- Testing object interactions, covering sampling test cases, off-the-shelf components, protocol testing, and test patterns
- Testing class hierarchies, featuring subclass test requirements
- Testing distributed objects, including threads, life cycle testing, and Web server testing
- Testing systems, with information on stress, life cycle, and performance testing
One comprehensive example runs throughout the book to demonstrate testing techniques for each stage of development. In addition, the book highlights important questions that testers should ask when faced with specific testing tasks. The authors acknowledge that testing is often viewed as a necessary evil, and that resources allocated to testing are often limited. With that in mind, they present a valuable repertoire of testing techniques from which you can choose those that fit your budget, schedule, and needs.

161 citations


Journal ArticleDOI
TL;DR: A mathematical model is developed that treats testing as an activity that generates information about technical and customer-need-related problems; its implications for managerial practice are discussed and suggestions for further research are provided.
Abstract: A fundamental problem in managing product development is the optimal timing, frequency, and fidelity of sequential testing activities that are carried out to evaluate novel product concepts and designs. In this paper, we develop a mathematical model that treats testing as an activity that generates information about technical and customer-need-related problems. An analysis of the model results in several important findings. First, optimal testing strategies need to balance the tension between several variables, including the increasing cost of redesign, the cost of a test as a function of fidelity, and the correlation between sequential tests. Second, a simple form of our model results in an EOQ-like result: the optimal number of tests, called the Economic Testing Frequency (ETF), is the square root of the ratio of the avoidable cost to the cost of a test. Third, the relationship between sequential tests can have an impact on optimal testing strategies. If sequential tests are increasing refinements of one another, managers should invest their budgets in a few high-fidelity tests, whereas if the tests identify problems independently of one another it may be more effective for developers to carry out a higher number of lower-fidelity tests. Using examples, the implications for managerial practice are discussed and suggestions for further research undertakings are provided.
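The EOQ-like result quoted above can be applied directly; the sketch below simply evaluates it on invented cost figures.

```python
# The abstract's EOQ-like result, applied to made-up numbers: the Economic
# Testing Frequency is sqrt(avoidable cost / cost of a test).
from math import sqrt

def economic_testing_frequency(avoidable_cost, cost_per_test):
    return sqrt(avoidable_cost / cost_per_test)

# e.g. 90,000 of avoidable redesign cost and 10,000 per test:
print(economic_testing_frequency(90_000, 10_000))  # 3.0 -> roughly three tests
```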

149 citations


Journal ArticleDOI
TL;DR: A methodology for object-oriented software testing at the class and cluster levels is proposed, and the feasibility of using Contract, a formal specification language for the behavioral dependencies and interactions among cooperating objects of different classes in a given cluster, is illustrated.
Abstract: Object-oriented programming consists of several different levels of abstraction, namely, the algorithmic level, class level, cluster level, and system level. The testing of object-oriented software at the algorithmic and system levels is similar to conventional program testing. Testing at the class and cluster levels poses new challenges. Since methods and objects may interact with one another with unforeseen combinations and invocations, they are much more complex to simulate and test than the hierarchy of functional calls in conventional programs. In this paper, we propose a methodology for object-oriented software testing at the class and cluster levels. In class-level testing, it is essential to determine whether objects produced from the execution of implemented systems would preserve the properties defined by the specification, such as behavioral equivalence and nonequivalence. Our class-level testing methodology addresses both of these aspects. For the testing of behavioral equivalence, we propose to select fundamental pairs of equivalent ground terms as test cases using a black-box technique based on algebraic specifications, and then determine by means of a white-box technique whether the objects resulting from executing such test cases are observationally equivalent. To address the testing of behavioral nonequivalence, we have identified and analyzed several nontrivial problems in the current literature. We propose to classify term equivalence into four types, thereby setting up new concepts and deriving important properties. Based on these results, we propose an approach to deal with the problems in the generation of nonequivalent ground terms as test cases. Relatively little research has contributed to cluster-level testing. In this paper, we also discuss black-box testing at the cluster level. We illustrate the feasibility of using contract, a formal specification language for the behavioral dependencies and interactions among cooperating objects of different classes in a given cluster. We propose an approach to test the interactions among different classes using every individual message-passing rule in the given Contract specification. We also present an approach to examine the interactions among composite message-passing sequences. We have developed four testing tools to support our methodology.
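As a rough, hedged illustration of the class-level idea (a pair of ground terms that the algebraic specification declares equivalent is executed against the implementation and the resulting objects are checked for observational equivalence), the sketch below uses an invented IntStack class, the stack axiom pop(push(s, x)) = s, and an assumed set of observer methods; none of this comes from the paper itself.

```python
# Hedged sketch of class-level testing from algebraic specifications.
# The IntStack class, the axiom used, and the observer set are illustrative.

class IntStack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
        return self
    def pop(self):
        self._items.pop()
        return self
    def top(self):
        return self._items[-1] if self._items else None
    def size(self):
        return len(self._items)

def build(term):
    """Interpret a ground term, given as a list of (operation, args) pairs."""
    obj = IntStack()
    for op, args in term:
        obj = getattr(obj, op)(*args)
    return obj

def observationally_equivalent(a, b, observers=("top", "size")):
    """Two objects are treated as equivalent if all observers agree."""
    return all(getattr(a, ob)() == getattr(b, ob)() for ob in observers)

# Fundamental pair derived from the axiom pop(push(s, x)) = s:
lhs = [("push", (1,)), ("push", (5,)), ("pop", ())]
rhs = [("push", (1,))]
assert observationally_equivalent(build(lhs), build(rhs))
```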

146 citations


Proceedings ArticleDOI
08 Oct 2001
TL;DR: Initial experience shows that this approach to requirement-based test generation may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of a test suite.
Abstract: Testing large software systems is very laborious and expensive. Model-based test generation techniques are used to automatically generate tests for large software systems. However, these techniques require manually created system models that are used for test generation. In addition, generated test cases are not associated with individual requirements. In this paper, we present a novel approach to requirement-based test generation. The approach accepts a software specification as a set of individual requirements expressed in textual and SDL formats (a common practice in the industry). From these requirements, a system model is automatically created with requirement information mapped to the model. The system model is used to automatically generate test cases related to individual requirements. Several test generation strategies are presented. The approach is extended to requirement-based regression test generation related to changes at the requirement level. Our initial experience shows that this approach may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of a test suite.

114 citations


Journal ArticleDOI
Stephen H. Edwards
TL;DR: This paper outlines a general strategy for automated black-box testing of software components that includes automatic generation of component test drivers, automatic generation of black-box test data, and automatic or semi-automatic generation of component wrappers that serve as test oracles.
Abstract: This paper outlines a general strategy for automated black-box testing of software components that includes: automatic generation of component test drivers, automatic generation of black-box test data, and automatic or semi-automatic generation of component wrappers that serve as test oracles. This research in progress unifies several threads of testing research, and preliminary work indicates that practical levels of testing automation are possible.
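A minimal sketch, purely illustrative, of one strand of this strategy: a wrapper around a component that acts as a test oracle by checking contract conditions on every call. The BoundedCounter component, its contract, and the trivial driver are assumptions, not artifacts from the paper.

```python
# Sketch of a generated component wrapper acting as a test oracle.
# The component and its contract are hypothetical examples.

class BoundedCounter:
    def __init__(self, limit):
        self.limit, self.value = limit, 0
    def increment(self):
        if self.value < self.limit:
            self.value += 1
        return self.value

class OracleWrapper:
    """Wraps a component and raises if a postcondition or invariant is violated."""
    def __init__(self, component):
        self._c = component
    def increment(self):
        before = self._c.value
        result = self._c.increment()
        assert 0 <= self._c.value <= self._c.limit, "invariant violated"
        assert result in (before, before + 1), "postcondition violated"
        return result

def driver(wrapper, n_calls=100):
    """A trivial generated test driver: exercise the component and let the
    wrapper act as the oracle."""
    for _ in range(n_calls):
        wrapper.increment()

driver(OracleWrapper(BoundedCounter(limit=10)))
```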

114 citations


Journal ArticleDOI
TL;DR: This paper presents a technique intended to solve the problem of reliability over-estimation by software reliability models, using both time and code coverage measures for the prediction of software failures in operation.
Abstract: Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted by the models. This paper presents a technique intended to solve this problem, using both time and code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used to consider only the effective portion of the test data. Execution time between test cases that neither increases code coverage nor causes a failure is reduced by a parameterized factor. Experiments were conducted to evaluate this technique on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied both to the raw data and to the data adjusted using this technique. Results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has the potential not only to achieve more accurate applications of software reliability models, but also to reveal effective ways of conducting software testing.
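A small sketch of the kind of data adjustment the abstract describes, under an assumed record format and an assumed compression factor: execution time of test cases that add no coverage and reveal no failure is compressed before a reliability-growth model is fitted.

```python
# Sketch of coverage-based adjustment of execution-time data before fitting a
# reliability-growth model. The record format and factor are assumptions.

def adjust_execution_times(records, compression=0.1):
    """records: list of (exec_time, coverage_gained, failed) per test case.
    Returns adjusted cumulative execution times."""
    adjusted, total = [], 0.0
    for exec_time, coverage_gained, failed in records:
        effective = exec_time if (coverage_gained or failed) else compression * exec_time
        total += effective
        adjusted.append(total)
    return adjusted

raw = [(10.0, True, False), (8.0, False, False), (12.0, False, True), (9.0, False, False)]
print(adjust_execution_times(raw))  # [10.0, 10.8, 22.8, 23.7]
```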

110 citations


Journal ArticleDOI
TL;DR: The author argues that test-first coding is not testing but an analysis technique that is nearly as old as programming.
Abstract: The author argues that test-first coding is not testing. Test-first coding is not new. It is nearly as old as programming. It is an analysis technique. We decide what we are programming and what we are not programming, and we decide what answers we expect. Test-first is also a design technique.

92 citations


Proceedings ArticleDOI
08 Oct 2001
TL;DR: This paper proposes scenario-based functional regression testing, which is based on end-to-end (E2E) integration test scenarios, and provides several alternative test-case selection approaches and a hybrid approach to meet various requirements.
Abstract: Regression testing has been a popular quality-assurance technique. Most regression testing techniques are based on code or software design. This paper proposes scenario-based functional regression testing, which is based on end-to-end (E2E) integration test scenarios. The test scenarios are first represented in a template model that embodies both test dependency and traceability. By using test dependency information, one can obtain a test slicing algorithm to detect the scenarios that are affected and are thus candidates for regression testing. By using traceability information, one can find affected components and their associated test scenarios and test cases for regression testing. With the same dependency and traceability information, one can use ripple effect analysis to identify all directly or indirectly affected scenarios, and thus select the set of test cases for regression testing. This paper also provides several alternative test-case selection approaches and a hybrid approach to meet various requirements. A web-based tool has been developed to support these regression testing tasks.
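A toy sketch of the slicing step, assuming a hypothetical scenario dependency map; the paper's template model and actual algorithms are not reproduced here. Changed scenarios are propagated along dependency edges to collect the candidates for regression testing.

```python
# Toy selection of regression-test scenarios from dependency information.
# Scenario names and the dependency map are invented for illustration.
from collections import deque

# scenario -> scenarios that depend on it
dependents = {
    "login": ["checkout", "profile"],
    "checkout": ["order_history"],
    "profile": [],
    "order_history": [],
}

def affected_scenarios(changed, dependents):
    """All scenarios reachable from the changed ones via dependency edges."""
    seen, queue = set(changed), deque(changed)
    while queue:
        for nxt in dependents.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(affected_scenarios({"login"}, dependents))
# {'login', 'checkout', 'profile', 'order_history'} -> candidate regression scenarios
```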

Proceedings ArticleDOI
Wei-Tek Tsai, Xiaoying Bai, R. Paul, Weiguang Shao, Vivek Agarwal
08 Oct 2001
TL;DR: An approach to designing End-to-End (E2E) integration testing is proposed, covering test scenario specification, test case generation, and tool support, together with a prototype tool that supports E2E testing in a distributed environment on the J2EE platform.
Abstract: Integration testing has always been a challenge, especially if the system under test is large with many subsystems and interfaces. This paper proposes an approach to designing End-to-End (E2E) integration testing, including test scenario specification, test case generation, and tool support. Test scenarios are specified as thin threads, each of which represents a single function from an end user's point of view. Thin threads can be organized hierarchically into a tree, with each branch consisting of a set of related thin threads representing a set of related functionality. A test engineer can use thin-thread trees to generate test cases systematically, as well as carry out other related tasks such as risk analysis and assignment, regression testing, and ripple effect analysis. A prototype tool has been developed to support E2E testing in a distributed environment on the J2EE platform.

Journal ArticleDOI
01 Nov 2001
TL;DR: Several software reliability and cost models based on quasi-renewal processes, in which successive error-free times are independent and increase by a fraction, are derived, and maximum likelihood estimates of the parameters associated with these models are provided.
Abstract: This paper models software reliability and testing costs using a new tool: a quasi-renewal process. It is assumed that the cost of fixing a fault during the software testing phase, which consists of both deterministic and incremental random parts, increases as the number of faults removed increases. Several software reliability and cost models based on quasi-renewal processes are derived, in which successive error-free times are independent and increase by a fraction. The maximum likelihood estimates of the parameters associated with these models are provided. Based on the valuable properties of quasi-renewal processes, the expected software testing and debugging cost, the number of remaining faults in the software, and the mean error-free time after testing are obtained. A class of related optimization problems is then considered, and optimum testing policies incorporating both reliability and cost measures are investigated. Finally, numerical examples are presented using a set of real testing data to illustrate the models' results.
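As a purely illustrative simulation of the property the abstract states (successive error-free times independent and growing by a fixed fraction), the sketch below draws each error-free time from an exponential distribution whose mean grows geometrically; the base distribution and the growth factor are assumptions, not the paper's estimates.

```python
# Illustrative simulation of error-free times that are independent and grow by
# a fixed fraction. The exponential base distribution and alpha are made up.
import random

def simulate_error_free_times(n_faults, base_mean=10.0, alpha=1.2, seed=1):
    """n-th error-free time ~ Exp(mean = base_mean * alpha**(n-1))."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / (base_mean * alpha ** k)) for k in range(n_faults)]

times = simulate_error_free_times(5)
print([round(t, 1) for t in times])  # individual error-free times
print(round(sum(times), 1))          # total testing time to remove 5 faults
```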

Patent
18 Sep 2001
TL;DR: A software and hardware system and associated methodology provide ATE-independent go/no-go testing as well as advanced failure diagnosis of integrated circuits for silicon debug, process characterization, production (volume) testing, and system diagnosis, based on an embedded test architecture designed within an integrated circuit, means for seamlessly transferring information between the integrated circuit and its external environment, and an external environment that effectuates the seamless transfer for the user to perform relevant test and diagnosis.
Abstract: A software and hardware system and an associated methodology provide ATE-independent go/no-go testing as well as advanced failure diagnosis of integrated circuits for silicon debug, process characterization, production (volume) testing, and system diagnosis. The system comprises an embedded test architecture designed within an integrated circuit; means for seamlessly transferring information between the integrated circuit and its external environment; and an external environment that effectuates the seamless transfer for the user to perform relevant test and diagnosis.

Journal ArticleDOI
TL;DR: A modification of a formal testing method for extended finite-state machines is described that can demonstrate the correct behaviour of an implementation of a system with respect to its specification, provided certain specific requirements on both are satisfied.
Abstract: A number of current control systems for aircraft have been specified with statecharts. The risk of failures requires the use of a formal testing approach to ensure that all possible faults are considered. However, testing the compliance of an implementation of a system to its specification is dependent on the specification method and little work has been reported relating to the use of statechart-specific methods. This paper describes a modification of a formal testing method for extended finite-state machines to handle the above problem. The method allows one to demonstrate correct behaviour of an implementation of some system, with respect to its specification, provided certain specific requirements for both of them are satisfied. The case study illustrates these and shows the applicability of the method. By considering the process used to develop the system it is possible to reduce the size of the test set dramatically; the method to be described is easy to automate. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A decision theoretic solution to the problem of deciding the optimal length of the testing period is proposed, making use of a well-known error detection model and a sensible utility.
Abstract: Testing software before its release is an important stage of the software testing process. We propose a decision theoretic solution to the problem of deciding the optimal length of the testing period. We make use of a well-known error detection model and a sensible utility. Three testing plans are described: the single stage, the two stage and the next fixed time look ahead. A study to compare the performance of the plans shows the relative performance of each plan under a variety of assumptions about the quality of the software to be tested. All the plans are illustrated by using the well-known naval tactical data system data set.

Proceedings ArticleDOI
27 Aug 2001
TL;DR: The paper reports a case study involving the application of mutation based black box testing to two programs of different types and suggests classes of specifications for which mutation based test-case generation may be effective.
Abstract: The technique of mutation testing, in which the effectiveness of tests is determined by creating variants of a program in which statements are mutated, is well known. Whilst of considerable theoretical interest, the technique requires costly tools and is computationally expensive. Very large numbers of 'mutants' can be generated for even simple programs. More recently, it has been proposed that the concept be applied to specification-based (black box) testing. The proposal is to generate test cases by systematically replacing data items relevant to a particular part of a specification with a data item relevant to another. If the specification is considered as generating a language that describes the set of valid inputs, then the mutation process is intended to generate syntactically valid and invalid statements. Irrespective of their 'correctness' in terms of the specification, these can then be used to test a program in the usual (black box) manner. For this approach to have practical value it must produce test cases that would not be generated by other popular black box test generation approaches. The paper reports a case study involving the application of mutation-based black box testing to two programs of different types. Test cases were also generated using equivalence class testing and boundary value testing approaches. The test cases from each method were examined to judge the overlap and to assess the value of the additional cases generated. It was found that less than 20% of the mutation test cases for a data-vetting program were generated by the other two methods, as against 75% for a statistical analysis program. The paper analyses these results and suggests classes of specifications for which mutation-based test-case generation may be effective.
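A hedged sketch of the substitution idea described above, with an invented record layout (name;age;country) and invented value pools: it simply borrows a data item associated with one part of the specification and places it in another, producing a mix of syntactically valid and invalid inputs.

```python
# Sketch of specification mutation for black-box test-case generation.
# The record layout and its value pools are invented here.

FIELDS = {
    "name":    ["Alice", "Bob"],
    "age":     ["0", "42", "120"],
    "country": ["GB", "DE"],
}

def mutated_records():
    """Build a baseline record, then mutate each field by borrowing a value
    that the specification associates with a *different* field."""
    baseline = ["Alice", "42", "GB"]
    yield ";".join(baseline)                             # valid input
    names = list(FIELDS)
    for i, field in enumerate(names):
        donor = FIELDS[names[(i + 1) % len(names)]][0]   # value from another field
        record = list(baseline)
        record[i] = donor
        yield ";".join(record)                           # possibly invalid input

for rec in mutated_records():
    print(rec)
# These records are then fed to the program under test in the usual black-box way.
```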

Patent
05 Sep 2001
TL;DR: In this paper, the authors present a method and process for developing and testing software applying runtime executable patching technology to enhance the quality assurance effort across all phases of the Software Development Life-Cycle in a grey box methodology.
Abstract: A method and process for developing and testing software applies runtime executable patching technology to enhance the quality assurance effort across all phases of the Software Development Life-Cycle in a “grey box” methodology. The system facilitates the creation of re-usable, Plug‘n’Play Test Components, called Probe Libraries, that can be used again and again by testers as well as developers in unit and functional tests to add an extra safety net against the migration of low-level defects across Phases of the overall Software Development and Testing Life-Cycle. The new elements introduced in the Software Development Life-Cycle focus on bringing developers and testers together in the general quality assurance workflow and provide numerous tools, techniques and methods for making the technology both relatively easy to use and powerful for various test purposes.

Proceedings ArticleDOI
25 Jun 2001
TL;DR: This article proposes an approach for testing, which explicitly takes into account testing-relevant features of component-based software and thus allows more rigorous testing.
Abstract: The main idea of component-based development is to use existing components for building software. The resulting software often has features which complicate testing; one such feature is, for example, the absence of component source code. This article proposes an approach for testing which explicitly takes into account testing-relevant features of component-based software and thus allows more rigorous testing. The basic constituent of the approach is a graphical representation combining black-box and white-box information from the specification and the implementation, respectively. This graphical representation can then be used for test case identification based on well-known structural techniques.

Journal ArticleDOI
TL;DR: The suitability of M-mp testing in a given context will depend on whether building and maintaining model programs is likely to be more cost effective than manually pre-calculating P's expected outcomes for given test data.
Abstract: A strategy described as 'testing using M model programs' (abbreviated to 'M-mp testing') is investigated as a practical alternative to software testing based on manual outcome prediction. A model program implements suitably selected parts of the functional specification of the software to be tested. The M-mp testing strategy requires that M (M ≥ 1) model programs, as well as the program under test, P, should be independently developed. P and the M model programs are then subjected to the same test data. Difference analysis is conducted on the outputs and appropriate corrective action is taken. P and the M model programs jointly constitute an approximate test oracle. Both M-mp testing and manual outcome prediction are subject to the possibility of correlated failure. In general, the suitability of M-mp testing in a given context will depend on whether building and maintaining model programs is likely to be more cost effective than manually pre-calculating P's expected outcomes for given test data. In many contexts, M-mp testing could also facilitate the attainment of higher test adequacy levels than would be possible with manual outcome prediction. A rigorous experiment in an industrial context is described in which M-mp testing (with M = 1) was used to test algorithmically complex scheduling software. In this case, M-mp testing turned out to be significantly more cost effective than testing based on manual outcome prediction. Copyright © 2001 John Wiley & Sons, Ltd.
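A minimal sketch of the M-mp idea under stated assumptions: the program under test and one independently written model program (both placeholders here, standing in for the study's scheduling software) run on the same test data, and difference analysis flags inputs on which they disagree.

```python
# Minimal sketch of M-mp testing with M = 1. The two scheduling-style functions
# are placeholders, not the software from the industrial experiment.

def program_under_test(jobs):            # P
    return sorted(jobs, key=lambda j: j[1])          # schedule by deadline

def model_program_1(jobs):               # independently developed model program
    return sorted(jobs, key=lambda j: (j[1], j[0]))  # same spec, different code

def difference_analysis(test_data, p, models):
    disagreements = []
    for case in test_data:
        outputs = [p(case)] + [m(case) for m in models]
        if any(o != outputs[0] for o in outputs[1:]):
            disagreements.append((case, outputs))
    return disagreements

tests = [[("a", 3), ("b", 1)], [("b", 2), ("a", 2)]]
for case, outs in difference_analysis(tests, program_under_test, [model_program_1]):
    print("investigate:", case, outs)   # any disagreement is investigated manually
```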

Patent
10 May 2001
TL;DR: In this article, an automated test system for the remote testing of applications and devices especially in dynamic environments is presented. But the test generation means for generating the tests and executing the testing, which is connected to a data storage means contains information about testable items and test scenarios for the tested items.
Abstract: This invention relates to an automated test system for the remote testing of applications and devices, especially in dynamic environments. It provides for the automation of the testing process and for functional independence at every level of the process. The invention is particularly suited for remote testing over a network such as the internet. To achieve its purpose, the invention provides a test generation means for generating the tests and executing the testing, which is connected to a data storage means that contains information about testable items and test scenarios for the testable items, as well as the results of testing. The image builder means provides a centralized image building facility for converting the tests into an executable form.

Patent
10 Jul 2001
TL;DR: Test Delivery Management System (TDMS) as mentioned in this paper is a system for computer-based testing, which facilitates network distribution of testing materials and software, and includes a back-end (260), a servicing unit (270), and one or more testing centers (280).
Abstract: A system for computer-based testing facilitates network distribution of testing materials and software. The system comprises a back-end (260), a servicing unit (270), and one or more testing centers (280). The back-end stores test questions and software, and includes software that prepares the test questions and software for distribution to the servicing unit. The servicing unit includes a web server that interfaces with software installed at a testing center. The testing center includes administrative software that contacts the web server at the servicing center to obtain updates to test questions and testing software in a process called 'synchronization.' Synchronization is also the process by which the test center reports test results and candidate information back to the servicing unit by means of the servicing unit's web server. The testing center includes a software component called the Test Delivery Management System (TDMS), which uses Java-based technology to deliver test questions to examinees at one or more testing stations located at the test center.

Journal ArticleDOI
TL;DR: The authors propose a lightweight approach to embedding tests into components, making them self-testable and a method to evaluate testing efficiency based on mutation techniques, which ultimately provides an estimation of a component's quality.
Abstract: A major challenge in software development research today is to come up with low-overhead methods and tools to help developers deliver quality products within tight time and budget constraints. This is particularly true of testing because of its cost and impact on final product reliability. The authors propose a lightweight approach to embedding tests into components, making them self-testable. They also propose a method to evaluate testing efficiency based on mutation techniques, which ultimately provides an estimation of a component's quality.
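A lightweight, purely illustrative sketch of what embedding tests into a component might look like (the Discount component, its rates, and the embedded cases are invented; the authors' actual mutation-based efficiency measure is not shown).

```python
# Illustration of a 'self-testable' component: the test cases travel with the
# component and can be replayed on demand. All details here are invented.

class Discount:
    """Example component; the rates and test cases are made up."""
    def apply(self, price, rate):
        return round(price * (1 - rate), 2)

    # Embedded test cases: (args, expected result)
    _embedded_tests = [((100.0, 0.25), 75.0), ((19.99, 0.0), 19.99)]

    def self_test(self):
        return all(self.apply(*args) == expected
                   for args, expected in self._embedded_tests)

component = Discount()
print("component passed its embedded tests:", component.self_test())
```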

Proceedings ArticleDOI
08 Oct 2001
TL;DR: This paper presents an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs and proposes to enhance fault-based testing to address the oracle problem.
Abstract: Although testing is the most popular method for assuring software quality, there are two recognized limitations, known as the reliable test set problem and the oracle problem. Fault-based testing is an attempt by Morell to alleviate the reliable test set problem. In this paper, we propose to enhance fault-based testing to address the oracle problem as well. We present an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs.
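To illustrate the metamorphic side of the method only (the fault-based and symbolic-input aspects are not shown), here is a standard textbook-style example rather than the paper's own: when no oracle can confirm individual outputs of a sine implementation, a metamorphic relation such as sin(π − x) = sin(x) can still be checked over many follow-up inputs.

```python
# Checking a metamorphic relation when no direct oracle is available.
# The choice of function and relation is a common illustration, not the paper's.
import math, random

def metamorphic_check(f, n_cases=1000, seed=0, tol=1e-12):
    rng = random.Random(seed)
    for _ in range(n_cases):
        x = rng.uniform(-math.pi, math.pi)
        if abs(f(math.pi - x) - f(x)) > tol:
            return False          # relation violated: a fault has been revealed
    return True

print(metamorphic_check(math.sin))  # True for a correct implementation
```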

Proceedings ArticleDOI
20 Jun 2001
TL;DR: This paper discusses a testing approach that supports developers in creating automated functional test drivers for e-business applications, with the main goal of reducing the time and effort required to automate scenario tests.
Abstract: E-business software is often developed on a tight schedule, and testing needs to keep pace. The advice from proponents of approaches like extreme programming is that by testing continuously, it is actually possible to compress development cycles. In this paper, we discuss a testing approach that supports developers with their task of creating automated functional test drivers for e-business applications. The main goal for the approach is to reduce the time and effort required to automate scenario tests for e-business applications. After motivating the approach, we give an abstract view of a tool we have designed and implemented to support the approach. Next, we give an example of its use, and finally proceed to a discussion of the architecture of the tool itself.

Book ChapterDOI
12 Mar 2001
TL;DR: This paper proposes a methodology for improving the throughput of software verification by performing consistency checks between the original code and the model, specifically by applying software testing, and introduces the notion of a neighborhood of an error trace: a tree of execution paths of which the original error trace is one.
Abstract: Automatic and manual software verification is based on applying mathematical methods to a model of the software. Modeling is usually done manually, thus it is prone to modeling errors. This means that errors found in the model may not correspond to real errors in the code, and that if the model is found to satisfy the checked properties, the actual code may still have some errors. For this reason, it is desirable to be able to perform some consistency checks between the actual code and the model. Exhaustive consistency checks are usually not possible, for the same reason that modeling is necessary. We propose a methodology for improving the throughput of software verification by performing some consistency checks between the original code and the model, specifically, by applying software testing. In this paper we present such a combined testing and verification methodology and demonstrate how it is applied using a set of software reliability tools. We introduce the notion of a neighborhood of an error trace, consisting of a tree of execution paths, where the original error trace is one of them. Our experience with the methodology shows that traversing the neighborhood of an error is extremely useful in locating its cause. This is crucial not only in understanding where the error stems from, but in getting an initial idea of how to redesign the code. We use as a case study a robot control system, and report on several design and modeling errors found during the verification and testing process.

Proceedings ArticleDOI
26 Nov 2001
TL;DR: This paper shows how test purposes are exploited today by several tools that automate the generation of test cases, and presents the major relations that link test purposes, test cases and reference specification.
Abstract: Nowadays, test cases may correspond to elaborate programs. It is therefore sensible to try to specify test cases in order to get a more abstract view of these. This paper explores the notion of test purpose as a way to specify a set of test cases. It shows how test purposes are exploited today by several tools that automate the generation of test cases. It presents the major relations that link test purposes, test cases and reference specification. It also explores the similarities and differences between the specification of test cases, and the specification of programs. This opens perspectives for the synthesis and the verification of test cases, and for other activities like test case retrieval.

Journal ArticleDOI
TL;DR: This paper presents a method for deterministic testing of multitasking real-time systems (RTS) which allows explorative investigations of real-time system behavior, and shows how this analysis and testing strategy can be extended to encompass distributed computations, communication latencies, and the effects of global clock synchronization.

Proceedings ArticleDOI
17 Sep 2001
TL;DR: This paper examines testing the middleware system itself, using a method for testing the concurrency properties of the system; the testing revealed a number of faults and design weaknesses and showed that, with some adaptation, traditional tools and techniques go a long way in the testing of distributed applications.
Abstract: This paper describes a case study in the testing of distributed systems. The software under test is a middleware system developed in Java. The full test life cycle is examined including unit testing, integration testing, and system testing. Where possible, traditional tools and techniques are used to carry out the testing. One aspect where this is not possible is the testing of the low-level concurrency, which is often overlooked when testing commercial distributed systems, since the middleware or application server is already developed by a third-party and is assumed to operate correctly. This paper examines testing the middleware system itself, and therefore, a method for testing the concurrency properties of the system is used. The testing revealed a number of faults and design weaknesses, and showed that, with some adaptation, traditional tools and techniques go a long way in the testing of distributed applications.

Journal ArticleDOI
TL;DR: A method is presented to determine the repetition numbers of test sequences, assuming that each transition is executed with a fixed probability when a nondeterministic choice is made.