
Showing papers on "White-box testing" published in 2005


Journal ArticleDOI
12 Jun 2005
TL;DR: DART is a new tool for automatically testing software that combines three main techniques: automated extraction of the interface of a program with its external environment using static source-code parsing; automatic generation of a test driver that performs random testing through this interface; and dynamic analysis of how the program behaves under random testing, with automatic generation of new test inputs to systematically direct execution along alternative program paths.
Abstract: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to systematically direct execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments using DART to unit-test several C programs are very encouraging.
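
To make the directed phase concrete, here is a minimal Python sketch of the kind of loop DART automates. The toy program, the hand-written trace() instrumentation, and the brute-force stand-in for a constraint solver are our illustrative assumptions; the actual tool instruments C code and negates symbolic path constraints.

```python
import random

def program(x):
    # Toy unit under test: a defect hidden behind two nested branches.
    if x > 10:
        if x < 14:
            raise AssertionError("defect reached")

def trace(x):
    # Stand-in for run-time instrumentation: record the outcome of every
    # branch predicate observed on this concrete execution.
    t = [x > 10]
    if x > 10:
        t.append(x < 14)
    return t

def directed_search(tries=50):
    x = random.randint(-100, 100)             # begin with random testing
    explored = set()
    for _ in range(tries):
        try:
            program(x)
        except AssertionError:
            return x                           # defect-revealing input
        explored.add(tuple(trace(x)))
        # Direct the next run down an unexplored path. A real concolic
        # tool negates path constraints and calls a solver; brute force
        # over a small domain stands in for that here.
        for cand in range(-100, 100):
            if tuple(trace(cand)) not in explored:
                x = cand
                break
        else:
            x = random.randint(-100, 100)      # fall back to random input
    return None

print(directed_search())                       # prints 11, 12, or 13
```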

2,346 citations


Proceedings ArticleDOI
15 May 2005
TL;DR: It is concluded that, based on the data available thus far, the use of mutation operators yields trustworthy results (generated mutants are similar to real faults); mutants appear, however, to differ from hand-seeded faults, which seem to be harder to detect than real faults.
Abstract: The empirical assessment of test techniques plays an important role in software testing research. One common practice is to instrument faults, either manually or by using mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. This paper investigates this important question based on a number of programs with comprehensive pools of test cases and known faults. It is concluded that, based on the data available thus far, the use of mutation operators yields trustworthy results (generated mutants are similar to real faults). Mutants appear, however, to differ from hand-seeded faults, which seem to be harder to detect than real faults.
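
As a concrete illustration of seeding faults with a mutation operator, the sketch below applies relational-operator replacement to a toy function and checks which mutants a small test suite kills. The function, tests, and operator are our own; they are not the paper's experimental subjects.

```python
import ast

SRC = """
def max2(a, b):
    if a > b:
        return a
    return b
"""

def mutants(src):
    # Relational-operator replacement: produce one variant per operator
    # substituted for the original '>' comparison.
    for repl in (ast.GtE, ast.Lt):
        tree = ast.parse(src)
        for node in ast.walk(tree):
            if isinstance(node, ast.Compare) and isinstance(node.ops[0], ast.Gt):
                node.ops[0] = repl()
        yield ast.unparse(tree)          # requires Python 3.9+

def suite_passes(fn):
    return fn(1, 2) == 2 and fn(5, 3) == 5 and fn(4, 4) == 4

killed = total = 0
for msrc in mutants(SRC):
    ns = {}
    exec(msrc, ns)
    total += 1
    if not suite_passes(ns["max2"]):
        killed += 1                      # the suite distinguishes this mutant
print(f"mutation score: {killed}/{total}")
# Prints 1/2: the '>=' mutant survives because it is value-equivalent to
# the original, a small instance of the equivalent-mutant problem.
```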

753 citations


Journal ArticleDOI
01 Jan 2005
TL;DR: Quality assurance and testing organizations are tasked with the broad objective of assuring that a software application fulfills its functional business requirements, but security testing doesn't directly fit into this paradigm.
Abstract: Quality assurance and testing organizations are tasked with the broad objective of assuring that a software application fulfills its functional business requirements. Such testing most often involves running a series of dynamic functional tests to ensure proper implementation of the application's features. However, because security is not a feature or even a set of features, security testing doesn't directly fit into this paradigm.

240 citations


Journal ArticleDOI
TL;DR: This paper presents a high-level, functional component-oriented, software-based self-testing methodology for embedded processors and validates its effectiveness and efficiency by applying it in full to two different processor implementations of a popular RISC instruction set architecture.
Abstract: Embedded processor testing techniques based on the execution of self-test programs have been recently proposed as an effective alternative to classic external tester-based testing and pure hardware built-in self-test (BIST) approaches. Software-based self-testing is a nonintrusive testing approach and provides at-speed testing capability without any hardware or performance overheads. In this paper, we first present a high-level, functional component-oriented, software-based self-testing methodology for embedded processors. The proposed methodology aims at high structural fault coverage with low test development and test application cost. Then, we validate the effectiveness of the proposed methodology as a low-cost alternative to structural software-based self-testing methodologies based on automatic test pattern generation and pseudorandom testing. Finally, we demonstrate the effectiveness and efficiency of the proposed methodology by applying it in full to two different processor implementations of a popular RISC instruction set architecture, including several gate-level implementations.

188 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work presents a value-driven approach to system-level test case prioritization called the prioritization of requirements for test (PORT), which prioritizes system test cases based upon four factors: requirements volatility, customer priority, implementation complexity, and fault proneness of the requirements.
Abstract: Test case prioritization techniques have been shown to be beneficial for improving regression-testing activities. With prioritization, the rate of fault detection is improved, thus allowing testers to detect faults earlier in the system-testing phase. Most of the prioritization techniques to date have been code coverage-based. These techniques may treat all faults equally. We build upon prior test case prioritization research with two main goals: (1) to improve user-perceived software quality in a cost-effective way by considering potential defect severity and (2) to improve the rate of detection of severe faults during system-level testing of new code and regression testing of existing code. We present a value-driven approach to system-level test case prioritization called the prioritization of requirements for test (PORT). PORT prioritizes system test cases based upon four factors: requirements volatility, customer priority, implementation complexity, and fault proneness of the requirements. We conducted a PORT case study on four projects developed by students in an advanced graduate software testing class. Our results show that PORT prioritization at the system level improves the rate of detection of severe faults. Additionally, customer priority was shown to be one of the most important prioritization factors contributing to the improved rate of fault detection.
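
A minimal sketch of the PORT idea: score each requirement with a weighted sum over the four factors, then order test cases by the total score of the requirements they cover. The weights and data below are invented for illustration and are not the weightings from the case study.

```python
FACTORS = ("volatility", "customer_priority", "complexity", "fault_proneness")
WEIGHTS = dict(zip(FACTORS, (0.2, 0.4, 0.2, 0.2)))   # invented weights

requirements = {   # factor values on a 1-10 scale (invented)
    "R1": {"volatility": 3, "customer_priority": 9,
           "complexity": 4, "fault_proneness": 6},
    "R2": {"volatility": 7, "customer_priority": 5,
           "complexity": 8, "fault_proneness": 8},
}

test_to_reqs = {"T1": ["R1"], "T2": ["R2"], "T3": ["R1", "R2"]}

def req_score(r):
    return sum(WEIGHTS[f] * requirements[r][f] for f in FACTORS)

def prioritize(mapping):
    # A test inherits the summed score of every requirement it exercises.
    return sorted(mapping,
                  key=lambda t: sum(req_score(r) for r in mapping[t]),
                  reverse=True)

print(prioritize(test_to_reqs))   # ['T3', 'T2', 'T1']
```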

186 citations


Proceedings ArticleDOI
18 Sep 2005
TL;DR: UPPAAL-TRON is a new tool for model-based online black-box conformance testing of real-time embedded systems specified as timed automata; an industrial case study shows that it has promising error-detection potential and execution performance.
Abstract: UPPAAL-TRON is a new tool for model-based online black-box conformance testing of real-time embedded systems specified as timed automata. In this paper we present our experiences in applying our tool and technique to an industrial case study. We conclude that the tool and technique are applicable to practical systems, and that they show promising error-detection potential and execution performance.

139 citations


Journal ArticleDOI
TL;DR: This analysis of the relationships between variable and literal faults, and among literal, operator, term, and expression faults, produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults.
Abstract: Kuhn, followed by Tsuchiya and Kikuno, have developed a hierarchy of relationships among several common types of faults (such as variable and expression faults) for specification-based testing by studying the corresponding fault detection conditions. Their analytical results can help explain the relative effectiveness of various fault-based testing techniques previously proposed in the literature. This article extends and complements their studies by analyzing the relationships between variable and literal faults, and among literal, operator, term, and expression faults. Our analysis is more comprehensive and produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults. Although this work originated from the detection of faults related to specifications, our results are equally applicable to program-based predicate testing that involves logic expressions.
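
The fault classes being related here can be seen on a small predicate. The example below is ours: three single-fault variants of (a and b) or c, and one test point that detects two of them but misses the third, which is exactly the kind of detection-condition difference the analysis formalizes.

```python
correct  = lambda a, b, c: (a and b) or c
operator = lambda a, b, c: (a or b) or c       # operator fault: and -> or
literal  = lambda a, b, c: (a and not b) or c  # literal fault:  b -> not b
variable = lambda a, b, c: (a and c) or c      # variable fault: b -> c

def detects(fault, test):
    # A test detects a fault iff the faulty predicate differs from the
    # correct one on that input.
    return fault(*test) != correct(*test)

t = (True, False, False)
print(detects(operator, t), detects(literal, t), detects(variable, t))
# True True False: this single test exposes the operator and literal
# faults but not the variable fault.
```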

122 citations


Journal ArticleDOI
01 Mar 2005
TL;DR: This paper reviews work applying computational evolutionary methods in software engineering, especially in software testing, which is both technically and economically vital for high-quality software production.
Abstract: In this paper, we review the work applying computational evolutionary methods in software engineering, especially in software testing. Testing is both technically and economically vital for high-quality software production. About half of the expenses in software production have been estimated to be due to testing. Much of the testing is done manually or using other labor-intensive methods. Developing efficient, cost-effective, and automatic means and tools for software testing is thus highly tempting for the software industry. Searching for software errors using evolution-based methods such as genetic algorithms is one attempt towards these goals. Software testing is a field where the gap between the means and the needs is exceptionally wide. Despite the great advances in computing during the last 30 years, software development and testing processes in most companies are still very immature, while the complexity and criticality of software have increased tremendously. When testing software using any optimization method as a test data generator, we are optimizing the given input according to a selected software metric encoded as a fitness function. The success of genetic algorithms in optimization is based on the so-called building block hypothesis. Basically, a genetic algorithm does not find any solitary bug with higher probability than pure random search. However, evolutionary algorithms adapt to the given problem; in practice, this means that a genetic-algorithm-based tester generates several parameter combinations that reveal minor bugs and, based on this information, constructs sequences that will reveal, on average, more bugs than pure random testing.
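
The optimization loop described above can be sketched in a few lines: a genetic algorithm evolves integer inputs to minimize a branch-distance fitness until a hard-to-reach branch is covered. The branch, fitness shape, and GA operators below are illustrative assumptions, not a specific tool.

```python
import random

LOW, HIGH = 4000, 4500                      # the hard-to-hit branch region

def branch_distance(x):
    # Fitness: distance to taking the branch  if LOW <= x <= HIGH  (0 = taken).
    if x < LOW:
        return LOW - x
    if x > HIGH:
        return x - HIGH
    return 0

def evolve(pop_size=20, generations=300):
    pop = [random.randint(0, 100_000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            return pop[0]                   # covering input found
        parents = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2            # crossover by averaging
            if random.random() < 0.5:
                child += random.randint(-500, 500)   # mutation
            children.append(child)
        pop = parents + children
    return None

print(evolve())                             # e.g. an int in [4000, 4500]
```

Truncation selection plus averaging crossover is deliberately crude; the point is only that fitness guidance reaches the branch far faster than uniform random sampling of the input range.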

112 citations


Proceedings ArticleDOI
25 Jun 2005
TL;DR: This paper presents an approach for applying evolutionary algorithms to the automatic generation of test cases for the white-box testing of object-oriented software.
Abstract: As the paradigm of object orientation becomes more and more important for modern IT development projects, the demand for automated test case generation to dynamically test object-oriented software increases. While search-based test case generation strategies, such as evolutionary testing, are well researched for procedural software, relatively little research has been done in the area of evolutionary object-oriented software testing. This paper presents an approach for applying evolutionary algorithms to the automatic generation of test cases for the white-box testing of object-oriented software. Test cases for testing object-oriented software include test programs which create and manipulate objects in order to achieve a certain test goal. Strategies for the encoding of test cases as evolvable data structures, as well as ideas about how the objective functions could allow for a sophisticated evaluation, are proposed. It is expected that the ideas herein can be adapted for other unit testing methods as well. The approach has been implemented in a prototype for empirical validation. In experiments with this prototype, evolutionary testing outperformed random testing. Evolutionary algorithms could thus be successfully applied to the white-box testing of object-oriented software.
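
One plausible encoding of such test programs as evolvable data structures is a chromosome of method calls with arguments. The sketch below is our invention (with a trivial stack as the class under test), showing how mutation can grow, shrink, or perturb a call sequence.

```python
import random

ACTIONS = [("push", 1), ("pop", 0), ("peek", 0)]   # (method, arity)

def new_gene():
    method, arity = random.choice(ACTIONS)
    return (method, [random.randint(0, 9) for _ in range(arity)])

def random_chromosome(max_len=6):
    # A test case is a sequence of (method, arguments) genes.
    return [new_gene() for _ in range(random.randint(1, max_len))]

def mutate(chrom):
    c = list(chrom)
    op = random.choice(("add", "drop", "tweak"))
    if op == "add":
        c.insert(random.randrange(len(c) + 1), new_gene())
    elif op == "drop" and len(c) > 1:
        c.pop(random.randrange(len(c)))
    else:
        c[random.randrange(len(c))] = new_gene()
    return c

def execute(chrom):
    # Interpret the chromosome against the class under test; the result
    # (or an exception) feeds the objective function.
    stack = []
    for method, args in chrom:
        if method == "push":
            stack.append(args[0])
        elif method == "pop" and stack:
            stack.pop()
        elif method == "peek" and stack:
            _ = stack[-1]
    return len(stack)

chrom = random_chromosome()
print(chrom, "->", execute(chrom), "items left; mutant:", mutate(chrom))
```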

109 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: The interview data shows that the main reasons for using ET in the companies were the difficulties in designing test cases for complicated functionality and the need for testing from the end user's viewpoint.
Abstract: Exploratory testing (ET) - simultaneous learning, test design, and test execution - is an applied practice in industry but lacks research. We present the current knowledge of ET based on existing literature and interviews with seven practitioners in three companies. Our interview data shows that the main reasons for using ET in the companies were the difficulties in designing test cases for complicated functionality and the need for testing from the end user's viewpoint. The perceived benefits of ET include the versatility of testing and the ability to quickly form an overall picture of system quality. We found some support for the claimed high defect detection efficiency of ET. The biggest shortcoming of ET was managing test coverage. Further quantitative research on the efficiency and effectiveness of ET is needed. To help focus ET efforts and help control test coverage, we must study planning, controlling and tracking ET.

94 citations


Journal ArticleDOI
TL;DR: This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection and provides developers with a catalogue containing enough information for them to select the best suited techniques for a given project.
Abstract: One of the major problems within the software testing area is how to get a suitable set of cases to test a software system. This set should assure maximum effectiveness with the least possible number of test cases. There are now numerous testing techniques available for generating test cases. However, many are never used, and just a few are used over and over again. Testers have little (if any) information about the available techniques, their usefulness, and, generally, how suited they are to the project at hand, upon which to base their decision about which testing techniques to use. This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information for them to select the best-suited techniques for a given project. This assures that the decisions they make are based on objective knowledge of the techniques rather than perceptions, suppositions and assumptions.

Journal ArticleDOI
TL;DR: The proposed mechanisms in the Web Application Vulnerability and Error Scanner (WAVES), a black-box testing framework for automated Web application security assessment, are implemented, and the results show that WAVES is a feasible platform for assessing Web application security.

Journal ArticleDOI
01 Jan 2005
TL;DR: This work examines application penetration testing - software testing that's specifically designed to hunt down security vulnerabilities.
Abstract: Security bugs' hidden nature is why we need specific, focused application-security testing techniques, testing that defies the traditional model of verifying an application's specification and instead identifies the unspecified and insecure side-effects of "correct" application functionality. I examine application penetration testing - software testing that's specifically designed to hunt down security vulnerabilities.

Patent
07 Nov 2005
TL;DR: In this article, a testing service may utilize a distributed architecture that provides a standardized framework for writing tests, scheduling the tests, and gathering and reporting results of the tests; the testing service can also automatically create and set up a desired test environment according to the desired specifications for the test.
Abstract: Embodiments of the present invention provide methods and systems for automated distributed testing of software. A testing service may utilize a distributed architecture that provides a standardized framework for writing tests, scheduling the tests, and gathering and reporting results of the tests. Multiple distributed labs are integrated into the testing service and their environments can be centrally managed by the testing service. The testing service permits the scheduling and performance of tests across multiple machines within a test lab, or tests that span multiple test labs. Any of the machines in the test labs may be selected based on a variety of criteria. The testing service may then automatically locate the appropriate machines that match or satisfy the criteria and schedule the tests when the machines are available. The testing service may also automatically create and set up a desired test environment according to the desired specifications for the test.

Journal ArticleDOI
TL;DR: This paper focuses on the integrated use of Sequence and State Diagrams for deriving a “reasonably” complete reference model, which will then be used for automatically deriving the test cases.

Proceedings ArticleDOI
15 May 2005
TL;DR: A new scalable and flexible framework for testing programs with a novel demand-driven approach based on execution paths to implement test coverage is described, which uses dynamic instrumentation on the binary code that can be inserted and removed on the fly to keep performance and memory overheads low.
Abstract: Producing reliable and robust software has become one of the most important software development concerns in recent years. Testing is a process by which software quality can be assured through the collection of information. While testing can improve software reliability, current tools typically are inflexible and have high overheads, making it challenging to test large software projects. In this paper, we describe a new scalable and flexible framework for testing programs with a novel demand-driven approach based on execution paths to implement test coverage. This technique uses dynamic instrumentation on the binary code that can be inserted and removed on-the-fly to keep performance and memory overheads low. We describe and evaluate implementations of the framework for branch, node, and def-use testing of Java programs. Experimental results for branch testing show that our approach has, on average, a 1.6x speedup over static instrumentation and also uses less memory.
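
The demand-driven idea can be mimicked in a few lines of Python (the framework itself works on binary code): each coverage probe removes itself from a registry after its first hit, so instrumentation overhead decays as coverage accumulates. The registry below is our analogue, not the framework's mechanism.

```python
covered = set()
probes = {}

def plant(point):
    # The instrumenter plants one probe per coverage point.
    def fire():
        covered.add(point)
        del probes[point]          # remove the probe after its first hit
    probes[point] = fire

def hit(point):
    # Executed at every branch; nearly free once the probe is gone.
    f = probes.get(point)
    if f:
        f()

plant("branch-1"); plant("branch-2")
for _ in range(3):
    hit("branch-1")                # fires once, then costs only a dict miss
print(covered, list(probes))       # {'branch-1'} ['branch-2']
```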

Journal ArticleDOI
15 May 2005
TL;DR: This study hypothesizes that the value of code coverage as an estimator of testing effectiveness varies under different testing profiles, and employs coverage testing and mutation testing to investigate the relationship between code coverage and fault detection capability under different testing profiles.
Abstract: Software testing is a key procedure to ensure high quality and reliability of software programs. The key issue in software testing is the selection and evaluation of different test cases. Code coverage has been proposed as an estimator of testing effectiveness, but it remains a controversial topic which lacks support from empirical data. In this study, we hypothesize that the value of code coverage as an estimator of testing effectiveness varies under different testing profiles. To evaluate the performance of code coverage, we employ coverage testing and mutation testing in our experiment to investigate the relationship between code coverage and fault detection capability under different testing profiles. From our experimental data, code coverage is simply a moderate indicator of the capability of fault detection on the whole test set. However, it is clearly a good estimator for the fault detection of exceptional test cases, but a poor one for test cases in normal operations. For other testing profiles, such as functional testing and random testing, the correlation between code coverage and fault coverage is higher in functional testing than in random testing, although these different testing profiles are complementary in the whole test set. The effects of different coverage metrics are also addressed in our experiment.

Journal ArticleDOI
TL;DR: It is argued that models for specification purposes, models for test generation, and models for full code generation are likely to be different, and the different levels of abstraction must be bridged.

Proceedings ArticleDOI
19 Sep 2005
TL;DR: An MT-oriented testing methodology formulates metamorphic services to encapsulate services as well as the implementations of metamorphic relations in order to alleviate the issues of testing applications in service-oriented architecture environments.
Abstract: Testing applications in service-oriented architecture (SOA) environments needs to deal with issues like communication partners that remain unknown until service discovery, the imprecise black-box information of software components, and the potential existence of non-identical implementations of the same service. In this paper, we exploit the benefits of SOA environments and metamorphic testing (MT) to alleviate these issues. We propose an MT-oriented testing methodology that formulates metamorphic services to encapsulate services as well as the implementations of metamorphic relations. Test cases from the unit test phase are used to generate follow-up test cases for the integration test phase. The metamorphic services invoke relevant services to execute test cases and use their metamorphic relations to detect failures. The methodology has the potential to shift the testing effort from the construction of integration test sets to the development of metamorphic relations.
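
For readers unfamiliar with MT, a metamorphic relation provides a check even when no output oracle exists. The minimal example below is ours and uses sin(x) = sin(π - x) locally rather than a remote service; in the SOA setting, a metamorphic service would invoke the encapsulated service for both the source and follow-up test cases and apply the relation to the pair of responses.

```python
import math, random

def mr_sine_holds(f, x, tol=1e-9):
    # Metamorphic relation for sine: sin(x) == sin(pi - x).
    return abs(f(x) - f(math.pi - x)) <= tol

for _ in range(100):
    x = random.uniform(-10, 10)            # source test case
    assert mr_sine_holds(math.sin, x), f"relation violated at x={x}"
print("100 source/follow-up pairs checked, no violations")
```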

Patent
21 Jan 2005
TL;DR: In this paper, the authors present a method and device with instructions for testing a software application that includes creating a system model for the software application, wherein the system model includes an activity diagram, and applying one or more test annotations to the activity diagram to control test generation for testing software application.
Abstract: A method and device with instructions for testing a software application include creating a system model for the software application, wherein the system model includes an activity diagram, and applying one or more test annotations to the activity diagram to control test generation for testing the software application. Further, test annotations and the system model are processed to create one or more test cases, and the software application is tested using a test execution tool that uses the test cases.

Proceedings ArticleDOI
Alexander Pretschner1
15 May 2005
TL;DR: This tutorial discusses different scenarios of model-based testing, presents common abstractions when building models, and their consequences for testing, and explains how to use functional, structural, and stochastic test selection criteria, and describes today's test generation technology.
Abstract: Model-based testing has become increasingly popular in recent years. Major reasons include: (1) the need for quality assurance for increasingly complex systems, (2) the emerging model-centric development paradigm, e.g., UML and MDA, with its seemingly direct connection to testing, and (3) the advent of test-centered development methodologies. Model-based testing relies on execution traces of behavior models. They are used as test cases for an implementation: input and expected output. This complements the ideas of model-driven testing. The latter uses static models to derive test drivers to automate test execution. This assumes the existence of test cases, and is, like the particular intricacies of OO testing, not in the focus of this tutorial. We cover major methodological and technological issues: the business case of model-based testing within model-based development, the need for abstraction and inverse concretization, test selection, and test case generation. We (1) discuss different scenarios of model-based testing, (2) present common abstractions when building models, and their consequences for testing, (3) explain how to use functional, structural, and stochastic test selection criteria, and (4) describe today's test generation technology. We provide both practical guidance and a discussion of the state-of-the-art. Potentials of model-based testing in practical applications and future research are highlighted.
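
To ground the statement that test cases are execution traces of a behavior model (inputs plus expected outputs), here is a small sketch that enumerates bounded traces of an invented two-state protocol model; the model, labels, and depth bound are illustrative assumptions.

```python
MODEL = {  # (state, input) -> (next_state, expected_output)
    ("idle", "connect"): ("open", "ack"),
    ("open", "send"):    ("open", "ok"),
    ("open", "close"):   ("idle", "bye"),
}

def generate_tests(model, start="idle", depth=3):
    # Enumerate every input sequence up to `depth`, paired with the
    # outputs the model expects; each trace is one test case.
    tests, frontier = [], [(start, [], [])]
    for _ in range(depth):
        nxt = []
        for state, ins, outs in frontier:
            for (s, i), (s2, o) in model.items():
                if s == state:
                    nxt.append((s2, ins + [i], outs + [o]))
        tests.extend((ins, outs) for _, ins, outs in nxt)
        frontier = nxt
    return tests

for inputs, expected in generate_tests(MODEL):
    print(inputs, "->", expected)
```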

Book ChapterDOI
11 Jul 2005
TL;DR: A new methodology for model-based GUI testing is introduced, using Labeled Transition Systems (LTSs) in conjunction with action word and keyword techniques for test modeling; in an industrial case study the approach was able to find previously unreported defects.
Abstract: So far, model-based testing approaches have mostly been used in testing through various kinds of APIs. In practice, however, testing through a GUI is another equally important application area, which introduces new challenges. In this paper, we introduce a new methodology for model-based GUI testing. This includes using Labeled Transition Systems (LTSs) in conjunction with action word and keyword techniques for test modeling. We have also conducted an industrial case study where we tested a mobile device and were able to find previously unreported defects. The test environment included a standard MS Windows GUI testing tool as well as components implementing our approach. Assessment of the results from an industrial point of view suggests directions for future development.
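
The action-word/keyword layering can be pictured as a two-level vocabulary: a high-level action word from the test model expands into the low-level keywords a GUI automation tool executes. The vocabulary and print-based driver below are invented for illustration.

```python
ACTION_WORDS = {   # action word -> keyword sequence (invented vocabulary)
    "open_inbox":   [("click", "inbox", None)],
    "send_message": [("type", "recipient", "alice"),
                     ("type", "body", "hello"),
                     ("click", "send", None)],
}

def execute_keyword(kw, widget, value):
    # Stand-in for the GUI test tool's low-level driver.
    print(f"{kw}({widget!r}, {value!r})")

def run_action(word):
    # A transition label in the test model names an action word, which
    # expands to the keywords executed against the GUI.
    for kw, widget, value in ACTION_WORDS[word]:
        execute_keyword(kw, widget, value)

run_action("open_inbox")
run_action("send_message")
```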

Journal ArticleDOI
TL;DR: Test case prioritization (TCP) involves the explicit planning of the execution of test cases in a specific order and has been shown to improve the rate of fault detection.
Abstract: Software testing is a strenuous and expensive process. At least 50% of the total software cost is spent on testing activities [12]. Companies are often faced with time and resource constraints that limit their ability to effectively complete testing efforts. Companies generally save test suites for reuse; test suite reuse accounts for almost half of the maintenance cost [9]. As the product goes through several versions, executing all the test cases in a test suite can be expensive [9]. Prioritization of test cases can be cost effective when the time allocated to complete testing is limited [9]. Test case prioritization (TCP) involves the explicit planning of the execution of test cases in a specific order and has been shown to improve the rate of fault detection [3, 9]. The current software TCP techniques are primarily coverage-based (statement, branch or other coverage) [3, 9]. Coverage-based white-box prioritization techniques are most applicable for regression testing at the unit level and are harder to apply to complex systems [2]. These techniques require testers to read and understand the code, which can be time consuming [2], and may assume that all faults are equally severe.

Book ChapterDOI
13 Nov 2005
TL;DR: A new, computationally intelligent approach to automated generation of effective test cases based on a novel, Fuzzy-Based Age Extension of Genetic Algorithms (FAexGA) is introduced, which aims to eliminate "bad" test cases that are unlikely to expose any error, while increasing the number of "good" test cases that have a high probability of producing an erroneous output.
Abstract: Black-box (functional) test cases are identified from functional requirements of the tested system, which is viewed as a mathematical function mapping its inputs onto its outputs. While the number of possible black-box tests for any non-trivial program is extremely large, the testers can run only a limited number of test cases under their resource limitations. An effective set of test cases is one that has a high probability of detecting faults present in a computer program. In this paper, we introduce a new, computationally intelligent approach to automated generation of effective test cases based on a novel, Fuzzy-Based Age Extension of Genetic Algorithms (FAexGA). The basic idea is to eliminate "bad" test cases that are unlikely to expose any error, while increasing the number of "good" test cases that have a high probability of producing an erroneous output. The promising performance of the FAexGA-based approach is demonstrated on testing a complex Boolean expression.

Journal ArticleDOI
TL;DR: Testing is often difficult, and testing real-time embedded systems for mission-critical applications is particularly difficult owing to embedded design complexities and frequent requirements changes.
Abstract: Testing is often difficult, and testing real-time embedded systems for mission-critical applications is particularly difficult owing to embedded design complexities and frequent requirements changes. Embedded systems usually require a series of rigorous white-box (structural), black-box (functional), module, and integration testing before developers can release them to the market. In practice, functional testing is often more important than structural testing. Similarly, integration testing is more challenging than module testing. Furthermore, functional integration testing often requires individual test scripts based on the system requirements.

Journal ArticleDOI
TL;DR: The model-based black-box testing (MB³T) approach is introduced in order to effectively minimize these limiting factors by creating a systematic procedure for the design of test scenarios for embedded automotive software and its integration in the model-based development process.

Journal ArticleDOI
TL;DR: The problem of testing the transmitter and receiver subsystems of an RF transceiver against system-level specifications is addressed, and a specially crafted test stimulus is used for testing all the specifications from the response of the subsystem-under-test.
Abstract: In the recent past, with the emergence of System-on-Chip (SoC), focus has shifted towards testing system specifications rather than device or module specifications. While the problem of test accessibility for test stimulus application and response capture for such high-speed systems remains a challenge to test engineers, new test strategies are needed which can address the problem in a practical manner. In this paper, the problem of testing the transmitter and receiver subsystems of an RF transceiver against system-level specifications is addressed. Instead of using different conventional test stimuli for testing each of the system-level specifications of RF subsystems, a specially crafted test stimulus is used for testing all the specifications from the response of the subsystem-under-test. A new simulation approach has also been developed to perform fast behavioral simulations in the frequency domain for the system-under-test. In the test method, frequency-domain test response spectra are captured and non-linear regression models are constructed to map the spectral measurements onto the specifications of interest. In the presented simulation results, the test stimuli have been validated using netlist-level simulation of the subsystem-under-test, and specifications have been predicted within an error of ±3% of the actual value.

Journal ArticleDOI
01 Jan 2005
TL;DR: This paper proposes an integrated framework built on two existing testing techniques, namely Mutation Testing and Capability Testing, as an attempt to develop an automated software testing environment; among the several phases of the Software Development Life Cycle (SDLC), the framework is recommended for unit testing in the code-complete and alpha phases.
Abstract: The primary features of the object-oriented paradigm lead to the development of a complex and compositional testing framework for object-oriented software. The agent-oriented approach has become a trend in software engineering. Agent technologies facilitate software testing by virtue of their high-level independency with parallel activation and automation. This paper proposes an integrated framework built on two existing testing techniques, namely Mutation Testing and Capability Testing. In both cases, testing is carried out at the Autonomous Unit Level (AUL) and the Inter-Procedural Level (IPL). A Mutation-Based Testing-Agent and a Capability Assessment Testing-Agent have been developed for performing AUL testing, and a Method Interaction Testing-Agent has been developed for performing IPL testing. This agent-based framework is an attempt at developing an automated software testing environment; among the several phases of the Software Development Life Cycle (SDLC), it is recommended for unit testing in the code-complete and alpha phases. The methodology gives a basic approach to agent-based frameworks for testing and to the optimization of agent-based testing schedules subject to timing constraints. This adds "interesting new opportunities in the object-oriented software testing phase" to the existing literature on software testing frameworks.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: It is conjectured that N-wise-enhanced pairwise testing can be used as a black-box testing method to increase the effectiveness of random testing in exposing unusual or unexpected behaviors, such as security failures in network-centric software. A percentage of N-wise testing is also included.
Abstract: Pairwise testing, which can be complemented with partial or full N-wise testing, is a technique which guarantees that all important parametric value pairs are included in a test suite. A percentage of N-wise testing is also included. We conjecture that N-wise-enhanced pairwise testing can be used as a black-box testing method to increase the effectiveness of random testing in exposing unusual or unexpected behaviors, such as security failures in network-centric software. This testing can also be quite cost-efficient since small-N test suites grow linearly with the number of parameters. This paper explains the results of random testing of a simulation in which about 20% of the defects with probabilities of occurrence less than 50% are never exposed. This supports the premise that if the unusual or unexpected behaviors are based on defects which are less likely to occur, then random testing needs to be enhanced, especially if those unexposed defects could cause erratic or even critical behaviors in the system. Higher system complexities may indicate higher numbers of unusual or unexpected behaviors. It may be difficult to use traditional operational profile information to determine the amount of testing for unusual behaviors since the operational usage may be 0 or close to it. Another interesting problem is that some testers lack the experience necessary to effectively analyze the results of a test run. It is important to compensate for the lack of experience so that novice testers are able to test comparatively as effectively as more experienced testers. It is believed that if the size of the test suite is relatively small, then it may be easier to pinpoint the source of a failure. The research presented in this paper is aimed at addressing some of these issues of random testing via enhanced pairwise testing and N-wise testing in general. It is possible that more complex systems, such as those that rely a great deal on a network, would require higher numbers of interactions to combat unexpected combinations for use in some testing instances such as security testing or high-assurance testing. A tool is being developed concurrently to help automate a part of the test generation process.

Journal ArticleDOI
15 May 2005
TL;DR: This paper presents example system requirements and corresponding models for applying the combinatorial approach to those requirements, using terminology and modeling notation from the AETG system to provide concrete examples.
Abstract: The combinatorial approach to software testing uses models to generate a minimal number of test inputs so that selected combinations of input values are covered. The most common coverage criterion is two-way, or pairwise, coverage of value combinations, though for higher confidence three-way or higher coverage may be required. This paper presents example system requirements and corresponding models for applying the combinatorial approach to those requirements. These examples are intended to serve as a tutorial for applying the combinatorial approach to software testing. Although this paper focuses on pairwise coverage, the discussion is equally valid when higher coverage criteria such as three-way (triples) are used. We use terminology and modeling notation from the AETG system to provide concrete examples.
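
A simple greedy generator conveys the flavor of pairwise coverage; real AETG-style tools use more sophisticated heuristics, and the parameters, values, and greedy choice below are our illustrative assumptions (the result is small but not guaranteed minimal).

```python
from itertools import combinations, product

params = {"os": ["linux", "windows"],
          "browser": ["ff", "chrome", "safari"],
          "locale": ["en", "de"]}
names = list(params)

def pairs_of(test):
    # All parameter-value pairs a single test covers.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

uncovered = set()
for a, b in combinations(names, 2):
    uncovered |= {((a, va), (b, vb))
                  for va in params[a] for vb in params[b]}

suite = []
while uncovered:
    # Greedily pick the full assignment covering the most uncovered pairs.
    best = max((dict(zip(names, vals)) for vals in product(*params.values())),
               key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(suite), "pairwise tests vs",
      len(list(product(*params.values()))), "exhaustive")  # 6 vs 12
for t in suite:
    print(t)
```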