
Showing papers on "White-box testing published in 1989"


Journal ArticleDOI
J.D. Musa, A.F. Ackerman
TL;DR: The authors explain how to use reliability models to determine how much system testing to do and how to allocate resources among the components to be tested, and they examine the functions, presuppositions, and basic procedures of system testing.
Abstract: The authors explain how to use reliability models to determine how much system testing to do and how to allocate resources among the components to be tested. They begin by discussing the basic concepts of software reliability. They examine the functions, presuppositions, and basic procedures of system testing, as well as testing comparison and the use of the calendar-time component to predict when testing will be completed. They then discuss acceptance testing. The authors conclude with examples of actual applications.
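To make the "how much testing" decision concrete, here is a minimal sketch using Musa's basic execution-time model; the parameter values are illustrative, not from the paper.

```python
import math

def additional_test_time(lam0, nu0, lam_present, lam_target):
    """Musa basic execution-time model: failure intensity decays
    exponentially as failures are experienced and fixed.
    lam0: initial failure intensity (failures per CPU hour)
    nu0:  total failures expected over unlimited testing
    Returns the extra CPU hours of test needed to drive the present
    failure intensity down to the target."""
    # lambda(tau) = lam0 * exp(-(lam0 / nu0) * tau), so moving from
    # lam_present to lam_target takes (nu0 / lam0) * ln(present / target).
    return (nu0 / lam0) * math.log(lam_present / lam_target)

# Illustrative numbers: 10 failures/CPU-hr initially, 150 total expected
# failures, currently at 2.0, reliability objective 0.5 failures/CPU-hr.
print(additional_test_time(10.0, 150.0, 2.0, 0.5))  # about 20.8 CPU hours
```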

106 citations


Proceedings ArticleDOI
A. Offutt
TL;DR: The major conclusion from this investigation is that by explicitly testing for simple faults, the authors are also implicitly testing for more complicated faults, which gives confidence that fault-based testing is an effective means of testing software.
Abstract: Fault-based testing strategies test software by focusing on specific, common types of errors. The coupling effect states that test data sets that detect simple types of faults are sensitive enough to detect more complex types of faults. This paper describes empirical investigations into the coupling effect over a specific domain of software faults. All the results from this investigation support the validity of the coupling effect. The major conclusion from this investigation is that by explicitly testing for simple faults, we are also implicitly testing for more complicated faults. This gives us confidence that fault-based testing is an effective means of testing software.
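A toy illustration of the fault-based idea (a hypothetical example, not Offutt's actual tooling): seed simple single-operator mutants of a function and count how many the test set kills.

```python
# Each mutant differs from the original by one simple fault; a test set
# "kills" a mutant if some test distinguishes it from the original.

def original(a, b):
    return a + b if a > b else a - b

mutants = [
    lambda a, b: a - b if a > b else a - b,   # '+' replaced by '-'
    lambda a, b: a + b if a >= b else a - b,  # '>' replaced by '>='
    lambda a, b: a + b if a > b else a + b,   # '-' replaced by '+'
]

tests = [(3, 1), (1, 3), (2, 2)]

killed = sum(
    any(m(a, b) != original(a, b) for a, b in tests) for m in mutants
)
print(f"mutation score: {killed}/{len(mutants)}")  # 3/3 here
```

The coupling effect is the claim that a test set strong enough to kill such simple mutants will also, in practice, expose more complex faults.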

102 citations


Proceedings ArticleDOI
TL;DR: In the most seriously flawed comparisons, a method whose efficacy is unknown is used as the standard for judging other methods; random testing, by contrast, can be related to actual software quality and so permits valid comparison.
Abstract: Comparison of software testing methods is meaningful only if sound theory relates the properties compared to actual software quality. Existing comparisons typically use anecdotal foundations with no necessary relationship to quality, comparing methods on the basis of technical terms the methods themselves define. In the most seriously flawed work, one method whose efficacy is unknown is used as a standard for judging other methods! Random testing, as a method that can be related to quality (in both the conventional sense of statistical reliability, and the more stringent sense of software assurance), offers the opportunity for valid comparison.
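One standard way to relate random testing to statistical reliability (a sketch of the general argument, not this paper's derivation): if N inputs drawn from the operational profile all pass, an upper confidence bound on the per-input failure probability follows directly.

```python
# If all N random tests pass, then with confidence 1 - alpha the failure
# probability theta satisfies (1 - theta)^N >= alpha; solve for theta.

def failure_rate_bound(n_passing_tests, alpha=0.05):
    return 1.0 - alpha ** (1.0 / n_passing_tests)

for n in (100, 1000, 10000):
    print(n, round(failure_rate_bound(n), 5))
# 100 -> 0.02951, 1000 -> 0.00299, 10000 -> 0.0003
```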

81 citations


Proceedings ArticleDOI
29 Aug 1989
TL;DR: A test sequence is given for the test access port (TAP) controller portion of the boundary-scan architecture proposed by the Joint Test Action Group (JTAG) and IEEE Working Group P1149.1 as an industry-standard design-for-testability technique.
Abstract: A test sequence is given for the test access port (TAP) controller portion of the boundary-scan architecture proposed by the Joint Test Action Group (JTAG) and IEEE Working Group P1149.1 as an industry-standard design-for-testability technique. The resulting test sequence, generated by using a technique based on Rural Chinese Postman tours and unique input/output sequences, is of minimum cost (time) and rigorously tests the specified functional behavior of the controller. The test sequence can be used for detecting design faults for conformance testing or for detecting manufacture-time/run-time defects/faults.
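The TAP controller is a 16-state machine driven by the TMS pin. The sketch below encodes the standard state table (abbreviated state names) and greedily builds a TMS sequence that exercises every transition; the paper's contribution is a provably minimum-cost tour with unique I/O sequences for state verification, which this greedy cover does not attempt.

```python
from collections import deque

NEXT = {  # state: (next state if TMS=0, next state if TMS=1)
    "TLR":   ("RTI",   "TLR"),
    "RTI":   ("RTI",   "SelDR"),
    "SelDR": ("CapDR", "SelIR"),
    "CapDR": ("ShDR",  "Ex1DR"),
    "ShDR":  ("ShDR",  "Ex1DR"),
    "Ex1DR": ("PauDR", "UpDR"),
    "PauDR": ("PauDR", "Ex2DR"),
    "Ex2DR": ("ShDR",  "UpDR"),
    "UpDR":  ("RTI",   "SelDR"),
    "SelIR": ("CapIR", "TLR"),
    "CapIR": ("ShIR",  "Ex1IR"),
    "ShIR":  ("ShIR",  "Ex1IR"),
    "Ex1IR": ("PauIR", "UpIR"),
    "PauIR": ("PauIR", "Ex2IR"),
    "Ex2IR": ("ShIR",  "UpIR"),
    "UpIR":  ("RTI",   "SelDR"),
}

def cover_all_transitions(start="TLR"):
    """Greedy TMS sequence touching all 32 transitions (not minimal)."""
    uncovered = {(s, t) for s in NEXT for t in (0, 1)}
    state, seq = start, []
    while uncovered:
        if (state, 0) in uncovered or (state, 1) in uncovered:
            tms = 0 if (state, 0) in uncovered else 1
            uncovered.discard((state, tms))
            seq.append(tms)
            state = NEXT[state][tms]
        else:
            # BFS for the shortest TMS string to a state that still has
            # an uncovered outgoing edge, then replay that string.
            prev, q = {state: None}, deque([state])
            while q:
                s = q.popleft()
                if (s, 0) in uncovered or (s, 1) in uncovered:
                    path = []
                    while prev[s] is not None:
                        s, t = prev[s]
                        path.append(t)
                    for t in reversed(path):
                        uncovered.discard((state, t))
                        seq.append(t)
                        state = NEXT[state][t]
                    break
                for t in (0, 1):
                    n = NEXT[s][t]
                    if n not in prev:
                        prev[n] = (s, t)
                        q.append(n)
    return seq

seq = cover_all_transitions()
print(len(seq), "TMS bits exercise all 32 TAP transitions")
```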

70 citations


Proceedings ArticleDOI
16 Oct 1989
TL;DR: A practical approach to module regression testing aimed at reducing the cost of test development, execution and maintenance is presented.
Abstract: A practical approach to module regression testing aimed at reducing the cost of test development, execution and maintenance is presented. Test cases are formally defined using a language based on module traces and a software tool is used to automatically generate test programs to apply the cases. The testing approach, language and program generator are described in detail and illustrated with a case study.

33 citations


Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors present a testability strategy for complex VLSI devices which is implemented in the PIRAMID Digital Signal Processor Silicon Compiler and supports built-in self-test, scan test, bus test control, restricted partial scan and test control logic at various levels in the design hierarchy.
Abstract: The authors present a testability strategy for complex VLSI devices which is implemented in the PIRAMID Digital Signal Processor Silicon Compiler. The macrotest methodology supports built-in self-test, scan test, bus test control, restricted partial scan and test control logic at various levels in the design hierarchy. A set of testability design rules is developed and implemented automatically in the design. The design hierarchy is closely followed, resulting in a hierarchical set of testable macros. The complete process from design to final test program is guided by software tools. As an example, the synthesis of a large industrial circuit is presented for comparing the proposed approach with the traditional approaches. The additional overhead due to testability is within reasonable limits (roughly 8%), and the software run time figures show that it is possible to generate a test program with an excellent fault coverage within a very short period of time.

28 citations


Proceedings ArticleDOI
TL;DR: This work describes an approach for systematic module regression testing, where test cases are defined formally using a language based on module traces, and a software tool is used to automatically generate test programs that apply the cases.
Abstract: While considerable attention has been given to techniques for developing complex systems as collections of reliable and reusable modules, little is known about testing these modules. In the literature, the special problems of module testing have been largely ignored and few tools or techniques are available to the practicing tester. Without effective testing methods, the development and maintenance of reliable and reusable modules is difficult indeed. We describe an approach for systematic module regression testing. Test cases are defined formally using a language based on module traces, and a software tool is used to automatically generate test programs that apply the cases. Techniques for test case generation in C and in Prolog are presented and illustrated in detail.
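A minimal sketch of the trace idea, with a hypothetical spec format rather than the paper's actual trace language: a test case is a sequence of module calls with expected results, and a small driver replays it against the module under test.

```python
class Stack:
    """Module under test."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def depth(self):
        return len(self._items)

# Each trace step: (method name, argument tuple, expected result or None).
TRACE = [
    ("push", (1,), None),
    ("push", (2,), None),
    ("depth", (), 2),
    ("pop", (), 2),
    ("pop", (), 1),
    ("depth", (), 0),
]

def run_trace(module_cls, trace):
    mod = module_cls()
    for i, (name, args, expected) in enumerate(trace):
        got = getattr(mod, name)(*args)
        if expected is not None and got != expected:
            return f"step {i}: {name}{args} returned {got}, expected {expected}"
    return "trace passed"

print(run_trace(Stack, TRACE))
```

Because traces are plain data, a generator can emit them as test programs in the implementation language, which is what the paper's tool does for C and Prolog.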

27 citations


Proceedings ArticleDOI
01 Jun 1989
TL;DR: This paper presents a test scheduling method, called overlaying concurrent testing, for built-in testing of VLSI circuits, based on a resource-conflict analysis of subcircuits and a scheduling algorithm that fully exploits test parallelism.
Abstract: This paper presents a test scheduling method, called overlaying concurrent testing, for built-in testing of VLSI circuits. The scheme is based on a resource-conflict analysis of subcircuits and a scheduling algorithm. The algorithm fully exploits test parallelism by overlaying the test intervals of compatible subcircuits to test as many of them as possible concurrently. The technique is supported by a test hardware architecture whose design is well coordinated with the test scheduling, leading to a considerable reduction of testing time, as demonstrated by simulation experiments.
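A toy rendering of the conflict-analysis step (subcircuit names and cycle counts are invented): subcircuits that share a test resource cannot run concurrently, and a greedy pass packs compatible tests into overlapped sessions. The paper's algorithm goes further by overlaying test intervals of unequal length.

```python
conflicts = {  # pairs of subcircuits sharing a BIST resource
    ("ALU", "Shifter"), ("Shifter", "RegFile"), ("ALU", "Mult"),
}
tests = {"ALU": 40, "Shifter": 25, "RegFile": 30, "Mult": 40}  # cycles

def conflict(a, b):
    return (a, b) in conflicts or (b, a) in conflicts

sessions = []  # each session: a list of tests run concurrently
for t in sorted(tests, key=tests.get, reverse=True):  # longest first
    for s in sessions:
        if not any(conflict(t, u) for u in s):
            s.append(t)
            break
    else:
        sessions.append([t])

total = sum(max(tests[t] for t in s) for s in sessions)
print(sessions, "-> total cycles:", total)  # 80 vs. 135 sequentially
```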

27 citations


Proceedings ArticleDOI
19 Jun 1989
TL;DR: An assessment of the relative error-finding ability and effort requirements of several test techniques is presented; the results show the importance of the tester/code sample in finding errors.
Abstract: An assessment of the relative error-finding ability and effort requirements of several test techniques is presented. Analyses of the testing process and product show how the test techniques compared, and identify sources of variability that affect the test activities. The results show the importance of the tester/code sample in finding errors.

26 citations


Proceedings ArticleDOI
C. Dislis, I.D. Dear, J.R. Miles, S.C. Lau, A.P. Ambler
29 Aug 1989
TL;DR: The authors discuss the development of a test-planning system based on economic considerations and using a parameterized economics model for cost predictions; two levels of hierarchy in the cost modeling approach are discussed.
Abstract: As ICs get larger and increasingly more expensive to test, testing provision has to be made at the design stage. The authors discuss the development of a test-planning system based on economic considerations and using a parameterized economics model for cost predictions. The use of the economics model is considered, as well as some of the factors that affect the cost effectiveness of design-for-test strategies. Two levels of hierarchy in the cost modeling approach are discussed: the general model, which can be used to estimate costs from component design through to field test, and the component level, which addresses the modeling of component design, manufacture, and test costs.

24 citations


Proceedings ArticleDOI
29 Aug 1989
TL;DR: Boundary scan is shown to impose a minimal real-estate overhead while changing the design verification and testing process in ways that benefit both the design engineer and the test engineer, and the use of devices incorporating boundary scan will reduce the cost of testing.
Abstract: Conventional logic devices incorporating boundary scan with the proposed IEEE P1149.1 interface have been shown to offer great improvements in board testing. These improvements are contrasted with traditional approaches for the design verification, debugging, and testing of a prototype system. The incorporation of boundary scan has been demonstrated to impose a minimal real estate overhead and to change the process of design verification and testing in ways that benefit both the design engineer and the test engineer. The use of devices incorporating boundary scan will reduce the cost of testing. By using devices that support the P1149.1 architecture in the prototype system considered, some of the problems and questions associated with the verification and testing of prototype systems (or even production systems) were solved. In addition to solving these problems, the verification and testing processes were simplified.

Proceedings ArticleDOI
TL;DR: In a case study of an industrial software system, it is found that in eighty percent of the subroutines the all-du-paths criterion is satisfied by testing ten or fewer complete paths.
Abstract: The all-du-paths software testing criterion is the most discriminating of the data flow testing criteria of Rapps and Weyuker. Unfortunately, in the worst case, the criterion requires an exponential number of test cases. To investigate the practicality of the criterion, we develop tools to count the number of complete program paths necessary to satisfy the criterion. This count is an estimate of the number of test cases required. In a case study of an industrial software system, we find that in eighty percent of the subroutines the all-du-paths criterion is satisfied by testing ten or fewer complete paths. Only one subroutine out of 143 requires an exponential number of test cases.
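The counting machinery can be sketched briefly: for a loop-free control-flow graph, the number of complete entry-to-exit paths is computed by dynamic programming, and that count sizes the test obligation for a subroutine (toy graph below; handling loops, as the paper's tools must, is the hard part omitted here).

```python
from functools import lru_cache

CFG = {  # node -> successors (an acyclic toy control-flow graph)
    "entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"],
    "d": ["e", "exit"], "e": ["exit"], "exit": [],
}

@lru_cache(maxsize=None)
def paths(node):
    """Number of complete paths from node to exit."""
    if node == "exit":
        return 1
    return sum(paths(s) for s in CFG[node])

print(paths("entry"))  # 4 complete paths for this toy graph
```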

Journal ArticleDOI
TL;DR: A program design methodology is presented that advocates the synthesis of tests hand-in-hand with the design at every stage of program development, and uses them for early detection of design flaws.
Abstract: A program design methodology is presented that advocates the synthesis of tests hand-in-hand with the design at every stage of program development, and uses them for early detection of design flaws. Formal specifications are advocated at every stage of the development process. It is illustrated on an example that:
• formalisation allows for a systematic derivation of black-box, design and abstract data tests;
• higher-level testing leads to significant structural coverage of the final code but does not eliminate the need for structural testing;
• abstract data testing allows a more natural selection of tests than concrete data testing.
Except for the last stage, the method is of a manual nature; however, the formal approach opens a way for its automation and rapid prototyping. Also, it can be combined with formal verification.

Proceedings ArticleDOI
16 Oct 1989
TL;DR: The Universal Ada Test Language (UATL) provides a consistent framework for testing complex systems at all stages of the software/system development, production and maintenance cycle.
Abstract: The ability to perform thorough regression testing is especially critical for software maintenance in order to ensure that changes made to correct problems or add capabilities do not affect the integrity of the complete software system. The Universal Ada Test Language (UATL), a DoD sponsored project established to meet these requirements, is described. The UATL design, test manager, test program generation tool and capabilities demonstration are discussed. The UATL provides a consistent framework for testing complex systems at all stages of the software/system development, production and maintenance cycle.

Journal ArticleDOI
TL;DR: An architecture for implementing scan technology in a state-of-the-art workstation that uses a single resource to control scan and clock functions and perform pseudorandom testing of individual chips and boards is presented.
Abstract: The author presents an architecture for implementing scan technology in a state-of-the-art workstation that uses a single resource to control scan and clock functions and perform pseudorandom testing of individual chips and boards. The testing approach, which is based on the use of a linear-feedback shift register, also features the ability to capture test results and compress them into a single signature for comparison with a known 'golden-circuit' signature. The author describes an application for testing the Apollo DN10000 and presents a list of design rules for pseudorandom testing at the board level. He discusses communication with scan and clock resources, timing relationships for scan operations, problems encountered, and design-for-testability issues in some depth.
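The two roles the shift register plays can be sketched with a toy 16-bit LFSR (the polynomial is chosen for illustration, not the DN10000's actual configuration): the same structure generates pseudorandom patterns and compresses captured responses into one signature for comparison with the golden value.

```python
TAPS = 0xB400  # feedback taps for a maximal-length 16-bit Galois LFSR

def lfsr_step(state):
    # Shift right; XOR the taps back in when a 1 falls off the end.
    bit = state & 1
    state >>= 1
    return state ^ TAPS if bit else state

def patterns(seed, n):
    """Pseudorandom test-pattern generation."""
    state = seed
    for _ in range(n):
        yield state
        state = lfsr_step(state)

def signature(responses, seed=0xFFFF):
    """Crude multiple-input signature register: fold each response
    word into the register, then step."""
    state = seed
    for r in responses:
        state = lfsr_step(state ^ (r & 0xFFFF))
    return state

stimuli = list(patterns(seed=0xACE1, n=8))
golden = signature(stimuli)                       # known-good responses
faulty = signature(p ^ 0x0004 for p in stimuli)   # one line stuck wrong
print(hex(golden), hex(faulty), "fault detected:", golden != faulty)
```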

Proceedings ArticleDOI
29 Aug 1989
TL;DR: The authors present executable VLSI test software, the Omnitest system, which reads a digital device specification file and dynamically reconfigures the tester at run time; it frees tester resources, standardizes test-related operations, and provides ample means to revise and amend.
Abstract: The authors present executable VLSI test software which reads a digital device specification file and dynamically reconfigures the tester at run time. The Omnitest concept has been demonstrated on the Megatest MegaOne VLSI tester. Complete device description files have gone from data manual to error-free testing in under an hour. The Omnitest program completely masks the detailed operation of the MegaOne, yet allows the user to take advantage of all the tester's sophisticated features. Engineers who have had no previous exposure to test have been able to comprehend and utilize the input language after merely reviewing an example. Besides shortening development times and enabling a high-level approach to digital test, the Omnitest system also frees tester resources, standardizes test-related operations, and provides ample means to revise and amend.

Journal ArticleDOI
TL;DR: A functional testing method called polynomial testing is proposed to test packet-switching networks (PSNs) used in multiprocessor systems and a built-in tester is embedded into each switch's structure to provide self-testing capabilities.
Abstract: A functional testing method called polynomial testing is proposed to test packet-switching networks (PSNs) used in multiprocessor systems. Focus is on applying the method to packet-switching multistage interconnection networks (PMINs). A multiple stuck-at (MSA) fault model is developed and faults are diagnosed at two different levels: network level and switch level. The former uses each processor as a tester and can test part of the network concurrently with the normal operations on the remaining part of the network; the latter uses switches in the network as testers and is inherently an autonomous testing method. To facilitate the network-level testing, the routing dynamic in a PMIN is eliminated by synchronizing switch operations. The network is then decomposed into routes, each of which is tested after transforming it into a polynomial calculator. For switch-level testing, a built-in tester (BIT) is embedded into each switch's structure to provide self-testing capabilities. Network-level testing is distributed and suitable for concurrent testing, whereas switch-level testing is done offline, and needs only a small testing time.

Proceedings ArticleDOI
20 Sep 1989
TL;DR: The zero-one integer programming model, a generalized optimal path selection method for node (or statement) testing and branch testing criteria, is extended in such a way that it can be used for DD-path testing, TER_n measurement, and all types of local coverage test criteria.
Abstract: A major issue in structural program testing is how to select a minimal set of test paths to meet certain test requirements. The zero-one integer programming model, a generalized optimal path selection method for node (or statement) testing and branch testing criteria, is extended in such a way that it can be used for DD-path testing, TER_n measurement, and all types of local coverage test criteria. With slight modification, it can also be applied to all types of data-flow-oriented test criteria. The model can be used for program testing based on any coverage criterion of the structural testing approach. If a mixture of multiple test criteria is needed, the model is still workable. The model can be applied to program testing with various objective functions and can be extended to multiple-goal objective function problems. Since the objective functions are independent from the constraints of test criteria, it is possible to have various combinations of optimization criteria and coverage requirements according to the specified test strategy. Characteristics of the zero-one integer programming model are discussed.
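The 0-1 model is easy to state on a toy instance: let x_j = 1 when candidate path j is selected, minimize the sum of the x_j subject to every branch being covered by some selected path. Real instances go to an integer-programming solver; the brute-force search below merely demonstrates the formulation on invented data.

```python
from itertools import combinations

branches = {"b1", "b2", "b3", "b4"}
paths = {  # candidate path -> branches it exercises (invented data)
    "p1": {"b1", "b2"},
    "p2": {"b1", "b3"},
    "p3": {"b2", "b4"},
    "p4": {"b3", "b4"},
}

def minimal_path_set():
    for k in range(1, len(paths) + 1):         # try smallest k first
        for subset in combinations(paths, k):  # each 0-1 assignment
            covered = set().union(*(paths[p] for p in subset))
            if covered >= branches:
                return subset
    return None

print(minimal_path_set())  # ('p1', 'p4') covers all four branches
```

Swapping the coverage constraints for du-pair or DD-path constraints, or changing the objective function, leaves the model unchanged, which is the flexibility the paper emphasizes.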

Proceedings ArticleDOI
TL;DR: A testing process program was developed much as a software product would be, from requirements through design to implementation and evaluation, and process programming was found to be effective for explicitly integrating the techniques and achieving the desired synergism.
Abstract: Integration of multiple testing techniques is required to demonstrate high quality of software. Technique integration has four basic goals: reduced development costs, incremental testing capabilities, extensive error detection, and cost-effective application. We are experimenting with the use of process programming as a mechanism for integrating testing techniques. Having set out to develop a process that provides adequate coverage and comprehensive fault detection, we proposed synergistic use of DATA FLOW testing and RELAY to achieve all four goals. We developed a testing process program much as we would develop a software product from requirements through design to implementation and evaluation. We found process programming to be effective for explicitly integrating the techniques and achieving the desired synergism. Used in this way, process programming also mitigates many of the other problems that plague testing in the software development process.

Journal ArticleDOI
K. Grimm
TL;DR: The main topics of this work are the elaboration and evaluation of systematic test methods, the definition of an effective test strategy which satisfies safety and high-reliability requirements, and the development of a concept outline for test automation supporting the above-mentioned test strategy.

Book ChapterDOI
01 Jan 1989
TL;DR: Using results from the literature on software reliability, the ensuing optimization problems, which can be addressed using numerical techniques, are formulated for the case of single-stage testing, and two protocols are outlined for double-stage testing.
Abstract: In this paper we address the important practical problem of how long to test and debug a piece of software before it is released for use. We consider the case of single-stage testing and outline two protocols for the case of double-stage testing. Using results from the literature on software reliability, we formulate, for the case of single-stage testing, the ensuing optimization problems, which can be addressed using numerical techniques.
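The single-stage trade-off can be sketched with an illustrative cost model (not the paper's exact formulation): faults fixed during test are cheap, faults that escape to the field are expensive, and test time itself has a cost; the release time minimizes the total.

```python
import math

a, b = 120.0, 0.05        # expected total faults, per-fault detection rate
c_test, c_field, c_time = 1.0, 20.0, 0.5   # relative costs (assumed)

def m(T):
    """Goel-Okumoto mean number of faults detected by time T."""
    return a * (1 - math.exp(-b * T))

def cost(T):
    return c_test * m(T) + c_field * (a - m(T)) + c_time * T

T_star = min(range(400), key=cost)
print("release near T =", T_star, "with expected cost", round(cost(T_star), 1))
```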

Proceedings ArticleDOI
22 May 1989
TL;DR: A structured approach to testing software based on the principles of the McCabe cyclomatic complexity metric is described, which provides an early assessment of testability and a comprehensive and quantifiable framework for the testing program.
Abstract: A structured approach to testing software based on the principles of the McCabe cyclomatic complexity metric is described. This approach is being applied to current Naval embedded weapon system software projects for unit, integration, and computer software configuration item (CSCI) requirements-level testing. The primary automated tool supporting this process is the Vitro Automated Structured Testing Tool (VASTT), which analyzes and generates reports from a variety of inputs, including data flow diagrams (DFDs), program design language (PDL), and several programming languages. The reports include complexity metrics, flow graphs, test paths, and test cases. This approach to testing provides an early assessment of testability and a comprehensive and quantifiable framework for the testing program.
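The metric at the core of the approach is easy to compute from a control-flow graph: v(G) = E - N + 2P, which also gives the number of basis paths to test. The graph below is a toy (an if-then-else followed by a loop), not VASTT's representation.

```python
edges = [
    ("entry", "if1"), ("if1", "then"), ("if1", "else"),
    ("then", "join"), ("else", "join"), ("join", "loop"),
    ("loop", "body"), ("body", "loop"), ("loop", "exit"),
]
nodes = {n for e in edges for n in e}

def cyclomatic(edges, nodes, components=1):
    """McCabe cyclomatic complexity v(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

print(cyclomatic(edges, nodes))  # 9 - 8 + 2 = 3 basis paths
```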

Proceedings ArticleDOI
B.A. Alcorn
29 Aug 1989
TL;DR: It is concluded that, although the overall edge placement accuracy specification is extremely useful, the user needs to recognize how limitations, such as variations in loads, programmed levels, and programmed slew rates, will affect the edge placement accuracy of the tester.
Abstract: The author notes that current board test system specifications are not adequate for a complete analysis of tester capability. He examines the error sources in timing specification; defines overall edge placement accuracy; discusses automatic compensation techniques for a distributed architecture tester; shows the effect of load, slew rate, and level variations on specifications; and gives techniques to verify manufacturers' claims. He demonstrates that, in order to be meaningful, a driver or receiver must be referenced to other drivers and receivers, to an internal tester clock, and, ideally, to a user-supplied reference clock. It is concluded that, although the overall edge placement accuracy specification is extremely useful, the user needs to recognize how limitations, such as variations in loads, programmed levels, and programmed slew rates, will affect the edge placement accuracy of the tester.

Journal ArticleDOI
TL;DR: The paper discusses the problems involved and the methods used in each step of the structural testing methodology, and describes the techniques used to detect program errors.
Abstract: Program testing may be performed using either of two approaches: structural or functional. This paper is concerned with the structural testing of programs. Given a listing of the program, the first step is to construct its flowgraph. The flowgraph usually contains a very large number of paths, owing to the program loops, so testing all the paths is impossible. A subset of these paths is chosen, according to one criterion or another. Then, a set of test data is generated which causes the selected paths to be traversed when the program runs. Finally, the program runs, using the generated test data, and the output is analysed to detect program errors. The paper discusses the problems involved and the methods used in each step of the above-mentioned structural testing methodology.

Journal ArticleDOI
TL;DR: The paper describes the problems of modeling and the test generation procedure, explains the need for testability analysis, and gives results obtained on the 8254 circuit.

01 Jan 1989
TL;DR: An automatic test pattern generator for combinational circuits based on a functional description has been developed, and a functional fault coverage is defined and related to structural fault coverage.
Abstract: An Automatic Test Pattern Generator (ATPG) for combinational circuits based on a functional description has been developed. The algorithm generates tests based on a set of Boolean equations or a higher-level description where little or no knowledge of the physical structure is required. A functional fault coverage is defined and related to structural fault coverage. SUMMARY: The increased drive towards VLSI circuits experienced today necessitates more organized and automatic techniques for test generation. The large number of transistors embedded in complex VLSI circuits makes gate-level testing difficult, time consuming and expensive. Testability rapidly decreases with the increase in complexity of integrated circuits, not only due to the increased number of transistors, but also because test points are no longer accessible inside the chip. In addition, the reliability of off-the-shelf components can be hard to determine since adequate implementation details of such circuits are rarely available. Thus, the concept of functional testing has received a great deal of attention during the past few years [1-3]. Functional testing differs from traditional gate-level testing in several aspects: rather than individually checking every node in a logic circuit for signal faults using specific fault models (thereby verifying the structure), functional testing attempts to verify the validity of the logic functions. Gate-level testing is obviously implementation dependent and requires a detailed knowledge of the circuit under test (CUT), while functional testing can be independent of circuit implementation. At present, no universally accepted definition of functional testing is available. Some discussions concentrate on the distinctions between "function-oriented" testing and gate-level testing, while others focus on system test (boards) as opposed to parts test (chips) [4-5]. The deterministic algorithm introduced here (FUNTEST: FUNctional TESTing) considers the conventional stuck-at fault model in a functional sense. The algorithm generates tests for combinational circuits based on a set of Boolean equations or higher functional representations.
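The functional flavor of test generation can be sketched without any gate-level structure (a toy brute-force search, not the FUNTEST algorithm): model a fault as an input of the Boolean function stuck at a value, and take as a test any vector on which the faulty function disagrees with the specification.

```python
from itertools import product

def spec(a, b, c):
    """Boolean-equation-level description: f = a*b + c."""
    return (a and b) or c

def stuck_at(f, index, value):
    def faulty(*bits):
        bits = list(bits)
        bits[index] = value   # input line forced to a constant
        return f(*bits)
    return faulty

def find_test(f, fault, n_inputs=3):
    for vec in product((0, 1), repeat=n_inputs):
        if f(*vec) != fault(*vec):
            return vec        # this vector detects the fault
    return None               # fault is functionally undetectable

print(find_test(spec, stuck_at(spec, 0, 0)))  # (1, 1, 0) detects a stuck-at-0
```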

Proceedings ArticleDOI
25 Sep 1989
TL;DR: The authors consider RFI methodologies and techniques that can be used to effect a cohesive functional/parametric testing strategy and conclude that diagnostic information management is the key to effecting more efficient and accurate RFI testing.
Abstract: The authors consider RFI (ready for issue) methodologies and techniques that can be used to effect a cohesive functional/parametric testing strategy. A number of functional/parametric testing scenarios are discussed from the viewpoint of assessing diagnostic accuracy, functional/parametric test correlation, test data maturation, and testing throughput. It is concluded that the key to establishing an effective and cohesive functional/parametric testing strategy is diagnostic information management within the context of the concurrent engineering process. Diagnostic information management enables more effective RFI testing to be performed at all levels of test. Specifically, diagnostic information management is the key to effecting more efficient (i.e. faster test times) and accurate (i.e. fault isolation resolution) test program sets because it enables the test strategies to capture the best features of both functional and parametric testing and use them in a complementary manner.

Proceedings ArticleDOI
19 Jun 1989
TL;DR: The notion of condition constraints is defined, and it is shown that it can be used for the definition and implementation of condition testing strategies; a strategy based on detecting Boolean and relational operator errors is believed to be cost-effective for software quality assurance.
Abstract: The notion of condition constraints is defined and it is shown that it can be used for the definition and implementation of condition testing strategies. A condition testing strategy that is based on the detection of Boolean and relational operator errors in a condition is described. Several conjectures in support of the condition testing approach and the condition testing strategy are given. The Boolean and relational operator testing strategy is believed to be cost-effective for software quality assurance.
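A simplified rendering of the underlying constraint idea (not the paper's exact constraint sets): for a relational condition a < b, requiring test points where a < b, a == b, and a > b guarantees that any single wrong relational operator flips the outcome on at least one point.

```python
import operator

REL_OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
           ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

def distinguishes(points, correct="<"):
    """Do the test points expose every wrong relational operator?"""
    ok = REL_OPS[correct]
    return all(
        any(ok(a, b) != wrong(a, b) for a, b in points)
        for name, wrong in REL_OPS.items() if name != correct
    )

print(distinguishes([(1, 2), (2, 2), (3, 2)]))  # <, ==, > cases: True
print(distinguishes([(1, 2), (3, 2)]))          # misses '<=' : False
```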

Proceedings ArticleDOI
23 Oct 1989
TL;DR: The authors have developed an application environment under which the VLSI design tools as well as the testers can be run and a knowledge-based system for the transparent use of various testers from a common intermediate test-pattern language.
Abstract: The authors have developed an application environment for VLSI design, under which the VLSI design tools as well as the testers can be run. They have also developed a knowledge-based system for the transparent use of various testers from a common intermediate test-pattern language. Under the new environment, the user simulates a design as before, and then specifies on which tester the fabricated design should be tested. The tests are performed with minimal user intervention (e.g. powering the circuit up). Upon completion of the physical testing, the system compares the test data to the simulation data and graphically presents discrepancies which may indicate potential errors.

Journal ArticleDOI
Mark Marshall
TL;DR: A user's approach to testing the 68882 floating-point coprocessor is described; the test vectors were developed functionally by executing instructions on a 68881 while using tester software to emulate a microprocessor system.