
Showing papers on "White-box testing" published in 1994


Journal ArticleDOI
TL;DR: The fault detection effectiveness of this family of strategies for automatically generating test data for any implementation intended to satisfy a given specification that is a Boolean formula is investigated both analytically and empirically.
Abstract: This paper presents a family of strategies for automatically generating test data for any implementation intended to satisfy a given specification that is a Boolean formula. The fault detection effectiveness of these strategies is investigated both analytically and empirically, and the costs, assessed in terms of test set size, are compared.
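As a rough illustration of one strategy in this spirit (not necessarily one of the paper's strategies), the sketch below generates, for each variable of a Boolean specification, a pair of assignments that differ only in that variable and change the formula's value; such pairs are sensitive to faults involving that variable. The formula and variable names are hypothetical.

```python
from itertools import product

def variable_sensitive_pairs(formula, variables):
    """For each variable, find one pair of assignments differing only in
    that variable for which the formula changes value (if such a pair exists)."""
    tests = []
    for var in variables:
        for values in product([False, True], repeat=len(variables)):
            low = dict(zip(variables, values))
            if low[var]:
                continue  # consider each pair once, from its False side
            high = dict(low, **{var: True})
            if formula(**low) != formula(**high):
                tests.extend([low, high])
                break
    return tests

# Hypothetical specification: (a and b) or c
spec = lambda a, b, c: (a and b) or c
for t in variable_sensitive_pairs(spec, ["a", "b", "c"]):
    print(t, "->", spec(**t))
```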

270 citations


Proceedings ArticleDOI
01 Dec 1994
TL;DR: This work presents a new approach to class testing that supports dataflow testing for dataflow interactions in a class, and provides opportunities to find errors in classes that may not be uncovered by black-box testing.
Abstract: The basic unit of testing in an object-oriented program is a class. Although there has been much recent research on testing of classes, most of this work has focused on black-box approaches. However, since black-box testing techniques may not provide sufficient code coverage, they should be augmented with code-based or white-box techniques. Dataflow testing is a code-based testing technique that uses the dataflow relations in a program to guide the selection of tests. Existing dataflow testing techniques can be applied both to individual methods in a class and to methods in a class that interact through messages, but these techniques do not consider the dataflow interactions that arise when users of a class invoke sequences of methods in an arbitrary order. We present a new approach to class testing that supports dataflow testing for dataflow interactions in a class. For individual methods in a class, and methods that send messages to other methods in the class, our technique is similar to existing dataflow testing techniques. For methods that are accessible outside the class, and can be called in any order by users of the class, we compute dataflow information, and use it to test possible interactions between these methods. The main benefit of our approach is that it facilitates dataflow testing for an entire class. By supporting dataflow testing of classes, we provide opportunities to find errors in classes that may not be uncovered by black-box testing. Our technique is also useful for determining which sequences of methods should be executed to test a class, even in the absence of a specification. Finally, as with other code-based testing techniques, a large portion of our technique can be automated.
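As a hedged sketch of the underlying idea (not the authors' algorithm), the fragment below summarizes each public method by the instance variables it defines and uses, and enumerates length-two call sequences in which one method's definition may reach another method's use. The class summary and method names are hypothetical.

```python
# Hypothetical summary of a class: for each public method, the instance
# variables it defines (writes) and uses (reads).
methods = {
    "reset":   {"defs": {"count", "total"}, "uses": set()},
    "add":     {"defs": {"count", "total"}, "uses": {"count", "total"}},
    "average": {"defs": set(),              "uses": {"count", "total"}},
}

def inter_method_du_pairs(methods):
    """Candidate def-use pairs for call sequences of length two:
    m1 defines v, then m2 uses v (intervening calls not modeled)."""
    pairs = []
    for m1, s1 in methods.items():
        for m2, s2 in methods.items():
            for v in s1["defs"] & s2["uses"]:
                pairs.append((m1, m2, v))
    return pairs

for m1, m2, v in inter_method_du_pairs(methods):
    print(f"call {m1}() then {m2}() to exercise def-use of '{v}'")
```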

241 citations


Proceedings ArticleDOI
01 Aug 1994
TL;DR: The purpose of this paper is to explain in which way the problem of testing protocol implementations is different from the usual problem of software testing.
Abstract: Communication protocols are the rules that govern the communication between the different components within a distributed computer system. Since protocols are implemented in software and/or hardware, the question arises whether the existing hardware and software testing methods would be adequate for the testing of communication protocols. The purpose of this paper is to explain in which way the problem of testing protocol implementations is different from the usual problem of software testing. We review the major results in the area of protocol testing and discuss in which way these methods may also be relevant in the more general context of software testing.

237 citations


Journal ArticleDOI
TL;DR: Algorithms for fault-driven test set selection are presented based on an analysis of the types of tests needed for different types of faults, and a major reduction in testing time should come from reducing the number of specification tests that need to be performed.
Abstract: Analog testing is a difficult task without a clear-cut methodology. Analog circuits are tested for satisfying their specifications, not for faults. Given the high cost of testing analog specifications, it is proposed that tests for analog circuits should be designed to detect faults. Therefore analog fault modeling is discussed. Based on an analysis of the types of tests needed for different types of faults, algorithms for fault-driven test set selection are presented. A major reduction in testing time should come from reducing the number of specification tests that need to be performed. Hence algorithms are presented for minimizing specification testing time. After specification testing time is minimized, the resulting test sets are supplemented with some simple, possibly non-specification, tests to achieve 100% fault coverage. Examples indicate that fault-driven test set development can lead to drastic reductions in production testing time.
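The paper's selection algorithms are not given in the abstract; the sketch below is a classic greedy set-cover heuristic used as a stand-in for fault-driven test selection. The fault dictionary and test names are hypothetical.

```python
def greedy_fault_driven_selection(fault_coverage):
    """Greedy selection: repeatedly pick the test that detects the most
    still-undetected faults. `fault_coverage` maps test name -> detected faults."""
    remaining = set().union(*fault_coverage.values())
    selected = []
    while remaining:
        best = max(fault_coverage, key=lambda t: len(fault_coverage[t] & remaining))
        selected.append(best)
        remaining -= fault_coverage[best]
    return selected

# Hypothetical fault dictionary for an analog circuit.
coverage = {
    "gain_test":      {"R1_short", "R2_open", "C1_open"},
    "offset_test":    {"R2_open", "Q1_beta_low"},
    "bandwidth_test": {"C1_open", "C2_short"},
    "dc_sweep":       {"Q1_beta_low", "C2_short", "R1_short"},
}
print(greedy_fault_driven_selection(coverage))
```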

182 citations


Journal ArticleDOI
TL;DR: The uses of a dataflow coverage-testing tool for C programs, called ATAC (Automatic Test Analysis for C), in measuring, controlling, and understanding the testing process are described.
Abstract: Coverage testing helps the tester create a thorough set of tests and gives a measure of test completeness. The concepts of coverage testing are well-described in the literature. However, there are few tools that actually implement these concepts for standard programming languages, and their realistic use on large-scale projects is rare. In this article, we describe the uses of a dataflow coverage-testing tool for C programs, called ATAC (Automatic Test Analysis for C), in measuring, controlling, and understanding the testing process. We present case studies of two real-world software projects using ATAC. The first study involves 12 program versions developed by a university/industry fault-tolerant software project for a critical automatic-flight-control system. The second study involves a Bellcore project of 33 program modules. These studies indicate that coverage analysis of programs during testing not only gives a clear measure of testing quality but also reveals important aspects of software structure. Understanding the structure of a program, as revealed in coverage testing, can be a significant component in confident assessment of overall software quality.
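ATAC itself instruments C programs; as a small stand-in in another language, the sketch below uses Python's trace hook to record which lines of a function execute under a test set, a rough proxy for the block coverage ATAC reports. The example function and inputs are hypothetical.

```python
import sys

def executed_lines(func, test_inputs):
    """Run `func` on each test input under a trace hook and record which of
    its source lines execute."""
    code = func.__code__
    hit = set()

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            hit.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return hit

def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"

print(sorted(executed_lines(classify, [(5,), (-3,)])))
# The body of the x == 0 branch is never executed by this test set.
```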

160 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: This work defines a minimal coverage criterion for category-partition test specifications, identifies a mechanical process to produce a test specification that satisfies the criterion, and discusses the problem of resolving infeasible combinations of choices for categories.
Abstract: Testing is a standard method of assuring that software performs as intended. We extend the category-partition method, which is a specification-based testing method. An important aspect of category-partition testing is the construction of test specifications as an intermediate between functional specifications and actual tests. We define a minimal coverage criterion for category-partition test specifications, identify a mechanical process to produce a test specification that satisfies the criterion, and discuss the problem of resolving infeasible combinations of choices for categories. Our method uses formal schema-based functional specifications and is shown to be feasible with an example study of a simple file system.
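The paper's own minimal criterion and mechanical process are not spelled out in the abstract; as an assumed, simpler stand-in, the sketch below builds test frames so that every choice of every category appears at least once ("each choice" coverage). The categories and choices are hypothetical.

```python
def each_choice_frames(categories):
    """Build test frames so that every choice of every category appears in
    at least one frame (a minimal 'each choice' coverage criterion)."""
    width = max(len(choices) for choices in categories.values())
    frames = []
    for i in range(width):
        frames.append({cat: choices[i % len(choices)]
                       for cat, choices in categories.items()})
    return frames

# Hypothetical categories/choices for a 'copy file' command.
categories = {
    "source":      ["exists", "missing", "unreadable"],
    "destination": ["new", "exists"],
    "size":        ["empty", "small", "huge"],
}
for frame in each_choice_frames(categories):
    print(frame)
```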

138 citations


Proceedings ArticleDOI
06 Nov 1994
TL;DR: The current Automatic Efficient Test Generator system and its constructs are described, and some preliminary results obtained during initial trials are reported.
Abstract: Software testing is expensive, tedious and time-consuming. Thus, the problem of making testing more efficient and mechanical, without losing its effectiveness, is very important. The Automatic Efficient Test Generator (AETG) is a new tool that mechanically generates efficient test sets from user-defined test requirements. It is based on algorithms that use ideas from statistical experimental design theory to minimize the number of tests needed for a specific level of test coverage of the input test space. The savings due to AETG are substantial when compared to exhaustive testing or other methods of testing. AETG has been used in Bellcore for screen testing, interoperability testing and for protocol conformance testing. The paper describes the current system and its constructs, and reports some preliminary results obtained during initial trials.
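AETG's algorithms are not given here; the sketch below is a generic greedy all-pairs generator in the same spirit, seeding each test with an uncovered pair and filling the remaining factors greedily. The factors and values are hypothetical.

```python
from itertools import combinations, product

def greedy_pairwise(factors):
    """Greedily build tests until every value pair from every two factors
    appears in at least one test (a generic all-pairs generator)."""
    names = list(factors)
    uncovered = set()
    for f1, f2 in combinations(names, 2):
        for v1, v2 in product(factors[f1], factors[f2]):
            uncovered.add(((f1, v1), (f2, v2)))

    tests = []
    while uncovered:
        (f1, v1), (f2, v2) = next(iter(uncovered))  # seed with an uncovered pair
        test = {f1: v1, f2: v2}
        for name in names:
            if name in test:
                continue
            def gain(value, name=name):
                trial = dict(test, **{name: value})
                return sum(1 for p in uncovered
                           if all(f in trial and trial[f] == v for f, v in p))
            test[name] = max(factors[name], key=gain)
        uncovered -= {p for p in uncovered if all(test[f] == v for f, v in p)}
        tests.append(test)
    return tests

factors = {
    "os":       ["linux", "windows", "mac"],
    "browser":  ["firefox", "chrome"],
    "protocol": ["ipv4", "ipv6"],
}
for t in greedy_pairwise(factors):
    print(t)
```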

121 citations


Proceedings ArticleDOI
01 Aug 1994
TL;DR: The TAOS toolkit and its capabilities are described, as well as testing, debugging, and maintenance processes based on program dependence analysis using the ProDAG (Program Dependence Analysis Graph) toolset.
Abstract: Few would question that software testing is a necessary activity for assuring software quality, yet the typical testing process is a human-intensive activity and, as such, it is unproductive, error-prone, and often inadequately done. Moreover, testing is seldom given a prominent place in software development or maintenance processes, nor is it an integral part of them. Major productivity and quality enhancements can be achieved by automating the testing process through tool development and use and effectively incorporating it with development and maintenance processes. The TAOS toolkit, Testing with Analysis and Oracle Support, provides support for the testing process. It includes tools that automate many tasks in the testing process, including management and persistence of test artifacts and the relationships between those artifacts, test development, test execution, and test measurement. A unique aspect of TAOS is its support for test oracles and their use to verify behavioral correctness of test executions. TAOS also supports structural/dependence coverage, by measuring the adequacy of test criteria coverage, and regression testing, by identifying tests associated with or dependent upon modified software artifacts. This is accomplished by integrating the ProDAG toolset, Program Dependence Analysis Graph, with TAOS, which supports the use of program dependence analysis in testing, debugging, and maintenance. This paper describes the TAOS toolkit and its capabilities as well as testing, debugging and maintenance processes based on program dependence analysis. We also describe our experience with the toolkit and discuss our future plans.

113 citations


Proceedings ArticleDOI
06 Nov 1994
TL;DR: This model allows us to relate a test coverage measure directly to the defect coverage, and shows how the defect density controls the time-to-next-failure.
Abstract: Models the relationship between testing effort, coverage and reliability, and presents a logarithmic model that relates testing effort to test coverage: statement (or block) coverage, branch (or decision) coverage, computation use (c-use) coverage, or predicate use (p-use) coverage. The model is based on the hypothesis that the enumerables (like branches or blocks) for any coverage measure have different detectability, just like the individual defects. This model allows us to relate a test coverage measure directly to the defect coverage. Data sets for programs with real defects are used to validate the model. The results are consistent with the known inclusion relationships among block, branch and p-use coverage measures. We show how the defect density controls the time-to-next-failure. The model can eliminate variables like the test application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested.
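The paper's exact logarithmic model is not reproduced here; as a hedged illustration of its central hypothesis only, the simulation below gives each branch its own per-test detection probability and shows expected coverage rising quickly and then saturating as the number of tests grows. The probabilities are hypothetical.

```python
import random

random.seed(1)

# Hypothetical branches with widely varying per-test execution probabilities.
detectability = [10 ** random.uniform(-3, -0.5) for _ in range(200)]

def expected_coverage(num_tests):
    """Expected fraction of branches covered after num_tests random tests,
    assuming branch i is hit by any single test with probability p_i."""
    return sum(1 - (1 - p) ** num_tests for p in detectability) / len(detectability)

for n in [1, 10, 100, 1000, 10000]:
    print(f"{n:>6} tests -> expected coverage {expected_coverage(n):.2f}")
```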

106 citations


Proceedings ArticleDOI
02 Oct 1994
TL;DR: A system is envisaged to which a programmer can submit a program unit and get back a set of input/output pairs that are guaranteed to form an effective test of the unit by being close to mutation adequate.
Abstract: Mutation testing is a technique for unit testing software that, although powerful, is computationally expensive. Recent engineering advances have given us techniques and algorithms for significantly reducing the cost of mutation testing. These techniques include a new algorithmic execution technique called schema-based mutation, an approximation technique called weak mutation, a reduction technique called selective mutation, and algorithms for automatic test data generation. This paper outlines a design for a system that will approximate mutation, but in a way that will be accessible to everyday programmers. We envisage a system to which a programmer can submit a program unit, and get back a set of input/output pairs that are guaranteed to form an effective test of the unit by being close to mutation adequate.
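The sketch below is not schema-based mutation; it is a minimal illustration of ordinary mutation analysis, generating operator-replacement mutants of a small Python function and checking which are killed by a test set. The function, tests, and class names are hypothetical.

```python
import ast

SOURCE = """
def price_with_tax(price, rate):
    return price + price * rate
"""

class SwapOperator(ast.NodeTransformer):
    """Create one mutant by replacing the n-th swappable binary operator."""
    SWAPS = {ast.Add: ast.Sub, ast.Sub: ast.Add, ast.Mult: ast.Div, ast.Div: ast.Mult}

    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = 0

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if type(node.op) in self.SWAPS:
            if self.seen == self.target_index:
                node.op = self.SWAPS[type(node.op)]()
            self.seen += 1
        return node

def make_mutant(index):
    tree = SwapOperator(index).visit(ast.parse(SOURCE))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return namespace["price_with_tax"]

tests = [((100, 0.2), 120.0), ((0, 0.5), 0.0)]  # (args, expected output)

for i in range(2):  # the function has two swappable binary operators
    mutant = make_mutant(i)
    killed = any(mutant(*args) != expected for args, expected in tests)
    print(f"mutant {i}: {'killed' if killed else 'survived'}")
```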

94 citations


Book
01 Dec 1994
TL;DR: This book presents subsystem testing, from building test requirement checklists and test specifications to multiplying operation test requirements, and addresses the challenges of working with large subsystems.
Abstract: 1 Should You Read This Book? 2 An Overview of Subsystem Testing I THE BASIC TECHNIQUE 3 The Specification 4 Introduction to the SREADHEX Example 5 Building the Test Requirement Checklist 6 Test Specifications 7 Test Drivers and Suite Drivers 8 Inspecting the Code with the Question Catalog 9 Using Coverage to Test the Test Suite 10 Cleaning Up 11 Miscellaneous Tips II ADOPTING SUBSYSTEM TESTING 12 Getting Going 13 Getting Good III SUBSYSTEM TESTING IN PRACTICE 14 Using More Typical Specifications (Including None at All) 15 Working with Large Subsystems 16 Testing Bug Fixes and Other Maintenance Changes 17 Testing Under Schedule Pressure IV EXAMPLES AND EXTENSIONS 18 Syntax Testing 19 A Second Complete Example: MAX 20 Testing Persistent State V MULTIPLYING TEST REQUIREMENTS 21 Simpler Test Requirement Multiplication 22 Multiplying Operation Test Requirements APPENDICES A Test Requirement Catalog (Student Version) B Test Requirement Catalog (Standard Version) C POSIX-Specific Test Requirement Catalog (Sample) D A Question Catalog for Code Inspections E Requirements for Complex Booleans Catalog F Checklists for Test Writing Glossary Bibliography

01 Jan 1994
TL;DR: This thesis demonstrates how formal specification techniques can systematise the application of testing strategies, and also how the concepts of software testing can be combined with formal specifications to extend the role of the formal specification in software development.
Abstract: This thesis examines applying formal methods to software testing. Software testing is a critical phase of the software life-cycle which can be very effective if performed rigorously. Formal specifications offer the bases for rigorous testing practices. Not surprisingly, the most immediate use of formal specifications in software testing is as sources of black-box test suites. However, formal specifications have more uses in software testing than merely being sources for test data. We examine these uses, and show how to get more assistance and benefit from formal methods in software testing. At the core of this work is a flexible framework in which to conduct specification-based testing. The framework is founded on formal definitions of tests and test suites, which directly addresses important issues in managing software testing. This provides a uniform platform for other applications of formal methods to testing such as analysis and reification of tests, and also for applications beyond testing such as maintenance and specification validation. The framework has to be flexible so that any testing strategies can be used. We examine the need to adapt certain strategies to work with the framework and formal specification. Our experiments showed some deficiencies that arise when using derivation strategies on abstract specifications. These deficiencies led us to develop two new specification-based testing strategies based on extensions to existing strategies. We demonstrate the framework, strategies, and other applications of formal methods to software testing using three case studies. In each of these, the framework was easy to use. It provided an elegant and powerful means for defining and structuring tests, and a suitable staging ground for other applications of formal methods to software testing. This thesis demonstrates how formal specification techniques can systematise the application of testing strategies, and also how the concepts of software testing can be combined with formal specifications to extend the role of the formal specification in software development.

Proceedings ArticleDOI
01 May 1994
TL;DR: This paper describes how both the quality and efficiency of protocol testing were improved by using a new Bellcore tool called the Automatic Efficient Test Generator (AETG).
Abstract: This paper describes how both the quality and efficiency of protocol testing were improved by using a new Bellcore tool called the Automatic Efficient Test Generator (AETG). The AETG tool is based on ideas from experimental design and it creates a test set that contains all possible pairs of involved factors. Two examples are given to illustrate this technique and compare it with traditional approaches. The improved quality of testing leads to a faster detection of nonconformances and a higher quality of products in a shorter development interval. Although the application discussed in this paper covers protocol conformance testing, the techniques for improving the quality of testing can be applied to other types of testing such as feature testing and testing between two different network elements.

Book ChapterDOI
01 Jan 1994
TL;DR: This work examines uses of formal specifications in software testing, particularly the roles of Z specifications in software testing, and presents a unifying framework for specification-based testing, which is founded on Z.
Abstract: There are two camps of software developers: formal methods advocates battling against traditionalist supporters of software testing and assessment metrics. Surely, as Turing observed, we will (must) never do away with testing in some form. But clearly, formal methods cannot be ignored, and must be the basis of quality assurance in some form. Important impacts of specifications on testing are in test selection, test oracles, and analysis of test suites and theoretical results of testing. We examine uses of formal specifications in software testing, particularly the roles of Z specifications. We also present our unifying framework for specification-based testing, which is founded on Z.

Proceedings ArticleDOI
01 Aug 1994
TL;DR: This paper describes the design and prototype implementation of a structural testing system that uses a theorem prover to determine feasibility of testing requirements and to optimize the number of test cases required to achieve test coverage.
Abstract: For certain structural testing criteria, a significant proportion of test instances are infeasible, in the sense that the semantics of the program imply that test data cannot be constructed to meet the test requirement. This paper describes the design and prototype implementation of a structural testing system that uses a theorem prover to determine feasibility of testing requirements and to optimize the number of test cases required to achieve test coverage. Using this approach, we were able to accurately and efficiently determine path feasibility for moderately-sized program units of production code written in a subset of Ada. On these problems, the computer solutions were obtained much faster and with greater accuracy than manual analysis. The paper describes how we formalize test criteria as control flow graph path expressions; how the criteria are mapped to logic formulas; and how we control the complexity of the inference task. It describes the limitations of the system and proposals for its improvement as well as other applications of the analysis.
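The system described uses a theorem prover; as a toy stand-in, the sketch below represents a path as a conjunction of branch predicates and brute-forces a small integer domain to decide whether any input satisfies it. The predicates and domain are hypothetical.

```python
from itertools import product

def path_feasible(predicates, variables, domain=range(-10, 11)):
    """Return a satisfying assignment for the conjunction of branch
    predicates, or None if none exists in the (small) search domain.
    A brute-force stand-in for the theorem-prover query."""
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in predicates):
            return env
    return None

# Path 1: x > 0 taken, then x < 0 taken -> infeasible.
infeasible = [lambda e: e["x"] > 0, lambda e: e["x"] < 0]
# Path 2: x > 0 taken, then x + y == 3 taken -> feasible.
feasible = [lambda e: e["x"] > 0, lambda e: e["x"] + e["y"] == 3]

print(path_feasible(infeasible, ["x"]))       # None
print(path_feasible(feasible, ["x", "y"]))    # e.g. {'x': 1, 'y': 2}
```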

Journal ArticleDOI
TL;DR: The details of a real-time trial of a large software system that had a substantial amount of code added during testing are reported, and a stochastic and economic framework to deal with systems that change as they are tested is presented.
Abstract: Developers of large software systems must decide how long software should be tested before releasing it. A common and usually unwarranted assumption is that the code remains frozen during testing. We present a stochastic and economic framework to deal with systems that change as they are tested. The changes can occur because of the delivery of software as it is developed, the way software is tested, the addition of fixes, and so on. Specifically, we report the details of a real-time trial of a large software system that had a substantial amount of code added during testing. We describe the methodology, give all of the relevant details, and discuss the results obtained. We pay particular attention to graphical methods that are easy to understand, and that provide effective summaries of the testing process. Some of the plots found useful by the software testers include: the Net Benefit Plot, which gives a running chart of the benefit; the Stopping Plot, which estimates the amount of additional time needed for testing; and diagnostic plots. To encourage other researchers to try out different models, all of the relevant data are provided.
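As a hedged illustration of the idea behind a running net-benefit chart (under a simple assumed economic model, not the authors' formulation), the sketch below accumulates the value of faults found minus the cost of testing time. The fault counts and cost parameters are hypothetical.

```python
def net_benefit_series(faults_per_day, value_per_fault, cost_per_day):
    """Running net benefit of continued testing: cumulative value of faults
    found minus cumulative testing cost (a simple assumed model)."""
    series, found = [], 0
    for day, faults in enumerate(faults_per_day, start=1):
        found += faults
        benefit = found * value_per_fault - day * cost_per_day
        series.append((day, found, benefit))
    return series

# Hypothetical daily fault counts from a system test.
daily_faults = [5, 4, 4, 2, 2, 1, 0, 1, 0, 0]
for day, found, benefit in net_benefit_series(daily_faults,
                                              value_per_fault=3.0,
                                              cost_per_day=4.0):
    print(f"day {day:2d}: {found:2d} faults found, net benefit {benefit:6.1f}")
```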

Proceedings ArticleDOI
25 Apr 1994
TL;DR: It was found that several multilevel, synthesized, robust path-delay-testable circuits require impractically long pseudo-random test sequences, so weighted random testing techniques developed for robust path delay testing are applied.
Abstract: The importance of delay testing is growing, especially for high-speed circuits. Delay testing using automatic test equipment is expensive. Built-in self-test can significantly reduce the cost of comprehensive delay testing by replacing the test equipment. It was found that several multilevel, synthesized, robust path delay testable circuits require impractically long pseudo-random test sequences. Weighted random testing techniques have been developed for robust path delay testing. The proposed technique is successfully applied to these circuits, and 100% robust path delay fault coverage is obtained using only 1-2 sets of weights.
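The weight-selection procedure is the paper's contribution and is not reproduced here; the sketch below only shows the generic idea of weighted random pattern generation, where each input is driven to 1 with its own probability. The weight values are hypothetical.

```python
import random

def weighted_random_patterns(weights, count, seed=0):
    """Generate pseudo-random test vectors in which input i is 1 with
    probability weights[i]."""
    rng = random.Random(seed)
    return [[1 if rng.random() < w else 0 for w in weights]
            for _ in range(count)]

# Hypothetical weight set biasing some inputs toward 1.
weights = [0.5, 0.9, 0.1, 0.75, 0.5]
for vector in weighted_random_patterns(weights, count=5):
    print(vector)
```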

Proceedings ArticleDOI
01 Dec 1994
TL;DR: It is argued that testing's primary goal should be to measure the dependability of tested software, and a plausible theory of dependability is needed to suggest and prove results about what test methods should be used, and under what circumstances.
Abstract: Testing is potentially the best grounded part of software engineering, since it deals with the well defined situation of a fixed program and a test (a finite collection of input values). However, the fundamental theory of program testing is in disarray. Part of the reason is a confusion of the goals of testing---what makes a test (or testing method) "good." I argue that testing's primary goal should be to measure the dependability of tested software. In support of this goal, a plausible theory of dependability is needed to suggest and prove results about what test methods should be used, and under what circumstances. Although the outlines of dependability theory are not yet clear, it is possible to identify some of the fundamental questions and problems that must be attacked, and to suggest promising approaches and research methods. Perhaps the hardest step in this research is admitting that we do not already have the answers.

Proceedings ArticleDOI
08 Aug 1994
TL;DR: The proposed approach is a transparent BIST technique combined with on-line error detection, which preserves the initial contents of the memory after the test and provides a high fault coverage for traditional fault and error models, as well as for pattern sensitive faults.
Abstract: This paper presents a new methodology for testing of bit-oriented and word-oriented RAMs based on circular test sequences, which can be used for periodic and manufacturing testing and require lower hardware and time overheads than the standard approaches. The proposed approach is a transparent BIST technique combined with on-line error detection, which preserves the initial contents of the memory after the test and provides a high fault coverage for traditional fault and error models, as well as for pattern sensitive faults. Our methodology is useful for embedded RAMs and MCM-implemented RAMs.

Proceedings ArticleDOI
06 Nov 1994
TL;DR: BOR testing is very effective at detecting faults in predicates, and the BOR-based approach has consistently better fault detection performance than branch testing, thorough (but informal) functional testing, simple state-based testing, and random testing.
Abstract: We report the results of three empirical studies of fault detection and stability performance of the predicate-based BOR (Boolean Operator) testing strategy. BOR testing is used to develop test cases based on formal software specification, or based on the implementation code. We evaluated the BOR strategy with respect to some other strategies by using Boolean expressions and actual software. We applied it to software specification cause-effect graphs of a safety-related real-time control system, and to a set of N-version programs. We found that BOR testing is very effective at detecting faults in predicates, and that the BOR-based approach has consistently better fault detection performance than branch testing, thorough (but informal) functional testing, simple state-based testing, and random testing. Our results indicate that the BOR test selection strategy is practical and effective for detection of faulty predicates and is suitable for generation of safety-sensitive test cases.

Proceedings Article
01 Aug 1994
TL;DR: A formal theory of model-based testing is presented, an algorithm for test generation based on it is proposed, and how testing is implemented by a diagnostic engine is outlined.
Abstract: We present a formal theory of model-based testing, an algorithm for test generation based on it, and outline how testing is implemented by a diagnostic engine. The key to making the complex task of test generation feasible for systems with continuous domains is the use of model abstraction. Tests can be generated using manageable finite models and then mapped back to a detailed level. We state conditions for the correctness of this approach and discuss the preconditions and scope of applicability of the theory.

Proceedings ArticleDOI
J. Katz1
02 Oct 1994
TL;DR: Techniques by which a test engineer can, while treating the processor as a black box, proceed efficiently through the debug process up to the point of final circuit analysis are described.
Abstract: RISC processors, including microSPARC, are becoming increasingly complex and are requiring more device expertise on the part of the test engineer. At the same time, the increasing complexity and sophistication of VLSI/ULSI testers also require higher levels of tester expertise. It is difficult, if not impossible, for today's test engineer to keep up with new testers every two to three years while trying to attain the design-level knowledge necessary to test and debug leading-edge processors. This paper describes techniques by which a test engineer can, while treating the processor as a black box, proceed efficiently through the debug process up to the point of final circuit analysis. This paper describes the various techniques used, providing examples of actual device data, both pre- and post-debug.

Proceedings ArticleDOI
Praerit Garg1
21 Dec 1994
TL;DR: An investigation into the correlation between "true" reliability of a software system and the white box testing measures such as block coverage, c-uses and p-uses coverage demonstrates that the estimated reliability is sensitive to the operational profile defined for the software.
Abstract: The focus of the work is an investigation into the correlation between "true" reliability of a software system and the white box testing measures such as block coverage, c-uses and p-uses coverage. We believe that software reliability and testing measures, especially white box testing, are inherently related. Results from experiments are presented to support this belief. We also demonstrate that the estimated reliability is sensitive to the operational profile defined for the software and hence errors in the operational profile may lead to incorrect reliability estimates.

01 Jan 1994
TL;DR: The ongoing development of a Tester's Assistant is described, which, in the long term, will include a specification-driven slicer for C programs, a test data generator, a coverage analyzer, and an execution monitor; applications to Unix security are indicated.
Abstract: : We consider an approach to testing that combines white-box and black-box techniques. Black-box testing is used for testing a program's effects against its specification. White-box testing is essential if subtle implementation errors are to be identified, e.g., errors due to race conditions. Full white-box testing is a large task. However, for many properties, only a small portion of the program is relevant hence property-based testing has the potential to substantially simplify much of the testing work. The portion of a program that relates to a given property can be identified through slicing. We describe the ongoing development of a Tester's Assistant, which in the long term, will include a specification-driven slicer for C programs, a test data generator, a coverage analyzer, an execution monitor. The slicer and execution monitor are described in this paper, and applications to Unix security are indicated. Security is an important application of property-based testing because of the subtle undetected security errors in delivered operating systems. It is also a promising application because of the (unexpectedly) concise specifications that capture most security requirements, and because of the operating system support for execution monitoring.

Proceedings Article
Praerit Garg1
31 Oct 1994
TL;DR: This work investigates the correlation between "true" reliability of a software system and the white box testing measures such as block coverage, c-uses and p-uses coverage, and demonstrates that the estimated reliability is sensitive to the operational profile defined for the software.
Abstract: The focus of this work is an investigation into the correlation between "true" reliability of a software system and the white box testing measures such as block coverage, c-uses and p-uses coverage. We believe that software reliability and testing measures, especially white box testing, are inherently related. Results from experiments are presented to support this belief. We also demonstrate that the estimated reliability is sensitive to the operational profile defined for the software and hence errors in the operational profile may lead to incorrect reliability estimates.

Journal ArticleDOI
TL;DR: It is argued that a higher level language uses fewer lines of code than a lower level language to achieve the same functionality, so testing the HLL program will require less effort than testing the LLL equivalent, and it is now possible to build a totally automated testing system.
Abstract: The testing phase of the software development process consumes about one-half of the development time and resources. This paper addresses the automation of the analysis stage of testing. Dual programming is introduced as one approach to implement this automation. It uses a higher level language to duplicate the functionality of the software under test. We contend that a higher level language (HLL) uses fewer lines of code than a lower level language (LLL) to achieve the same functionality, so testing the HLL program will require less effort than testing the LLL equivalent. The HLL program becomes the oracle for the LLL version. This paper describes experiments carried out using different categories of applications, and it identifies those most likely to profit from this approach. A metric is used to quantify savings realized. The results of the research are: (a) that dual programming can be used to automate the analysis stage of software testing; (b) that substantial savings of the cost of this testing phase can be realized when the appropriate pairing of primal and dual languages is made, and (c) that it is now possible to build a totally automated testing system. Recommendations are made regarding the applicability of the method to specific classes of applications.
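As a hedged sketch of the dual-programming idea (the HLL program acting as an oracle for the LLL implementation), the fragment below runs a lower-level implementation as an external program and compares its output against a one-line Python oracle. The executable name `./sum_lll` is hypothetical, so the call is left commented out.

```python
import subprocess

def dual_program_check(lll_command, hll_oracle, test_inputs):
    """Compare a lower-level-language implementation (run as an external
    program) against a higher-level-language oracle written in Python.
    `lll_command` is a hypothetical executable that reads one line of input
    and writes one line of output."""
    failures = []
    for value in test_inputs:
        result = subprocess.run(lll_command, input=f"{value}\n",
                                capture_output=True, text=True)
        actual = result.stdout.strip()
        expected = str(hll_oracle(value))
        if actual != expected:
            failures.append((value, expected, actual))
    return failures

# The HLL oracle: a one-line restatement of the intended functionality.
def oracle(n: int) -> int:
    return sum(range(1, n + 1))

# Example (assumes a compiled LLL binary named ./sum_lll exists):
# print(dual_program_check(["./sum_lll"], oracle, [0, 1, 10, 1000]))
```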

Proceedings ArticleDOI
15 Nov 1994
TL;DR: A novel method to generate test programs for functional verification of microprocessors is presented that combines schemes of random generation and specific sequence generation.
Abstract: A novel method to generate test programs for functional verification of microprocessors is presented. The method combines schemes of random generation and specific sequence generation. Four levels of hierarchical information are used to generate efficient test programs, including many complicated sequences. Considerations in test generation are also discussed.

Journal ArticleDOI
TL;DR: A new practical method, the domain-partition boundary method with software probes, and a test platform for testing real-time software embedded in protective relays are described, which is applicable to other microprocessor-based devices.
Abstract: This paper describes a new practical method, the domain-partition boundary method with software probes, and a test platform for testing real-time software embedded in protective relays. The test scheme automatically and efficiently exercises all functions of a relay in all of its operating domains, especially in the error-prone domains. While the test-case generation methodology belongs to the function-test class, it uses knowledge of the software modularity, and takes into consideration system specification and behavior, critical parameter boundary values and software flow in generating the test specification. TSL is used for managing the test-case generation process and avoids redundancy. A microcomputer test platform was designed and assembled. It is a simulator which produces voltage or current levels that correspond to events in the system. A control program manages the overall process, including test case generation, waveform production, process timing, event identification, data collection and result comparison. An implementation of the test scheme is described. Actual testing was conducted on a motor control and overload protection device for verifying its functions, but the test scheme is applicable to other microprocessor-based devices.
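The relay test platform's generator is not reproduced here; the sketch below only illustrates the generic boundary-value idea of picking values at and just inside each parameter's domain boundaries plus a nominal value. The parameter domains are hypothetical.

```python
def boundary_values(domains, epsilon=1):
    """For each numeric parameter, return values at and just inside its
    domain boundaries plus a nominal mid-range value."""
    cases = {}
    for name, (low, high) in domains.items():
        nominal = (low + high) / 2
        cases[name] = sorted({low, low + epsilon, nominal, high - epsilon, high})
    return cases

# Hypothetical operating domains for an overload-protection function.
domains = {
    "current_amps":  (0, 120),
    "trip_delay_ms": (50, 5000),
    "frequency_hz":  (45, 65),
}
for name, values in boundary_values(domains).items():
    print(name, values)
```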

Proceedings ArticleDOI
02 Oct 1994
TL;DR: A structured approach is presented to solve the problems of system-level testability, which consists of partitioning the system specification into testable parts, and inserting implementation-independent test functionality in the specification.
Abstract: As modern digital hardware/software systems become more complex, the testing of these systems throughout their entire system life cycle, including design verification, production testing, and field testing, becomes a severe problem. In this paper a structured approach is presented to solve the problems of system-level testability. A strategy towards design for system-level testability is introduced, which consists of partitioning the system specification into testable parts, and inserting implementation-independent test functionality in the specification. Incorporating these test requirements in the hardware/software implementation will considerably improve system-level testability. The design and implementation of a traffic-lights control system is presented as an example to illustrate the benefits of this approach.

Proceedings ArticleDOI
10 Jun 1994
TL;DR: Initial research is presented on using fault trees and event trees as oracles for testing safety-critical software systems to allow the developer to focus the usually limited amount of testing time on the detection of critical faults.
Abstract: With respect to safety-critical systems, specific techniques do exist for statically analyzing such systems. However, with respect to dynamic analyses (i.e., testing techniques), no specific techniques exist; instead, developers must use general-purpose testing techniques such as branch testing, path testing, and boundary-value testing. While certain other areas, such as real-time systems, have specific testing techniques (e.g., thread testing), safety-critical systems still lack such techniques. This paper, therefore, presents some initial research that addresses this problem. The techniques focus on using fault trees and event trees as oracles for testing safety-critical software systems. The goal is to allow the developer to focus the usually limited amount of testing time on the detection of critical faults. These techniques also have applications to other subsets of high-integrity systems (both software- and hardware-based systems). The effect of these techniques is to develop test cases that will reveal only critical faults (i.e., they ignore non-critical faults). >