
Showing papers on "Test harness" published in 1993


Journal ArticleDOI
TL;DR: A technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite by identifying, and then eliminating, the redundant and obsolete test cases in the test suite is presented.
Abstract: This paper presents a technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite. This selection is performed by identifying, and then eliminating, the redundant and obsolete test cases in the test suite. The representative set replaces the original test suite and thus, potentially produces a smaller test suite. The representative set can also be used to identify those test cases that should be rerun to test the program after it has been changed. Our technique is independent of the testing methodology and only requires an association between a testing requirement and the test cases that satisfy the requirement. We illustrate the technique using the data flow testing methodology. The reduction that is possible with our technique is illustrated by experimental results.
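
A minimal sketch of how such a representative set could be computed, assuming only the stated association between testing requirements and the test cases that satisfy them. The greedy heuristic is an illustrative choice, not necessarily the authors' exact procedure:

    def representative_set(requirements):
        """Greedily pick test cases until every requirement is covered.

        requirements: dict mapping a requirement to the set of test case
        ids that satisfy it. Obsolete tests (satisfying no requirement)
        never enter the result; redundant ones are skipped because their
        requirements are already covered.
        """
        uncovered = {r for r, tests in requirements.items() if tests}
        selected = set()
        while uncovered:
            # Pick the test covering the most still-uncovered requirements.
            best = max(
                {t for r in uncovered for t in requirements[r]},
                key=lambda t: sum(t in requirements[r] for r in uncovered),
            )
            selected.add(best)
            uncovered = {r for r in uncovered if best not in requirements[r]}
        return selected

    # Example: three requirements, four test cases; t2 and t4 are redundant.
    reqs = {"r1": {"t1", "t2"}, "r2": {"t1", "t4"}, "r3": {"t3"}}
    print(representative_set(reqs))  # {'t1', 't3'}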

630 citations


Patent
Kimberly L. Gross, Kirk D. Sullivan
26 Mar 1993
TL;DR: In this paper, an improved method of testing in a distributed environment is presented, comprising a Control Program residing in a Control Machine that directs test execution in the Test Machines and verifies the reported results.
Abstract: An improved method of testing in a distributed environment is described, comprising a Control Program residing in a Control Machine. The Control Machine also contains the central repository of information to control test execution in the Test Machines. The Control Program forwards instructions to a particular Test Program, residing in a Test Machine. The instructions are executed on that machine, and results are reported back to the Control Program. The Control Program verifies whether the results are correct. Depending on the results of the verification, the Control Program sends the test machine further instructions (to continue the test, stop the test, etc.). Logging the results of each test operation, keeping track of the tests performed, and coordinating the test cases are all performed on the Control Machine, by the Control Program.
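
A schematic sketch of the control loop the patent describes. The class names, message handling, and verification rule are illustrative assumptions, not the patented protocol:

    # Control Program / Test Machine interaction, reduced to its skeleton.
    class TestMachine:
        def __init__(self, name, behavior):
            self.name = name
            self.behavior = behavior           # stand-in for the real Test Program

        def execute(self, instruction):
            return self.behavior(instruction)  # run locally, report the result back

    class ControlProgram:
        def __init__(self, machines, expected):
            self.machines = machines
            self.expected = expected           # central repository of expected results
            self.log = []                      # all logging stays on the Control Machine

        def run(self, plan):
            for machine_name, instruction in plan:
                result = self.machines[machine_name].execute(instruction)
                ok = result == self.expected[instruction]
                self.log.append((machine_name, instruction, result, ok))
                if not ok:                     # verification failed: stop the test
                    break
            return self.log

    # Toy run: one Test Machine whose "test program" upper-cases instructions.
    tm = TestMachine("tm1", behavior=lambda instr: instr.upper())
    cp = ControlProgram({"tm1": tm}, expected={"ping": "PING"})
    print(cp.run([("tm1", "ping")]))  # [('tm1', 'ping', 'PING', True)]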

140 citations


Journal ArticleDOI
TL;DR: An approach to determining the consequences of a stop-test decision that combines software reliability engineering and economic analysis is described, and the benefits-to-cost ratio of the approach is shown to be very favorable.
Abstract: An approach to determining the consequences of a stop-test decision that combines software reliability engineering and economic analysis is described. The approach develops a model to quantify the economic consequences associated with terminating testing at a reliability achieved with a specified number of units of test-program execution, collects data on failures and program-execution time during system test, analyzes reliability data by selecting a reliability-growth model and fitting the model to these data at several points during system test, and applies the reliability model's estimated values to the economic model to determine the optimal system-release time. The benefits-to-cost ratio of the approach is shown to be very favorable.
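
A worked illustration of the stop-test trade-off, assuming Musa's basic execution-time model and a simple linear cost structure; the paper does not fix either choice here, and all parameter values below are invented:

    import math

    # Musa basic model: expected failures mu(t) = nu0 * (1 - exp(-(lam0/nu0)*t)).
    lam0, nu0 = 20.0, 100.0          # initial failure intensity, total expected failures
    c_test, c_field = 50.0, 2000.0   # cost per unit of test execution, cost per field failure

    def remaining_failures(t):
        return nu0 * math.exp(-(lam0 / nu0) * t)

    def total_cost(t):
        # Test cost grows with execution time; expected field-failure cost shrinks.
        return c_test * t + c_field * remaining_failures(t)

    # Crude grid search for the optimal release time.
    best_t = min((t / 10 for t in range(500)), key=total_cost)
    print(f"release at t = {best_t:.1f}, total cost = {total_cost(best_t):.0f}")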

95 citations


Proceedings ArticleDOI
01 Jul 1993
TL;DR: System testers using the proposed method have great flexibility in dealing with common system test problems: limited access to the system test environment, unstable software, or changing operational conditions.
Abstract: In this paper we introduce a new load testing technique called Deterministic Markov State Testing and report on its application. Our approach is called “deterministic” because the sequence of test case execution is set at planning time, and “state testing” because each test case certifies a unique software state. There are four main advantages of Deterministic Markov State Testing for system testers: provision of precise software state information for root cause analysis in load test, accommodation for limitations of the system test lab configuration, higher acceleration ratios in system test, and simple management of distributed execution of test cases. System testers using the proposed method have great flexibility in dealing with common system test problems: limited access to the system test environment, unstable software, or changing operational conditions. Because each test case verifies correct execution on a path from the idle state to the software state under test, our method does not require the continuous execution of all test cases. Deterministic Markov State Testing is operational-profile-based, and allows for measurement of software reliability robustness when the operational profile changes.
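
A rough sketch of the core idea: each test case is a fixed operation path from the idle state to the software state it certifies. The state graph and the shortest-path choice below are illustrative assumptions:

    from collections import deque

    # Illustrative operational state graph; edges are labeled operations.
    graph = {
        "idle":       {"dial": "connecting"},
        "connecting": {"answer": "talking", "timeout": "idle"},
        "talking":    {"hangup": "idle", "hold": "on_hold"},
        "on_hold":    {"resume": "talking"},
    }

    def path_to(target, start="idle"):
        """Shortest operation sequence from the idle state to `target` (BFS)."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, ops = queue.popleft()
            if state == target:
                return ops
            for op, nxt in graph[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, ops + [op]))
        return None

    # The deterministic sequence, fixed at planning time: one test case per state.
    for state in ("connecting", "talking", "on_hold"):
        print(state, "<-", path_to(state))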

57 citations


Proceedings ArticleDOI
01 Jul 1993
TL;DR: The CONVEX Integrated Test Environment (CITE) is discussed as an answer to the need for a more complete and powerful general purpose automated software test system.
Abstract: As software systems become more and more complex, both the complexity of the testing effort and the cost of maintaining the results of that effort increase proportionately. Most existing test environments lack the power and flexibility needed to adequately test significant software systems. The CONVEX Integrated Test Environment (CITE) is discussed as an answer to the need for a more complete and powerful general purpose automated software test system.

26 citations


Proceedings ArticleDOI
17 Oct 1993
TL;DR: An approach to systems test is presented that builds on earlier work by several design-for-test and test management standards committees and research teams; it is based on a generic model of a managed built-in test process supported by standard test descriptions.
Abstract: An approach to systems test is presented that builds on earlier work by several design-for-test and test management standards committees and research teams. The approach is based on a generic model of a managed built-in test process supported by standard test descriptions.

24 citations


Proceedings ArticleDOI
Todd Austin
17 Oct 1993
TL;DR: The development is described of both the test instrument models required for a design simulator to generate device test data and the software links that let the design simulator share data with the test programming environment.
Abstract: The ability to link mixed-signal IC design and test databases can shorten product development cycles in multiple ways. By allowing designers to simulate device tests and by giving test engineers access to the results, such a link promotes testability from the earliest stages of design, and generates data usable in test program development. Moreover, by enabling test program development to take place in simulation, design/test integration frees test engineers both to work in parallel with designers rather than having to wait for a fabricated device, and to debug test programs and hardware off-line at a workstation rather than waiting for time on a busy test system. This paper describes the development both of the test instrument models required for a design simulator to generate device test data and of the software links that let the design simulator share data with the test programming environment. The resulting integration supports concurrent design and test engineering efforts in developing new mixed-signal IC products.

20 citations


Proceedings ArticleDOI
16 Nov 1993
TL;DR: The LFSROM architecture presented herein is an attempt to solve the hardware cost problem without altering the initial test sequence, in order to preserve the minimal-sequence-length advantage of deterministic testing over pseudo-random and (pseudo-)exhaustive testing.
Abstract: Deterministic testing is by far the most interesting built-in self-test (BIST) technique because of the minimal number of test patterns required and of the known fault coverage. However, it is still not applicable since none of the existing deterministic test pattern generators (TPGs) is at the same time efficient and small. The LFSROM architecture which is presented herein is thus an attempt to solve the hardware cost problem without altering the initial test sequence, in order to preserve the minimal-sequence-length advantage of deterministic testing over pseudo-random and (pseudo-)exhaustive testing. The LFSROM concept is described, and several implementations of test sets generated for the ISCAS85 benchmark circuits have been compared with equivalent ROM designs; the results are reported in the form of curves and bar charts.
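
A toy software model of the two ingredients being combined: an LFSR for cheap on-chip pattern generation, plus a small ROM of seeds that steers it through a deterministic test set. The polynomial, widths, and seed-per-burst scheme are illustrative, not the LFSROM design itself:

    def lfsr_step(state, taps=0b1011, width=4):
        """One step of a Fibonacci LFSR: feedback is the XOR of the tapped bits."""
        fb = bin(state & taps).count("1") & 1
        return ((state << 1) | fb) & ((1 << width) - 1)

    def expand(seed, count):
        """Run the LFSR `count` steps from `seed`, collecting the states visited."""
        out, s = [], seed
        for _ in range(count):
            out.append(s)
            s = lfsr_step(s)
        return out

    # ROM of seeds: each seed deterministically expands into a burst of patterns,
    # so the ROM stores a few seeds instead of every pattern verbatim.
    seed_rom = [0b0001, 0b0110, 0b1101]
    test_sequence = [p for seed in seed_rom for p in expand(seed, 4)]
    print([f"{p:04b}" for p in test_sequence])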

12 citations


Proceedings ArticleDOI
17 Oct 1993
TL;DR: This paper describes the process by which an 8-bit microcontroller was put into production test by testing four devices simultaneously, with an emphasis on the additional performance gained from our test system.
Abstract: This paper describes the process by which an 8-bit microcontroller was put into production test by testing four devices simultaneously. The test setup used was a pick-and-place handler and a VLSI ATE (automated test equipment) tester. The process used to implement this new testing method is presented. Hardware and software constraints are explored, with an emphasis on the additional performance gained from our test system.

10 citations


01 Jan 1993
TL;DR: The main techniques that have been employed in these generators are reviewed, giving the advantages and disadvantages of this approach, and how it might be applied to a far wider range of software as a powerful software tool to complement other methods of testing.
Abstract: The automatic generation of test data has been used as a tool for the black box testing of both compilers and electronic hardware, but only rarely has it been applied to the testing of other types of software. This paper reviews the main techniques that have been employed in these generators, giving the advantages and disadvantages of this approach and showing how it might be applied to a far wider range of software as a powerful software tool to complement other methods of testing.
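
A toy version of the technique the paper surveys: generating random test programs from a grammar and using a reference implementation as the oracle. The grammar, weights, and depth bound are invented for illustration:

    import random

    # A tiny expression grammar; each nonterminal maps to weighted alternatives.
    grammar = {
        "expr":   [(["term", " + ", "expr"], 1), (["term"], 3)],
        "term":   [(["factor", " * ", "term"], 1), (["factor"], 3)],
        "factor": [(["( ", "expr", " )"], 1), (["num"], 3)],
    }

    def generate(symbol="expr", depth=0, max_depth=6):
        if symbol == "num":
            return str(random.randint(0, 9))
        if symbol not in grammar:              # terminal token: emit verbatim
            return symbol
        alts = grammar[symbol]
        if depth >= max_depth:                 # force termination: last (shortest) alternative
            alts = [alts[-1]]
        prod = random.choices([a for a, _ in alts], weights=[w for _, w in alts])[0]
        return "".join(generate(s, depth + 1, max_depth) for s in prod)

    random.seed(1993)
    for _ in range(3):
        src = generate()
        print(src, "=", eval(src))  # the host language serves as the reference oracle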

8 citations


Proceedings ArticleDOI
05 May 1993
TL;DR: The structure of a knowledge base intended to capture potentially useful refinements, based either upon the expert knowledge of a tester or upon the software faults uncovered in prior, related projects is described.
Abstract: Software testing criteria produce test descriptions that may be viewed as systems of constraints describing desired test cases. Refinement of test descriptions is possible by adding additional constraints to each test description, reducing the solution space and focusing attention upon tests that are more likely to reveal faults. This paper describes the structure of a knowledge base intended to capture potentially useful refinements, based either upon the expert knowledge of a tester or upon the software faults uncovered in prior, related projects.

Proceedings ArticleDOI
H. Bouwmeester, S. Oostdijk, F. Bouwmann, R. Stans, L. Thijssen, F. Beenker
17 Oct 1993
TL;DR: A classification of methods for reducing the test time of a device by exploiting parallelism in Macro Test is presented, and it is shown that significant reductions in test time can be achieved without design modifications.
Abstract: Increasing complexity of modern designs and the high cost of test equipment are putting more and more emphasis on test application times. This paper presents a classification of methods for reducing the test time of a device by exploiting parallelism in Macro Test. Techniques and considerations are given for different methods of parallel testing. It is shown that significant reductions in test time can be achieved without design modifications. To obtain a further test time reduction, resource-sharing conflicts are analyzed in order to decide which design modifications can best be made. As a result, a trade-off can be made between test time and additional testability hardware. Results of one of the methods of parallel testing are given for two industrial devices. Test time reductions of up to 40-50% compared to sequential approaches have been achieved without making any design modifications.
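
One simple way to picture the scheduling problem: macros whose test-resource sets are disjoint can share a parallel test session. The greedy grouping below is an illustrative stand-in for the paper's conflict analysis:

    # Each macro lists the test resources (pins, channels, ...) it needs;
    # macros with disjoint resource sets may be tested in parallel.
    macros = {
        "m1": {"scan_a", "clk1"},
        "m2": {"scan_b", "clk1"},   # conflicts with m1 on clk1
        "m3": {"scan_c", "clk2"},
        "m4": {"scan_b", "clk2"},   # conflicts with m2 and m3
    }

    sessions = []
    for name, res in macros.items():
        for session in sessions:
            if all(res.isdisjoint(macros[other]) for other in session):
                session.append(name)    # fits alongside everything already here
                break
        else:
            sessions.append([name])     # conflicts everywhere: new sequential session

    print(sessions)  # [['m1', 'm3'], ['m2'], ['m4']] -- 3 sessions instead of 4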

Book ChapterDOI
01 Jan 1993
TL;DR: Functional statistical test sets have the highest fault revealing power and are the most cost-effective when an input distribution and a number of random inputs are determined according to criteria relating to software functionality.
Abstract: Statistical testing involves exercising a piece of software by supplying it with input values that are randomly selected according to a defined probability distribution over its input domain. This paper focuses on functional statistical testing, that is, when an input distribution and a number of random inputs are determined according to criteria relating to software functionality. The criteria, based on models of behavior deduced from the specification, i.e., finite-state machines and decision tables, are defined. The modeling approach involves a hierarchical decomposition of software functionality. It is applied to a module from the nuclear field. Functional statistical test sets are designed and applied to two versions of the module: the real version, and one developed by a student. Twelve residual faults are revealed, eleven of which affect the student's version. The other fault is quite subtle, since it resides in the driver that we have developed for the real version in our experimental test harness. Two other input distributions are experimented with: the uniform distribution over the input domain and a structural distribution determined so as to rapidly exercise all the instructions of the student's version. The results show that the functional statistical test sets have the highest fault revealing power and are the most cost-effective.
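
A minimal sketch of the statistical-testing loop itself: draw random inputs from a functionally motivated distribution and compare two versions of a module. The distribution, module, and fault below are all invented:

    import random

    def reference(x):      # stand-in for the real version of the module
        return max(0, min(x, 100))

    def student(x):        # stand-in for the student's version, with a seeded fault
        return min(x, 100) if x >= 0 else x   # fails to clamp negative inputs

    # Functional input distribution: weight the regions the specification
    # distinguishes (below range, in range, above range) instead of sampling
    # uniformly over all integers.
    def draw():
        region = random.choices(["below", "in", "above"], weights=[1, 2, 1])[0]
        return {"below": random.randint(-50, -1),
                "in":    random.randint(0, 100),
                "above": random.randint(101, 150)}[region]

    random.seed(7)
    failures = [x for x in (draw() for _ in range(1000)) if student(x) != reference(x)]
    print(f"{len(failures)} failing inputs, e.g. {failures[:3]}")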

Proceedings ArticleDOI
30 Aug 1993
TL;DR: A test system which provides a series of documents to verify that the testing process has been carried out properly and that the test objectives have been met is described.
Abstract: The ANSI/IEEE standard for software test documentation calls for the production of a series of documents to verify that the testing process has been carried out properly and that the test objectives have been met. Without automated tool support, the costs of such test documentation are prohibitive in all but the most trivial projects. The paper describes a test system which provides such a service. It begins with a test plan frame as a master class, from which the class test design is then derived. From it, various test procedure classes are generated which serve to generate the individual objects: test cases specified in the form of pre- and post-condition assertions to be executed in test suites.

Proceedings ArticleDOI
20 Sep 1993
TL;DR: A new approach is described for converting a minimal set of test requirements and a set of test vectors into a working test program set (TPS).
Abstract: A new approach is described for converting a minimal set of test requirements and a set of test vectors into a working test program set (TPS). A test requirements file (TRF) expresses all of the necessary information for testing the logical functions and electrical and switching performance of a device-under-test. The TRF and the test vector files are processed automatically to produce the TPS. Test requirements are audited for correctness or reasonableness, and some types of tests can be generated automatically if their specifications are omitted.

Proceedings ArticleDOI
08 Sep 1993
TL;DR: The MDIS acceptance test is the government evaluation of the MDIS system availability, performance, and functionality and is composed of three test scenarios which are: system/component availability test, component performance test, and system integration test.
Abstract: The MDIS acceptance test is the government evaluation of the MDIS system availability, performance, and functionality. The test is performed over a period of 30 days and is composed of three test scenarios which are: (1) System/component availability test, (2) component performance test, and (3) system integration test. A test protocol describes each test in detail, as well as describing the actions of the vendor and test teams during the test period. Fifty-six test modules were derived directly from the performance criteria specified in the MDIS contract. These test modules are used by the test team members to evaluate the MDIS system. A database is used to compile the results of the acceptance test for management and comparison of multiple MDIS site results.

Patent
28 Oct 1993
TL;DR: In this article, a test program is loaded into a RAM on the interactive network board through the test interface, and the test program resident in the RAM is activated, and checkpoint test results are outputted after completion of test program.
Abstract: Method and apparatus for testing an interactive network board having a local area network interface, a Small Computer System Interface, and a test interface comprises supplying power to the interactive board, and performing a power-on self-test program within the interactive board. At the completion of the power-on self-test, a test program is loaded into a RAM on the interactive network board through the test interface, and the test program resident in the RAM is activated. The test program is executed and checkpoint test results are outputted after completion of the test program. A test computer is provided to receive the checkpoint test result and may script additional tests in accordance with checkpoint test results. Preferably, at the completion of the test program, ROM-resident firmware is downloaded into the RAM on the interactive board, and the firmware is loaded from the RAM into a ROM on the interactive network board.

Proceedings ArticleDOI
20 Sep 1993
TL;DR: In this article, the authors address the practical issues of using such virtual instruments during the development of test systems, evaluate the cost savings incurred through this development methodology, and describe future capabilities and advantages of using virtual instruments for other development and test systems.
Abstract: SYTRONICS used modeled instruments during the recent development of the Engine Monitoring and Control System (EMCS) completed for the US Navy, in which the engine was modeled and simulated for the purposes of developing, testing, and verifying the Engine Test Cell system. This simulated engine provided a virtual instrument for use in the development of the Test Cell. The Test Cell was originally developed for testing the T-406 (V-22) engine at the Navy's Cherry Point Depot. Currently, the Test Cell is in use at this facility testing other engines. This paper addresses the practical issues of using such virtual instruments during the development of test systems. It also evaluates the cost savings incurred through this development methodology and describes future capabilities and advantages of using virtual instruments for other development and test systems.

Proceedings ArticleDOI
06 Sep 1993
TL;DR: The authors develop a test selection method with respect to a general distributed test architecture used for testing distributed systems, based on the Open Distributed Processing (ODP) Basic Reference Model.
Abstract: ISO (the International Organization for Standardization) developed the ISO distributed test architecture for testing layered protocols. Furthermore, a general distributed test architecture, in which the IUT (implementation under test) contains several distributed ports, is used for testing distributed systems, based on the Open Distributed Processing (ODP) Basic Reference Model (BRM). In this architecture, the testers cannot communicate or synchronize with one another unless they communicate through the IUT, and no global clock is available in the system. This architecture could model a test architecture of a communication network with n accessing nodes, where the testers reside in these nodes. When n=2, this general distributed test architecture reduces to the ISO distributed test architecture. The authors develop a test selection method with respect to this general distributed test architecture.

Journal ArticleDOI
TL;DR: The focus of this article is on the automated test program synthesis techniques employed in BOLD that can greatly reduce test program development costs.
Abstract: BOLD is a system that supports several test aspects of digital hardware units, such as chips, modules (boards), and systems. BOLD consists of three main components, namely a design methodology, special hardware structures, and special test languages and their associated compilers. The goal of the BOLD system is to make it feasible for an engineer to efficiently develop high quality tests for hardware units. These tests usually consist of (1) tests for faults that are internal to a unit and that are supported by one or more design-for-test or built-in self-test methodologies, and (2) interconnect tests between units. The main idea behind BOLD is to be able to easily compose tests for low level hardware units to create a test for a higher level unit.

Proceedings ArticleDOI
28 Sep 1993
TL;DR: This paper discusses the relative merits of various techniques for extending IEEE 1149.1 over a backplane; the alternatives are evaluated in terms of test capability and efficiency, fault tolerance, and board real-estate and component overhead.
Abstract: High fault coverage embedded system test is a desirable capability in many applications. The test requirement may be for on-line test, remote test capability, or rapid field diagnostics and repair. Two industry standards have recently become available for this purpose. The serial 4-wire IEEE 1149.1 boundary scan standard is widely used for board test and provides interconnect test and access to device functionality. Although primarily intended as an aid to prototype and manufacturing test, 1149.1 can also be extended over the backplane by the use of commercially available hardware. The proposed IEEE P1149.5 standard is modeled on the DOD's TM bus and is a fault-tolerant, 5-wire, packet-based master-slave communications protocol for up to 251 modules. This paper discusses the relative merits of various techniques for extending 1149.1 over a backplane. The alternatives are evaluated in terms of test capability and efficiency, fault tolerance, and board real-estate and component overhead.
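
For a feel of what interconnect test means in the boundary-scan setting, here is the classic counting-sequence construction: each net gets a unique code, so shorted nets come back with identical, merged codes. This is a textbook technique, not this paper's contribution:

    import math

    def counting_sequence(nets):
        """Assign each net a unique code; vector i drives bit i of every code."""
        width = max(1, math.ceil(math.log2(len(nets) + 2)))  # skip all-0/all-1 codes
        codes = {net: i + 1 for i, net in enumerate(nets)}   # codes 1..n, 0 unused
        return [{net: (code >> bit) & 1 for net, code in codes.items()}
                for bit in range(width)]

    for vector in counting_sequence(["n1", "n2", "n3"]):
        print(vector)
    # ceil(log2(n+2)) vectors suffice, versus n vectors for a walking-ones test.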

01 Jan 1993
TL;DR: The chaining approach significantly improves the effectiveness of the process of test data generation; test data generation for distributed software introduces new research challenges which do not exist in the testing of sequential programs because of nonreproducible execution behavior.
Abstract: Software testing is very labor-intensive and expensive; it accounts for a significant portion of the cost of software system development. If the testing process could be automated, then the cost of developing software should be reduced significantly. The aim of this dissertation is to improve the process of automated test data generation for sequential and distributed programs. Test data generation in software testing is the process of identifying program inputs which satisfy a selected testing criterion. A structural testing coverage criterion is the requirement that certain program statements, or combinations thereof, be exercised (e.g., statement coverage, branch coverage). In our approach, referred to as the chaining approach, test data are derived based on the actual execution of the program under test. The approach starts by executing the program for an arbitrary program input. When the program is executed, the program execution flow is monitored. If an undesirable execution flow is observed at some branch (p,q) (e.g., the current branch doesn't lead to the selected program element), then a search algorithm is used to find a different input to change the flow of execution at this branch. If the flow cannot be changed, then the chaining approach uses dependency analysis to identify statements in the program which are to be executed prior to this branch (p,q); as a result, the flow of execution can be altered at this branch, allowing execution to continue. The chaining approach significantly improves the effectiveness of the process of test data generation. Finally, we have performed research on test data generation for distributed programs. Test data generation for distributed programs introduces new research challenges which do not exist in the testing of sequential programs, because of the nonreproducible execution behavior. For distributed software, we have proposed two approaches: the path-oriented approach and the chaining approach.
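
A stripped-down illustration of the execution-based search step. The branch-distance function and the alternating-variable search are the standard framing in this line of work; the target branch is invented:

    # Target branch: we want `x * 2 - y == 10` to take its true edge.
    def branch_distance(inputs):
        x, y = inputs
        return abs((x * 2 - y) - 10)   # zero exactly when the branch goes our way

    def alternating_variable_search(inputs, steps=1000):
        inputs = list(inputs)
        for _ in range(steps):
            if branch_distance(inputs) == 0:
                return inputs                  # covering test data found
            for i in range(len(inputs)):       # probe each input variable in turn
                for delta in (+1, -1):
                    trial = list(inputs)
                    trial[i] += delta
                    if branch_distance(trial) < branch_distance(inputs):
                        inputs = trial
                        break
                else:
                    continue
                break
            else:
                # Local minimum: this is where chaining would look for earlier
                # statements (via dependency analysis) that influence the branch.
                return None
        return inputs

    print(alternating_variable_search([0, 0]))  # [5, 0]: 5*2 - 0 == 10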

Proceedings ArticleDOI
19 Apr 1993
TL;DR: The generic nature of TPL and the use of a standard software environment have been demonstrated to allow good portability of test programs between different test systems.
Abstract: A standard, generic, test program language (TPL) has been defined to ease the test program development task for analog and mixed-signal circuits. The language is used in the construction of links between simulation and test. The TPL allows designers to take responsibility for test program development, without the need for detailed knowledge of specific test instruments. The TPL test file is generated and executed through a Motif interface running under X-windows. The generic nature of TPL and the use of a standard software environment have been demonstrated to allow good portability of test programs between different test systems.

Proceedings ArticleDOI
R. Naiknaware
03 Oct 1993
TL;DR: The article addresses the issues involved in each stage of the automatic test program generation process and explains how it needs to be integrated with the design, manufacturing, and testing process of analog ICs designed using the newly emerging top-down modular design approach.
Abstract: Analog automatic test plan generation (AATPG) was considered to be a difficult task until recently. We have developed a method to automate test plan generation for analog ICs which are designed using a modular design concept similar to that of digital ICs. However, it is still not clear exactly how this method fits into the design and manufacturing process. Overall, the automated test program generation process requires intricate knowledge of the design database of the chip and of the destination automatic test equipment (ATE) on which online testing of the ICs is to be performed. At the same time, consideration needs to be given to the intermediate stages: extracting information from the design database, selecting an appropriate method for test plan generation, test method specifications, the test plan format, the test-plan-to-test-program translator, optimization of the overall test plan and the corresponding tester-specific test program, deletion and addition of tests, simulation of the entire test environment including the device under test (DUT), extracting information on non-testable areas, and design alteration for higher test coverage. The article addresses the issues involved in each stage of the automatic test program generation process and explains how it needs to be integrated with the design, manufacturing, and testing process of analog ICs designed using the newly emerging top-down modular design approach.

Journal Article
TL;DR: TPGEN simulates the execution of a test program as it is generated, and if an abnormal event such as a zero divide or an infinite loop is detected, TPGEN backtracks to the specified position and selects an alternative production rule to avoid the abnormal execution.
Abstract: This paper presents a test program generator called TPGEN, which is based on an attributed grammar. TPGEN generates a wide variety of test programs, mainly for programming language processors. The generated test programs are executable and include self-checking code for validating execution results. The generated test programs are guaranteed to achieve a specified testing coverage. TPGEN simulates the execution of a test program as it is being generated, and if an abnormal event such as a zero divide or an infinite loop is detected, TPGEN backtracks to the specified position and selects an alternative production rule to avoid the abnormal execution. The introduction of this mechanism has succeeded in generating a wide variety of programs with complex structures.
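
A toy analogue of TPGEN's backtracking mechanism: simulate each candidate fragment as it is generated and, on an abnormal event (here a zero divide), discard the choice and try other productions. The expression grammar is invented; TPGEN itself targets full programming languages:

    import random

    random.seed(4)

    def gen_operand(depth):
        if depth > 2 or random.random() < 0.6:
            return str(random.randint(0, 5))
        return gen_expr(depth + 1)

    def gen_expr(depth=0):
        """Generate `(a op b)`, simulating execution; back-track on zero divide."""
        for _ in range(20):                # bounded retries = backtracking budget
            a, op, b = gen_operand(depth), random.choice("+-*/"), gen_operand(depth)
            expr = f"({a} {op} {b})"
            try:
                eval(expr)                 # simulate the fragment just generated
            except ZeroDivisionError:
                continue                   # abnormal event: choose other productions
            return expr
        return "(1 + 1)"                   # fallback after exhausting the budget

    expr = gen_expr()
    # Self-checking output: the expected value is embedded at generation time.
    print(f"assert abs({expr} - ({eval(expr)})) < 1e-9")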

Proceedings ArticleDOI
22 Feb 1993
TL;DR: A protocol for defining the structural relations is proposed, and the protocol's usefulness is demonstrated by applying it to a real-life situation, namely, defining a test structure for the NELSIS IC Design System.
Abstract: The systematic testing of a large CAD tool-set requires a complex collection of tests called a test-suite. The test-suite is managed and driven by a test system (harness). The goal is to reduce the effort involved in the definition of the test-suite through a systematic approach. In the present approach, a type of composite test, called a device test, that is constructed from primitive tests is described. This enables splitting the test-suite definition into two activities: the definition of the test structure for each device test, and the definition of the structural relations between the device tests. The advantage of this approach is that each device test may be constructed and executed independently. This enhances the modularity of the architecture by reducing the dependencies in the test suite. The emphasis is on the structural relations between the device tests. A protocol for defining the structural relations is proposed, and the protocol's usefulness is demonstrated by applying it to a real-life situation, namely, defining a test structure for the NELSIS IC Design System.

Journal ArticleDOI
TL;DR: This paper presents a testing process based on a test suite generated in advance with the aim of developing an active tester which arranges its actions in accordance with the observed test events but just within the scope of the test suite.

Proceedings ArticleDOI
20 Sep 1993
TL;DR: This paper describes a flexible and comprehensive approach to incorporating diagnostic capabilities into test program sets (TPS) that was utilized to test components of a Westinghouse airborne radar system.
Abstract: This paper describes a flexible and comprehensive approach to incorporating diagnostic capabilities into test program sets (TPS). This approach was utilized to test components of a Westinghouse airborne radar system. This paper will relate what did and did not work during the actual use of this approach. For example, diagnostics must be part of the design of both the hardware and software to produce a concise and efficient TPS. The integration of hardware and software capabilities must be used in a concerted effort to minimize erroneous results when manual intervention is required.