
Showing papers on "Test suite" published in 1990


Proceedings ArticleDOI
26 Nov 1990
TL;DR: The authors present a technique that selects from a test suite a representative set of test cases providing the same measure of coverage as the full suite, illustrated by means of the data flow testing methodology.
Abstract: As a result of modifications to a program during the maintenance phase, the size of a test suite used for regression testing can become unmanageable. The authors present a technique that selects from a test suite a representative set of test cases that provides the same measure of coverage as the full suite. This selection is performed by identifying the redundant and obsolete test cases in the test suite. The representative set can be substituted for the full test suite, reducing its size, and can also be used to determine which test cases should be rerun after the program has been changed. The technique is independent of the testing methodology and requires only an association between each testing requirement and the test cases that satisfy it. The technique is illustrated by means of the data flow testing methodology, and experimental studies demonstrating its effectiveness are under way.

510 citations
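The technique above needs only the association between each testing requirement and the test cases that satisfy it. A minimal sketch of one coverage-preserving reduction of this kind, using a greedy heuristic over a hypothetical requirement-to-test mapping (the paper's own selection algorithm may differ):

```python
def reduce_suite(coverage):
    """Greedily select a representative set of test cases that satisfies
    every requirement covered by the original suite.

    coverage: dict mapping test case -> set of requirements it satisfies.
    """
    uncovered = set().union(*coverage.values())  # all requirements
    remaining = dict(coverage)
    representative = []
    while uncovered:
        # Pick the test that covers the most still-uncovered requirements.
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        representative.append(best)
        uncovered -= remaining.pop(best)
    return representative

# Hypothetical association between data-flow requirements (e.g. def-use
# pairs) and the test cases that satisfy them.
suite = {
    "t1": {"du1", "du2"},
    "t2": {"du2", "du3"},
    "t3": {"du1", "du2", "du3"},  # makes t1 and t2 redundant
    "t4": {"du4"},
}
print(reduce_suite(suite))  # -> ['t3', 't4']
```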


Journal ArticleDOI
TL;DR: In this article, the authors address the question of how to decide which test cases to rerun after a modification, and emphasize that it is important that these tests be selected systematically, because executing an entire test suite to validate a few modifications can consume large amounts of time and computational resources.
Abstract: The authors address the question of how to decide which test cases to rerun after a modification. They emphasize that these tests should be selected systematically, because executing an entire test suite to validate a few modifications can consume large amounts of time and computational resources and involve many people, while selecting test cases intuitively or randomly is unreliable. They develop a revalidation strategy based on an extension of the Fischer algorithm (see K.F. Fischer et al., Proc. Nat. Telecom. Conf., 1981, p.B6.3.1-B6.3.6). Fischer's revalidation technique is based on a zero-one integer programming model. The authors implement a prototype environment based on this methodology.

110 citations
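Fischer's model is a zero-one integer program: one binary variable per test case, minimizing the number of tests rerun subject to every modification-affected requirement being exercised by at least one selected test. A minimal brute-force sketch of that formulation (the test/requirement matrix is hypothetical; a real tool would call an ILP solver rather than enumerate):

```python
from itertools import product

# Hypothetical data: tests[i][j] == 1 if test i exercises affected requirement j.
tests = [
    [1, 0, 1],  # t0
    [0, 1, 0],  # t1
    [1, 1, 0],  # t2
]

def select_retests(tests):
    """Enumerate all 0-1 assignments x and keep the cheapest feasible one:
    minimize sum(x) subject to sum_i x[i]*tests[i][j] >= 1 for every j."""
    n_tests, n_reqs = len(tests), len(tests[0])
    best = None
    for x in product((0, 1), repeat=n_tests):
        feasible = all(
            sum(x[i] * tests[i][j] for i in range(n_tests)) >= 1
            for j in range(n_reqs)
        )
        if feasible and (best is None or sum(x) < sum(best)):
            best = x
    return [i for i, xi in enumerate(best) if xi]

print(select_retests(tests))  # [0, 2] (or [0, 1]): two tests cover all three requirements
```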


Patent
21 Aug 1990
TL;DR: In this article, a control system and methodology for defining and executing parametric test sequences using automated test equipment is presented, where the control system is divided into components which separate fixed, reusable information from information which is specific to particular tests.
Abstract: Disclosed is a control system and methodology used for defining and executing parametric test sequences using automated test equipment. The control system is divided into components which separate fixed, reusable information from information which is specific to particular tests. One component contains reference data which describes the configuration of the wafer being tested as well as specifications for the tests to be carried out. Another component contains a set of measurement algorithms that describe individual tests to be performed on generic types of devices or parametric test structures. Execution of a test is carried out by a general test program which retrieves stored reference and test definition information and supplies it to the measurement algorithms to enable them to perform measurements on specific devices in the user-specified sequence. The general test program additionally routes the measurement results obtained from the algorithms to data files and/or networks, and summarizes the results in a standardized report format.

91 citations
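The separation the patent describes (fixed, reusable measurement algorithms bound at run time to per-test reference data by a general test program) can be illustrated with a short sketch. All device types, algorithm bodies, and field names here are hypothetical placeholders:

```python
# Reusable measurement algorithms for generic device types.
def measure_resistor(spec):
    # Placeholder: would force a current and measure voltage on the tester.
    return {"resistance_ohm": spec["nominal_ohm"]}

def measure_transistor(spec):
    # Placeholder: would sweep gate voltage and extract the threshold.
    return {"vth_v": spec["nominal_vth_v"]}

ALGORITHMS = {"resistor": measure_resistor, "transistor": measure_transistor}

# Reference data: wafer configuration plus per-structure test specifications.
reference_data = [
    {"structure": "R1", "type": "resistor",   "spec": {"nominal_ohm": 1000}},
    {"structure": "M1", "type": "transistor", "spec": {"nominal_vth_v": 0.7}},
]

def run_sequence(reference_data):
    """General test program: feed stored specs to the generic algorithms
    and route the results into a standardized report."""
    report = []
    for entry in reference_data:
        result = ALGORITHMS[entry["type"]](entry["spec"])
        report.append((entry["structure"], result))
    return report

for structure, result in run_sequence(reference_data):
    print(f"{structure}: {result}")
```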


Journal ArticleDOI
TL;DR: The derivation of conformance tests for communication protocols is discussed; laws are presented for handling basic LOTOS operators, and the conversion of the test processes into finite test suites is addressed.
Abstract: The derivation of conformance tests for communication protocols is discussed. Protocol specifications are considered in the formal description technique LOTOS, which has been developed by the International Organization for Standardization. Test processes which preserve the structure of the protocol specifications are constructed. Laws are presented for handling basic LOTOS operators. The test processes obtained by applying these laws are related to the theoretical notion of canonical testers. Finally, the conversion of the test processes into finite test suites and the relationship of the resulting suites to current practice in test suite design are discussed.

57 citations


Proceedings ArticleDOI
V. Monie
17 Sep 1990
TL;DR: A software program called CoMETS (comprehensive design and maintenance environment for test program sets), which gives the test designer a powerful tool to develop test programs free from the encumbrances imposed by the test language and target tester, is discussed.
Abstract: A software program called CoMETS (comprehensive design and maintenance environment for test program sets), which gives the test designer a powerful tool to develop test programs free from the encumbrances imposed by the test language and target tester, is discussed. CoMETS uses language-neutral graphical symbols to express the flow of a test sequence, detail the expression of individual tests, create and edit test libraries, etc. An overview of the approach is given, and the characteristics of a good tool set are examined. The CoMETS tool set is described, and a simple test program using CoMETS is considered. The savings realized are discussed, and four scenarios typically encountered in test-program-set development that were readily handled by CoMETS are illustrated.

36 citations


Journal ArticleDOI
TL;DR: A test suite for evaluating hearing aids has been developed that produces a more complete evaluation than any previous set of tests and is suitable for the automatic evaluation of a hearing aid containing unknown processing.
Abstract: A test suite has been developed for evaluating hearing aids. The tests in the suite are frequency response, number of processing bands and type of processing, input/output characteristics, processing attack and release times, and broadband distortion. The test suite produces a more complete evaluation of a hearing aid than any previous set of tests, and is suitable for the automatic evaluation of a hearing aid containing unknown processing. The test procedures are described, and sample test results are presented for simulated linear and two-channel compression hearing aids.

17 citations
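As one illustration, an input/output characteristic of the kind the suite measures can be obtained by driving the unknown processing with tones at increasing levels and recording the output level at each step. A minimal sketch under that assumption (the compression model and levels are hypothetical, not the paper's procedure):

```python
import math

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def compressor(signal, threshold=0.1, ratio=2.0):
    # Hypothetical stand-in for the unknown hearing-aid processing.
    out = []
    for s in signal:
        a = abs(s)
        gain = 1.0 if a <= threshold else threshold * (a / threshold) ** (1 / ratio) / a
        out.append(s * gain)
    return out

def io_characteristic(process, levels, freq=1000.0, rate=16000, n=1600):
    """Feed a pure tone at each input level through the device under test
    and record the output level, yielding the I/O curve."""
    curve = []
    for level in levels:
        tone = [level * math.sin(2 * math.pi * freq * t / rate) for t in range(n)]
        curve.append((level, rms(process(tone))))
    return curve

for lin, lout in io_characteristic(compressor, [0.01, 0.05, 0.1, 0.2, 0.4]):
    print(f"in {lin:.2f} -> out {lout:.3f}")  # compression visible above threshold
```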


Patent
28 Nov 1990
TL;DR: In this article, a means is provided that can treat not only an actual computer 10 but also virtual computers 4-1 and 4-2 as the computer under test, and that executes the automatic test in either case or in both.
Abstract: PURPOSE: To allow tests set up by several testers to be executed simultaneously, by making the procedures concerning test circumstances, test resources, and test execution independent of one another. CONSTITUTION: A means is provided that can treat not only an actual computer 10 but also virtual computers 4-1 and 4-2 as the computer under test, and that executes the automatic test in any of these configurations. Test control procedures, which prescribe the operating conditions required to execute a test on the target computer, are made independent of test execution resource procedures, which prescribe the test job and the data (such as commands) to be used in the test. Several testers can therefore generate procedures independently of one another, improving the productivity of test-item development. COPYRIGHT: (C)1992,JPO&Japio

12 citations


01 Jan 1990
TL;DR: The Rule-based, Intelligent Test Case Generator (RITCaG) is a test tool that automatically generates unique and relevant test cases for rule-based expert systems and captures the intuition and experience of the tester by allowing the user to selectively implement test cases.
Abstract: The Rule-based, Intelligent Test Case Generator (RITCaG) is a test tool that automatically generates unique and relevant test cases for rule-based expert systems. RITCaG interactively divides a knowledge base into contexts or segments. RITCaG uses an object-oriented architecture to represent contexts, rules and conditions in a knowledge base. This architecture supports the validation and verification of the frequent changes and updates that are made to a knowledge base. Each test case generated by RITCaG is also represented as an object that can be easily viewed. Equivalence classes are used as a strategy to reduce the size of the input domain to be tested. This strategy is highly beneficial to the test process because the benefits of exhaustive testing can be derived from a reduced input set. Two types of equivalence classes are created in RITCaG. The Legal Equivalence Class is based on legal and valid inputs while the Illegal Equivalence Class is based on illegal and invalid inputs. RITCaG generates two types of test cases: Error-free and Error-seed. Error-free test cases are designed to test if a system's performance matches its functional specifications and are derived from the Legal Equivalence Class. Error-seed test cases are designed to test a system's level of brittleness and are based on the Illegal Equivalence Class. RITCaG mimics the real-world performance of an intelligent system by testing situations, where a situation is defined as a sequence of dependent rules. In other words, if a rule activates another rule, then both rules are tested. A knowledge base can be viewed as a hierarchical model with three levels: contexts, rules and conditions, respectively. RITCaG captures the intuition and experience of the tester by allowing the user to selectively implement test cases. In other words, instead of automatically activating all generated test cases, RITCaG gives the user the choice to selectively activate test cases with unique test goals. An error file of all errors detected during each test session is created and maintained. This file identifies the rules and the paths along which the error was detected. A history of all generated and implemented test cases is maintained, and feedback is given on contexts and rules not tested during a test session. (Abstract shortened by UMI).

8 citations
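The two equivalence classes map directly onto the two kinds of generated cases: error-free cases come from legal inputs, error-seed cases from illegal ones. A minimal sketch of that idea for a single hypothetical rule condition (the rule, ranges, and representative values are illustrative only):

```python
# Hypothetical condition from a knowledge base rule:
#   IF 0 <= temperature <= 100 THEN ...
LEGAL_RANGE = (0, 100)

def legal_equivalence_class(lo, hi):
    """Representatives of valid inputs: both boundaries and an interior point."""
    return [lo, (lo + hi) // 2, hi]

def illegal_equivalence_class(lo, hi):
    """Representatives of invalid inputs: just outside each boundary,
    plus a type violation."""
    return [lo - 1, hi + 1, "not-a-number"]

def generate_test_cases(lo, hi):
    cases = [("error-free", v) for v in legal_equivalence_class(lo, hi)]
    cases += [("error-seed", v) for v in illegal_equivalence_class(lo, hi)]
    return cases

for kind, value in generate_test_cases(*LEGAL_RANGE):
    print(f"{kind}: temperature = {value!r}")
```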


Patent
27 Apr 1990
TL;DR: Test data is incorporated within the microcode of a bit-slice microprocessor to verify program performance during development and to serve as a built-in test during operation, as discussed by the authors; this allows complete algorithm debugging during program development and permits the rest of the system to be developed in parallel.
Abstract: Test data is incorporated within the microcode of a bit-slice microprocessor to be used during development of the program to verify program performance and during operation of the program as a built-in test. Little additional hardware is required and there is minimal impact on the structure of the program. The program is allowed to operate with the same data that it would have when integrated with the system. During development, the embedded data is used as a substitute for the rest of the system, allowing program development to continue until system integration, using only power supplies and some test equipment. When implemented and used with a commercially available microprocessor ROM emulator, the test data may be varied to highlight difficulties in algorithm design and program development. The operating program cannot tell the difference between live system data and embedded test data. Thus, the program will behave identically during development and system operation. This allows complete algorithm debugging during program development, and permits the rest of the system to be developed in parallel. Developmental test data can be used later for operational program/hardware bit confidence testing with minimal changes.

7 citations
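The key property is that the operating program reads input through a single interface and cannot distinguish live system data from embedded test data. A minimal sketch of that substitution (the interface is hypothetical, and the patent targets bit-slice microcode rather than a high-level language):

```python
EMBEDDED_TEST_DATA = [12, 47, 33, 90]  # would reside in the microcode ROM

class DataSource:
    """Single input interface: in development mode it replays embedded
    test data; in operation it would read live system data."""
    def __init__(self, development_mode, live_reader=None):
        self.development_mode = development_mode
        self.live_reader = live_reader
        self._i = 0

    def read(self):
        if self.development_mode:
            value = EMBEDDED_TEST_DATA[self._i % len(EMBEDDED_TEST_DATA)]
            self._i += 1
            return value
        return self.live_reader()

def operating_program(source, n):
    # The algorithm under development behaves identically in either mode,
    # since it sees only the read() interface.
    return sum(source.read() for _ in range(n))

print(operating_program(DataSource(development_mode=True), 4))  # -> 182
```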


Journal ArticleDOI
TL;DR: This paper shows how the model has been applied to create a self-test for the Intel 8085 and how the size of the test program has been reduced; results are given concerning test length, fault coverage, and development time.

6 citations


Book ChapterDOI
TL;DR: The Bratko-Kopec test represents one of several attempts to quantify the playing strength of chess computers and human subjects, with considerable scope for improvement, as the success of test sets in related areas like pattern recognition attests.
Abstract: The twenty-four positions of the Bratko-Kopec test (Kopec and Bratko 1982) represent one of several attempts to quantify the playing strength of chess computers and human subjects. Although one may disagree with the choice of test set, question its adequacy and completeness and so on, the fact remains that the designers of computer-chess programs still do not have an acceptable means of estimating the performance of chess programs, without resorting to time-consuming and expensive “matches” against other subjects. Clearly there is considerable scope for improvement, as the success of test sets in related areas like pattern recognition attests.

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A validation suite for IEEE standard VHDL is discussed along with its executive manager, which efficiently classifies the tests based on criteria defined in the test header.
Abstract: A validation suite for the IEEE standard VHSIC Hardware Description Language (VHDL) is discussed along with its executive manager. Test points are generated from the VHDL LRM (language reference manual) syntax diagrams and sentences. Each test in the suite contains a test header which is specially formatted and keeps information such as test point, test objective, test result, and test type. The suite executive manager is menu-driven and efficiently classifies the tests based on criteria defined in the test header. Coverage is defined to measure how closely a VHDL tool covers the LRM and is also computed by the suite executive manager.
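Each test carries a formatted header recording its test point, objective, result, and type, and the executive manager classifies tests and computes coverage from those headers. A minimal sketch of such a classifier (the field names and the coverage definition are assumptions, not the suite's actual format):

```python
# Hypothetical test headers as the executive manager might parse them.
tests = [
    {"test_point": "LRM 4.3.1", "objective": "signal declaration",  "type": "syntax",   "result": "pass"},
    {"test_point": "LRM 8.1",   "objective": "wait statement",      "type": "semantic", "result": "fail"},
    {"test_point": "LRM 8.1",   "objective": "wait on sensitivity", "type": "semantic", "result": "pass"},
]

def classify(tests, field, value):
    """Select the tests whose header matches one criterion."""
    return [t for t in tests if t[field] == value]

def coverage(tests, lrm_points):
    """Fraction of LRM points exercised by at least one passing test."""
    passed = {t["test_point"] for t in tests if t["result"] == "pass"}
    return len(passed & set(lrm_points)) / len(lrm_points)

print([t["objective"] for t in classify(tests, "type", "semantic")])
print(coverage(tests, ["LRM 4.3.1", "LRM 8.1", "LRM 9.5"]))  # 2/3 ≈ 0.67
```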


Journal ArticleDOI
TL;DR: A conformance test method is reported for the HBS standard specification at layer 3 of the OSI (Open Systems Interconnection) model, where the devices to be tested are the home bus controller, network termination, and terminals.
Abstract: A conformance test method for the HBS standard specification is reported. The extent of this research corresponds to layer 3 of the OSI (Open Systems Interconnection) model. The devices to be tested are the home bus controller, network termination, and terminals. Test suites for the respective devices are specified, together with 171 test cases relating to the layer 3 communication sequences and packet exchange. A test suite is specified for each test case, and a test scenario is automatically generated based on the test data and test suite. A test system and test program were developed, and the effectiveness of this conformance test method was verified by conducting a simulation experiment.

Journal ArticleDOI
TL;DR: Dynamic behaviour description in TTCN is shown to address many important aspects of conformance testing and weaknesses of this technique are indicated in the areas such as multiparty testing needed for some protocols.
Abstract: The International Organisation for Standardisation (ISO) has defined a protocol test language called TTCN (Tree and Tabular Combined Notation) to specify abstract test cases for ISO protocols. After an introduction to the ISO test methodology, test specification in TTCN is discussed. TTCN combines a tree notation for dynamic behaviour description with a tabular representation of various language constructs. Dynamic behaviour description in TTCN is shown to address many important aspects of conformance testing. Weaknesses of this technique are indicated in areas such as the multiparty testing needed for some protocols. Tabular TTCN specifications cannot be developed using traditional text editors; from this arises the need for specialized TTCN editors. Next, an interactive editor for the tabular form of TTCN (TTCN-GR), called CONTEST-TTCN, is described; it is implemented under Sun-Unix in the SunView environment. CONTEST-TTCN allows the user to enter TTCN test suites using keyboard and mouse and, after performing syntax and semantic checks, produces the machine-processable form of TTCN called TTCN-MP. Another module of the tool takes a test specification in TTCN-MP form and generates the corresponding TTCN-GR form for interactive viewing. If a formal description of the protocol is available in Estelle or Lotos, the tool can inherit the interaction definitions to be used in the TTCN test suite.

01 Jan 1990
TL;DR: This account provides the details of this excursion into the use of hierarchical testlets and validity-based scoring; on cross-validation it was found that although an adaptive test is everywhere superior to a fixed-format test, this superiority is crucially dependent upon the quality of the items.
Abstract: Earlier (Wainer & Lewis, 1990) we reported the initial development of a testlet-based algebra test. In this account we provide the details of this excursion into the use of hierarchical testlets and validity-based scoring. A pretest of two 15-item hierarchical testlets was carried out in which examinees' performance on a four-item subset of each testlet was used to predict performance on the entire testlet. Four models for constructing hierarchies were considered. These presentation hierarchies were compared with one another and with an optimally chosen set of four linearly administered items. The comparison was carried out using both the root mean square error and the conditional posterior variance as the criterion. It was found on cross-validation that although an adaptive test is everywhere superior to a fixed-format test, this superiority is crucially dependent upon the quality of the items. When items vary considerably in quality a fixed-format test, which uses the best items, can do almost as well as an adaptive test of equal length.

Patent
05 Jan 1990
TL;DR: In this paper, the authors propose to eliminate the need for preparing a separate test program for each protocol specification verifier by providing a test suite knowledge base in which test cases, their execution results, and operation instructions are recorded.
Abstract: PURPOSE: To eliminate the need to prepare a test program for each protocol specification verifier by providing a test suite knowledge base in which test cases, their execution results, and operation instructions are recorded. CONSTITUTION: The test cases for verifying the protocol, together with their execution results and operation instructions, are recorded in the test suite knowledge base 11a. A test case is selected and taken out of the knowledge base 11a and input to the communication protocol execution device under test 10 by a test case take-out means 14 and a test case execution means 15. The execution result is received by the means 15, and the received result is compared with the previously recorded result. The operation instruction corresponding to the execution result is then input from the means 15 as a selection signal to the means 14 or to an optimization rule take-out means 16.

Proceedings ArticleDOI
17 Sep 1990
TL;DR: A novel ATE (automatic test equipment) test program development/execution concept employed at Harris is discussed, where test program sets are constructed as a collection of atomic tests whose sequence is determined by a test manager with adaptive reasoning.
Abstract: A novel ATE (automatic test equipment) test program development/execution concept employed at Harris is discussed. Test program sets are constructed as a collection of atomic tests (i.e., independent, totally encapsulated tests) whose sequence is determined by a test manager with adaptive reasoning. The tests can be independently constructed and compiled, allowing for more parallel development and reduced maintenance costs. Communication between tests is achieved through a common data storage, and the setup conditions are reconciled by a single setup test module whose actions are directed by a state vector. The test manager employs experiential data to adjust relative failure probabilities in optimally sequencing the atomic tests.
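The adaptive reasoning described amounts to ordering the independent atomic tests by estimated failure probability and updating those estimates from experiential data after each run. A minimal sketch of that sequencing loop (test names, the shared data store, and the smoothing rule are hypothetical):

```python
# Experiential failure counts per atomic test: [failures, runs].
history = {"power": [2, 100], "clock": [15, 100], "memory": [40, 100]}

blackboard = {}  # common data storage through which atomic tests communicate

def failure_probability(name):
    fails, runs = history[name]
    return (fails + 1) / (runs + 2)  # Laplace-smoothed estimate

def run_atomic_test(name):
    # Placeholder: a real atomic test would reconcile its setup via the
    # state vector, exercise the unit, and post results to the blackboard.
    blackboard[name] = "pass"
    return True

def run_sequence(tests):
    """Run the most-likely-to-fail tests first to shorten time-to-diagnosis."""
    for name in sorted(tests, key=failure_probability, reverse=True):
        ok = run_atomic_test(name)
        fails, runs = history[name]
        history[name] = [fails + (0 if ok else 1), runs + 1]
        if not ok:
            return name  # first detected failure
    return None

print(run_sequence(["power", "clock", "memory"]))  # order: memory, clock, power
```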

Proceedings ArticleDOI
01 May 1990
TL;DR: Improvements to validation techniques are described, with examples drawn from current literature; the ultimate objective is to evaluate a test suite with such a high confidence level that multiple validations no longer need be performed by each user.
Abstract: While the last decade has seen an explosion of microwave software, experimental validation has received comparatively little effort. As a result, most published validation efforts are, justifiably, viewed with skepticism. Thus, when an analysis is to be validated, it must be validated again and again for, and by, each group of users, often at considerable expense. This paper describes areas for improvement of validation techniques, with examples drawn from current literature. The ultimate objective in the field of experimental validation is to evaluate a test suite with such a high confidence level that multiple validations no longer need be performed by each user.

Proceedings ArticleDOI
01 Jun 1990
TL;DR: This paper refines and completes the results of the authors' previous work on the implementation of a uniform, white-box test environment, PROTest (PROLOG Test Environment), which supports the development of object-oriented, rule- and knowledge-based expert systems implemented in PROLOG.
Abstract: This paper refines and completes the results of our previous work presented in /3/. Our present work concentrates on the implementation of a uniform, white-box test environment, PROTest (PROLOG Test Environment). PROTest supports the development of object-oriented, rule- and knowledge-based expert systems which will be implemented in PROLOG. PROTest assists the programmer, as well as the quality engineer, in particular to generate test cases, to exercise his or her programs by means of these test cases, to produce test reports after the test execution, etc. (Testing in the Large). The special feature of PROTest as a novelty in the field stems from its built-in object-oriented logic test language (TL). The major benefit of this approach is the high adaptability of the test procedures whenever the functional part of the programs has been changed.