
Showing papers on "Test data" published in 1978


Journal ArticleDOI
TL;DR: Test cases can be designed so that their associated subsets cover the entire input domain, allowing reliability estimates to be made for expected operational use profiles.

175 citations


Proceedings Article
01 Jan 1978
TL;DR: Program mutation, as discussed by the authors, is a new approach to determining test data adequacy that holds promise of being a major breakthrough in the field of software testing.
Abstract: When testing software the major question which must always be addressed is "If a program is correct for a finite number of test cases, can we assume it is correct in general?" Test data which possess this property are called adequate test data, and, although adequate test data cannot in general be derived algorithmically,[1] several methods have recently emerged which allow one to gain confidence in one's test data's adequacy. Program mutation is a radically new approach to determining test data adequacy which holds promise of being a major breakthrough in the field of software testing. The concepts and philosophy of program mutation have been given elsewhere;[2] the following will merely present a brief introduction to the ideas underlying the system. Unlike previous work, program mutation assumes that competent programmers will produce programs which, if they are not correct, are "almost" correct. That is, if a program is not correct it is a "mutant": it differs from a correct program by simple errors. Assuming this natural premise, a program P which is correct on test data T is subjected to a series of mutant operators to produce mutant programs which differ from P in very simple ways. The mutants are then executed on T. If all mutants give incorrect results then it is very likely that P is correct (i.e., T is adequate). On the other hand, if some mutants are correct on T then either: (1) the mutants are equivalent to P, or (2) the test data T is inadequate. In the latter case, T must be augmented by examining the non-equivalent mutants which are correct on T, a procedure which forces close examination of P with respect to the mutants. At first glance it would appear that if T is determined adequate by mutation analysis, then P might still contain some complex errors which are not explicitly mutants of P.
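
A minimal sketch of the mutation-analysis loop described above, using a toy target function and hand-written mutants (the function, the mutant operators, and the test values are illustrative, not taken from the paper):

```python
# Toy illustration of mutation analysis: test data T is judged adequate
# only if every non-equivalent mutant of P produces a wrong result on T.

def P(x, y):
    """Program under test: return the larger of two numbers."""
    return x if x >= y else y

# Mutants differ from P by simple errors (relational-operator changes and
# a swapped branch).
mutants = {
    "ge_to_gt": lambda x, y: x if x > y else y,    # equivalent to P
    "ge_to_le": lambda x, y: x if x <= y else y,
    "branches_swapped": lambda x, y: y if x >= y else x,
}

def surviving_mutants(test_data):
    """Mutants that agree with P on every test case (i.e. are not killed)."""
    return [name for name, m in mutants.items()
            if all(m(x, y) == P(x, y) for x, y in test_data)]

T = [(2, 2)]
print(surviving_mutants(T))            # all three survive: T is inadequate
T = T + [(1, 2), (3, 1)]               # augment T by examining the survivors
print(surviving_mutants(T))            # only the equivalent mutant remains
```

Here the surviving ge_to_gt mutant is equivalent to P, illustrating case (1); the other mutants survive only on the initial data and are killed once T is augmented, illustrating case (2).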

81 citations


Patent
17 Feb 1978
TL;DR: In this article, an automatic tester for singularly testing a plurality of different types of standard electronic modules is presented, where a test program is written for each type of standard electronic module and this test data is stored on cassette tape.
Abstract: An automatic tester for singularly testing a plurality of different types of standard electronic modules. A test program is written for each type of standard electronic module and this test data is stored on cassette tape. As the tester is adaptable for testing different types of modules, interfacing and switching between the tester and the module under test are accomplished by an interface card and a matrix of relays. A plurality of programmable power supplies and programmable waveform generators are provided in the tester, and instructions on the cassette tape dictate the desired values and wave shapes to be supplied to a particular module under test. The cassette tape also has data which represents the acceptable output requirements of a module under test; a measurement system measures the actual outputs, and a comparison is then made between the measured output and the desired output to indicate either failure or acceptance of the module under test.

73 citations


Journal ArticleDOI
TL;DR: Several conventional program testing strategies are evaluated, including branch testing, structured testing, and testing on input values having special properties.
Abstract: The effectiveness in discovering errors of symbolic evaluation and of testing and static program analysis are studied. The three techniques are applied to a diverse collection of programs and the results compared. Symbolic evaluation is used to carry out symbolic testing and to generate symbolic systems of path predicates. The use of the predicates for automated test data selection is analysed. Several conventional types of program testing strategies are evaluated. The strategies include branch testing, structured testing and testing on input values having special properties. The static source analysis techniques that are studied include anomaly analysis and interface analysis. Examples are included which describe typical situations in which one technique is reliable but another unreliable. The effectiveness of symbolic testing is compared with testing on actual data and with the use of an integrated methodology that includes both testing and static source analysis. Situations in which symbolic testing is difficult to apply or not effective are discussed. Different ways in which symbolic evaluation can be used for generating test data are described. Those ways for which it is most effective are isolated. The paper concludes with a discussion of the most effective uses to which symbolic evaluation can be put in an integrated system which contains all three of the validation techniques that are studied.
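
As an illustration of using path predicates for automated test data selection, the sketch below writes out by hand the predicates that symbolic evaluation would produce for a toy routine and solves them by brute-force search over a small input domain; the routine, the predicates, and the domain are assumptions for illustration, and the paper's symbolic evaluator is not reproduced:

```python
# Path predicates for a toy routine, written out as symbolic evaluation
# would produce them, then solved by brute-force search to select one
# concrete test input per path.

def classify(x, y):
    if x > y:
        if x > 2 * y:
            return "dominant"
        return "larger"
    return "not larger"

# Path predicates over the symbolic inputs (X, Y):
#   path 1: X > Y  and X > 2*Y
#   path 2: X > Y  and not (X > 2*Y)
#   path 3: not (X > Y)
path_predicates = {
    "path1": lambda X, Y: X > Y and X > 2 * Y,
    "path2": lambda X, Y: X > Y and not (X > 2 * Y),
    "path3": lambda X, Y: not (X > Y),
}

def select_test_data(domain):
    """Pick one concrete input satisfying each path predicate, if any exists."""
    return {name: next(((x, y) for x in domain for y in domain if pred(x, y)),
                       None)
            for name, pred in path_predicates.items()}

print(select_test_data(range(-3, 4)))   # one input per path, or None if infeasible
```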

66 citations


Journal ArticleDOI
D.J. Panzl
TL;DR: Typical testing activities may involve many hundreds of tests; an automatic software test driver assists the tester by managing all of the test data and automatically running the tests.
Abstract: Typical testing activities may involve many hundreds of tests. An automatic software test driver assists the tester by managing all of the test data, and automatically running the tests. Savings during regression testing can be significant.
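
A minimal sketch of what such a test driver does, assuming test cases are kept as (name, inputs, expected output) records; the driver structure, target routine, and suite below are illustrative, not Panzl's system:

```python
# Minimal automatic software test driver: it manages the test data, runs
# every case against the target routine, and reports failures, so that
# regression runs need no manual effort.

def run_test_suite(target, test_cases):
    """test_cases: list of (name, args, expected) tuples."""
    failures = []
    for name, args, expected in test_cases:
        actual = target(*args)
        if actual != expected:
            failures.append((name, args, expected, actual))
    print(f"{len(test_cases) - len(failures)}/{len(test_cases)} tests passed")
    for name, args, expected, actual in failures:
        print(f"FAIL {name}: {args} -> {actual}, expected {expected}")
    return not failures

# Example target and stored test data (a small regression suite).
def integer_sqrt(n):
    return int(n ** 0.5)

suite = [
    ("square", (16,), 4),
    ("non_square", (15,), 3),
    ("zero", (0,), 0),
]
run_test_suite(integer_sqrt, suite)
```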

59 citations


Journal ArticleDOI
TL;DR: This paper introduces a technique whereby test data can be used in proving program correctness; in addition to simplifying the process of proving correctness, this method simplifies the process of providing an accurate specification for a program.
Abstract: Proofs of program correctness tend to be long and tedious, whereas testing, though useful in detecting errors, usually does not guarantee correctness. This paper introduces a technique whereby test data can be used in proving program correctness. In addition to simplifying the process of proving correctness, this method simplifies the process of providing an accurate specification for a program. The applicability of this technique to procedures and recursive programs is demonstrated.

46 citations


Proceedings ArticleDOI
19 Apr 1978
TL;DR: In this article, the use of a commercially available infrared (IR) scanning system for measurement of convective heating patterns on wind tunnel models in continuous-flow facilities is described, including continuous color video monitoring and on-line digitizing of analog information from a full frame consisting of 7,700 discrete data points.
Abstract: The use of a commercially available infrared (IR) scanning system for measurement of convective heating patterns on wind tunnel models in continuous-flow facilities is described. Basic system components, data-reduction techniques, data precision, system limitations, and typical test data are presented. Unique capabilities of the system include continuous color video monitoring and on-line digitizing of analog information from a full frame consisting of 7,700 discrete data points. Computer-generated plots of model isotherms and reduced data in heat-transfer coefficient form at predetermined geometric model locations are produced from the digitized data. Nomenclature: C, differential count reading between the model surface and a reference target of known temperature [see Fig. 5, Eq. (3)]; c, velocity of light; c, model material specific heat; (ES)j, error sensitivity terms in Eqs. (5) and (18); aperture opening of the camera optics; shape factor for radiation calculation in Eq. (20).

30 citations


Journal ArticleDOI
TL;DR: The advantages of exhaustive testing of combinational networks are investigated and it is shown that by abandoning the requirement of minimal testing time a substantial reduction of testing data to be stored is obtained, and the generation process is simplified.
Abstract: In this correspondence the advantages of exhaustive testing of combinational networks are investigated. The method consists of applying all possible input combinations and checking only some attributes of the output vector. It is shown that by abandoning the requirement of minimal testing time (practically insignificant for medium sized networks) a substantial reduction of testing data to be stored is obtained, and the generation process is simplified.
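
A sketch of the scheme under illustrative assumptions: every input combination of a small combinational function is applied, but only compact attributes of the output stream (here, a ones count and its parity) are stored and checked instead of the full expected output vector:

```python
from itertools import product

# Exhaustive testing of a small combinational network: apply all 2^n input
# combinations, but store and check only compact attributes of the output
# stream rather than every expected output value.

def network(a, b, c):
    """Reference combinational function (illustrative): full-adder carry bit."""
    return (a & b) | (b & c) | (a & c)

def signature(circuit, n_inputs):
    outputs = [circuit(*bits) for bits in product((0, 1), repeat=n_inputs)]
    ones = sum(outputs)
    return ones, ones & 1              # ones count and its parity

GOOD_SIGNATURE = signature(network, 3)   # stored once, from a known-good unit

def faulty_network(a, b, c):             # e.g. input c stuck at 0
    return (a & b) | (b & 0) | (a & 0)

print(signature(faulty_network, 3) == GOOD_SIGNATURE)   # False: fault detected
```

Only the two-number signature needs to be stored per network, and the input patterns need no storage at all since they are simply all combinations in order.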

24 citations


Proceedings Article
01 Jan 1978
TL;DR: The Fortran Test Procedure Language (TPL/F), developed at General Electric for specifying test procedures for Fortran software, is described; automatic test drivers of this kind may play a significant role in the development and maintenance of production software.
Abstract: The execution of software test cases and the verification of test results may be performed automatically by a new type of program called an automatic software test driver. When using an automatic test driver, a formal test procedure is coded in a special test language. The test procedure takes the place of the test data and test setup instructions of conventional testing, and controls the automatic test driver. An automatic software test driver applies one test procedure to all or part of a target program, executes all of the test cases specified in the test procedure, and verifies that the results of each test case are correct. This paper describes the Fortran Test Procedure Language (TPL/F) which was developed at General Electric and is used for specifying test procedures for Fortran software. 1 The concept of automatic software test drivers is a new idea that has been evolving slowly over the past six years and just now seems to be approaching the point where soon it may play a significant role in the development and maintenance of production software. Two other automatic software test drivers that were developed independently in recent years and have their own software test languages are described in References 2 and 3. The Fortran Test Procedure system is illustrated in Figure 1. One test procedure coded in TPL/F and the source code for one or more modules of the target program are processed by the automatic test driver which executes all of the test cases specified in the test procedure, and produces a brief test execution report (Figures 2 and 3) stating which test cases failed, if any, and the degree of testing coverage actually achieved by the test procedure. Test cases consist of input data for the target program and model output data. The automatic test driver actually executes the target program for each test case, feeding the input data to the target program and comparing outputs from the target program with the model outputs specified in the test procedure. Incorrect outputs produce a diagnostic in the test execution report (Figure 3). Since the TPL/F system has access to the target program's source code, it can monitor which statements were actually executed and which branches were actually traversed while executing a test procedure. This type of information pro-

24 citations


Journal ArticleDOI
TL;DR: A dynamic model of the circuit breaker arc near current zero, using both the Cassie and the Mayr differential equations, has been solved by computer, including interaction with resistance and capacitance shunts.
Abstract: The continuing program of analysis of circuit breaker short-line fault test data has culminated in a practicable method for getting complete performance and circuit information quickly from presently available test data and a simple, time-sharing computer program. A dynamic model of the circuit breaker arc near current zero, using both the Cassie and the Mayr differential equations, has been solved by computer, including interaction with resistance and capacitance shunts. By expressing the critical cases which just interrupt in terms of voltage and current slope coefficients, the model is fitted to key measurements from the test oscillograms. More precise specification of performance and its dependence on electrical parameters results.
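
A simplified numerical sketch of the kind of dynamic arc model described above, integrating the standard Mayr equation for an imposed sinusoidal current with a forward-Euler step. The parameter values are assumed for illustration, and the resistance/capacitance shunt interaction and oscillogram fitting of the paper are omitted:

```python
import math

# Simplified dynamic arc model near current zero: the Mayr equation
#   dg/dt = (g / theta) * (u * i / P0 - 1),  with u = i / g,
# integrated by forward Euler for an imposed sinusoidal current.
# All parameter values are illustrative assumptions.

theta = 1.0e-6        # arc time constant, s
P0 = 3.0e5            # cooling power, W
f = 50.0              # power frequency, Hz
I_peak = 10.0e3       # current amplitude, A
dt = 1.0e-8           # integration step, s

t = -50.0e-6                                     # 50 microseconds before current zero
i0 = I_peak * math.sin(2 * math.pi * f * t)
g = i0 * i0 / P0                                 # start near the quasi-static conductance

while t < 50.0e-6:
    i = I_peak * math.sin(2 * math.pi * f * t)   # current zero at t = 0
    u = i / g                                    # arc voltage
    dgdt = (g / theta) * (u * i / P0 - 1.0)
    g = max(g + dgdt * dt, 1.0e-9)               # keep conductance positive
    t += dt

print(f"arc conductance 50 us after current zero: {g:.3e} S")
```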

20 citations


Patent
05 Oct 1978
TL;DR: In this article, the monitoring system for operational or traffic-related data provides the necessary information to the driver or pilot without distracting him from his basic duties using an analogue-digital converter and a voice-operated device.
Abstract: The monitoring system for operational or traffic-related data provides the necessary information to the driver or pilot without distracting him from his basic duties. The data is fed as electrical signals to the test data switching unit. An analogue-digital converter may be provided and the signals fed directly to a voice-operated device. An operating sequence control provides for the determined interrogation cycle of the individual data and initiates the speech store according to a predetermined program. It also determines the loudness via an amplifier which is connected to a loudspeaker.

Proceedings ArticleDOI
01 Feb 1978
TL;DR: In this article, a comparison of the performance of modern modal data analysis methods on test data from the Voyager Jupiter/Saturn payload is presented, and four different test/data-analysis combinations are compared - multiple-point sine excitation tests, single-point random-excitation tests using two different techniques of manipulating Fourier transform data, and a time-domain method for analyzing random data.
Abstract: A comparison of the performance of modern modal data analysis methods on test data from the Voyager Jupiter/Saturn payload is presented. Four different test/data-analysis combinations are compared - multiple-point sine excitation tests, single-point random-excitation tests using two different techniques of manipulating Fourier transform data, and a time-domain method for analyzing random data. Results indicate that all four methods can give comparable results. Of the four, the time-domain approach detects more modes in the test data and, at the same time, shows the greatest promise for reducing the time and cost of modal testing.


Journal ArticleDOI
TL;DR: The test program and the design provisions developed by statistical procedures from the test data are described and an example of the application of the provision in bridge design is presented in an appendix.
Abstract: An extensive experimental investigation to determine the fatigue strength of deformed reinforcing bars was recently completed by the Portland Cement Association. This work was sponsored in part by the National Cooperative Highway Research Program (NCHRP). The experimental program was carried out with each test bar embedded as the main reinforcing element within a concrete beam. Test variables included size of beam, size of bar, grade of bar, bar deformation geometry, minimum stress, and stress range. A total of 353 tests were carried out. This paper describes the test program and the design provisions developed by statistical procedures from the test data. An example of the application of the provision in bridge design is presented in an appendix.
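
As an illustration of the kind of statistical procedure used to turn fatigue test data into design provisions, the sketch below fits the usual log-linear S-N relation by least squares and applies a simple lower bound; the numbers are invented for illustration and the bound is not the NCHRP provision:

```python
import numpy as np

# Illustrative statistical treatment of fatigue test data: fit the common
# log-linear S-N relation  log10(N) = a + b * log10(Sr)  by least squares
# and take a simple two-sigma lower bound as a design-basis curve.
# The data values below are invented, not the PCA/NCHRP results.

stress_range_ksi = np.array([20, 24, 28, 32, 36, 40], dtype=float)
cycles_to_failure = np.array([2.0e6, 9.0e5, 5.0e5, 2.5e5, 1.5e5, 9.0e4])

x = np.log10(stress_range_ksi)
y = np.log10(cycles_to_failure)
b, a = np.polyfit(x, y, 1)                     # slope and intercept of mean fit

residual_sd = np.std(y - (a + b * x), ddof=2)  # scatter about the mean fit
y_design = a + b * x - 2.0 * residual_sd       # simple 2-sigma lower bound

print(f"mean fit: log10(N) = {a:.2f} + ({b:.2f}) * log10(Sr)")
print("design-basis lives:", np.round(10 ** y_design, -3))
```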

Proceedings ArticleDOI
13 Jun 1978
TL;DR: In this paper, an N-phase by M-stage capacitor-diode voltage multiplier (CDVM) is described, which is failure tolerant and suitable for medium power applications, and test data from a 1.2 kW, five-phase CDVM are included.
Abstract: An N-phase by M-stage capacitor-diode voltage multiplier (CDVM) is described. The multiplier is failure tolerant and suitable for medium power applications. Total capacitor weight is significantly less than that of single-phase multipliers. Excellent correlation between test data and predicted values for output voltage, peak currents, and efficiency demonstrated the accuracy of the generalized equations. The design principles of the multiphase CDVM concept were applied to design, fabricate, and test a breadboard model circuit to operate at a 1.2 kW power level under contract NAS 3-20395 with the NASA Lewis Research Center. Test data from a 1.2 kW, five-phase CDVM are included.


Journal ArticleDOI
TL;DR: The program gives a least-squares fit of an equation to data, obviating the need for non-statistical approximations; its utility is demonstrated by fitting real biphasic data obtained from amino acid transport experiments.
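
The paper's equation is not given here, so the sketch below assumes a two-component (biphasic) Michaelis-Menten transport model and fits it to synthetic data by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit of an assumed biphasic transport equation
#   v = Vmax1*S/(Km1+S) + Vmax2*S/(Km2+S)
# to synthetic concentration/rate data. Model form, parameter values,
# and data are illustrative assumptions.

def biphasic(S, Vmax1, Km1, Vmax2, Km2):
    return Vmax1 * S / (Km1 + S) + Vmax2 * S / (Km2 + S)

S = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # substrate, mM
rng = np.random.default_rng(0)
v = biphasic(S, 6.0, 0.1, 12.0, 8.0) + rng.normal(0.0, 0.2, S.size)  # synthetic rates

p0 = [5.0, 0.05, 10.0, 5.0]                     # initial parameter guesses
params, covariance = curve_fit(biphasic, S, v, p0=p0)
Vmax1, Km1, Vmax2, Km2 = params
print(f"Vmax1={Vmax1:.2f}, Km1={Km1:.3f}, Vmax2={Vmax2:.2f}, Km2={Km2:.2f}")
```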

Proceedings ArticleDOI
13 Nov 1978
TL;DR: JAVS is an automated tool for quantifying the effectiveness of test data in exercising a program's control structures and can be used to generate descriptive program documentation, provide dynamic execution traces of modules and DD-paths, and flag unexpected execution behavior through the use of computation directives.
Abstract: JAVS was developed as a tool to assist developers and testers of JOVIAL software in determining the extent to which their programs have been tested, and in deriving additional test cases to further verify them. Testing has often been without an orderly approach and without accurate means of determining exactly what portions of code have actually been exercised. JAVS is an automated tool for quantifying the effectiveness of test data in exercising a program's control structures. In addition, JAVS can be used to generate descriptive program documentation, provide dynamic execution traces of modules and DD-paths (described later), assist in generation of additional test cases, and flag unexpected execution behavior through the use of computation directives.
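
A rough Python analogue of the coverage idea: record which source lines of a unit under test are exercised by given test data and report the ones missed. JAVS itself instruments JOVIAL source and tracks DD-paths; the tracing mechanism, target function, and test inputs below are only illustrative:

```python
import dis
import sys

# Quantify how well test data exercises a unit by tracing which of its
# executable source lines are actually run.

def triangle_type(a, b, c):
    if a == b and b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

executed = set()

def tracer(frame, event, arg):
    if frame.f_code is triangle_type.__code__ and event == "line":
        executed.add(frame.f_lineno)
    return tracer

def run_with_coverage(test_inputs):
    sys.settrace(tracer)
    for args in test_inputs:
        triangle_type(*args)
    sys.settrace(None)
    all_lines = {lineno for _, lineno in dis.findlinestarts(triangle_type.__code__)
                 if lineno is not None}
    missed = sorted(all_lines - executed)
    print(f"covered {len(executed & all_lines)}/{len(all_lines)} lines, missed: {missed}")

run_with_coverage([(2, 2, 2), (2, 2, 3)])   # the "scalene" line is never exercised
```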

Proceedings ArticleDOI
13 Nov 1978
TL;DR: An approach to functional testing is described in which the design of a program is viewed as an integrated collection of functions and the selection of test data depends on the functions used in the design and on the value spaces over which the functions are defined.
Abstract: An approach to functional testing is described in which the design of a program is viewed as an integrated collection of functions. The selection of test data depends on the functions used in the design and on the value spaces over which the functions are defined. The basic ideas in the method were developed during the study of a collection of scientific programs containing errors. The method was the most reliable testing technique for discovering the errors. It was found to be significantly more reliable than structured testing. The two techniques are compared and their relative advantages and limitations are discussed.
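
A small sketch of functional test-data selection as described above: each input's value space is characterized by special and boundary values taken from the design, and test cases are formed from combinations of those values (the function and its value spaces are illustrative):

```python
from itertools import product

# Functional test-data selection: test values come from the value spaces
# of the functions in the design, not from the program's control structure.

def days_in_month(month, year):
    if month == 2:
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        return 29 if leap else 28
    return 30 if month in (4, 6, 9, 11) else 31

# Value spaces chosen from the design of the calendar function.
month_values = [1, 2, 4, 12]            # January, February, a 30-day month, December
year_values = [1900, 1976, 1977, 2000]  # century non-leap, leap, non-leap, century leap

for month, year in product(month_values, year_values):
    print(month, year, days_in_month(month, year))
```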

01 Jun 1978
TL;DR: The procedure uses time series models of test data to generate empirical distributions which can be compared to distributions obtained from simulation models, in order to decide whether the simulation model is valid for a specific purpose.
Abstract: A method for deciding whether a missile system simulation model is sufficiently accurate for a specific purpose is presented. The procedure uses time series models of test data to generate empirical distributions which can be compared to distributions obtained from simulation models. Time series models are also used to generate empirical distributions of the error between simulated and test data. These error distributions can then be used in decision models to decide whether the simulation model is valid for a specific purpose. Examples demonstrate the validity of the time series transformation and illustrate its application to missile system data. (Author)
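
A sketch of the validation idea under stated assumptions: fit a simple AR(1) time-series model to test data, generate empirical distributions from it, and compare them with simulation output using a two-sample KS test. The AR(1) form, the synthetic data, and the KS comparison are illustrative choices, not the report's specific procedure:

```python
import numpy as np
from scipy.stats import ks_2samp

# Fit a time-series model to test data, use it to generate empirical
# distributions, and compare them with the simulation model's output.

rng = np.random.default_rng(1)

def ar1_fit(series):
    """Least-squares estimate of the lag-1 coefficient and noise scale."""
    x, x_next = series[:-1], series[1:]
    phi = np.dot(x, x_next) / np.dot(x, x)
    sigma = np.std(x_next - phi * x)
    return phi, sigma

def ar1_simulate(phi, sigma, n):
    out = np.zeros(n)
    for t in range(1, n):
        out[t] = phi * out[t - 1] + rng.normal(0.0, sigma)
    return out

test_data = ar1_simulate(0.8, 1.0, 500)          # stands in for flight-test telemetry
simulation_output = ar1_simulate(0.6, 1.0, 500)  # stands in for the simulation model

phi, sigma = ar1_fit(test_data)
empirical = np.concatenate([ar1_simulate(phi, sigma, 500) for _ in range(20)])
statistic, p_value = ks_2samp(empirical, simulation_output)
print(f"fitted phi={phi:.2f}, KS p-value={p_value:.3f}")   # low p suggests a mismatch
```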

ReportDOI
01 Sep 1978
TL;DR: In this paper, the authors presented a computer algorithm for determining the degree of heterogeneity among the layers of a reservoir using the equations developed by Brigham and Smith that predict the behavior of a tracer slug flowing in a five-spot injection pattern.
Abstract: This report presents a computer algorithm for determining the degree of heterogeneity among the layers of a reservoir. The algorithm uses the equations developed by Brigham and Smith that predict the behavior of a tracer slug flowing in a five-spot injection pattern. To illustrate the use and potential problems in the application of this algorithm, examples are presented using five sets of simulated field test data. One example using actual field data is also presented.

Book ChapterDOI
T.T. Hitch
TL;DR: In this paper, the authors reviewed the measurement of adhesion of conductor film materials and described the requirements of thick-film technology, and treated the test methods used to measure thick film adhesion strength from a fundamental basis.
Abstract: The paper briefly reviews the measurement of adhesion of conductor film materials and describes in particular the requirements of thick-film technology. The test methods used to measure thick-film adhesion strength, which have been previously described by other workers, are treated from a fundamental basis and are discussed with reference to the usefulness and practicability of the test both for research and for more routine purposes. The second part of the paper reviews two thick-film adhesion tests and their use at RCA Laboratories over the past several years. The first test is the thermocompression bonded peel test, which has been used for the adhesion strength measurement of gold- and silver-based conductor films. This test was developed at RCA and has proved useful for films of a wide range of adhesion strengths. The second test description reviews RCA Laboratories' use of the soldered-wire peel test. It also treats our progress in eliminating subjectivity from this test by limiting the use of hand operations and by controlling time-temperature cycles required in the assembly of test specimens. For both these tests, more than one failure mode has been observed. The meanings ascribed both to the failure modes and to test data, which are taken when the modes occur, are treated. Data illustrating the variation in adhesion strength with firing temperature and film thickness are shown for several ink types. These data are correlated with the composition of the inks by grouping the gold and silver inks into three bonding classifications: frit-bonded, reactively bonded, and mixed-bonded.

01 Nov 1978
TL;DR: Test data from the production phase of the RFS program, together with the computer automation used, are presented as an essential element in the evaluation of RFS performance in a simulated spacecraft environment.
Abstract: Test data of the RFS program in the production phase and the computer automation are presented, as an essential element in the evaluation of RFS performance in a simulated spacecraft environment. Typical production test data are discussed for stabilities at averaging times from 1 to 100,000 seconds and for the simulated time error accumulation test. Also, design considerations in developing the RFS test systems for the acceptance test in production are discussed.

Journal ArticleDOI
TL;DR: An algorithm for updating the means and variances of a norm group after each computer-assisted administration of a test is described, which provides for unlimited, continuous expansion of the test norms.
Abstract: An algorithm for updating the means and variances of a norm group after each computer-assisted administration of a test is described. The algorithm does not require storage of the whole data set and provides for unlimited, continuous expansion of the test norms.
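
One way to meet the stated requirement (no storage of the whole data set, unlimited expansion) is Welford's online update of the count, mean, and sum of squared deviations; the recurrence below is that standard method, assumed here rather than taken from the paper:

```python
# Update norm-group mean and variance after each test administration,
# keeping only three running quantities instead of all scores.

class RunningNorms:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations from the mean

    def update(self, score):
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

norms = RunningNorms()
for score in [52, 47, 61, 55, 49]:      # scores arrive one administration at a time
    norms.update(score)
print(norms.n, round(norms.mean, 2), round(norms.variance, 2))
```

This update is also numerically better behaved than accumulating raw sums of squares, which is one reason it is commonly chosen for continuously growing norm groups.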



Patent
17 Jul 1978
TL;DR: In this article, the same test data is prepared at both the testing station and the tested station; the data received over the line is compared with the locally stored test data at the tested station, displayed, and sent back for comparison, collation, and display at the testing station, so that fault diagnosis is facilitated for both transmission directions.
Abstract: PURPOSE: To increase the resolution of the transmission system when the fault diagnosis is given, by preparing the same test data both at the testing station and the tested station. CONSTITUTION: MODEM1, central processor 25 incorporating test data memory 2 and comparison/collation circuit 3, and display circuit 4 are installed at the side of the testing station. At the same time, MODEM7 and terminal unit 26 incorporating comparison/collation circuit 8, test data memory 9 and display circuit 10 are provided at the remote station (tested station). Processor 25 and unit 26 are connected via transmission lines 5 and 6. With such a test system, the same test data is prepared for both stations, and the DC binary data accumulated in memory 2 is converted to an AC signal through MODEM1 and then applied to MODEM7 via transmission line 5. Then the output of MODEM7 and the test data within memory 9 are compared with each other at circuit 8, displayed at circuit 10, and also sent back to MODEM1 via MODEM7 and line 6 for comparison and collation at circuit 3 and display at circuit 4. In this way, the diagnosis can be facilitated for both the transmission and reception transmission systems.

Proceedings ArticleDOI
01 Feb 1978
TL;DR: The objective of the present methodology is to process test data obtained from either modal survey tests, or slow sine-sweep tests, to extract a set of orthogonal modes best matching the test data while being commensurate with the dynamic model.
Abstract: The objective of the present methodology is two-fold: (1) to process test data obtained from either modal survey tests, or slow sine-sweep tests, to extract a set of orthogonal modes best matching the test data while being commensurate with the dynamic model, and (2) to modify submatrices of the dynamic model mass and stiffness matrices to adjust the model to best fit the test data. The method has been implemented using a linear statistical sequential estimator for computation on a CDC computer. Demonstration problems involving Space Shuttle quarter-scale vibration test data and dynamic models have been run. This paper will discuss the general methodology and experience to date.

Proceedings ArticleDOI
10 May 1978
TL;DR: The TPL/2.0 automatic software test driver described in this paper automates both the initial generation and subsequent revision of test procedure model outputs.
Abstract: An automatic software test driver is a new type of software tool which controls and monitors the execution of software tests. An automatic test driver is controlled by a formal test procedure coded in a special software test language. The test procedure replaces the test data and test setup instructions of conventional testing. The specific goals of automatic test drivers are to eliminate the need for writing drivers and stubs for module and subsystem testing, to provide a standard format and language for specifying software tests, to provide a standard execution setup for software tests, and to automate the verification of test execution results. A test procedure contains input data to be supplied to the program under test and model outputs against which actual outputs of the target program are verified. Typically, ninety percent or more of the text of a test procedure consists of model outputs which must be revised each time the target program is modified. The TPL/2.0 automatic software test driver described in this paper automates both the initial generation and subsequent revision of test procedure model outputs.
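
A sketch of the automation idea, under illustrative assumptions about file name and format: run the target routine on the test-procedure inputs, capture the actual outputs as new model outputs for the tester to review, and verify later runs against the stored file. This is only an analogue of the concept, not the TPL/2.0 system:

```python
import json

# Generate and revise model outputs automatically: capture actual outputs
# on the procedure's inputs, then verify subsequent runs against them.

def target(x):
    return x * x + 1

inputs = [0, 1, 2, 5]

def record_model_outputs(path="model_outputs.json"):
    model = {str(x): target(x) for x in inputs}
    with open(path, "w") as f:
        json.dump(model, f, indent=2)

def verify_against_model(path="model_outputs.json"):
    with open(path) as f:
        model = json.load(f)
    failures = [x for x in inputs if target(x) != model[str(x)]]
    print("all test cases verified" if not failures else f"failed: {failures}")

record_model_outputs()      # initial generation (the tester reviews this file)
verify_against_model()      # later runs verify against the stored model outputs
```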

Proceedings ArticleDOI
19 Jun 1978
TL;DR: TDAS (Test Data Analysis System) has provided timely and economic solutions to test data analysis problems which might have been intractable by other means.
Abstract: To provide cost-effective performance evaluation or engineering feedback from circuit test results often requires that complex analyses be performed on large volumes of non-standard data. Using a large scale data management system and a modular design philosophy, a system to cope with the above requirements has been developed. TDAS (Test Data Analysis System) has provided timely and economic solutions to test data analysis problems which might have been intractable by other means.