
Showing papers on "White-box testing published in 1981"


Journal ArticleDOI
TL;DR: A relatively large but easy-to-use collection of test functions and designed guidelines for testing the reliability and robustness of unconstrained optimization software.

Abstract: Much of the testing of optimization software is inadequate because the number of test functions is small or the starting points are close to the solution. In addition, there has been too much emphasis on measuring the efficiency of the software and not enough on testing reliability and robustness. To address this need, we have produced a relatively large but easy-to-use collection of test functions and designed guidelines for testing the reliability and robustness of unconstrained optimization software.
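
To make the distinction between efficiency testing and robustness testing concrete, the sketch below runs an unconstrained minimizer on a standard test function from several starting points, including points far from the solution, and records whether it converges. The Rosenbrock function, SciPy's BFGS minimizer, and the starting points are illustrative stand-ins, not the paper's actual test collection or guidelines.

```python
# Minimal robustness harness in the spirit of the paper: run an optimizer on a
# standard test function from several starting points, including distant ones,
# and record convergence. Function and optimizer are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic two-dimensional Rosenbrock function, minimum at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# Starting points of increasing distance from the solution.
starts = [np.array([1.2, 1.2]), np.array([-1.2, 1.0]), np.array([10.0, -10.0])]

for x0 in starts:
    result = minimize(rosenbrock, x0, method="BFGS")
    err = np.linalg.norm(result.x - np.array([1.0, 1.0]))
    print(f"start={x0}, converged={result.success}, error={err:.2e}, nfev={result.nfev}")
```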

1,420 citations


Proceedings ArticleDOI
09 Mar 1981
TL;DR: Simulation results are presented which treat path and partition testing in a reasonably favorable way, and yet still suggest that random testing may often be more cost effective.
Abstract: Random testing of programs is usually (but not always) viewed as a worst case of program testing. Test case generation that takes into account the program structure is usually preferred. Path testing is an often proposed ideal for structural testing. Path testing is treated here as an instance of partition testing. (Partition testing is any testing scheme which forces execution of at least one test case from each subset of a partition of the input domain.) Simulation results are presented which treat path and partition testing in a reasonably favorable way, and yet still suggest that random testing may often be more cost effective. Results of actual random testing experiments are presented which tend to confirm the viability of random testing as a useful validation tool.
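
The following toy simulation captures the spirit of the comparison: an input domain split into subdomains with unequal failure rates, partition testing forcing one test per subdomain, and random testing drawing the same total number of tests over the whole domain. The subdomain failure rates are invented purely for illustration; the paper's own simulations are considerably more careful.

```python
# Toy comparison of partition testing vs. random testing. An input domain is
# split into k subdomains, each with its own failure rate; partition testing
# draws one test per subdomain, random testing draws the same total number of
# tests uniformly over the whole domain. Rates below are made up.
import random

def detects_failure(failure_rate):
    return random.random() < failure_rate

def partition_testing(rates):
    # One test case forced from each subdomain of the partition.
    return any(detects_failure(r) for r in rates)

def random_testing(rates, n_tests):
    # Each random test lands in a subdomain chosen uniformly (equal-size subdomains).
    return any(detects_failure(random.choice(rates)) for _ in range(n_tests))

rates = [0.0, 0.0, 0.0, 0.0, 0.2]      # one failure-prone subdomain out of five
trials = 10_000
p_part = sum(partition_testing(rates) for _ in range(trials)) / trials
p_rand = sum(random_testing(rates, len(rates)) for _ in range(trials)) / trials
print(f"partition testing detection rate: {p_part:.3f}")
print(f"random testing detection rate:    {p_rand:.3f}")
```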

107 citations


Journal ArticleDOI
Agrawal
TL;DR: The concepts of information theory are applied to the problem of testing digital circuits by analyzing the information throughput of the circuit and an expression for the probability of detecting a hardware fault is derived.
Abstract: The concepts of information theory are applied to the problem of testing digital circuits. By analyzing the information throughput of the circuit an expression for the probability of detecting a hardware fault is derived. Examples are given to illustrate an application of the present study in designing efficient pattern generators for testing.
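
As a rough illustration of the idea (not the paper's derivation), the sketch below computes, for a toy combinational circuit, the output entropy under uniformly random input patterns and the probability that a random pattern detects a particular stuck-at fault; the circuit and the fault are hypothetical.

```python
# Relating information throughput to fault detection on a toy circuit:
# compute the output entropy under uniformly random input patterns and the
# fraction of patterns that distinguish the good circuit from a faulty one.
import itertools
import math

def good_circuit(a, b, c):
    return (a and b) or c            # simple AND-OR circuit

def faulty_circuit(a, b, c):
    return (a and 0) or c            # input b stuck-at-0

patterns = list(itertools.product([0, 1], repeat=3))

# Output entropy under uniform random patterns (information-throughput proxy).
p_one = sum(good_circuit(*p) for p in patterns) / len(patterns)
entropy = 0.0
for p in (p_one, 1 - p_one):
    if p > 0:
        entropy -= p * math.log2(p)

# Detection probability: fraction of patterns where good and faulty outputs differ.
p_detect = sum(good_circuit(*p) != faulty_circuit(*p) for p in patterns) / len(patterns)

print(f"output entropy: {entropy:.3f} bits, detection probability: {p_detect:.3f}")
```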

92 citations


Proceedings ArticleDOI
09 Mar 1981
TL;DR: This paper analyzes the effectiveness of individual paths for testing predicates in linearly domained programs and derives a measure showing that any predicate in such programs may be sufficiently tested using at most m+n+1 paths.
Abstract: Many testing methods require the selection of a set of paths over which testing is to be conducted. This paper presents an analysis of the effectiveness of individual paths for testing predicates in linearly domained programs. A measure is derived for the marginal advantage of testing another path after several paths have already been tested. This measure is used to show that any predicate in such programs may be sufficiently tested using at most m+n+1 paths, where m is the number of input values and n is the number of program variables.
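
One way to get a feel for the m+n+1 bound: a predicate that is linear in m input values and n program variables has m+n+1 unknown coefficients (one per input, one per variable, plus a constant), so m+n+1 independent evaluations determine it exactly. The sketch below demonstrates this counting argument on a hypothetical linear predicate; the paper's actual result concerns paths, not literal coefficient fitting.

```python
# Counting-argument illustration of the m+n+1 bound: recover all coefficients
# of a linear predicate over m inputs and n variables from m+n+1 evaluations.
import numpy as np

m, n = 2, 1                                      # two inputs, one program variable
rng = np.random.default_rng(0)

true_coeffs = np.array([3.0, -2.0, 5.0, 1.0])    # input weights, variable weight, constant

def predicate_lhs(point):
    """Left-hand side of the linear predicate 3*x1 - 2*x2 + 5*v + 1 (compared to 0)."""
    return true_coeffs[:-1] @ point + true_coeffs[-1]

# m + n + 1 independent test points suffice to recover every coefficient.
points = rng.standard_normal((m + n + 1, m + n))
A = np.hstack([points, np.ones((m + n + 1, 1))])
b = np.array([predicate_lhs(p) for p in points])
recovered = np.linalg.solve(A, b)
print("recovered coefficients:", np.round(recovered, 6))
```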

21 citations


Journal ArticleDOI
TL;DR: The strategies of adaptive testing developed to date are outlined, it is shown how they can be structurally grouped into three general categories, and design considerations for a test-specification subsystem are discussed as they relate to this categorization.
Abstract: Adaptive testing is a relatively new form of test administration in which a test is tailored to the individual taking it by choosing items most informative about that person. Methods for determining which items are most appropriate take on a variety of forms, some requiring extensive computation, and almost all requiring administration by a computer. The increasing availability of inexpensive microcomputer systems has made adaptive testing possible when access to larger computer systems is impractical. To make implementation of a variety of adaptive testing methods feasible on a microcomputer, a system efficient from both the examinee’s and the test constructor’s perspectives is necessary. This paper begins by briefly outlining the strategies of adaptive testing developed to date and showing how, structurally, they can be grouped into three general categories. Considerations in design of a test-specification subsystem are then discussed as they relate to this categorization. Finally, a specific implementation of a subsystem for use under the CP/M microcomputer operating system is described. Techniques used to make the extensive computations required by adaptive testing feasible on a microcomputer are presented.
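
A minimal sketch of the core adaptive-testing loop described above: under a simple Rasch (one-parameter logistic) item response model, repeatedly administer the remaining item with the highest Fisher information at the current ability estimate and update the estimate after each response. The item bank and the crude fixed-step update are illustrative; the paper's CP/M subsystem supports a range of strategies and far more careful estimation.

```python
# Item selection by maximum information under a Rasch model, with a crude
# fixed-step ability update after each simulated response.
import math

def prob_correct(theta, difficulty):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def item_information(theta, difficulty):
    p = prob_correct(theta, difficulty)
    return p * (1.0 - p)

item_bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]    # item difficulties
administered, theta = set(), 0.0                       # start at average ability

def next_item(theta):
    """Most informative item not yet administered."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

# Simulate three responses (1 = correct) with a fixed-step estimate update.
for response in (1, 1, 0):
    i = next_item(theta)
    administered.add(i)
    theta += 0.5 if response else -0.5
    print(f"administered item {i} (difficulty {item_bank[i]}), new theta estimate {theta:.2f}")
```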

19 citations


Dissertation
01 Jan 1981
TL;DR: A new theoretical framework for testing is presented that provides a mechanism for comparing the power of program testing methods based on the degree to which they approximate program verification; the thesis also describes a new method for generating test data from specifications expressed in predicate calculus.

Abstract: The theoretical works on program testing by Goodenough and Gerhart, Howden, and Geller are unified and generalized by a new theoretical framework for testing presented in this thesis. The framework provides a mechanism for comparing the power of methods of testing programs based on the degree to which the methods approximate program verification. The framework also provides a reasonable and useful interpretation of the notion that successful tests increase one's confidence in the program's correctness. Applications of the framework include confirmation of a number of common assumptions about practical testing methods. Among the assumptions confirmed is the need for generating tests from specifications as well as programs. On the other hand, a careful formal analysis of assumptions surrounding mutation analysis shows that the "competent programmer hypothesis" does not suffice to ensure the claimed high reliability of mutation testing. Responding to the confirmed need for testing based on specifications as well as programs, the thesis describes a new method for generating test data from specifications expressed in predicate calculus. Besides filling the gap just mentioned, the new method has the advantages that it is very general, working on any order of logic; it is easy enough to be of practical use; it can be automated to a great extent; and it methodically and consistently produces test data of obvious utility for the problems studied.
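
To give a flavor of specification-based test generation (not the thesis's actual method, which handles predicate-calculus specifications of any order), the sketch below partitions candidate inputs by which combination of specification predicates they satisfy and keeps one representative input per combination; the predicates shown are hypothetical.

```python
# Partition candidate inputs by which specification predicates they satisfy
# and keep one representative test input per combination.

# Hypothetical specification predicates for an absolute-value routine abs(x).
predicates = {
    "negative_input": lambda x: x < 0,
    "zero_input":     lambda x: x == 0,
}

candidates = range(-5, 6)
representatives = {}
for x in candidates:
    signature = tuple(name for name, p in predicates.items() if p(x))
    representatives.setdefault(signature, x)   # first representative of each class

for signature, x in representatives.items():
    print(f"predicate combination {signature or ('none',)}: test input {x}")
```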

13 citations


Book Chapter
01 Jan 1981
TL;DR: A type of on-chip test structure called a timing sampler is described which enables the designer to accurately measure when on-chip signal transitions occur, and experimental results show that the samplers are reasonably accurate.

Abstract: Testing VLSI chips presents a variety of problems, some of which can be solved by building on-chip testing structures. On-chip testing structures can allow a designer to test aspects of a circuit which might be difficult to test even with expensive test equipment, and moreover can provide reasonable testing hardware to designers who do not have access to sophisticated off-chip testing equipment. In this paper we describe a type of on-chip test structure called a timing sampler which enables the designer to accurately measure when on-chip signal transitions occur. The timing samplers we present are simple. They have been fabricated as part of a multi-project chip, and experimental results show that they are reasonably accurate as well.

4 citations


Proceedings ArticleDOI
04 May 1981
TL;DR: This paper describes a relatively low-cost investment undertaken by the Information Services Group of Chemical Bank for testing improvement and presents results of the program to date.
Abstract: Software testing is one of the most critical tasks performed by a large data processing organization. Testing is important in the development of new systems, but it may have an even greater impact on the maintenance of the production systems. In spite of this, testing is rarely approached in the same disciplined manner as are other software production activities. Perhaps as a result of this situation, an organization can often achieve significant improvements in both software testing effectiveness and efficiency through a relatively low-cost investment in testing methodologies, tools, and techniques. This paper describes just such an investment undertaken by the Information Services Group of Chemical Bank. Particular emphasis is placed on how this program for testing improvement was implemented, in addition to what it consists of. Finally, results of the program to date are presented and analyzed.

2 citations


Journal ArticleDOI
01 Jan 1981
TL;DR: An approach to data space analysis is introduced with an associated notation to identify the sensitivity of the software to a change in a specific data item.
Abstract: A complete software testing process must concentrate on examination of the software characteristics as they may impact reliability. Software testing has largely been concerned with structural tests, that is, tests of program logic flow. In this paper, a companion software test technique for the program data, called data space testing, is described. An approach to data space analysis is introduced with an associated notation. The concept is to identify the sensitivity of the software to a change in a specific data item. The collective information on the sensitivity of the program to all data items is used as a basis for test selection and generation of input values.
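
A minimal sketch of the sensitivity idea: perturb each data item of a fixed input one at a time, observe how much the program's output changes, and rank the items to guide test selection and input generation. The program under test and the unit perturbation are placeholders, not the paper's notation or procedure.

```python
# Rank data items by how strongly a unit change in each one moves the output.
def program_under_test(data):
    # Hypothetical computation that weights its data items unevenly.
    return 10.0 * data["rate"] + data["count"] + 0.01 * data["offset"]

baseline = {"rate": 1.5, "count": 42.0, "offset": 7.0}
base_output = program_under_test(baseline)

sensitivity = {}
for item in baseline:
    perturbed = dict(baseline)
    perturbed[item] += 1.0                         # unit change in one data item
    sensitivity[item] = abs(program_under_test(perturbed) - base_output)

# Data items most likely to need dedicated test values come out on top.
for item, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{item}: output change {s:.2f} per unit change")
```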

2 citations


ReportDOI
01 Dec 1981
TL;DR: This report presents a comprehensive quantitative methodology for software testing which measures test effectiveness at several levels of program coverage, establishes confidence levels in the correctness of the program at those levels, and develops quantitative acceptance criteria based on the resulting numerical specifications.

Abstract: This report is a comprehensive presentation of a quantitative methodology for software testing which measures test effectiveness at several different levels of program coverage and establishes confidence levels in the correctness of the program at these levels. Based on the resulting numerical specifications for testing a computer program, quantitative acceptance criteria are developed. These metrics are sensitive to cost and software criticality factors. The methodology, based on path analysis, is a natural extension of software engineering techniques to quality assurance for well-structured programs. It has been applied successfully, but several practical problems still remain. Application of this methodology to a software development program will provide control and visibility into the structure of the program and may result in improved reliability and documentation. Especially for the Air Force, when it acts only as a monitor, external to the software development process, the methodology provides a framework for proper planning and optimal allocation of test resources by quantifying the effectiveness of a test program and pre-determining the amount of testing required for achieving test objectives. With the proof of the fundamental theorem of program testing in 1975, which establishes testing as the equivalent of a proof of correctness for programs which satisfy some structural constraints, systematic testing has become possibly the only effective means to assure the quality of a program of non-trivial complexity.
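
As an illustration of the kind of coverage-based figures such a methodology produces, the sketch below reports path coverage for a test suite and a toy confidence number that weights paths by how often they are expected to execute. Both the execution profile and the confidence formula are invented for illustration and are not the report's metrics.

```python
# Path coverage and a toy execution-profile-weighted confidence figure.
feasible_paths = {"P1", "P2", "P3", "P4", "P5"}
tested_paths   = {"P1", "P2", "P4"}
execution_profile = {"P1": 0.50, "P2": 0.25, "P3": 0.15, "P4": 0.07, "P5": 0.03}

coverage = len(tested_paths) / len(feasible_paths)
# Probability-weighted confidence: share of expected executions on tested paths.
confidence = sum(execution_profile[p] for p in tested_paths)

print(f"path coverage:       {coverage:.0%}")
print(f"weighted confidence: {confidence:.0%}")
```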

1 citation


01 Jan 1981
TL;DR: In this paper, the authors explore the underlying problems associated with traditional test data verification and propose an approach to verify the test data for use in production testing of printed circuit boards (PCB).
Abstract: Increased product complexity has touched all aspects of testing: hardware, software, and procedures. The test data verification process is not immune to this increased complexity. Qualification, validation, and final inspection of the test data prior to its acceptance for use in production testing of printed circuit boards has matured to a science. Large test data volumes, circuit density, technology mixing, sophisticated testers, as well as the extensive product development process, have all contributed to this complexity. Recently, major emphasis and attention have been focused on this critical process. This paper explores the underlying problems associated with traditional test data verification. An effec