Showing papers on "White-box testing" published in 1992


Journal ArticleDOI
TL;DR: The major conclusion from this investigation is that by explicitly testing for simple faults, the authors are also implicitly testing for more complicated faults, giving confidence that fault-based testing is an effective way to test software.
Abstract: Fault-based testing strategies test software by focusing on specific, common types of faults. The coupling effect hypothesizes that test data sets that detect simple types of faults are sensitive enough to detect more complex types of faults. This paper describes empirical investigations into the coupling effect over a specific class of software faults. All of the results from this investigation support the validity of the coupling effect. The major conclusion from this investigation is the fact that by explicitly testing for simple faults, we are also implicitly testing for more complicated faults, giving us confidence that fault-based testing is an effective way to test software.
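As a rough illustration of the fault-based testing idea above (the function, mutants, and tests below are invented for illustration, not the mutation operators or subject programs used in the paper's experiments), the following sketch builds two single-fault mutants of a tiny function, checks that a small test set kills both, and then checks the same test set against a mutant combining both faults, as the coupling effect predicts.

```python
# Illustrative sketch of fault-based (mutation) testing and the coupling effect.
# The function, mutants, and test points are hypothetical examples.

def original(x, y):
    return x * 2 if x > 0 else y

simple_mutants = [
    lambda x, y: x + 2 if x > 0 else y,   # arithmetic operator replacement
    lambda x, y: x * 2 if x >= 0 else y,  # relational operator replacement
]
complex_mutant = lambda x, y: x + 2 if x >= 0 else y  # both simple faults combined

tests = [(1, 2), (0, 5), (-3, 4)]

def kills(mutant):
    return any(mutant(x, y) != original(x, y) for x, y in tests)

assert all(kills(m) for m in simple_mutants)            # test set detects the simple faults
print("complex mutant killed:", kills(complex_mutant))  # coupling effect: typically also killed
```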

394 citations


Proceedings ArticleDOI
09 Nov 1992
TL;DR: The authors present a methodology for regression testing of modules where dependencies due to both control flow and data flow are taken into account and a firewall concept for the data-flow aspect of software change is defined.
Abstract: The authors present a methodology for regression testing intended for functional or system testers. The methodology involves regression testing of modules where dependencies due to both control flow and data flow are taken into account. The control-flow dependency is modeled as a call graph and a firewall defined to include all affected modules which must be retested. Global variables are considered as the remaining data-flow dependency to be modeled. An approach to testing and regression testing of these global variables is given, and a firewall concept for the data-flow aspect of software change is defined.
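A minimal sketch of the firewall idea described above, under illustrative assumptions (the module names, call graph, and one-level inclusion rule stand in for the paper's actual algorithm): starting from the changed modules, retest their direct callers and callees, plus any module that shares a global variable with a changed module.

```python
# Sketch of a regression-test "firewall": control-flow dependence via the call
# graph, data-flow dependence via shared globals. All data are illustrative.

calls = {                      # call graph: module -> modules it calls
    "ui": ["core"],
    "core": ["parser", "db"],
    "parser": [],
    "db": [],
    "report": ["db"],
}
globals_used = {               # module -> global variables it reads or writes
    "core": {"config"},
    "parser": {"config"},
    "report": {"totals"},
    "db": {"totals"},
}

def firewall(changed):
    fw = set(changed)
    for m, callees in calls.items():                      # control-flow firewall
        if m in changed or any(c in changed for c in callees):
            fw.add(m)
            fw.update(callees)
    touched = set().union(*(globals_used.get(m, set()) for m in changed))
    fw.update(m for m, g in globals_used.items() if g & touched)  # data-flow firewall
    return fw

print(sorted(firewall({"core"})))   # e.g. ['core', 'db', 'parser', 'ui']
```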

154 citations


Journal ArticleDOI
B. Korel1
TL;DR: In this approach, the path selection stage is eliminated and test data are derived based on the actual execution of the program under test and function minimization methods.
Abstract: Test data generation in program testing is the process of identifying a set of test data which satisfies a given testing criterion. Existing pathwise test data generators proceed by selecting program paths that satisfy the selected criterion and then generating program inputs for these paths. One of the problems with this approach is that unfeasible paths are often selected; as a result, significant computational effort can be wasted in analysing those paths. In this paper, an approach to test data generation, referred to as a dynamic approach for test data generation, is presented. In this approach, the path selection stage is eliminated. Test data are derived based on the actual execution of the program under test and function minimization methods. The approach starts by executing a program for an arbitrary program input. During program execution for each executed branch, a search procedure decides whether the execution should continue through the current branch or an alternative branch should be taken. If an undesirable execution flow is observed at the current branch, then a real-valued function is associated with this branch, and function minimization search algorithms are used to locate values of input variables automatically, which will change the flow of execution at this branch.
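The following sketch illustrates the flavour of the dynamic approach (the toy branch, the form of the branch function, and the local search are simplifications for illustration, not Korel's actual algorithm): to drive execution through the branch x[0] > x[1] + 10, a real-valued function that is non-positive exactly when the branch is taken is minimized over the program input.

```python
# Sketch of dynamic test-data generation by branch-function minimization.
# Toy example only: the branch, branch function, and search step are assumptions.

def branch_function(x):
    return (x[1] + 10) - x[0]          # <= 0 exactly when "x[0] > x[1] + 10" holds

def minimize(x, f, step=1.0, max_iter=1000):
    x = list(x)
    for _ in range(max_iter):
        if f(x) <= 0:
            return x                   # input found that takes the target branch
        best = min(
            ([xi + d if i == j else xi for j, xi in enumerate(x)]
             for i in range(len(x)) for d in (-step, step)),
            key=f,
        )
        if f(best) >= f(x):
            step *= 2                  # crude escape from a plateau
        x = best
    return None

print(minimize([0.0, 0.0], branch_function))   # e.g. an input with x[0] > x[1] + 10
```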

93 citations


Journal ArticleDOI
TL;DR: It is argued that until random testing of a million points becomes practical, testing for quality is only a poor competitor for other heuristic defect-detection methods.
Abstract: The relationship between software testing and reliability is discussed. Two kinds of reliability models, reliability-growth models, which are applied during debugging, and reliability models, which are applied after debugging, are described. Several reasons for the failure of conventional reliability theory in software engineering are presented. It is argued that until random testing of a million points becomes practical, testing for quality is only a poor competitor for other heuristic defect-detection methods.

88 citations


Journal ArticleDOI
TL;DR: A concurrent path model is presented to model the execution behaviour of concurrent programs, and the potential reliability of path analysis testing for concurrent programs is assessed.
Abstract: Path analysis testing is a widely used approach to program testing. However, the conventional path analysis testing method is designed specifically for sequential program testing; it is inapplicable to concurrent program testing because of the existence of multi-loci of control and task synchronizations. A path analysis approach to concurrent program testing is proposed. A concurrent path model is presented to model the execution behaviour of concurrent programs. In the model, an execution of a concurrent program is seen as involving a concurrent path (which is comprised of the paths of all concurrent tasks), and the tasks' synchronizations are modelled as a concurrent route to traverse the concurrent path involved in the execution. Accordingly, testing is a process to examine the correctness of each concurrent route along all concurrent paths of concurrent programs. Examples are given to demonstrate the effectiveness of path analysis testing for concurrent programs and some practical issues of path analysis testing, namely, test path selection, test generation, and test execution, are discussed. Moreover, the errors of concurrent programs are classified into three classes: domain errors, computation errors, and missing path errors, similar to the error classification for sequential programs. Based on the error classification, the potential reliability of path analysis testing for concurrent programs is assessed.

55 citations


Proceedings ArticleDOI
20 Sep 1992
TL;DR: Some of the issues that had to be addressed in the development of a comprehensive system for delay testing in a Level Sensitive Scan Design environment are discussed.
Abstract: Delay testing, as opposed to static testing, introduces the parameter of time as a new variable. Time impacts the way defects manifest themselves and are modeled as faults, as well as how defect sizes and system timing statistics interact with tester timing constraints in the detection of causes for dynamic system malfunctions. This paper briefly discusses some of the issues that had to be addressed in the development of a comprehensive system for delay testing in a Level Sensitive Scan Design environment.

49 citations


Journal ArticleDOI
TL;DR: This paper includes a formal specification of both the all-du-paths criterion and the software tools used to estimate a minimal number of test cases necessary to meet the criterion.
Abstract: The all-du-paths structural testing criterion is one of the most discriminating of the data-flow testing criteria. Unfortunately, in the worst case, the criterion requires an intractable number of test cases. In a case study of an industrial software system, we find that the worst-case scenario is rare. Eighty percent of the subroutines require ten or fewer test cases. Only one subroutine out of 143 requires an intractable number of tests. However, the number of required test cases becomes tractable when using the all-uses criterion. This paper includes a formal specification of both the all-du-paths criterion and the software tools used to estimate a minimal number of test cases necessary to meet the criterion.
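The intractable worst case mentioned above comes from loop-free paths multiplying across independent decisions between a definition and a use; the toy count below (an illustration, not the paper's subject system) shows why all-du-paths can explode while all-uses stays tractable.

```python
# Worked example: k independent if/else decisions between a definition of x and
# its use give 2**k distinct du-paths, but only one (def, use) pair to cover.

k = 20                      # decisions between "x = ..." and "use(x)" (illustrative)
du_paths = 2 ** k           # all-du-paths: one test per loop-free path def -> use
du_pairs = 1                # all-uses: one test for the single def-use pair
print(du_paths, du_pairs)   # 1048576 vs 1
```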

47 citations


01 Jan 1992
TL;DR: In this article, the authors describe a new approach to statistical testing by modeling software usage and the testing process as finite state, discrete parameter Markov chains using the software specification document as a guide.
Abstract: Cleanroom Software Engineering (40) is a new methodology that has evolved from structured programming into a promising technology for high quality software development. Cleanroom has three major components: specification, design with verification, and statistical certification testing. This dissertation describes a new approach to statistical testing by modeling software usage and the testing process as finite state, discrete parameter Markov chains. Using the software specification document as a guide, a Markov chain is constructed which models the usage of the specified software. This time homogeneous chain is used to compute stochastic properties of pertinent usage random variables before any code development begins and to generate a set of "statistically typical" test sequences. These sequences, along with any failure data they produce upon execution, are used as a training set for a second Markov chain which models the behavior of the software during testing. This second chain is updated as testing progresses and is used to compute software quality measures, such as the reliability and mean time between failure at any stage of the testing process. Comparison of the two chains is by an information theoretic discriminant function based on the ergodic properties of the stochastic processes. Among its uses this comparison yields an analytical stopping criterion for the testing process. The latter chain is updated based upon appropriate expected values to obtain a third chain which is used to predict future software quality. The model presented is a complete certification strategy encompassing usage modeling, statistical testing, and reliability analysis.
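A small sketch of the usage-chain idea (the states, transition probabilities, and the crude reliability estimate below are invented for illustration and are not the dissertation's actual certification model): a Markov chain derived from the specification generates statistically typical test sequences, and failure data from executing them feed a reliability measure.

```python
# Sketch of statistical usage testing with a Markov chain usage model.
# States, probabilities, and the reliability estimate are illustrative assumptions.
import random

usage_chain = {
    "Start":  [("Login", 1.0)],
    "Login":  [("Query", 0.7), ("Logout", 0.3)],
    "Query":  [("Query", 0.5), ("Logout", 0.5)],
    "Logout": [("End", 1.0)],
}

def generate_sequence(chain, rng):
    state, seq = "Start", []
    while state != "End":
        states, probs = zip(*chain[state])
        state = rng.choices(states, probs)[0]
        seq.append(state)
    return seq

rng = random.Random(0)
runs = [generate_sequence(usage_chain, rng) for _ in range(1000)]
failures = 0                                                 # would come from executing the software
reliability = (len(runs) - failures + 1) / (len(runs) + 2)   # crude Laplace-style estimate
print(len(runs), round(reliability, 4))
```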

47 citations


Proceedings ArticleDOI
01 Nov 1992
TL;DR: The authors describe the design and implementation of SYNTEST, a system for the design of self-testable VLSI circuits from behavioral description that consists of several algorithmic synthesis tools for scheduling, testable allocation, and optimum test points selection.
Abstract: The authors describe the design and implementation of SYNTEST, a system for the design of self-testable VLSI circuits from behavioral description. SYNTEST consists of several algorithmic synthesis tools for scheduling, testable allocation, and optimum test points selection. A key feature in SYNTEST is the tight interaction between the system tools: the scheduler, the allocator, and the test tool. The system uses a technology library for optimizing the original structure. All tools interact with each other as well as with the user through an X graphical interface. This provides a better design environment and allows for more designer intervention.

37 citations


Proceedings ArticleDOI
07 Jan 1992
TL;DR: Software is either correct or incorrect in design to a specification in contrast to hardware which is reliable to a certain level to a correct design, and certifying the correctness of such software requires two conditions, namely statistical testing with inputs characteristic of actual usage and no failures in the testing.
Abstract: Software is either correct or incorrect in design to a specification, in contrast to hardware, which is reliable to a certain level to a correct design. Software of any size or complexity can only be tested partially, and typically a very small fraction of possible inputs are actually tested. Certifying the correctness of such software requires two conditions, namely (1) statistical testing with inputs characteristic of actual usage, and (2) no failures in the testing. If any failures arise in testing or subsequent usage, the software is incorrect, and the certification invalid. If such failures are corrected, the certification process can be restarted, with no use of previous testing.
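One standard calculation that accompanies failure-free statistical testing of this kind (a textbook bound, not necessarily the one used in the paper): if N randomly selected, usage-representative tests all pass, then with confidence C the per-run failure probability is at most 1 - (1 - C)^(1/N).

```python
# Failure-free testing bound: with confidence C, after N passing usage-representative
# tests, the per-run failure probability p satisfies p <= 1 - (1 - C)**(1/N).

def failure_rate_bound(n_passed_tests, confidence=0.95):
    return 1.0 - (1.0 - confidence) ** (1.0 / n_passed_tests)

for n in (100, 1000, 10000):
    print(n, round(failure_rate_bound(n), 5))   # ~0.0295, 0.00299, 0.0003
```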

31 citations


Patent
24 Jul 1992
TL;DR: In this article, the authors present a testing architecture for complex integrated systems, such as avionics systems, that provides for a selection of on-line and off-line tests to be run on a complex system, at selected hierarchical levels.
Abstract: Apparatus, and a related method, for testing complex integrated systems, such as avionics systems. A unified approach to the testing architecture provides for a selection of on-line and off-line tests to be run on a complex system, at selected hierarchical levels. Built-in test logic is provided on each component or chip to be tested, and additional built-in test logic is provided at a module level, where a module includes multiple chips or components of various types. Module testing is, in turn, controlled in part by a maintenance processor responsible for multiple modules, and the maintenance processor is controlled in part by a maintenance executive. The testing architecture can be configured to provide a desired mix of performance testing, processor testing and physical testing of components.

Proceedings ArticleDOI
20 Sep 1992
TL;DR: The Boundary Scan technique and the Unified Built-In Self-Test scheme are combined in order to propose a strategy suitable for the manufacturing, the field testing and the concurrent error detection on integrated circuits and board interconnects.
Abstract: In this paper the Boundary Scan technique and the Unified Built-In Self-Test scheme are combined in order to propose a strategy suitable for the manufacturing, the field testing and the concurrent error detection on integrated circuits and board interconnects. Such unification of the off-line and the on-line testing plays a major role in the design for broad testability of self-checking boards. This unified test strategy is primarily aimed at critical application designs: transportation systems, nuclear plants, etc., which are the main targets. This architecture, oriented primarily towards the off-line test, provides efficient means of testing of circuits and board interconnects. The association of BIST and BS techniques leads to a significant reduction of the automatic test equipment complexity, due to lower memory requirements and weaker test time constraints. The other part of the tests necessary for circuits and boards is related to the on-line testing capability, a feature of great importance for systems where poor functioning can, for example, lead to a disaster. This is the case of railway, automotive and nuclear systems, where errors must be detected before they contaminate other units and at a point where basic repair is still possible. The self-checking circuit implementation, for making on-line testing possible, is based on the encoding of functional block outputs and on the verification of

Journal ArticleDOI
TL;DR: In this paper, the authors report on unit testing experiments performed on a program which is a piece of software from the nuclear industry; five test sets of each type have been automatically generated, and mutation analysis is used to assess their efficiency with respect to error detection.

01 Jan 1992
TL;DR: An approach to integrate support for program mutation, a well-known and effective software testing technique, directly into a compiler, is presented and a prototype patch-generating C compiler, and mutation-based software testing environment utilizing this paradigm are constructed in order to demonstrate the approach.
Abstract: Traditionally, compilers available to the software developer/tester have only supported two software testing techniques, statement and branch coverage. However, during compilation, sufficient syntactic and semantic information is available to provide support for additional testing techniques. This dissertation presents an approach to integrate support for program mutation, a well-known and effective software testing technique, directly into a compiler. The paradigm permits the intermediate states of computation within a machine-executable program to be monitored or modified subsequent to compilation, without recompiling the program. Program mutations are performed in an efficient manner on native machine-code, and direct support is provided for effective mutant execution on MIMD architectures. As such, the paradigm provides facilities for the development of practical tools that allow support for program mutation, while improving the cost-effectiveness of both experimental and testing applications. The paradigm is based upon program patch generation and application. A prototype patch-generating C compiler, and mutation-based software testing environment utilizing this paradigm, have been constructed in order to demonstrate the approach. The prototype implementation supports the manipulation of separately compiled programs and, therefore, permits potentially large software systems to be tested. A set of experimental results compares the effectiveness of the compiler-integrated approach, employed by the prototype, to that employed by existing mutation-based software testing environments in providing support for program mutation.

Proceedings ArticleDOI
20 Sep 1992
TL;DR: The economic impact of Type I test errors that fail good product in electronic systems and PC boards is evaluated and a model that predicts the impact on quality of bonepile testing is presented.
Abstract: The economic impact of Type I test errors that fail good product in electronic systems and PC boards is evaluated. Data show that an average of 46 percent of all reported component failures at the board and system level are, in fact, not failures. Data are given in several formats, and a model that predicts the impact of bonepile testing on quality is presented. This model is used to predict the quality impact for three different cases. The data also show that accurate diagnostic techniques should be designed into the product.
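A quick back-of-the-envelope reading of the 46 percent figure, using an assumed volume of reported failures purely for illustration:

```python
# Simple arithmetic with the paper's headline figure. The volume of reported
# failures is an assumption, not a number from the paper.

reported_failures = 1000          # assumed volume for illustration
false_failure_rate = 0.46         # paper's reported average of false failures
false_failures = reported_failures * false_failure_rate
print(false_failures, reported_failures - false_failures)   # 460 good units, 540 real failures
```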

Proceedings ArticleDOI
20 Sep 1992
TL;DR: This paper proposes a framework for testing and diagnosing in-system, IEEE Std 1149.1 compliant boards, and defines the mechanisms used to describe, store, execute the tests, perform diagnosis, and the interface to the system diagnostician.
Abstract: This paper proposes a framework for testing and diagnosing in-system, IEEE Std 1149.1 compliant boards. It defines the mechanisms used to describe, store, execute the tests, perform diagnosis, and the interface to the system diagnostician. Using BIST and Boundary-Scan in-system makes it possible to achieve high test quality, comparable to that achieved during board test. Diagnostic accuracy and resolution can also be enhanced.

Journal ArticleDOI
TL;DR: A systematic approach to the regression testing aspect of software maintainability is presented and it is stated that regression testing is important at the unit, integration, and system testing levels.
Abstract: A systematic approach to the regression testing aspect of software maintainability is presented. It is stated that regression testing is important at the unit, integration, and system testing levels. Software development teams usually have responsibility for unit and integration testing, but do not consistently apply regression testing at these levels when they make changes, and often do not even systematically retain test data. System or functional testers, on the other hand, are systematic about keeping test data and applying regression testing. This costs more than detecting regression errors earlier would. Results from a research project to evaluate regression testing concepts are discussed.

Journal ArticleDOI
TL;DR: It is shown that previous informal arguments asserting the superiority of adaptive methodologies are formally confirmed, and several standard metrics are seen to serve as component measures for the intricacies of testing.
Abstract: The futility of using a general-purpose metric to characterise ‘the’ complexity of a program has recently been argued to support the design of specific metrics for the different stages of the software life-cycle. An analysis of the module testing activity is performed, providing evidence of the absurdity of all-purpose metrics, as well as a methodical means with which to measure testing complexity. Several standard metrics are seen to serve as component measures for the intricacies of testing. The methodology is applied to compare traditional and adaptive means of testing. It is shown that previous informal arguments asserting the superiority of adaptive methodologies are formally confirmed.

Proceedings ArticleDOI
07 Oct 1992
TL;DR: A new analysis methodology based on the black box test design and white box analysis is proposed, intended to support the reduction of testing costs and enhancement of software quality by improving test selection, eliminating test redundancy, and identifying error prone source files.
Abstract: Studies black box testing and verification of large systems. Testing data is collected from several test teams. A flat, integrated database of test, fault, repair, and source file information is built. A new analysis methodology based on the black box test design and white box analysis is proposed. The methodology is intended to support the reduction of testing costs and enhancement of software quality by improving test selection, eliminating test redundancy, and identifying error prone source files. Using example data from AT&T systems, the improved analysis methodology is demonstrated.

Journal ArticleDOI
TL;DR: The aim of this paper is to propose a new approach to integration testing to transfer and adapt module testing methods to the level of integration testing for control flow and data flow oriented testing methods.
Abstract: The testing of modular software systems can be divided into a module testing phase and an integration testing phase. While module testing checks the modules separately, integration testing examines the use of interfaces in a modular system. Integration testing allows errors to be found which cannot be found by module testing. The aim of this paper is to propose a new approach to integration testing. The main principle is to transfer and adapt module testing methods to the level of integration testing. The approach is described for control flow and data flow oriented testing methods. To decrease the testing effort and increase the probability of finding errors, integration testing can be limited to statically detectable anomalous applications of interfaces. This is accomplished by the combination of static analysis with dynamic execution and by the possibility of using information already provided by the module tests. To find further test data to execute interfaces, symbolic execution is applied. One great advantage here is to prove whether statically determined interface anomalies can be dynamically executed and can therefore occur at all.

Dissertation
01 Jan 1992
TL;DR: A mathematical model linking yield and reliability is developed to answer the question of the optimum size of a unit, and the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and periodic testing on reliability are studied.
Abstract: The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments in addition to the prospect of single-chip systems will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and it requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. The consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and so that it is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer such a question and also to study the effects of such parameters as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by the application of the model to a typical example. Another important result concerns the effect of periodic testing on reliability. It is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.

Proceedings Article
14 Jan 1992
TL;DR: The author describes, in outline, a prototype testing tool for algebraic specifications, OBJTEST, built around the ObjEx system; its two principal facets are the automatic generation of 'exhaustive' sets of test expressions from a specification, followed by the use of these test expressions in mutation testing of the given specification.
Abstract: Algebraic specifications involve the development of 'axioms' or equations to model the behaviour of systems. The technique is one example of a formal method of specification. By using the equations to drive a process of term-rewriting, test expressions can be evaluated, thus providing an execution facility. Such animation certainly helps in checking typographical and notational errors. However, there is still a need for thorough testing of algebraic specifications to uncover more subtle errors. The author describes, in outline, a prototype testing tool for algebraic specifications, OBJTEST, built around the ObjEx system. The two principal facets of the tool are the automatic generation of 'exhaustive' sets of test expressions from a specification, followed by the use of these test expressions in mutation testing of the given specification.

Dissertation
01 Jan 1992
TL;DR: The research in this thesis addresses the subject of regression testing by developing a technique for selective revalidation which can be used during software maintenance to analyse and retest only those parts of the program affected by changes.
Abstract: The research in this thesis addresses the subject of regression testing. Emphasis is placed on developing a technique for selective revalidation which can be used during software maintenance to analyse and retest only those parts of the program affected by changes. In response to proposed program modifications, the technique assists the maintenance programmer in assessing the extent of the program alterations, in selecting a representative set of test cases to rerun, and in identifying any test cases in the test suite which are no longer required because of the program changes. The proposed technique involves the application of code analysis techniques and operations research. Code analysis techniques are described which derive information about the structure of a program and are used to determine the impact of any modifications on the existing program code. Methods adopted from operations research are then used to select an optimal set of regression tests and to identify any redundant test cases. These methods enable software, which has been validated using a variety of structural testing techniques, to be retested. The development of a prototype tool suite, which can be used to realise the technique for selective revalidation, is described. In particular, the interface between the prototype and existing regression testing tools is discussed. Moreover, the effectiveness of the technique is demonstrated by means of a case study and the results are compared with traditional regression testing strategies and other selective revalidation techniques described in this thesis.
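The operations-research step can be pictured as a covering problem; the sketch below (with invented test cases, coverage data, and a greedy heuristic standing in for the thesis's actual optimization) selects a small set of regression tests covering the entities affected by a change and flags tests that touch none of them as candidates for omission.

```python
# Greedy set-cover sketch of selective revalidation. All data and the inclusion
# rules are illustrative assumptions, not the thesis's exact formulation.

coverage = {                      # test case -> program entities it covers
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3"},
    "t3": {"f4"},
    "t4": {"f5"},
}
affected = {"f1", "f2", "f3"}     # entities impacted by the proposed modification

def select_tests(coverage, affected):
    remaining, chosen = set(affected), []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break                 # some affected entity is not covered by any test
        chosen.append(best)
        remaining -= coverage[best]
    redundant = [t for t in coverage if not coverage[t] & affected]
    return chosen, redundant, remaining

print(select_tests(coverage, affected))   # (['t1', 't2'], ['t3', 't4'], set())
```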

Proceedings ArticleDOI
08 Apr 1992
TL;DR: The effectiveness of local exhaustive testing is demonstrated on a collection of programs that are all implementations of a single specification, the proportional navigation problem, by identifying certain input points as "critical" and then testing all inputs close to those points.
Abstract: We introduce local exhaustive testing as a simple strategy for creating test cases that uncover faults (a deficiency in the code that is responsible for incorrect behavior) with a higher probability than tests chosen randomly. To use local exhaustive testing, we identify certain input points as "critical," and then test all inputs close to that point. We expect that this strategy will be particularly effective in applications that include an emphasis on geometric or other regular organization. We demonstrate the effectiveness of local exhaustive testing on a collection of programs that are all implementations of a single specification, the proportional navigation problem.
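A minimal sketch of the strategy (the program under test, oracle, critical point, and radius are invented for illustration and are unrelated to the proportional-navigation programs studied in the paper): enumerate every input in a small box around a point judged critical and compare the implementation against an oracle.

```python
# Local exhaustive testing around a critical point. The seeded off-by-one fault
# at the region boundary is detected by inputs near the boundary point (10, 0).
from itertools import product

def program_under_test(x, y):
    return abs(x) + abs(y) < 10        # seeded off-by-one fault at the boundary

def oracle(x, y):
    return abs(x) + abs(y) <= 10       # stands in for the specification

critical_point, radius = (10, 0), 2    # a point on the region boundary, judged critical

failures = [
    (x, y)
    for x, y in product(range(critical_point[0] - radius, critical_point[0] + radius + 1),
                        range(critical_point[1] - radius, critical_point[1] + radius + 1))
    if program_under_test(x, y) != oracle(x, y)
]
print(failures)   # boundary points such as (10, 0) expose the fault
```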

Proceedings ArticleDOI
F.C. Wang1
21 Sep 1992
TL;DR: A testing strategy that links design with test for mixed-signal devices in order to solve the testing problems at their roots is described and to incorporate design for testability early in the design phase and to use a special architecture for a mixed-mode simulator that facilitates testing.
Abstract: A testing strategy that links design with test for mixed-signal devices in order to solve the testing problems at their roots is described. Test generation is achieved through mixed-mode simulation of both analog and digital circuitry on a single chip. The key to this strategy is to incorporate design for testability early in the design phase and to use a special architecture for a mixed-mode simulator that facilitates testing. The simulation environment, rules for analog-digital signal conversion at the interface, timing delay, and synchronization are described.

ReportDOI
01 Jun 1992
TL;DR: The techniques, assessment, and management of unit analysis and testing are examined, along with a framework for assessing new techniques proposed in the literature.
Abstract: : This module examines the techniques, assessment, and management of unit analysis and testing. Analysis strategies are classified according to the view they take of the software: textual, syntactic, control flow, data flow, computation flow, or functional. Testing strategies are categorized according to whether their coverage goal is specification-oriented, implementation-oriented, error-oriented, or a combination of these. Mastery of the material in this module allows the software engineer to define, conduct, and evaluate unit analyses and tests and to assess new techniques proposed in the literature.

Proceedings ArticleDOI
11 Oct 1992
TL;DR: A unifying procedure, called the implicit tree search algorithm (ITSA), is used to fully exploit the test parallelism so that both the total testing time and test resources are optimized.
Abstract: Concurrent testing is used to reduce overall self-testing time and further exploit the power of the built-in self-test (BIST) technique. A unifying procedure, called the implicit tree search algorithm (ITSA), is used to fully exploit the test parallelism so that both the total testing time and test resources are optimized. The operation of the ITSA is demonstrated by detailed examples.

01 Jan 1992
TL;DR: Based on the knowledge base, the regression testing system is intended to assist in obtaining information such as achieved test coverage and test cases to be rerun, and planning the test process for revalidation.
Abstract: Regression testing is retesting a program to revalidate its correctness after a code change. This thesis centers on how to apply knowledge-based methodologies to extending existing regression testing techniques and tools into more effective supporting environments. For the required knowledge base design, we are using an appropriate model from the concept of frame based systems and truth maintenance systems. Frames represent test-related objects such as test cases, control flow, data flow, and input/output domains. The truth maintenance system is intended to record logical inferences and dependencies among test-related objects (among others, the dependencies denoting how and which objects at one level of abstraction are affected by others at another level due to a code change). Based on the knowledge base, we intend the regression testing system to assist in (1) obtaining information such as achieved test coverage and test cases to be rerun, (2) selecting appropriate tools for running tests and gathering information, and (3) planning the test process for revalidation.

Proceedings ArticleDOI
07 Apr 1992
TL;DR: A methodology is presented which automatically embeds a self-test architecture into hierarchically designed circuits; the concept is embedded within the boundary-scan architecture and the implementation has been integrated into a commercial design framework.
Abstract: A methodology is presented which automatically embeds a self-test architecture into hierarchically designed circuits. For each module of the design hierarchy, the automatic method for the insertion of self-test registers as well as the synthesis of a test control unit is presented. These self-testable modules are then combined for arbitrary hierarchy levels using test management units. The concept is embedded within the boundary-scan architecture and the implementation has been integrated into a commercial design framework.

Proceedings ArticleDOI
20 Sep 1992
TL;DR: In the future, synchronous memory may be included in this category.
Abstract: In the future, synchronous memory may be included in this category. On the other hand, process cost will be increasing at the rate of 1 (1M), 2.6 (4M), 6.7 (16M), 20 (64M), 33 (256M), 66 (1G). Memory tester costs will be at the rate of 1 (1M), 2 (4M), 4 (16M), 8 (64M), 16 (256M), 32 (1G). Memory test time, in the case of no reduction scheme, will be increasing as 1 (1M), 3.2 (4M), 9.6 (16M), 30 (64M), 90 (256M), 270 (1G). The test cost will then be 1 (1M), 6.4 (4M), 38.4 (16M), 240 (64M), 1440 (256M), 8640 (1G), and the test cost ratio to the total cost will be 5% (1M), 11% (4M), 23% (16M), 39% (64M), 70% (256M), 87% (1G). These ratios are very high and not acceptable. Is there any way to keep or decrease the test cost ratio to total costs below 5%?
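The quoted figures fit a simple pattern: relative test cost equals tester cost times test time, and the quoted cost ratios are reproduced if test cost is assumed to be 5% of total cost at the 1M generation (that 5% baseline is an inference from the numbers, not a formula stated in the text). The sketch below reproduces the arithmetic.

```python
# Reproduce the abstract's test-cost scaling. The 5% baseline at 1M is an
# assumption inferred from the quoted ratios.

generations  = ["1M", "4M", "16M", "64M", "256M", "1G"]
process_cost = [1, 2.6, 6.7, 20, 33, 66]
tester_cost  = [1, 2, 4, 8, 16, 32]
test_time    = [1, 3.2, 9.6, 30, 90, 270]

for g, p, c, t in zip(generations, process_cost, tester_cost, test_time):
    test_cost = c * t                                    # 1, 6.4, 38.4, 240, 1440, 8640
    ratio = 0.05 * test_cost / (0.05 * test_cost + 0.95 * p)
    print(f"{g:>4}: test cost {test_cost:7.1f}, ratio {ratio:5.1%}")   # ~5%, 11%, 23%, 39%, 70%, 87%
```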