
Showing papers on "Test data" published in 1981


01 Apr 1981
TL;DR: In this article, static strength methodology and evaluations of joint static and fatigue test data are reported; correlations with analytic predictions are included, and data trends are discussed relative to joint strength and failure mode.
Abstract: Static strength methodology and evaluations of joint static and fatigue test data are reported. Analytic studies complement methodology development and illustrate the need for detailed stress analysis and the utility of the developed 'Bolted Joint Stress Field Model' (BJSFM) procedure, and define model limitations. For static strength data, correlations with analytic predictions are included. Data trends in all cases are discussed relative to joint strength and failure mode. For joint fatigue studies, data trends are discussed relative to life, hole elongation, and failure mode behavior.

74 citations


Patent
Maurice Thomas Mcmahon
02 Jul 1981
TL;DR: In this paper, the Level Sensitive Scan Design (LSSD) discipline is applied to chip-in-place and interchip wiring testing of a package, and the capability of scanning data into and out of the package SRLs (shift register latches) is required for the total package.
Abstract: Design rules and test structure are used to implement machine designs to thereby obviate, during testing, the need for mechanical probing of the chip, multichip module, card or board at a higher level of package. The design rules and test structure also provide a means of restricting the size of logic partitions on large logical structures to facilitate test pattern generation. A test mechanism is available on every chip to be packaged to drive test data on all chip outputs and observe test data on all chip inputs, independent of the logic function performed by the chip. A control mechanism is also provided to allow a chip to either perform its intended function or to act as a testing mechanism during package test. It is intended that the test mechanism built into every chip will be used in place of mechanical probes to perform a chip-in-place test and interchip wiring test of the package. The intent of the design rules is to design chips such that each chip can be "isolated" for testing purposes through the pins (or other contacts) of a higher level package containing such chips. It is also required that the "Level Sensitive Scan Design" (LSSD) discipline, or rules, be followed for each chip and for the package clock distribution network. Further, the LSSD rules which ensure the capability of scanning data into and out of the package SRLs (shift register latches) must be satisfied for the total package.

72 citations
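
A minimal sketch of the scan principle behind LSSD, assuming a toy 3-bit combinational block: a test pattern is shifted serially into a chain of shift register latches (SRLs), the logic response is captured, and the result is shifted back out, so that internal nets can be observed without mechanical probing. The class, function names, and logic are illustrative, not from the patent.

```python
# A toy scan-chain simulation; names and the 3-bit logic block are illustrative.

def combinational_logic(bits):
    a, b, c = bits
    return [a & b, b ^ c, a | c]          # stand-in for the logic under test

class ScanChain:
    def __init__(self, length):
        self.srl = [0] * length           # states of the shift register latches (SRLs)

    def shift(self, scan_in_bits):
        """Serially shift a pattern in; the bits already in the chain emerge as scan-out."""
        scan_out = []
        for bit in scan_in_bits:
            scan_out.append(self.srl[-1])
            self.srl = [bit] + self.srl[:-1]
        return scan_out

    def capture(self):
        """Apply one system clock: load the logic response into the SRLs in parallel."""
        self.srl = combinational_logic(self.srl)

chain = ScanChain(3)
chain.shift([1, 0, 1])                    # scan the test stimulus in
chain.capture()                           # capture the combinational response
response = chain.shift([0, 0, 0])         # scan the response out for comparison
print("captured response:", response)
```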


Book ChapterDOI
TL;DR: In this paper, a method is proposed for finding the "cutoff" point, that is, the endurance at which conventionally shaped S-N curves change slope to the horizontal.
Abstract: This paper discusses the need for defining the shape of S-N curves, the various kinds of test data available for this purpose, and the problems in their statistical evaluation, including the assumptions that must be made. A method is proposed for finding the "cutoff" point, that is, the endurance at which conventionally shaped S-N curves change slope to the horizontal. It is based on maximum likelihood principles and deals with runouts in a statistically acceptable way. Either a sharp or a continuous transition from the horizontal to the sloped straight-line log S/log N curve may be considered. The method can be used for analysis and comparison of fatigue test results with a computer program described and listed elsewhere by the authors, subject to certain amendments which are described in the paper.

61 citations
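
A minimal sketch of maximum-likelihood fitting of an S-N curve with runouts, not the authors' program: it assumes a Stromeyer-type curve with a fatigue limit Se (median log10 life = c − k·log10(S − Se)), normal scatter in log life, and treats runouts as right-censored observations. All data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(theta, S, logN, runout):
    c, k, Se, sigma = theta                     # curve constants, fatigue limit, scatter
    if Se <= 0 or sigma <= 0 or np.any(S <= Se):
        return np.inf                           # keep the fatigue limit below all test stresses
    mu = c - k * np.log10(S - Se)               # median log10 life at stress amplitude S
    z = (logN - mu) / sigma
    ll = np.where(runout,
                  norm.logsf(z),                # runout: life exceeded the stopping point
                  norm.logpdf(z) - np.log(sigma))
    return -np.sum(ll)

S      = np.array([320.0, 300.0, 280.0, 260.0, 240.0, 230.0])   # stress amplitude (MPa)
logN   = np.log10([1.5e5, 3.0e5, 8.0e5, 2.5e6, 9.0e6, 1.0e7])   # observed lives (cycles)
runout = np.array([False, False, False, False, True, True])

result = minimize(neg_log_likelihood, x0=[8.0, 2.0, 200.0, 0.2],
                  args=(S, logN, runout), method="Nelder-Mead")
c, k, Se, sigma = result.x
print(f"estimated fatigue limit: about {Se:.0f} MPa")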


Journal ArticleDOI
E. I. Muehldorf, A. D. Savkar
TL;DR: The paper concentrates on the testing of logic components and presents in-depth discussions of the methods of fault modeling, test pattern generation, fault simulation, and design for testability.
Abstract: The development of large scale integration (LSI) testing is reviewed. The paper concentrates on the testing of logic components and presents in-depth discussions of the methods of fault modeling, test pattern generation, fault simulation, and design for testability. It is shown how these methods are used in the design of components and how they can be used in support of design automation. Finally, a brief account of test equipment and test data preparation is given.

60 citations


Proceedings ArticleDOI
09 Mar 1981
TL;DR: The partition analysis method is described, which assists in program testing and verification by evaluating information from both a specification and an implementation; it employs symbolic evaluation techniques to partition the set of input data into procedure subdomains so that the elements of each subdomain are treated uniformly by the specification and processed uniformly by the implementation.
Abstract: A major drawback of most program testing methods is that they ignore program specifications, and instead base their analysis solely on the information provided in the implementation. This paper describes the partition analysis method, which assists in program testing and verification by evaluating information from both a specification and an implementation. This method employs symbolic evaluation techniques to partition the set of input data into procedure subdomains so that the elements of each subdomain are treated uniformly by the specification and processed uniformly by the implementation. The partition divides the procedure domain into more manageable units. Information related to each subdomain is used to guide in the selection of test data and to verify consistency between the specification and the implementation. Moreover, the test data selection process, called partition analysis testing, and the verification process, called partition analysis verification, are used to enhance each other, and thus increase program reliability.

60 citations
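
A minimal sketch of the partition-analysis idea, with a brute-force search over a small integer domain standing in for symbolic evaluation: specification paths and implementation paths are intersected into subdomains, one representative test point is drawn from each, and the two are checked for consistency. The absolute-value specification and implementation are hypothetical, not from the paper.

```python
# Spec paths and implementation paths for a hypothetical absolute-value routine.
spec = [
    (lambda x: x >= 0, lambda x: x),          # spec: |x| = x  when x >= 0
    (lambda x: x < 0,  lambda x: -x),         # spec: |x| = -x when x < 0
]
impl = [
    (lambda x: x > 0,  lambda x: x),          # implementation path 1
    (lambda x: x <= 0, lambda x: -x),         # implementation path 2
]

domain = range(-100, 101)                     # small finite domain stands in for symbolic analysis
for s_cond, s_val in spec:
    for i_cond, i_val in impl:
        subdomain = [x for x in domain if s_cond(x) and i_cond(x)]
        if not subdomain:
            continue                          # empty intersection: nothing to test here
        x = subdomain[len(subdomain) // 2]    # one representative test point per subdomain
        agree = s_val(x) == i_val(x)
        print(f"test point x = {x:4d}: spec = {s_val(x)}, impl = {i_val(x)}, "
              f"{'consistent' if agree else 'INCONSISTENT'}")
```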


Patent
16 Jul 1981
TL;DR: A test system for testing circuits in integrated circuit chips includes a host computer for controlling the test system, and a plurality of blocks operable in parallel, each including a controller, storage for test programs and test data, and a plurality of electronic units or pin electronics cards, one unit being associated with one of the pins of a device under test.
Abstract: A test system for testing circuits in integrated circuit chips includes a host computer for controlling the test system, and a plurality of blocks operable in parallel, each including a controller, storage for test programs and test data, and a plurality of electronic units or pin electronics cards, one unit being associated with one of the pins of a device under test. Each of the electronic units includes timing circuitry for timing its associated pin independently of the timing of any other electronic unit.

49 citations


Journal ArticleDOI
TL;DR: In this paper, the generator parameters at load have been calculated by the finite element method, in two dimensions, and compared with test data obtained under EPRI Program RP 997-2, Determination of Synchronous Machine Stability Study Constants.
Abstract: In support of EPRI Program 1288-1, Improvement in Accuracy of Prediction of Electrical Machine Constants, generator parameters at load have been calculated by the finite element method, in two dimensions. These calculated parameters are compared with test data obtained under EPRI Program RP 997-2, Determination of Synchronous Machine Stability Study Constants. An iterative procedure was used to match the calculated terminal voltage with the test value, for a given armature current and power factor. The test/calculated values for field current, the various angles, and Xq are then compared. Good agreement in all the above parameters is obtained for six diverse load points. Reactances are also compared. Considerable saturation effect is found, and is confirmed by test data in the case of X4. The results of this systematic comparison are judged to be a confirmation of the power of the finite element method.

38 citations



Journal ArticleDOI
TL;DR: In this paper, the Marquardt algorithm is used to estimate aquifer parameters from pump test data in nonleaky and leaky aquifers; despite poor initial estimates the convergence is quick, and the residual square error between the observed drawdowns and those calculated from the estimated parameters is smallest for the Marquardt estimates.
Abstract: The Marquardt algorithm has been used for estimating aquifer parameters from pump test data in nonleaky and leaky aquifers. It emerges from the study that, in spite of poor initial estimates, the convergence is quick, and that the residual square error between the observed drawdowns and the drawdowns calculated from the estimated parameters is smallest for the Marquardt estimates as compared with the known methods.

37 citations
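
A minimal sketch of this kind of fit, assuming the classical Theis solution for a nonleaky aquifer (drawdown s = Q/(4πT)·W(u), u = r²S/(4Tt), with W the exponential integral) and SciPy's Levenberg–Marquardt routine; the pumping rate, radius, and drawdowns are illustrative, and the parameters are fitted in log space so that rough starting guesses stay physical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1            # well function W(u) = E1(u)

Q, r = 0.01, 50.0                          # pumping rate (m^3/s), observation radius (m)

def theis_drawdown(t, logT, logS):
    T, S = 10.0 ** logT, 10.0 ** logS      # fit in log space to keep T and S positive
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t_obs = np.array([60.0, 120.0, 300.0, 600.0, 1800.0, 3600.0, 7200.0])  # time since pumping began (s)
s_obs = np.array([0.02, 0.05, 0.11, 0.17, 0.28, 0.35, 0.42])           # observed drawdown (m)

# deliberately rough initial estimates; the Levenberg-Marquardt iteration
# (curve_fit's default when no bounds are given) still converges quickly
(logT, logS), _ = curve_fit(theis_drawdown, t_obs, s_obs, p0=[-1.0, -2.0])
print(f"T = {10**logT:.2e} m^2/s, S = {10**logS:.2e}")
```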


Patent
Garry Carter Gillette
09 Oct 1981
TL;DR: In this paper, a continuous sequence of test data for testing LSI devices is provided selectably from one of a number of memory elements, each of which is reloaded, when not busy providing test data, from a higher capacity, lower speed storage element.
Abstract: A continuous sequence of test data for testing LSI devices is provided selectably from one of a number of memory elements, each of which is reloaded, when not busy providing test data, from a higher capacity, lower speed storage element.

31 citations


Patent
Michael Leroy Krieg
30 Nov 1981
TL;DR: In this paper, test patterns (6) in bar code of different densities are printed at the start of document (20) preparation on the line with an alignment mark and read at different timing intervals (59) corresponding to the densities of the patterns.
Abstract: Test patterns (6) in bar code of different densities are printed at the start of document (20) preparation on the line with an alignment mark (5). The test patterns (6) are immediately read at different timing intervals (59) corresponding to the densities of the patterns. The highest density pattern (6) having test data which is recognized by the logic (61) as correctly sensed defines the subsequent printing density to be used. First, the next line of data (3) is printed in the lowest density with a code defining the density of the subsequent printing. When a document (20) is read, the first line is read with the clock (59) intervals corresponding to the lowest density. The frequency is then changed to that defined by the code in the first line. Alternatively, the frequency is adjusted lower when an ordinary line of data is re-read and found to read incorrectly.

Patent
10 Feb 1981
TL;DR: In this article, an interface apparatus and method are described with which an electronic scale system is connected to a storage medium such as a disk or memory of a data processor, and whereby scale transaction data related to the mailing of an article with an electronic scale can be automatically preserved as a unified record along with subsequently appended test information such as an invoice number or customer number.
Abstract: An interface apparatus and method are described with which an electronic scale system is connected to a storage medium such as a disk or memory of a data processor and whereby scale transaction data related to the mailing of an article with an electronic scale can be automatically preserved as a unified record along with subsequently appended test information such as an invoice number or customer number. The interface has a programmable memory with which a normal operating mode is provided to enable an operator to conveniently and rapidly store scale transaction data and in response to displayed prompts enter data such as the number of an invoice accompanying the article being mailed. The test data is appended to the scale transaction data so that all of this data can be transmitted as a unified record to a storage medium. The interface is further provided with a supervisory mode with which special examination and service operations can be performed as these are needed for monitoring or corrective steps in case of errors. The interface displays plain language prompts to guide an operator through a normal RUN mode as well as facilitate supervisory and service technician monitoring and control. With an interface of the invention the unified record enables a rapid update of a customer account and shipping information.

Book ChapterDOI
TL;DR: A round-robin analysis was conducted by ASTM Task Group E24.06.01 on Application of Fracture Data to Life Prediction to predict the fatigue crack growth in 2219-T851 aluminum center-cracked-tension (CCT) specimens subjected to flight loadings in random cycle-by-cycle format.
Abstract: A round-robin analysis was conducted by ASTM Task Group E24.06.01 on Application of Fracture Data to Life Prediction to predict the fatigue crack growth in 2219-T851 aluminum center-cracked-tension (CCT) specimens subjected to flight loadings in random cycle-by-cycle format. Baseline data furnished to each participant of the round-robin analysis are described. These data consisted of constant amplitude crack growth data, specimen dimensions, initial crack sizes, and test spectrum tables. Analytical predictions and their correlation to test data are summarized.
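
A minimal sketch of a cycle-by-cycle crack-growth prediction of the kind compared in such round-robins, assuming a simple Paris law da/dN = C·(ΔK)^m with the finite-width secant correction for a center-cracked-tension panel and no retardation model; the constants, random spectrum, and specimen dimensions are illustrative, not the round-robin baseline data.

```python
import math
import numpy as np

C, m = 1.0e-11, 3.0                # Paris constants (crack length in metres, delta-K in MPa*sqrt(m))
width = 0.10                       # full panel width of the CCT specimen (m)
a = 0.005                          # initial half crack length (m)
a_crit = 0.040                     # assumed critical half crack length (m)

rng = np.random.default_rng(3)
spectrum = rng.uniform(60.0, 140.0, size=500_000)   # random cycle-by-cycle stress ranges (MPa)

cycles = 0
for delta_sigma in spectrum:
    beta = math.sqrt(1.0 / math.cos(math.pi * a / width))   # finite-width (secant) correction
    delta_K = delta_sigma * math.sqrt(math.pi * a) * beta
    a += C * delta_K ** m          # Paris-law growth increment for this cycle
    cycles += 1
    if a >= a_crit:
        break

print(f"predicted life: {cycles:,} cycles, final half crack length: {a * 1000:.1f} mm")
```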

Journal ArticleDOI
TL;DR: In this paper, a detailed description of typical test results and a discussion of the time-dependent processes, such as failure propagation and associated stress redistribution, which cause the observed behaviour are presented.

ReportDOI
01 Dec 1981
TL;DR: The system equivalence command serves as an equivalent mutant detector, and automatic detection by this command is implemented here as an application of data flow analysis.
Abstract: Program mutation is a new approach to program testing, a method designed to test whether a program is either correct or radically incorrect. It requires the creation of a nearly correct program, called a mutant, from a program P. An adequate set of test data distinguishes all mutants from P by comparing the outputs. Obviously, an equivalent mutant, which performs identically to P, produces the same outputs as those of P. Thus, for adequate data selection, it is desirable that an equivalent mutant be excluded from the testing process. For this purpose, the system equivalence command has been implemented as an equivalent mutant detector. As yet, the command has not been automated. Automatic detection by this command is implemented here as an application of data flow analysis. Algorithms and implementation techniques of data flow analysis are described, as is its application as an automatic detector.
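
A minimal sketch of the mutation-testing loop this report builds on: mutants of a small routine are run against the test data, and any mutant never distinguished from the original is a candidate equivalent mutant. The brute-force output comparison stands in for the report's data-flow-based detector, and the routine and mutation operators are hypothetical.

```python
import ast

SOURCE = "def median3(a, b, c):\n    return sorted([a, b, c])[1]\n"

class SwapIndex(ast.NodeTransformer):
    """Mutant operator: replace the constant index 1 with another constant."""
    def __init__(self, new_index):
        self.new_index = new_index
    def visit_Constant(self, node):
        return ast.copy_location(ast.Constant(self.new_index), node)

class ReverseList(ast.NodeTransformer):
    """Mutant operator: reverse list literals (an equivalent mutant for this routine)."""
    def visit_List(self, node):
        node.elts = list(reversed(node.elts))
        return node

def build(tree):
    namespace = {}
    exec(compile(ast.fix_missing_locations(tree), "<mutant>", "exec"), namespace)
    return namespace["median3"]

original = build(ast.parse(SOURCE))
test_data = [(1, 2, 3), (3, 1, 2), (5, 5, 9), (7, 7, 7)]

mutants = {"index 1 -> 0": SwapIndex(0),
           "index 1 -> 2": SwapIndex(2),
           "reverse list": ReverseList()}

for name, op in mutants.items():
    mutant = build(op.visit(ast.parse(SOURCE)))
    killed = any(original(*t) != mutant(*t) for t in test_data)
    print(f"{name:14s}: {'killed' if killed else 'candidate equivalent mutant'}")
```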

Journal ArticleDOI
TL;DR: In this article, a review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data, and the effect of failing to consider these areas on the content validity of achievement test scores is discussed.
Abstract: Content validity has been defined as the formal specification of the universe of tasks the test purports to measure. The universe of tasks consists of both the subject matter of the instrument and the structure of the instrument. However, it has been observed that instrument structure is often overlooked in the development and validation of achievement tests. A review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data. The effect of failing to consider these areas on the content validity of achievement test scores was discussed.

Proceedings ArticleDOI
26 Jan 1981
TL;DR: This paper proposes a practical alternative to program verification, called formal program testing, with similar but less ambitious goals: instead of trying to prove the verification conditions, it simply evaluates them on a representative set of test data.
Abstract: This paper proposes a practical alternative to program verification -- called formal program testing -- with similar, but less ambitious goals. Like a program verifier, a formal testing system takes a program annotated with formal specifications as input, generates the corresponding verification conditions, and passes them through a simplifier. After the simplification step, however, a formal testing system simply evaluates the verification conditions on a representative set of test data instead of trying to prove them. Formal testing provides strong evidence that a program is correct, but does not guarantee it. The strength of the evidence depends on the adequacy of the test data.
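
A minimal sketch of the formal-testing step described above: a routine, its pre- and postcondition, and the verification condition "precondition implies postcondition" are all written down, and the condition is then evaluated on representative test data rather than proved. The routine, specification, and test set are hypothetical.

```python
import math

def isqrt_floor(n):
    """Program under test: intended to return the integer floor of sqrt(n)."""
    return int(math.sqrt(n))

def precondition(n):
    return isinstance(n, int) and n >= 0

def postcondition(n, r):
    return r * r <= n < (r + 1) * (r + 1)

def verification_condition(n):
    # pre(n) implies post(n, f(n)) -- evaluated on test data, not proved
    return (not precondition(n)) or postcondition(n, isqrt_floor(n))

# representative test data, including boundary cases and a large value
test_data = [0, 1, 2, 3, 4, 15, 16, 17, 10**6, 10**15 + 3]
for n in test_data:
    print(f"n = {n}: verification condition holds = {verification_condition(n)}")
```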

Dissertation
01 Jan 1981
TL;DR: A new theoretical framework for testing is presented that provides a mechanism for comparing the power of program testing methods based on the degree to which the methods approximate program verification, and a new method for generating test data from specifications expressed in predicate calculus is described.
Abstract: The theoretical works on program testing by Goodenough and Gerhart, Howden, and Geller are unified and generalized by a new theoretical framework for testing presented in this thesis. The framework provides a mechanism for comparing the power of methods of testing programs based on the degree to which the methods approximate program verification. The framework also provides a reasonable and useful interpretation of the notion that successful tests increase one's confidence in the program's correctness. Applications of the framework include confirmation of a number of common assumptions about practical testing methods. Among the assumptions confirmed is the need for generating tests from specifications as well as programs. On the other hand, a careful formal analysis of assumptions surrounding mutation analysis shows that the "competent programmer hypothesis" does not suffice to ensure the claimed high reliability of mutation testing. Responding to the confirmed need for testing based on specifications as well as programs, the thesis describes a new method for generating test data from specifications expressed in predicate calculus. Besides filling the gap just mentioned, the new method is very general, working on any order of logic; it is easy enough to be of practical use; it can be automated to a great extent; and it methodically and consistently produces test data of obvious utility for the problems studied.

Book ChapterDOI
TL;DR: In this paper, a method for establishing the P-S-N diagram is proposed based on consideration of the distribution of strength deviation values for individual test data determined against the mean S-N curve of the population.
Abstract: A practical method for establishing the P-S-N diagram is proposed. The method is essentially based on consideration of the distribution of strength deviation values for individual test data determined against the mean S-N curve of the population. Thus an S-N curve for 1 percent failure probability can be derived from the fatigue test results using 100 specimens. A series of statistically planned fatigue tests was conducted according to the proposed method, in order to obtain basic fatigue data about different materials most commonly used in mechanical industries. This paper deals with the results for some typical carbon and low-alloy steels with different heat treatments, the tests being on smooth specimens in rotating bending, and discusses the variation in statistical properties between materials.
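
A minimal sketch of the deviation-based construction, assuming a straight mean S-N curve in log-log coordinates and normally distributed strength deviations; the fatigue results below are illustrative, not the paper's data for carbon and low-alloy steels.

```python
import numpy as np
from scipy import stats

# illustrative fatigue results: stress amplitude (MPa) and cycles to failure
S = np.array([420, 400, 400, 380, 380, 360, 360, 340, 340, 320], dtype=float)
N = np.array([8e4, 1.5e5, 2.2e5, 3.1e5, 5.0e5, 7.5e5, 1.3e6, 1.8e6, 3.2e6, 5.5e6])

logS, logN = np.log10(S), np.log10(N)

# mean S-N curve, log S = a + b log N, by least squares
b, a = np.polyfit(logN, logS, 1)

# strength deviation of each specimen from the mean curve at its own life
dev = logS - (a + b * logN)
mu, sigma = dev.mean(), dev.std(ddof=1)

# P-S-N curve for 1 percent failure probability: shift the mean curve by the
# 1st percentile of the fitted deviation distribution
shift = stats.norm.ppf(0.01, loc=mu, scale=sigma)
for n in np.logspace(5, 7, 5):
    s_p01 = 10 ** (a + b * np.log10(n) + shift)
    print(f"N = {n:9.3e}  S(P = 1%) = {s_p01:6.1f} MPa")
```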

Journal ArticleDOI
TL;DR: It is proposed that testing theory seek out modified reliability ideas with this effective, determining property, and that noneffective ideas of program correctness may find their practical place in aiding people to discover the necessary tests.
Abstract: The formal idea of reliability of a set of test data for a program is explored. Although this idea captures something of what testing should accomplish in practice, it has two defects: in general it is impossible to tell if a given test is reliable; and, if reliability is attained, the test points are linked to errors no longer present, not to the corrected program. Should the program be changed, these tests are intuitively worthless. Two variations of the idea are suggested to overcome these defects, and in both a new idea arises naturally: test data "determines" programs for which it is reliable (in the variations defined); given the data there is an algorithm for deciding if programs satisfying it have unique behavior. Any variation of the reliability idea which can be effectively recognized can be used to determine programs in this way. A testing methodology is proposed based on any effective reliability notion. A human being, using noneffective methods, attempts to satisfy a mechanical judgement of reliability. If the person succeeds, the resulting test can be attached to the program, where it is useful when the program is changed. Confidence in the program/test combination is based on the knowledge that no program can satisfy the test yet differ from the given one. That is, the test itself is an unambiguous specification of the program. It is proposed that testing theory seek out modified reliability ideas with this effective, determining property, and that noneffective ideas of program correctness may find their practical place in aiding people to discover the necessary tests.

01 Aug 1981
TL;DR: In this article, the authors investigated the effect of guessing on the dimensionality of test data with common distributions of difficulty and discrimination indices and found that the effect is not very large unless items of extreme difficulty are present in the test.
Abstract: One of the major assumptions of latent trait theory is that the items in a test measure a single dimension. This report describes an investigation of procedures for forming a set of items that meet this assumption. Factor analysis, nonmetric multidimensional scaling, cluster analysis and latent trait analysis were applied to simulated and real test data to determine which technique could best form a unidimensional set of items. Theoretical and empirical evaluations were also made of the effects of guessing on the dimensionality of test data. The results indicated that guessing affected highly discriminating items more so than poorly discriminating items. However, the effect of guessing on the dimensionality of tests with common distributions of difficulty and discrimination indices was found to be minimal. Of the procedures evaluated for sorting items into unidimensional item sets, principal factor analysis of phi coefficients gave the best results overall. Nonmetric multidimensional scaling also showed promise when used with Yule's Y, phi, or tetrachoric similarity coefficients, but it did not perform as well as the factor analytic techniques on the real test data. In summary, guessing does have an effect on test data, but the effect is not very large unless items of extreme difficulty are present in the test. Of the procedures evaluated, traditional factor analytic techniques gave the most useful information for sorting test items into homogeneous sets.
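
A minimal sketch of the recommended check, assuming simulated one-parameter logistic responses: phi coefficients for dichotomous items are ordinary Pearson correlations, and the leading eigenvalues of the phi matrix indicate whether one dominant dimension is present; the guessing parameter can be raised to explore its effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, guessing = 1000, 20, 0.0     # set guessing > 0 to study its effect

theta = rng.normal(size=(n_persons, 1))          # single latent ability
b = rng.normal(size=(1, n_items))                # item difficulties
p_correct = 1.0 / (1.0 + np.exp(-(theta - b)))   # one-parameter logistic model
p_correct = guessing + (1 - guessing) * p_correct
responses = (rng.random((n_persons, n_items)) < p_correct).astype(float)

phi = np.corrcoef(responses, rowvar=False)       # phi coefficients for 0/1 items
eigvals = np.sort(np.linalg.eigvalsh(phi))[::-1]
print("first five eigenvalues:", np.round(eigvals[:5], 2))
print("ratio of 1st to 2nd eigenvalue:", round(eigvals[0] / eigvals[1], 1))
```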

Journal ArticleDOI
TL;DR: In this paper, several control laws for active flutter suppression were evaluated using the FASTOP computer program and a modified Nyquist criterion, with analyses at several Mach numbers to determine system effectiveness at off-design conditions.
Abstract: The design, testing, and evaluation of active flutter suppression technology using a common wind tunnel model has been successfully completed by the U.S. and several European organizations. This paper emphasizes analytical predictions and presents test data for correlation. Several control laws were evaluated using the FASTOP computer program and a modified Nyquist criterion. Although the design and tests were conducted for a specific Mach number of 0.8, the analyses were performed at several Mach numbers to determine system effectiveness at off-design conditions.

01 Jan 1981
TL;DR: An interactive model is proposed for simultaneous solution of the vehicle and driver scheduling problems and has achieved results which compare favorably on test data with those achieved by manual methods.
Abstract: An interactive model is proposed for simultaneous solution of the vehicle and driver scheduling problems. A non-interactive version has been programmed and has achieved results which compare favorably on test data with those achieved by manual methods. The model uses a network representation of the problem, and schedules are built up from the network using a series of matching processes.
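
A minimal sketch of the kind of matching process such a model builds schedules from, here reduced to a single assignment of drivers to vehicle work pieces solved with SciPy's Hungarian-method routine; the cost matrix is illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = penalty of assigning driver i to vehicle work piece j
cost = np.array([
    [4, 1, 3, 9],
    [2, 0, 5, 8],
    [3, 2, 2, 7],
    [9, 8, 1, 4],
])

rows, cols = linear_sum_assignment(cost)     # minimum-cost one-to-one matching
for driver, piece in zip(rows, cols):
    print(f"driver {driver} -> work piece {piece} (cost {cost[driver, piece]})")
print("total cost:", cost[rows, cols].sum())
```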

ReportDOI
01 Jan 1981
TL;DR: In this article, the reliability information based on field operation, dormant state and test data for more than 250 major nonelectronic part types is presented, organized in four major sections: generic data, detailed data, application data, and failure modes and mechanisms.
Abstract: This report, organized in four major sections, presents reliability information based on field operation, dormant state and test data for more than 250 major nonelectronic part types. The four sections are Generic Data, Detailed Data, Application Data, and Failure Modes and Mechanisms. Each device type contains reliability information in relation to the specific operational environments.

Book ChapterDOI
01 Jan 1981
TL;DR: The application of image processing and pattern recognition techniques to support comparative handwriting analysis is reported; the dominating problem proved to be the selection and formulation of suitable task-oriented features.
Abstract: The application of image processing and pattern recognition techniques to support comparative handwriting analysis is reported. The dominating problem proved to be the selection and formulation of suitable task-oriented features. Adequate solutions have been obtained by heuristic approaches; they have to be verified statistically on large test data sets.

Journal ArticleDOI
TL;DR: High-quality data were obtained by using a unique test apparatus and by devoting careful attention to details of the experiment, and full potential solutions were found to be uniformly in better agreement with experiment than small disturbance results.
Abstract: A comprehensive research program was conducted for the specific purpose of acquiring benchmark test data suitable for evaluations of three-dimensional transonic codes. High-quality data were obtained for three advanced technology wings by using a unique test apparatus and by devoting careful attention to details of the experiment. The test apparatus included provisions for removal of the wind tunnel boundary layer and measurements of far-field pressures. The test data were used in preliminary evaluations of three selected transonic computational methods. For these limited evaluations, nonconservative formulations more closely predicted measured pressures than conservative formulations. In addition, full potential solutions were found to be uniformly in better agreement with experiment than small disturbance results.

01 Nov 1981
TL;DR: In this paper, the general strain-life model as well as homoscedastic and heteroscedastic models are considered for probabilistic design of gas turbine engine components, including turbine blades, disks, and combustor liners.
Abstract: Metal fatigue under stress and thermal cycling is a principal mode of failure in gas turbine engine hot section components such as turbine blades and disks and combustor liners. Designing for fatigue is subject to considerable uncertainty, e.g., scatter in cycles to failure, available fatigue test data and operating environment data, uncertainties in the models used to predict stresses, etc. Methods of analyzing fatigue test data for probabilistic design purposes are summarized. The general strain-life model as well as homoscedastic and heteroscedastic models are considered. Modern probabilistic design theory is reviewed and examples are presented which illustrate application to reliability analysis of gas turbine engine components.
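
A minimal sketch of a probabilistic strain-life calculation, assuming the common Coffin-Manson/Basquin form Δε/2 = (σ'f/E)(2N)^b + ε'f(2N)^c with lognormal scatter applied to life; the material constants and log standard deviation are illustrative, not engine hot-section data.

```python
import numpy as np
from scipy.optimize import brentq

E, sf, b, ef, c = 200e3, 900.0, -0.09, 0.6, -0.6   # MPa; illustrative strain-life constants
strain_amplitude = 0.004

def residual(N):
    # Coffin-Manson/Basquin: elastic plus plastic strain amplitude minus the applied amplitude
    return sf / E * (2 * N) ** b + ef * (2 * N) ** c - strain_amplitude

N_median = brentq(residual, 1e1, 1e9)              # median cycles to failure

# lognormal scatter in life; the log standard deviation is an assumed value
rng = np.random.default_rng(1)
lives = N_median * 10 ** rng.normal(0.0, 0.3, size=100_000)
N_1pct = np.percentile(lives, 1)                   # life at 1 percent failure probability
print(f"median life: {N_median:,.0f} cycles; 1%-probability life: {N_1pct:,.0f} cycles")
```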

Book ChapterDOI
TL;DR: In this paper, a statistical analysis was performed on fatigue test data for aluminum, titanium, steel, and nickel materials, and the best fit was evaluated using the coefficient of correlation and the chi-squared goodness of fit test.
Abstract: A statistical analysis was performed on fatigue test data for aluminum, titanium, steel, and nickel materials. The data for the titanium, steel, and nickel were obtained from spectrum fatigue tests, whereas the data for aluminum were obtained from both constant amplitude and spectrum fatigue tests. The analyzed data consisted of a total of 553 S-N test groups with 2417 specimens and 1288 spectrum test groups with approximately 5000 specimens. The distribution of the logarithmic standard deviation of fatigue life for these test groups was analyzed with normal, logarithmic, and two-parameter Weibull probability distribution functions and with second-degree polynomial equations. The best fit was evaluated using the coefficient of correlation and the chi-squared goodness-of-fit test. None of the distribution functions or polynomial equations provided the best fit for all of the distributions of the logarithmic standard deviation of fatigue life for the selected sets of test groups. A comparison is also made of three methods of calculating scatter factors.
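
A minimal sketch of the comparison step, assuming simulated values for the logarithmic standard deviation of fatigue life: normal, lognormal, and two-parameter Weibull distributions are fitted by maximum likelihood and compared with a simple chi-squared goodness-of-fit statistic. The sample is synthetic, not the 553 S-N and 1288 spectrum test groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_std = rng.weibull(2.0, size=300) * 0.12        # simulated log-life standard deviations

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull (2-parameter)": stats.weibull_min,
}

bins = np.histogram_bin_edges(log_std, bins=10)
observed, _ = np.histogram(log_std, bins=bins)

for name, dist in candidates.items():
    # fix the location at zero so lognormal and Weibull stay two-parameter
    params = dist.fit(log_std) if dist is stats.norm else dist.fit(log_std, floc=0)
    cdf = dist.cdf(bins, *params)
    expected = len(log_std) * np.diff(cdf)
    chi2 = np.sum((observed - expected) ** 2 / expected)
    print(f"{name:22s} chi-squared statistic = {chi2:6.1f}")
```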

Patent
22 Jul 1981
TL;DR: In this paper, the authors present a program composing method carried out by a program composing terminal system which accepts test data inputs in a user oriented format and in turn generates a machine independent and syntax correct higher order language source for providing an easy to use automatic test equipment oriented program, and generates completed test documents requirements forms.
Abstract: Program composing method carried out by a program composing terminal system which accepts test data inputs in a user oriented format and in turn generates a machine independent and syntax correct higher order language source for providing an easy to use automatic test equipment oriented program, and generates completed test documents requirements forms, this method comprising: writing program specifications in a user oriented format and inputting the specifications to program specification forms; entering the forms into a program terminal for processing these forms and generating a preliminary program; reviewing the preliminary program; and utilizing the reviewed program for generating a syntactically correct higher order language program and generating completed test documents requirements forms.

Proceedings ArticleDOI
06 Apr 1981
TL;DR: In this paper, an extensive program to investigate the dynamic behaviour of fuselage structures subject to various impact conditions is described, with finite element analyses that include elastic/plastic deformation and panel buckling compared against the test data.
Abstract: An extensive program has been undertaken to investigate the dynamic behaviour of fuselage structures subject to various impact conditions. Extensive testing of scale model stiffened aluminum sections has been completed for a wide range of wing loads, angles of incidence and impact velocities. Both vertical drop tests and "free-flight" impacts using a pendulum gantry have been studied. Test data have been obtained in terms of maximum structural s trains, g-loads and high speed photographs of the dynamic collapse modes. Based on a finite element model, these cases have also been analysed including elastic/plastic deformation and panel buckling. Some comparative results are then presented together with computer graphics.