
Showing papers on "Test data" published in 1995


Journal ArticleDOI
TL;DR: A method is developed to estimate summary receiver operating characteristic (SROC) curves, thereby taking account of possible test threshold differences between studies, and important defects in study quality and their effects on summary estimates are highlighted.

468 citations


Journal ArticleDOI
TL;DR: A new scheme for built-in test that uses multiple-polynomial linear feedback shift registers (MP-LFSRs) is presented, in which an implicit polynomial identification reduces the number of extra bits per seed to one bit.
Abstract: We propose a new scheme for built-in test (BIT) that uses multiple-polynomial linear feedback shift registers (MP-LFSRs). The same MP-LFSR that generates random patterns to cover easy-to-test faults is loaded with seeds to generate deterministic vectors for difficult-to-test faults. The seeds are obtained by solving systems of linear equations involving the seed variables for the positions where the test cubes have specified values. We demonstrate that MP-LFSRs produce sequences with significantly reduced probability of linear dependence compared to single-polynomial LFSRs. We present a general method to determine the probability of encoding as a function of the number of specified bits in the test cube, the length of the LFSR and the number of polynomials. Theoretical analysis and experiments show that the probability of encoding a test cube with s specified bits in an s-stage LFSR with 16 polynomials is 1 - 10^-6. We then present the new BIT scheme that allows for an efficient encoding of the entire test set. Here the seeds are grouped according to the polynomial they use, and an implicit polynomial identification reduces the number of extra bits per seed to one bit. The paper also shows methods of processing the entire test set consisting of test cubes with a varied number of specified bits. Experimental results show the tradeoffs between test data storage and test application time while maintaining complete fault coverage.
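The seed computation described above reduces to solving a linear system over GF(2): each specified bit of a test cube gives one equation in the seed variables. A minimal sketch of that step, assuming a simple single-polynomial Fibonacci-style LFSR (the paper's MP-LFSR, polynomial selection, and seed grouping are not reproduced) with illustrative tap positions and test cube:

```python
def seed_for_cube(taps, n, cube):
    """Find a seed for an n-stage LFSR whose output matches the specified bits of
    `cube` (a string over '0', '1', 'x', where 'x' means unspecified)."""
    # Track each state bit symbolically as a bitmask over the n seed variables (GF(2)).
    state = [1 << i for i in range(n)]          # stage i initially holds seed bit i
    equations = []                              # (coefficient mask, required value)
    for want in cube:
        if want != 'x':
            equations.append((state[-1], int(want)))   # output bit = last stage
        feedback = 0
        for t in taps:                          # feedback = XOR of tapped stages
            feedback ^= state[t]
        state = [feedback] + state[:-1]         # shift, feed back into stage 0
    return solve_gf2(equations, n)

def solve_gf2(rows, n):
    """Return one solution (list of n bits) of the GF(2) system, or None if none exists."""
    pivots = {}                                 # pivot column -> reduced (mask, rhs)
    for mask, rhs in rows:
        for col in range(n):                    # forward-reduce against known pivots
            if (mask >> col) & 1 and col in pivots:
                pmask, prhs = pivots[col]
                mask, rhs = mask ^ pmask, rhs ^ prhs
        if mask == 0:
            if rhs:
                return None                     # inconsistent: cube not encodable
            continue
        pivots[(mask & -mask).bit_length() - 1] = (mask, rhs)
    seed = [0] * n                              # free variables default to 0
    for col in sorted(pivots, reverse=True):    # back-substitution
        mask, rhs = pivots[col]
        acc = rhs
        for c in range(col + 1, n):
            if (mask >> c) & 1:
                acc ^= seed[c]
        seed[col] = acc
    return seed

if __name__ == "__main__":
    taps = [0, 2]                  # illustrative feedback taps for an 8-stage LFSR
    cube = "x1xx0xx1xxxx0xxx"      # deterministic test cube; 'x' = don't care
    print(seed_for_cube(taps, n=8, cube=cube))
```

If more bits are specified than the seed can encode, the system becomes inconsistent and the sketch returns None, which mirrors the encoding-probability trade-off the abstract analyzes.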

439 citations


Journal ArticleDOI
TL;DR: In this paper, Model B3 for creep and shrinkage prediction, presented as a RILEM Recommendation in Mater. Struct. 28 (1995) 357-365, is calibrated against a computerized data bank comprising practically all the relevant test data obtained in various laboratories throughout the world.
Abstract: Model B3 for creep and shrinkage prediction in the design of concrete structures, presented as a RILEM Recommendation in Mater. Struct. 28 (1995) 357–365, is calibrated by a computerized data bank comprising practically all the relevant test data obtained in various laboratories throughout the world. The coefficients of variation of deviations of the model from the data are distinctly smaller than for the latest CEB model, and much smaller than for the previous ACI model (which was developed in the mid-1960s). The effect of concrete composition and design strength on the model parameters is identified as the main source of error. The model is simpler than the previous models (BP and BP-KX) developed at Northwestern University, yet it has comparable accuracy and is more rational.

191 citations


Journal ArticleDOI
TL;DR: Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Early experience indicates that they are highly effective at detecting subtle faults that would likely have been missed if load testing had been done in the more traditional way, using hand-crafted test cases.
Abstract: Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain, provided operational profile data can either be collected or estimated. These algorithms have been used successfully to perform load testing for several real industrial software systems. Experience generating test suites for five such systems is presented. Early experience with the algorithms indicates that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases. A domain-based reliability measure is applied to systems after the load testing algorithms have been used to generate test data. Data are presented for the same five industrial telecommunications systems in order to track the reliability as a function of the degree of system degradation experienced.
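The core idea the abstract describes, driving test case generation from a Markov chain whose transition probabilities come from an operational profile, can be sketched as follows; the usage model, states, and probabilities below are illustrative assumptions, not the paper's algorithms or systems:

```python
import random

# Illustrative usage model: states and transition probabilities estimated from an
# operational profile (these numbers are made up for the sketch).
PROFILE = {
    "idle":    [("offhook", 1.0)],
    "offhook": [("dial", 0.95), ("onhook", 0.05)],
    "dial":    [("connect", 0.7), ("busy", 0.2), ("abandon", 0.1)],
    "connect": [("talk", 1.0)],
    "busy":    [("onhook", 1.0)],
    "abandon": [("onhook", 1.0)],
    "talk":    [("onhook", 1.0)],
    "onhook":  [],                       # absorbing state: end of one test case
}

def generate_test_case(start="idle", rng=random):
    """Random walk over the usage model; the visited state sequence is one test case."""
    case, state = [start], start
    while PROFILE[state]:
        next_states, weights = zip(*PROFILE[state])
        state = rng.choices(next_states, weights=weights)[0]
        case.append(state)
    return case

def generate_suite(n_cases, seed=0):
    rng = random.Random(seed)
    return [generate_test_case(rng=rng) for _ in range(n_cases)]

if __name__ == "__main__":
    for case in generate_suite(3):
        print(" -> ".join(case))
```

Because transitions are weighted by the profile, frequently exercised call scenarios dominate the generated suite, which is what makes the resulting reliability estimate operationally meaningful.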

179 citations


ReportDOI
01 Mar 1995
TL;DR: In this article, experimental data and computational modeling are presented for a well-defined glass material; the data cover a wide range of strains, strain rates, and pressures obtained from quasi-static compression and tension tests, split Hopkinson pressure bar compression tests, explosively driven flyer plate impact tests, and depth-of-penetration ballistic tests.
Abstract: This paper presents experimental data and computational modeling for a well-defined glass material. The experimental data cover a wide range of strains, strain rates, and pressures that are obtained from quasi-static compression and tension tests, split Hopkinson pressure bar compression tests, explosively driven flyer plate impact tests, and depth of penetration ballistic tests. The test data are used to obtain constitutive model constants for the improved Johnson-Holmquist (JH-2) brittle material model. The model and constants are then used to perform computations of the various tests.

135 citations


Journal ArticleDOI
TL;DR: A project that is applying a range of machine learning strategies to problems in agriculture and horticulture is described, including a case study of dairy herd management in which culling rules were inferred from a medium-sized database of herd information.

132 citations


Proceedings ArticleDOI
30 Apr 1995
TL;DR: A new and efficient scheme to decompress a set of deterministic test vectors for circuits with scan is presented; it is based on the reseeding of a Multiple Polynomial Linear Feedback Shift Register (MP-LFSR) but uses variable-length seeds to improve the encoding efficiency of test vectors with a wide variation in their number of specified bits.
Abstract: This paper presents a new and efficient scheme to decompress a set of deterministic test vectors for circuits with scan. The scheme is based on the reseeding of a Multiple Polynomial Linear Feedback Shift Register (MP-LFSR) but uses variable-length seeds to improve the encoding efficiency of test vectors with a wide variation in their number of specified bits. The paper analyzes the effectiveness of this novel approach both theoretically and through extensive experiments. A modular design of the decompression hardware re-uses the same LFSR used for pseudo-random vector generation and scan registers to minimize the area overhead.

108 citations



Patent
28 Dec 1995
TL;DR: In this article, a non-impact printhead with a plurality of driver IC chips and a number of recording elements such as LEDs is presented, where a test circuit interface provides a test access port for input of update command signals and clock inputs to the test circuit.
Abstract: A non-impact printhead having a plurality of driver IC chips and a plurality of recording elements such as LEDs. Each driver IC chip includes a plurality of current-carrying channels that are operative for carrying current to respective recording elements on the printhead and a control for controlling operation of the driver. The control includes a circuit that provides a test circuit interface which includes (a) a test access port for input of update command signals and clock inputs to the test circuit; (b) a test data input terminal for inputting test data and control data into the chip; (c) a plurality of registers connected to the test data input terminal with at least one of the registers storing control data for controlling operation of the driver; (d) a test data output terminal for outputting test data and control data from the chip to an adjacent chip; and (e) a selector connected to the registers and the test data output terminal for selecting test and control data for output from the test data output terminal.

88 citations


Journal ArticleDOI
TL;DR: In this article, the effects of loading configuration and specimen geometry on the pull-out test are considered, and their impact on the performance of the test is discussed.

88 citations


Journal ArticleDOI
TL;DR: It is asserted that researchers should use statistics derived from the linear distances between actual and estimated locations of test transmitters to estimate location error in radiotelemetry data, and this approach is called the location error method.
Abstract: We assert that researchers should use statistics derived from the linear distances between actual and estimated locations of test transmitters to estimate location error in radiotelemetry data. We call this approach the location error method. We used the distribution of such linear distances from a test data set from a study on black bears (Ursus americanus) in the mountains of North Carolina to predict error statistics for another test data set. We then compared the predicted with the actual error statistics. We also predicted error statistics for the second test data set using the error polygon method and Lenth's maximum likelihood estimator method. Linear and areal predictions of error using the location error method closely matched actual error in the second data set. The 90% confidence area calculated from test data contained 90% of the actual locations. The linear error measures taken from the error polygon method averaged twice the length of the 90% confidence distances generated from test data, an...
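As a rough illustration of the location error method (computing statistics from the linear distances between actual and estimated test-transmitter locations), here is a small sketch; the coordinates and sample size are invented, and the black bear study data are not reproduced:

```python
import math

def linear_errors(actual, estimated):
    """Linear distances between actual and estimated test-transmitter locations."""
    return [math.dist(a, e) for a, e in zip(actual, estimated)]

def confidence_distance(errors, level=0.90):
    """Distance within which `level` of the estimated locations fall (empirical quantile)."""
    ranked = sorted(errors)
    k = max(0, math.ceil(level * len(ranked)) - 1)
    return ranked[k]

if __name__ == "__main__":
    # Hypothetical test-transmitter data in metres, for illustration only.
    actual    = [(0, 0), (100, 50), (200, 200), (50, 300), (400, 120)]
    estimated = [(30, 10), (90, 90), (260, 180), (70, 310), (380, 160)]
    errs = linear_errors(actual, estimated)
    print("mean error  :", sum(errs) / len(errs))
    print("90% distance:", confidence_distance(errs, 0.90))
```

The 90% confidence distance computed from one test data set is the statistic the authors use to predict location error in a second data set.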

Patent
07 Jun 1995
TL;DR: In this article, a pattern recognition system is presented which uses data fusion to combine data from a plurality of extracted features and a plurality of classifiers; speaker patterns can be accurately verified with the combination of discriminant-based and distortion-based classifiers.
Abstract: The present invention relates to a pattern recognition system which uses data fusion to combine data from a plurality of extracted features and a plurality of classifiers. Speaker patterns can be accurately verified with the combination of discriminant based and distortion based classifiers. A novel approach using a training set of "leave one out" data can be used for training the system with a reduced data set. Extracted features can be improved with a pole filtered method for reducing channel effects and an affine transformation for improving the correlation between training and testing data.

Journal ArticleDOI
01 May 1995
TL;DR: The TREC programme is reviewed as an evaluation exercise; the methods of indexing and retrieval being tested within it are characterised in terms of the approaches to system performance factors they represent, and the test results are analysed.
Abstract: This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research.

Patent
27 Sep 1995
TL;DR: In this article, a method and apparatus for testing software programs systematically generates test data by removing from a potential test data candidate pool data which is syntactically incorrect for proper operation with the software program and then data which is semantically incorrect for the program.
Abstract: A method and apparatus for testing software programs systematically generates test data by removing from a potential test data candidate pool data which is syntactically incorrect for proper operation with the software program and then data which is semantically incorrect for the software program. The resulting reduced data collection is applied to the collection of components to generate output values which are then checked against post condition rules to verify that the software program operated correctly.
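A minimal sketch of the pipeline described above, with a toy program under test and hypothetical syntactic/semantic filters and postcondition standing in for the patented apparatus:

```python
import math

# Toy program under test (stand-in for the real component): integer square root.
def isqrt_under_test(x):
    return int(math.sqrt(x))

# Hypothetical filters: "syntax" = the value parses as an integer;
# "semantics" = the value lies in the program's legal domain (non-negative here).
def syntactically_valid(raw):
    try:
        int(raw)
        return True
    except ValueError:
        return False

def semantically_valid(raw):
    return int(raw) >= 0

def postcondition(x, y):
    # Defining property of an integer square root.
    return y * y <= x < (y + 1) * (y + 1)

candidate_pool = ["16", "hello", "-4", "0", "3.5", "144"]
test_data = [int(c) for c in candidate_pool
             if syntactically_valid(c) and semantically_valid(c)]

for x in test_data:
    y = isqrt_under_test(x)
    print(x, y, "OK" if postcondition(x, y) else "POSTCONDITION VIOLATED")
```

The reduced collection that survives both filters is applied to the component, and each output is checked against the postcondition rule, mirroring the flow in the abstract.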

Journal ArticleDOI
TL;DR: In this paper, a new formulation permitting system demands to be represented as a distributed pipe flux is presented, along with the results of a field test conducted on August 29, 1990, by the City of Calgary Waterworks staff on one of the city's major transmission and distribution subsystems.
Abstract: Computerized transient-flow models have been used with great success in the analysis of water-hammer events in topologically simple pipeline systems, and the performance of these models is well documented. This paper addresses the relatively unexplored area of transients in complex pipe networks. A new formulation permitting system demands to be represented as a distributed pipe flux is presented. This approach is compared with two conventional methods for modeling demands in pipe networks. The results of a field test conducted on August 29, 1990, by the City of Calgary Waterworks staff on one of the city's major transmission and distribution subsystems are presented. The results are compared with the behavior predicted by a network transient model. The computer model was generally in good agreement with the field test data, with all three demand models giving comparable results, particularly with respect to the initial downsurge and the first upsurge following the pump trip. However, the transient's long-term decay was poorly represented by all three demand models.

Proceedings ArticleDOI
24 Oct 1995
TL;DR: In all cases it was observed that an increase in reliability is accompanied by an increase in at least one code coverage measure, and that a decrease in reliability is accompanied by a decrease in at least one code coverage measure.
Abstract: We report experiments conducted to investigate the correlation between code coverage and software reliability. Block, decision, and all-uses coverage measures were used. Reliability was estimated to be the probability of no failure over the given input domain defined by an operational profile. Four of the five programs were selected from a set of Unix utilities. These utilities range in size from 121 to 8857 lines of code; artificial faults were seeded manually using a fault seeding algorithm. Test data was generated randomly using a variety of operational profiles for each program. One program was selected from a suite of outer space applications. Faults seeded into this program were obtained from the faults discovered during the integration testing phase of the application. Test cases were generated randomly using the operational profile for the space application. Data obtained was graphed and analyzed to observe the relationship between code coverage and reliability. In all cases it was observed that an increase in reliability is accompanied by an increase in at least one code coverage measure. It was also observed that a decrease in reliability is accompanied by a decrease in at least one code coverage measure. Statistical correlations between coverage and reliability were found to vary between -0.1 and 0.91 for the shortest two of the five programs considered; for the remaining three programs the correlations varied from 0.89 to 0.99.
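The reported analysis amounts to correlating a coverage measure with estimated reliability across test runs. A minimal sketch with made-up numbers (not the paper's data), assuming Python 3.10+ for statistics.correlation:

```python
from statistics import correlation   # Pearson correlation, available in Python 3.10+

# Hypothetical per-suite measurements for one program, for illustration only.
block_coverage = [0.42, 0.55, 0.61, 0.70, 0.78, 0.85, 0.91]
reliability    = [0.80, 0.83, 0.88, 0.90, 0.93, 0.96, 0.98]

print("Pearson r (block coverage vs. reliability):",
      round(correlation(block_coverage, reliability), 3))
```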

Journal ArticleDOI
TL;DR: In this article, the authors expand the existing database of semi-rigid steel connections at Purdue University by including additional test data on header-plate and seat-angle, and double-web and seat-angle connections.

Journal ArticleDOI
TL;DR: The main features of the proposed intelligent maintenance optimization system (IMOS) are identified, a prototype system is presented and its main mathematical models of maintenance are introduced.
Abstract: This paper is concerned with the evaluation and enhancement of the maintenance routines of large and complex technical systems. An ‘intelligent decision support system’ approach is suggested as a method for overcoming the difficulties associated with the scale, variability and changeability of such systems. The main features of the proposed intelligent maintenance optimization system (IMOS) are identified. A prototype system is then presented and its main mathematical models of maintenance are introduced. Some sample test data and the results produced from them are presented. Other aspects discussed include dealing with censored data, optimization criteria, the development of a maintenance model selection rule base, the recognition of data patterns and models' robustness. Results of IMOS system validation against expert advice have shown a high measure of consistency.

Patent
15 May 1995
TL;DR: In this paper, an error injection test scripting system that permits a test engineer to select from a series of commands those that will induce a desired test scenario is presented to a parser, either in command line form or as a batch of commands, to create a task list which is communicated to a scheduler.
Abstract: An error injection test scripting system that permits a test engineer to select from a series of commands those that will induce a desired test scenario. These commands are presented to a parser, either in command line form or as a batch of commands, which parses the syntax of the commands and associated parameters, to create a task list which is communicated to a scheduler. The scheduler handles the execution of the tasks in the list, converts parameters to explicit logical block test sequences and maintains test results. Tasks such as error injection use a special protocol (which the unit under test must be able to understand and interpret) to circumvent standard bus and controller protocols, so that test data, such as corrupt parity or multiple hard error failures can be sent to the disks in the RAID system, while bypassing the RAID array management functions that would otherwise automatically correct or prevent the errors. The scenario of injected errors to be tested is then executed through a tester, the results are evaluated and posted back to the scheduler.

Patent
Edward Michael Seymour
06 Oct 1995
TL;DR: In this paper, a central controller for simultaneously testing the embedded arrays in a processor is disclosed, where test data vectors are serially shifted into a latch and stored into each location in the embedded array of the processor.
Abstract: There is disclosed a central controller for simultaneously testing the embedded arrays in a processor. Test data vectors are serially shifted into a latch and stored into each location in the embedded arrays of the processor. The test data are then read out of the embedded arrays into a read latch and serially shifted into a multiple input shift register, where a polynomial division is performed on the test vector data. If all memory locations in the embedded array function properly, a remainder value will result that is equal to a unique signature remainder for the test vectors used.
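The polynomial division step corresponds to compacting the read-back data in a multiple-input signature register (MISR); a minimal sketch with an illustrative feedback polynomial and data words (not the patent's circuit or values):

```python
def misr_signature(words, width=16, poly=0x1021):
    """Compact a stream of `width`-bit words into a MISR signature.
    `poly` is an illustrative CRC-CCITT-style feedback polynomial, not the patent's."""
    state = 0
    mask = (1 << width) - 1
    for w in words:
        msb = (state >> (width - 1)) & 1
        state = (state << 1) & mask
        if msb:
            state ^= poly                 # LFSR feedback (polynomial division step)
        state ^= (w & mask)               # multiple input: whole word XORed into the register
    return state

if __name__ == "__main__":
    # Hypothetical data read back from an embedded array after writing test vectors.
    good = [0xFFFF, 0x0000, 0xAAAA, 0x5555, 0x1234]
    bad  = [0xFFFF, 0x0000, 0xAAAA, 0x5555, 0x1230]   # one faulty word
    print(hex(misr_signature(good)))      # golden signature to compare against
    print(hex(misr_signature(bad)))       # differs, so the fault is detected
```

Comparing the final remainder against the known-good signature is what lets a single register verify every location of the embedded arrays.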

Book ChapterDOI
01 Jan 1995
TL;DR: The motivation and the theoretical foundation of statistical testing are presented, the feasibility of designing statistical test patterns is exemplified on a safety-critical component from the nuclear industry, and the fault-revealing power of these patterns is assessed.
Abstract: Statistical testing is based on a probabilistic generation of test data: structural or functional criteria serve as guides for defining an input profile and a test size. The method is intended to compensate for the imperfect connection of criteria with software faults, and should not be confused with random testing, a blind approach that uses a uniform profile over the input domain. First, the motivation and the theoretical foundation of statistical testing are presented. Then the feasibility of designing statistical test patterns is exemplified on a safety-critical component from the nuclear industry, and the fault-revealing power of these patterns is assessed through experiments conducted at two different levels: (i) unit testing of four functions extracted from the industrial component, statistical test data being designed according to classical structural criteria; (ii) testing of the whole component, statistical test data being designed from behaviour models deduced from the component specification. The results show the high fault-revealing power of statistical testing, and its greater efficiency in comparison to deterministic and random testing.
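To make the contrast with uniform random testing concrete, a small sketch of profile-driven input generation follows; the toy function, its branches, and the profile weights are illustrative inventions, not the chapter's criteria or the nuclear-industry component:

```python
import random

# Toy function under test: behaviour differs across three input sub-domains (branches).
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"          # rarely hit by a uniform profile over a wide range
    else:
        return "positive"

def uniform_profile(rng):
    return rng.randint(-1000, 1000)

def statistical_profile(rng):
    # Weighted so that each branch (a structural criterion) gets comparable probability.
    bucket = rng.choices(["negative", "zero", "positive"], weights=[0.4, 0.2, 0.4])[0]
    if bucket == "negative":
        return rng.randint(-1000, -1)
    if bucket == "zero":
        return 0
    return rng.randint(1, 1000)

def branch_counts(profile, n, seed=1):
    rng = random.Random(seed)
    counts = {"negative": 0, "zero": 0, "positive": 0}
    for _ in range(n):
        counts[classify(profile(rng))] += 1
    return counts

if __name__ == "__main__":
    print("uniform    :", branch_counts(uniform_profile, 2000))
    print("statistical:", branch_counts(statistical_profile, 2000))
```

The uniform profile almost never exercises the rare branch, whereas the profile derived from the structural criterion covers all branches with comparable frequency, which is the essential difference between random and statistical testing noted above.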

Journal ArticleDOI
TL;DR: In this article, a new method for determining mass and stiffness matrices from modal test data is presented, based on the identified modes and the mass-normalized mode shapes at the sensor locations and is not limited by either the number of driving points or measurement points.
Abstract: A new method for determining mass and stiffness matrices from modal test data is presented. The method builds on the identified modes and the mass-normalized mode shapes at the sensor locations and is not limited by either the number of driving points or measurement points, and so it is applicable to most general test settings. A mixed coordinate basis is defined for the mass and stiffness matrices which is analogous to the Craig-Bampton component mode synthesis method for finite element models. The resultant mass and stiffness are of minimal order necessary to span the measured modes, and the resulting generalized coordinates provide an objective basis for the test-derived matrices to be used as if they are component mode-synthesized finite element matrices. Inclusion of rigid-body modes and the relationship of the new method to traditional physical parameter computations based on mobility curves is considered. Examples of the method as applied to numerical and experimental data are provided.
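The starting point for such identification is the orthogonality of mass-normalized mode shapes; in the identified modal coordinates the test-derived matrices take the familiar diagonal form (standard modal-analysis relations, not the paper's specific mixed-coordinate, Craig-Bampton-like construction):

```latex
% Orthogonality of mass-normalized mode shapes (standard relations, for illustration):
\Phi^{\mathsf T} M \,\Phi = I, \qquad
\Phi^{\mathsf T} K \,\Phi = \Lambda = \operatorname{diag}\!\left(\omega_1^2,\dots,\omega_m^2\right),
% so that, with generalized coordinates q defined by x = \Phi q, the identified model is
\ddot{q}(t) + \Lambda\, q(t) = \Phi^{\mathsf T} f(t),
```

where \Phi collects the m identified, mass-normalized mode shapes at the sensor degrees of freedom, \omega_i are the identified natural frequencies, and f(t) is the applied force vector; in those coordinates a mass and stiffness pair consistent with the measurements is simply I and \Lambda, which is the minimal-order property the abstract refers to.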

Patent
31 Mar 1995
TL;DR: In this article, the authors propose a declarative language for assigning default values and attributes to packet fields such that only a small amount of data need be specified when regression testing a new routing device.
Abstract: A packet description language which is declarative in nature and suitable for efficiently and flexibly defining data packet formats in accordance with internetwork routing device uses. Data packet formats may be defined utilizing the packet description language and then compiled to create a data structure corresponding to the defined data packet format. A routing device test platform may generate test data packets and decode received test packets by referencing the test data to the compiled data structure defined in accordance with the packet description language. The declarative language provides for assigning numerous default values and attributes to packet fields such that only a small amount of data need be specified when regression testing a new routing device.

Patent
14 Nov 1995
TL;DR: In this article, a trouble-shooting mechanism is incorporated into a telephone service technician's portable computer unit to enable a craftsperson to respond to a trouble ticket.
Abstract: A trouble-shooting mechanism is incorporated into a telephone service technician's portable computer unit, to enable a craftsperson to respond to a trouble ticket. By analyzing multiple sources of information, including user inputs from the craftsperson, parametric data embedded in the trouble ticket, test data obtained through the execution of local tests, and remote test data, the trouble-shooting mechanism derives and suggests a problem solving strategy that appears accurate. The system architecture includes a trouble-shooting application engine and an associated set of databases, one of which is a knowledge database and the other of which is a shared parameter database. The knowledge database contains rules and static parameters which define the characteristics and behavior of the application engine. These rule sets and information are specific to telephone line trouble-shooting.

Book ChapterDOI
Yi-Jen Chiang
16 Aug 1995
TL;DR: This is the first experimental work comparing the practical performance of external-memory algorithms and conventional algorithms on large-scale test data; the algorithms under evaluation are distribution sweep, which has optimal I/O cost, and three variations of plane sweep, which is optimal in terms of internal computation.
Abstract: We present an extensive experimental study comparing the performance of four algorithms for the orthogonal segment intersection problem. The algorithms under evaluation are distribution sweep, which has optimal I/O cost, and three variations of plane sweep, which is optimal in terms of internal computation. We generate the test data by using a random number generator while producing some interesting properties that are predicted by our theoretical analysis. The sizes of the test data range from 250 thousand segments to 2.5 million segments. The experiments provide detailed quantitative evaluation of the performance of the four algorithms. This is the first experimental work comparing the practical performance between external-memory algorithms and conventional algorithms with large-scale test data.
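For reference, the plane-sweep approach being compared can be sketched as a simple in-memory version using a sorted active list (the external-memory distribution sweep and the paper's random test-data generator are not reproduced):

```python
from bisect import insort, bisect_left, bisect_right

def orthogonal_intersections(horizontals, verticals):
    """Report (h, v) index pairs of intersecting segments.
    horizontals: (y, x1, x2) with x1 <= x2;  verticals: (x, y1, y2) with y1 <= y2."""
    events = []                                   # (x, order, payload)
    for i, (y, x1, x2) in enumerate(horizontals):
        events.append((x1, 0, ("insert", y, i)))  # insert before queries at the same x
        events.append((x2, 2, ("remove", y, i)))  # remove after queries at the same x
    for j, (x, y1, y2) in enumerate(verticals):
        events.append((x, 1, ("query", y1, y2, j)))
    events.sort(key=lambda e: (e[0], e[1]))

    active = []                                   # sorted list of (y, horizontal index)
    out = []
    for _, _, ev in events:
        if ev[0] == "insert":
            insort(active, (ev[1], ev[2]))
        elif ev[0] == "remove":
            active.remove((ev[1], ev[2]))
        else:                                     # vertical segment: range query on y
            _, y1, y2, j = ev
            lo = bisect_left(active, (y1, -1))
            hi = bisect_right(active, (y2, float("inf")))
            for _, i in active[lo:hi]:
                out.append((i, j))
    return out

if __name__ == "__main__":
    H = [(1, 0, 10), (5, 2, 4)]                   # horizontal segments (y, x1, x2)
    V = [(3, 0, 6), (9, -2, 2)]                   # vertical segments (x, y1, y2)
    print(orthogonal_intersections(H, V))         # -> [(0, 0), (1, 0), (0, 1)]
```

The active-list operations here assume everything fits in main memory; the point of the paper is precisely what happens when the 250 thousand to 2.5 million segments do not.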

Journal ArticleDOI
TL;DR: In this paper, an analytical model is developed from crash test measurements using system identification techniques; the model is made up of two parts: a differential equation part consisting of mass, stiffness and damping characteristics, and a transfer function part consisting of an autoregressive moving average (ARMA) model of white noise.
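In equation form, such a two-part model combines a second-order differential equation with an ARMA disturbance; a generic single-degree-of-freedom version (the actual crash-test model structure and identified parameters are not given in the summary) is:

```latex
% Generic two-part model: second-order dynamics plus an ARMA(p, q) disturbance (illustrative)
m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = f(t) + d(t), \qquad
d_t = \sum_{i=1}^{p} \phi_i\, d_{t-i} \; + \; e_t \; + \; \sum_{j=1}^{q} \theta_j\, e_{t-j},
```

where m, c, k are the mass, damping, and stiffness parameters identified from the crash test measurements, f(t) is the measured input, and d is an ARMA(p, q) process driven by white noise e_t.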

Journal ArticleDOI
TL;DR: In this article, the authors examined the question as to whether it is possible to obtain the correct discrete linear analytical model of an actual structure by the use of analysis and test data.
Abstract: This paper examines the question as to whether it is possible to obtain the correct discrete linear analytical model of an actual structure by the use of analysis and test data. It is shown that no such correct model exists. An illustration of an application is shown where multiple acceptable and very different analytical models are all representative of the structure. The implications regarding methods of improving analytical models and detecting damage are discussed.

Proceedings ArticleDOI
24 Oct 1995
TL;DR: It is found that identifying new modules and changed modules were significant components of the discriminant model and improved its performance; the results demonstrate that data on module reuse is a valuable input to quality models and that discriminant analysis can be a useful tool in the early identification of fault-prone software modules in large telecommunications systems.
Abstract: Telecommunications software is known for its high reliability. Society has become so accustomed to reliable telecommunications that failures can cause major disruptions. This is an experience report on application of discriminant analysis based on 20 static software product metrics, to identify fault prone modules in a large telecommunications system, so that reliability may be improved. We analyzed a sample of 2000 modules representing about 1.3 million lines of code, drawn from a much larger system. Sample modules were randomly divided into a fit data set and a test data set. We simulated utilization of the fitted model with the test data set. We found that identifying new modules and changed modules were significant components of the discriminant model, and improved its performance. The results demonstrate that data on module reuse is a valuable input to quality models and that discriminant analysis can be a useful tool in early identification of fault prone software modules in large telecommunications systems. Model results could be used to identify those modules that would probably benefit from extra attention, and thus, reduce the risk of unexpected problems with those modules.
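A hedged sketch of the fit/test procedure, using scikit-learn's linear discriminant analysis in place of the authors' tooling; the synthetic metrics, labels, and the scikit-learn dependency are assumptions for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the static product metrics plus reuse indicators
# (new-module / changed-module flags); labels mark fault-prone modules.
n_modules, n_metrics = 2000, 22
X = rng.normal(size=(n_modules, n_metrics))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n_modules) > 1.5).astype(int)

# Random split into a fit set and a test set, as in the study.
X_fit, X_test, y_fit, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_fit, y_fit)
pred = lda.predict(X_test)

# Misclassification rates: fault-prone modules missed, and fault-free modules flagged.
missed = ((pred == 0) & (y_test == 1)).mean()
false_alarm = ((pred == 1) & (y_test == 0)).mean()
print(f"missed fault-prone: {missed:.2%}, false alarms: {false_alarm:.2%}")
```

Modules flagged as fault-prone on the held-out set are the ones that would be singled out for extra attention, as the abstract suggests.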

Journal ArticleDOI
TL;DR: In this paper, the authors used the reference stress approach for converting impression creep test data to equivalent uniaxial creep data, and showed that the impression creep technique can be used for obtaining creep properties for materials which have high creep resistance at high temperature and test pressure conditions.
Abstract: Impression creep tests have been performed on a 316-stainless steel at 600°C, for which conventional uniaxial creep test data are available. It is shown that the technique, based on the reference stress approach, for converting impression creep test data to equivalent uniaxial creep data, is accurate. The results show that the impression creep technique can be used for obtaining creep properties for materials which have high creep resistance at high temperature and test pressure conditions. The difficulties and limitations associated with such situations are described and methods of dealing with them are outlined.

Proceedings ArticleDOI
26 Jun 1995
TL;DR: Improved recognition is obtained with the HMM, although it is believed that the spectral pre-emphasis which was carried out on the input data could have contributed to this fact, suggesting that implementing such pre-processing for the artificial neural network architectures may be beneficial.
Abstract: The use of hidden Markov models (HMM's), multilayer perceptrons (MLP's) and Kohonen self-organising maps (SOM's) has been proposed previously as efficient analysis and detection tools for condition monitoring of industrial plants and processes (Yin, 1993). Work on such applications with these techniques, has identified a need for a reassessment of these alternative recognition systems with a view to establishing their relative merits. In this paper the three systems are compared for two test data sets, one identifying the response of the systems to varying fault severity, the other showing recognition of faults which are independent of load. It is shown that for the MLP and SOM, implementing multiple networks improved the recognition of faults of varying severity. Possibly of more importance, this technique provided a means of diagnosing combinations of faults. For faults produced under differing load conditions, it is shown that the data cannot be classified by the SOM, and the supervised training regimes of the HMM and MLP provide the only means of classifying the data. Improved recognition is obtained with the HMM, although it is believed that the spectral pre-emphasis which was carried out on the input data could have contributed to this fact, suggesting that implementation of such pre-processing for the artificial neural network architectures, may be beneficial.