
Showing papers on "Test data" published in 1996


Journal ArticleDOI
TL;DR: This article describes and evaluates a new visualization-based approach to mining large databases and compares it to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick-figure visualization techniques.
Abstract: Visual data mining techniques have proven to be of high value in exploratory data analysis, and they also have a high potential for mining large databases. In this article, we describe and evaluate a new visualization-based approach to mining large databases. The basic idea of our visual data mining techniques is to represent as many data items as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. The major goal of this article is to evaluate our visual data mining techniques and to compare them to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick-figure visualization techniques. For the evaluation of visual data mining techniques, the perception of data properties counts most, while the CPU time and the number of secondary storage accesses are only of secondary importance. In addition to testing the visualization techniques using real data, we developed a testing environment for database visualizations similar to the benchmark approach used for comparing the performance of database systems. The testing environment allows the generation of test data sets with predefined data characteristics which are important for comparing the perceptual abilities of visual data mining techniques.

405 citations
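As background for the pixel-per-value idea, here is a minimal sketch assuming NumPy and Matplotlib: each data value becomes one pixel, with one panel per attribute. The row-major layout, sort order, and color map are illustrative assumptions, not the paper's specific pixel-arrangement or query-dependent techniques.

```python
# Minimal sketch of pixel-oriented visualization: one pixel per data value,
# one panel per attribute, values mapped to colors. The row-major layout,
# sort order, and color map are illustrative choices, not the paper's
# specific pixel-arrangement or query-dependent techniques.
import numpy as np
import matplotlib.pyplot as plt

def pixel_panels(data, panel_width=100):
    """data: (n_items, n_attributes) array; draws one image panel per attribute."""
    n_items, n_attrs = data.shape
    height = int(np.ceil(n_items / panel_width))
    fig, axes = plt.subplots(1, n_attrs, figsize=(2 * n_attrs, 2))
    for j, ax in enumerate(np.atleast_1d(axes)):
        panel = np.full(height * panel_width, np.nan)
        panel[:n_items] = data[:, j]
        ax.imshow(panel.reshape(height, panel_width), cmap="viridis", aspect="auto")
        ax.set_title(f"attr {j}")
        ax.axis("off")
    return fig

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(5000, 4))
    demo = demo[np.argsort(demo[:, 0])]   # sorting by one attribute reveals correlations
    pixel_panels(demo)
    plt.show()
```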


Journal ArticleDOI
TL;DR: The experiments have shown that the chaining approach may significantly improve the chances of finding test data as compared to the existing methods of automated test data generation.
Abstract: Software testing is very labor intensive and expensive and accounts for a significant portion of software system development cost. If the testing process could be automated, the cost of developing software could be significantly reduced. Test data generation in program testing is the process of identifying a set of test data that satisfies a selected testing criterion, such as statement coverage and branch coverage. In this article we present a chaining approach for automated software test data generation which builds on the current theory of execution-oriented test data generation. In the chaining approach, test data are derived based on the actual execution of the program under test. For many programs, the execution of the selected statement may require prior execution of some other statements. The existing methods of test data generation may not efficiently generate test data for these types of programs because they only use control flow information of a program during the search process. The chaining approach uses data dependence analysis to guide the search process, i.e., data dependence analysis automatically identifies statements that affect the execution of the selected statement. The chaining approach uses these statements to form a sequence of statements that is to be executed prior to the execution of the selected statement. The experiments have shown that the chaining approach may significantly improve the chances of finding test data as compared to the existing methods of automated test data generation.

389 citations
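A toy sketch of the execution-oriented search that the chaining approach builds on: candidate inputs are actually executed, and a branch-distance measure steers a simple local search toward inputs that reach a selected statement. The example program, distance function, and hill climber are assumptions for illustration; the data-dependence analysis and event sequences that define chaining are not reproduced here.

```python
# Toy sketch of execution-oriented test data generation: search for an input
# that reaches a selected statement by minimizing a branch-distance measure
# observed during actual execution. The example program, distance function,
# and hill-climbing loop are illustrative assumptions; the paper's chaining
# (data-dependence) machinery is not reproduced here.
import random

def program_under_test(x, y):
    """Returns the branch distance to the 'selected statement' (0.0 means reached)."""
    if x > 100:                # predicate 1
        if x - y == 12:        # predicate 2 guards the selected statement
            return 0.0         # selected statement reached
        return abs((x - y) - 12)
    return (100 - x) + 1       # distance to making predicate 1 true

def search(max_iters=10_000, step=5):
    best = (random.randint(-500, 500), random.randint(-500, 500))
    best_d = program_under_test(*best)
    for _ in range(max_iters):
        if best_d == 0.0:
            break
        x, y = best
        cand = (x + random.randint(-step, step), y + random.randint(-step, step))
        d = program_under_test(*cand)
        if d < best_d:
            best, best_d = cand, d
    return best, best_d

if __name__ == "__main__":
    inp, dist = search()
    print("input:", inp, "distance:", dist)   # distance 0.0 => statement covered
```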


Proceedings Article
03 Dec 1996
TL;DR: A classifier structure and learning algorithm are proposed that make effective use of unlabelled data to improve performance; the classifier is a "mixture of experts" structure that is equivalent to the radial basis function (RBF) classifier but, unlike RBFs, is amenable to likelihood-based training.
Abstract: We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a "mixture of experts" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data - thus, a combined learning/classification operation - much akin to what is done in image segmentation - can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.

352 citations
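A minimal sketch of the underlying EM idea, assuming a plain two-class one-dimensional Gaussian mixture rather than the paper's mixture-of-experts/RBF structure: labelled points keep hard responsibilities while unlabelled points receive soft ones, and both contribute to the likelihood being maximized.

```python
# Minimal sketch of EM over labelled + unlabelled data for a two-class
# Gaussian mixture: labelled points keep hard responsibilities, unlabelled
# points get soft responsibilities in the E-step. Simple 1-D Gaussians are
# used for brevity; the paper's mixture-of-experts/RBF structure is richer.
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def semi_supervised_em(x_lab, y_lab, x_unl, n_iter=50):
    x = np.concatenate([x_lab, x_unl])
    # Initialize from the labelled subset.
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    var = np.array([x_lab[y_lab == k].var() + 1e-6 for k in (0, 1)])
    pi = np.array([np.mean(y_lab == k) for k in (0, 1)])
    for _ in range(n_iter):
        # E-step: responsibilities (hard for labelled, soft for unlabelled).
        r = np.zeros((len(x), 2))
        r[np.arange(len(x_lab)), y_lab] = 1.0
        lik = np.stack([pi[k] * gauss(x_unl, mu[k], var[k]) for k in (0, 1)], axis=1)
        r[len(x_lab):] = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from all data.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / nk.sum()
    return mu, var, pi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x0, x1 = rng.normal(-2, 1, 200), rng.normal(2, 1, 200)
    x_lab = np.concatenate([x0[:10], x1[:10]])
    y_lab = np.array([0] * 10 + [1] * 10)
    x_unl = np.concatenate([x0[10:], x1[10:]])
    print(semi_supervised_em(x_lab, y_lab, x_unl))
```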


Journal ArticleDOI
TL;DR: A test template framework is introduced as a useful concept in specification-based testing; it formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process.
Abstract: Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications-in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.

272 citations
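As a loose illustration of specification-based test derivation (not the Z-based framework itself), the sketch below partitions an operation's input space into named "templates" and instantiates one test datum per template. The withdraw operation and its partitions are made-up examples.

```python
# Minimal sketch of deriving test templates from a specification by input-space
# partitioning: each template is a named predicate over the operation's input,
# and one concrete test datum is instantiated per template. The 'withdraw'
# operation and its partitions are made-up examples; the paper derives
# templates formally from Z specifications.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestTemplate:
    name: str
    predicate: Callable[[int, int], bool]      # (balance, amount) -> in partition?
    instance: tuple                            # one representative test datum

TEMPLATES = [
    TestTemplate("valid_withdrawal", lambda b, a: 0 < a <= b, (100, 40)),
    TestTemplate("exact_balance",    lambda b, a: a == b > 0,  (50, 50)),   # refines the one above
    TestTemplate("overdraw",         lambda b, a: a > b >= 0,  (30, 80)),
    TestTemplate("non_positive",     lambda b, a: a <= 0,      (30, 0)),
]

def withdraw(balance: int, amount: int) -> int:
    """Implementation under test."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

if __name__ == "__main__":
    for t in TEMPLATES:
        b, a = t.instance
        assert t.predicate(b, a), f"instance does not satisfy template {t.name}"
        try:
            result = withdraw(b, a)
        except ValueError as exc:
            result = exc
        print(f"{t.name:17s} input={t.instance} -> {result}")
```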


Journal ArticleDOI
TL;DR: Two experimental comparisons of data flow and mutation testing are presented, indicating that while both techniques are effective, mutation‐adequate test sets are closer to satisfying the data flow criterion, and detect more faults.
Abstract: Two experimental comparisons of data flow and mutation testing are presented. These techniques are widely considered to be effective for unit-level software testing, but can only be analytically compared to a limited extent. We compare the techniques by evaluating the effectiveness of test data developed for each. We develop ten independent sets of test data for a number of programs: five to satisfy the mutation criterion and five to satisfy the all-uses data-flow criterion. These test sets are developed using automated tools, in a manner consistent with the way a test engineer might be expected to generate test data in practice. We use these test sets in two separate experiments. First we measure the effectiveness of the test data that was developed for one technique in terms of the other. Second, we investigate the ability of the test sets to find faults. We place a number of faults into each of our subject programs, and measure the number of faults that are detected by the test sets. Our results indicate that while both techniques are effective, mutation-adequate test sets are closer to satisfying the data flow criterion, and detect more faults.

175 citations
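A toy sketch of the mutation side of the comparison, assuming Python's ast module: mutants are produced by swapping relational operators in a small function, and a test set's mutation score is the number of mutants it kills. The subject function, operator table, and test sets are illustrative; the study used full mutation and data-flow tools.

```python
# Toy sketch of mutation analysis: generate mutants of a small function by
# swapping relational operators in its AST, then count how many mutants a
# given test set kills. The subject function, operator table, and test sets
# are illustrative; the study used full mutation and data-flow tools.
import ast
import copy
import inspect

def classify(x):
    """Subject function: sign of x."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

SWAPS = {ast.Gt: ast.GtE, ast.Lt: ast.LtE, ast.GtE: ast.Gt, ast.LtE: ast.Lt}

def generate_mutants(func):
    tree = ast.parse(inspect.getsource(func))
    n_sites = sum(isinstance(n, ast.Compare) and type(n.ops[0]) in SWAPS
                  for n in ast.walk(tree))
    mutants = []
    for i in range(n_sites):
        mutated = copy.deepcopy(tree)
        sites = [n for n in ast.walk(mutated)
                 if isinstance(n, ast.Compare) and type(n.ops[0]) in SWAPS]
        sites[i].ops[0] = SWAPS[type(sites[i].ops[0])]()   # e.g. '>' becomes '>='
        ast.fix_missing_locations(mutated)
        ns = {}
        exec(compile(mutated, "<mutant>", "exec"), ns)
        mutants.append(ns[func.__name__])
    return mutants

def mutation_score(func, mutants, tests):
    killed = sum(any(m(t) != func(t) for t in tests) for m in mutants)
    return killed, len(mutants)

if __name__ == "__main__":
    mutants = generate_mutants(classify)
    weak = [-5, 7]          # misses the boundary at zero
    strong = [-5, 0, 7]     # includes the boundary value
    print("weak test set  :", mutation_score(classify, mutants, weak))    # kills 0 of 2
    print("strong test set:", mutation_score(classify, mutants, strong))  # kills 2 of 2
```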


Patent
17 Jun 1996
TL;DR: A test case generator for generating a plurality of test cases, each corresponding to one of a plurality of applications, and a multi-layered interface for communicating between corresponding test cases and applications are presented.
Abstract: An adaptable system and method for testing a plurality of hardware and/or software applications. The system and method include a test case generator for generating a plurality of test cases, each test case corresponding to one of the plurality of applications, and a multi-layered interface for communicating between corresponding test cases and applications. The system and method also include means for executing the test cases to generate test data, and means for recording the test data generated during execution of the test cases.

162 citations


Journal ArticleDOI
01 May 1996
TL;DR: An approach for automated test data generation for programs with procedures is presented; it automatically identifies statements that affect the execution of the selected statement, and this information is used to guide the search process.
Abstract: Test data generation in program testing is the process of identifying a set of test data that satisfies a selected testing criterion, such as, statement coverage or branch coverage. The existing methods of test data generation are limited to unit testing and may not efficiently generate test data for programs with procedures. In this paper we present an approach for automated test data generation for programs with procedures. This approach builds on the current theory of execution-oriented test data generation. In this approach, test data are derived based on the actual execution of the program under test. For many programs, the execution of the selected statement may require prior execution of some other statements that may be part of some procedures. The existing methods use only control flow information of a program during the search process and may not efficiently generate test data for these types of programs because they are not able to identify statements that affect execution of the selected statement. Our approach uses data dependence analysis to guide the process of test data generation. Data dependence analysis automatically identifies statements (or procedures) that affect the execution of the selected statement and this information is used to guide the search process. The initial experiments have shown that this approach may improve the chances of finding test data.

144 citations


Posted Content
TL;DR: The TSNLP project as mentioned in this paper has investigated various aspects of the construction, maintenance and application of systematic test suites as diagnostic and evaluation tools for NLP applications, and produced substantial multi-purpose and multi-user test suites for three European languages together with a set of specialized tools that facilitate the construction and maintenance of the test data.
Abstract: The TSNLP project has investigated various aspects of the construction, maintenance and application of systematic test suites as diagnostic and evaluation tools for NLP applications. The paper summarizes the motivation and main results of the project: besides the solid methodological foundation, TSNLP has produced substantial multi-purpose and multi-user test suites for three European languages together with a set of specialized tools that facilitate the construction, extension, maintenance, retrieval, and customization of the test data. As TSNLP results, including the data and technology, are made publicly available, the project presents a valuable linguistic resource that has the potential of providing a wide-spread pre-standard diagnostic and evaluation tool for both developers and users of NLP applications.

138 citations


Journal ArticleDOI
TL;DR: In this paper, a new approximate method based on the series-coupling model and Bazant's size-effect law is proposed for data delocalization, i.e., decontamination of laboratory test data afflicted by localization of strain-softening damage and size effect.
Abstract: The new microplane model developed in the preceding companion paper is calibrated and verified by comparison with test data. A new approximate method is proposed for data delocalization, i.e., decontamination of laboratory test data afflicted by localization of strain-softening damage and size effect. This method, applicable more generally to any type of constitutive model, is based on the series-coupling model and on the size-effect law proposed by Bazant. An effective and simplified method of material parameter identification, exploiting affinity transformations of stress-strain curves, is also given. Only five parameters need to be adjusted if a complete set of uniaxial, biaxial, and triaxial test data is available, and two of them can be determined separately in advance from the volumetric compression curve. If the data are limited, fewer parameters need to be adjusted. The parameters are formulated in such a manner that two of them represent scaling by affinity transformation. Normally only these two parameters need to be adjusted, which can be done by simple closed-form formulas. The new model allows good fit of all the basic types of uniaxial, biaxial, and triaxial test data for concrete.

112 citations
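For context, the size-effect law referred to above is usually written in the following general form (quoted as background only; the paper's specific delocalization equations are not reproduced, and B and D_0 are empirical parameters):

```latex
% Bazant's size-effect law (general background form).
% \sigma_N : nominal strength, D : characteristic structure size,
% f'_t : tensile strength, B and D_0 : empirical parameters.
\sigma_N = \frac{B f'_t}{\sqrt{1 + D/D_0}}
```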


Proceedings ArticleDOI
30 Oct 1996
TL;DR: This study analyzes two consecutive releases of a large legacy software system for telecommunications and applies discriminant analysis to identify fault prone modules based on 16 static software product metrics and the amount of code changed during development.
Abstract: Society has become so dependent on reliable telecommunications, that failures can risk loss of emergency service, business disruptions, or isolation from friends. Consequently, telecommunications software is required to have high reliability. Many previous studies define the classification fault prone in terms of fault counts. This study defines fault prone as exceeding a threshold of debug code churn, defined as the number of lines added or changed due to bug fixes. Previous studies have characterized reuse history with simple categories. This study quantified new functionality with lines of code. The paper analyzes two consecutive releases of a large legacy software system for telecommunications. We applied discriminant analysis to identify fault prone modules based on 16 static software product metrics and the amount of code changed during development. Modules from one release were used as a fit data set and modules from the subsequent release were used as a test data set. In contrast, comparable prior studies of legacy systems split the data to simulate two releases. We validated the model with a realistic simulation of utilization of the fitted model with the test data set. Model results could be used to give extra attention to fault prone modules and thus, reduce the risk of unexpected problems.

101 citations
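A minimal sketch of the fit/test-release protocol described above, assuming scikit-learn and synthetic data: a discriminant model is fitted on modules of one release and evaluated on the next, with "fault prone" defined by a churn threshold. The metric columns, churn model, and threshold are placeholders, not the study's measurements.

```python
# Minimal sketch of the fit/test-release protocol: fit a discriminant model
# on modules from release N (static product metrics), with "fault prone"
# defined by a debug-churn threshold, then classify modules of release N+1.
# The synthetic metrics, churn model, and threshold are placeholder
# assumptions, not the study's measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

def make_release(n_modules, n_metrics=16):
    X = rng.normal(size=(n_modules, n_metrics))          # static product metrics
    churn = np.maximum(0.0, 50 + 30 * X[:, 0] + rng.normal(scale=20, size=n_modules))
    fault_prone = (churn > 80).astype(int)               # churn above a threshold
    return X, fault_prone

X_fit, y_fit = make_release(400)      # release N   -> fit data set
X_test, y_test = make_release(300)    # release N+1 -> test data set

model = LinearDiscriminantAnalysis().fit(X_fit, y_fit)
pred = model.predict(X_test)
print("type I  rate (false alarms):", np.mean(pred[y_test == 0] == 1))
print("type II rate (missed fault-prone):", np.mean(pred[y_test == 1] == 0))
```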


Patent
15 Oct 1996
TL;DR: In this paper, a parallel decompressor capable of concurrently generating multiple portions of a deterministic partially specified data vector is disclosed; it is also capable of functioning as a PRPG for generating pseudo-random data vectors.
Abstract: A parallel decompressor capable of concurrently generating in parallel multiple portions of a deterministic partially specified data vector is disclosed. The parallel decompressor is also capable of functioning as a PRPG for generating pseudo-random data vectors. The parallel decompressor is suitable for incorporation into BIST circuitry of ICs. For BIST circuitry with multiple scan chains, the parallel decompressor may be incorporated without requiring additional flip-flops (beyond those present in the LFSR and scan chains). In one embodiment, an incorporating IC includes boundary scan design compatible with the IEEE 1149.1 standard. Multiple ones of such ICs may be incorporated in a circuit board. Software tools for generating ICs with boundary scan having BIST circuitry incorporated with the parallel decompressor, and for computing the test data seeds for the deterministic partially specified test vectors are also disclosed.
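As background for the PRPG mode mentioned above, a minimal sketch of an LFSR generating pseudo-random bits and being "reseeded" to expand a different deterministic pattern; the 8-bit polynomial and seeds are arbitrary illustrations, and the patent's parallel multi-scan-chain decompression and seed computation are not reproduced.

```python
# Minimal sketch of an LFSR used as a pseudo-random pattern generator (PRPG)
# and of "reseeding": loading a computed seed so the LFSR expands it into a
# deterministic test vector. The 8-bit polynomial and seeds are arbitrary
# illustrations; the patent's parallel, multi-scan-chain decompressor and
# seed-computation method are not reproduced.
def lfsr_bits(seed, taps, nbits):
    """Fibonacci LFSR: yields nbits pseudo-random bits from the given seed."""
    state = seed
    width = max(taps)
    for _ in range(nbits):
        yield state & 1
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (feedback << (width - 1))

if __name__ == "__main__":
    taps = (8, 6, 5, 4)                       # a maximal-length 8-bit polynomial
    random_vector = list(lfsr_bits(seed=0xA5, taps=taps, nbits=16))
    reseeded_vec = list(lfsr_bits(seed=0x3C, taps=taps, nbits=16))
    print("PRPG output:", random_vector)
    print("reseeded   :", reseeded_vec)       # a different seed -> a different deterministic pattern
```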

Journal ArticleDOI
TL;DR: The relationship between a previously published sensor placement technique, called Effective Independence, and system-realization methods of modal identification is presented, and it is shown that the Effective Independence sensor configuration provides superior modal identification results, as predicted by the analytical work.
Abstract: The relationship between a previously published sensor placement technique, called Effective Independence, and system-realization methods of modal identification is presented. The sensor placement method maximizes spatial independence and signal strength of targeted mode shapes by maximizing the determinant of an associated Fisher information matrix. It is shown that the sensor placement method also enhances modal identification using system realization techniques by minimizing the size of the required test data matrix, maximizing the modal observability, enhancing the separation of target modes from computational or noise modes, and optimizing the estimation of the discrete system plant matrix. Three currently popular system-realization methods are considered in the analysis, including the Eigensystem Realization Algorithm, the Q-Markov COVER method, and the Polyreference method. A numerical example is also presented using the Polyreference modal identification technique in conjunction with several sensor configurations selected using differing placement methods. The corresponding test data is from a modal survey performed on the Controls-Structures-Interaction Evolutionary Model testbed at the NASA LaRC Space Structures Research Laboratory. It is shown that the Effective Independence sensor configuration provides superior modal identification results as predicted by the analytical work.
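A minimal sketch of the Effective Independence deletion loop, assuming NumPy and a random stand-in mode-shape matrix (not a model of the CSI Evolutionary Model testbed): candidates contributing least to the determinant of the Fisher information matrix are removed one at a time.

```python
# Minimal sketch of Effective Independence (EfI) sensor placement: start from
# a candidate set, compute each candidate's contribution via the diagonal of
# the projection matrix, and iteratively delete the lowest contributor.
# The mode-shape matrix here is a random stand-in, not test-article data.
import numpy as np

def effective_independence(phi, n_sensors):
    """phi: (n_candidates, n_modes) target mode shapes; returns kept row indices."""
    keep = np.arange(phi.shape[0])
    while len(keep) > n_sensors:
        p = phi[keep]
        # E_D = diag(P (P^T P)^-1 P^T): each sensor's fractional contribution
        # to the determinant/rank of the Fisher information matrix P^T P.
        ed = np.einsum("ij,ij->i", p @ np.linalg.inv(p.T @ p), p)
        keep = np.delete(keep, np.argmin(ed))
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    phi = rng.normal(size=(60, 5))          # 60 candidate locations, 5 target modes
    sensors = effective_independence(phi, n_sensors=10)
    fim = phi[sensors].T @ phi[sensors]
    print("kept sensors:", sensors)
    print("log det FIM :", np.linalg.slogdet(fim)[1])
```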

Journal ArticleDOI
TL;DR: In this paper, the authors have shown how a random search technique, the genetic algorithm (GA), can be used to calibrate constitutive models; the approach considers the overall behavior of a material, not the behavior at some specific states as the traditional method does.
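A toy sketch of the idea, assuming a simple two-parameter exponential hardening curve and a basic GA (truncation selection, arithmetic crossover, Gaussian mutation); the fitness is the misfit over the whole stress-strain curve, i.e. the "overall behavior". None of this reproduces the authors' constitutive model or GA operators.

```python
# Toy sketch of calibrating a constitutive model with a genetic algorithm:
# individuals are parameter vectors and fitness is the misfit between the
# model's predicted stress-strain curve and the full test curve. The
# two-parameter exponential hardening model and the GA settings are
# illustrative assumptions, not the authors' model or operators.
import numpy as np

rng = np.random.default_rng(7)
strain = np.linspace(0.0, 0.05, 50)

def model(params, eps):
    E, sigma_sat = params                       # initial stiffness, saturation stress
    return sigma_sat * (1.0 - np.exp(-E * eps / sigma_sat))

# Synthetic "test data": the true curve plus measurement noise.
observed = model((2.0e4, 400.0), strain) + rng.normal(scale=2.0, size=strain.size)

def fitness(params):
    return -np.mean((model(params, strain) - observed) ** 2)   # higher is better

def calibrate(pop_size=40, gens=200, bounds=((1e3, 1e5), (100.0, 800.0))):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
        n_child = pop_size - len(parents)
        i1 = rng.integers(len(parents), size=n_child)
        i2 = rng.integers(len(parents), size=n_child)
        alpha = rng.uniform(size=(n_child, 1))
        children = alpha * parents[i1] + (1 - alpha) * parents[i2]  # arithmetic crossover
        children += rng.normal(scale=0.02 * (hi - lo), size=children.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    return pop[np.argmax([fitness(ind) for ind in pop])]

if __name__ == "__main__":
    print("calibrated (E, sigma_sat):", calibrate())   # should approach (2e4, 400)
```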

Proceedings ArticleDOI
01 May 1996
TL;DR: A novel approach to automated test data generation is presented in which assertions are used to generate test cases; the goal is to identify test cases on which an assertion is violated, and experiments show that this approach may significantly improve the chances of finding software errors as compared to the existing methods of test generation.
Abstract: Assertions are recognized as a powerful tool for automatic run time detection of software errors. However, existing testing methods do not use assertions to generate test cases. We present a novel approach of automated test data generation in which assertions are used to generate test cases. In this approach the goal is to identify test cases on which an assertion is violated. If such a test is found then this test uncovers an error in the program. The problem of finding program input on which an assertion is violated may be reduced to the problem of finding program input on which a selected statement is executed. As a result, the existing methods of automated test data generation for white box testing may be used to generate tests to violate assertions. The experiments have shown that this approach may significantly improve the chances of finding software errors as compared to the existing methods of test generation.
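A toy sketch of the reduction described in the abstract: finding an input that violates an assertion is treated as finding an input that executes the raising branch, here with plain random search standing in for the paper's test data generator. The function and its assertion are made-up examples.

```python
# Toy sketch of assertion-based test data generation: search for an input on
# which an assertion is violated; such an input uncovers an error. Random
# search stands in for the paper's white-box test data generator; the
# function and its assertion are made-up examples.
import random

def midpoint(a, b):
    mid = (a + b) // 2
    # Assertion encoding the programmer's expectation that a <= b on entry;
    # inputs with a > b violate it and expose the missing precondition check.
    assert a <= mid <= b, f"assertion violated for a={a}, b={b}"
    return mid

def find_violating_input(trials=100_000):
    for _ in range(trials):
        a, b = random.randint(-1000, 1000), random.randint(-1000, 1000)
        try:
            midpoint(a, b)
        except AssertionError as exc:
            return (a, b), str(exc)       # this test case uncovers an error
    return None, "no violation found"

if __name__ == "__main__":
    print(find_violating_input())
```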

Patent
10 Jul 1996
TL;DR: In this article, the authors present a method for automatically generating validation tests for implementations of a program specification for an operating system, software application or a machine, where the program specification is expressed at least in part in terms of data structures and relationships.
Abstract: A method and apparatus for automatically generating validation tests for implementations of a program specification for an operating system, software application or a machine, where the program specification is expressed at least in part in terms of data structures and relationships. The method is carried out by a computer. The program specification is expressed in an interface specification language which is automatically parsed, and is then transformed into an extended finite state machine (EFSM) or multiple-EFSM architecture internally represented in the computer, the EFSM including objects representing states and transitions between those states representing executable functions, annotated to the states. The annotations may represent predicates, test data, value assignments, branch conditions, etc. The EFSM or architecture is traversed by a path traversal procedure, either exhaustively or in part, thereby producing path files, one for each path taken. Each path file is linked to a program shell, which is automatically generated for the specification, resulting in one independent validation test for each path file. Each validation test includes a call to the implementation of the program specification, and presents that implementation with a test vector representing a given path through the model. Failure and success responses are produced, indicating whether the implementation behaved as expected. Multiple validation tests may be linked or combined in a variety of ways to form a superstructure (architecture) of validation tests for testing of many routines in conjunction with one another, such as for testing all the routines specified for an operating system at the same time.
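A minimal sketch of the traversal step, assuming a hand-written three-transition EFSM: depth-bounded enumeration of transition sequences plays the role of the path traversal procedure, and each path would become one validation test. The "session" machine and its guard annotations are made-up; the patent's interface-specification language and program shells are not modeled.

```python
# Minimal sketch of EFSM path traversal for test generation: a small extended
# finite state machine with annotated transitions is traversed depth-first,
# and each path becomes one validation-test vector of calls. The machine, its
# annotations, and the depth bound are made-up examples.
from collections import namedtuple

Transition = namedtuple("Transition", "src event guard dst")

EFSM = [
    Transition("CLOSED", "open",  "handle is free",  "OPEN"),
    Transition("OPEN",   "write", "buffer not full", "OPEN"),
    Transition("OPEN",   "close", "always",          "CLOSED"),
]

def paths(state="CLOSED", depth=4, prefix=()):
    """Enumerate all transition sequences up to a bounded depth (the 'path files')."""
    yield prefix
    if depth == 0:
        return
    for t in EFSM:
        if t.src == state:
            yield from paths(t.dst, depth - 1, prefix + (t,))

if __name__ == "__main__":
    for i, path in enumerate(p for p in paths() if p):
        print(f"validation test {i}:")
        for t in path:
            print(f"    {t.event}()  # expect guard: {t.guard}")
```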

Journal ArticleDOI
TL;DR: In this paper, a three-dimensional mathematical model is developed to simulate the response of steel structures in the event of a fire, using isoparametric shell finite elements, based on a tangential stiffness approach which allows sophisticated simulations to be executed economically.

Proceedings ArticleDOI
20 Oct 1996
TL;DR: This paper presents a new effective scheme to decompress in parallel deterministic test patterns for circuits with multiple scan chains and shows that the proposed technique greatly reduces the amount of test data with low cost.
Abstract: This paper presents a new effective scheme to decompress in parallel deterministic test patterns for circuits with multiple scan chains. Two implementations of the scheme are discussed. In the first one, the patterns are generated by the reseeding of a hardware structure which is mostly comprised of the already existing DFT environment. In the second approach, the patterns are generated through the execution of a program on a simple embedded processor. Extensive experiments with the largest ISCAS'89 benchmarks show that the proposed technique greatly reduces the amount of test data with low cost. Efficient automatic test pattern generation algorithms are also presented to enhance the efficiency of the proposed approach.

Journal ArticleDOI
TL;DR: In this paper, a method has been developed by which existing fire resistance test data can be used to calculate the total heat flux incident on the specimen at any instant during the test.

Proceedings ArticleDOI
TL;DR: In this paper, a combined numerical and experimental study was conducted to assess the accuracy of six different methods of data reduction for the mixed-mode bending test, including two methods that use only the load data from the test, a modification to one of these methods to improve accuracy, two variations of compliance calibration, and a newly proposed load-deflection method.
Abstract: Results are presented from a combined numerical and experimental study to assess the accuracy of six different methods of data reduction for the mixed-mode bending test. These include two methods in the literature that use only the load data from the test, a modification to one of these methods to improve accuracy, two variations of compliance calibration, and a newly proposed load-deflection method. First, the accuracy of the various methods was evaluated by comparison to finite element predictions for a typical laminate. Second, the various methods were applied to double cantilever beam and end-notched flexure test data that had previously been reduced by well-established techniques. Finally, five laminates were tested in the mixed-mode bending fixture at each of five mode ratios: GII/G = 0.2, 0.4, 0.6, 0.8, and 1.0. The data from these tests were reduced by the various data reduction methods. The mean value of the critical energy release rate Gc at GII/G = 0.4 was compared to the mean Gc obtained by compliance calibration of a separate set of five single leg bending test specimens, and Gc at GII/G = 1.0 was compared to the mean Gc obtained by compliance calibration of a separate set of five end-notched flexure test specimens. By these comparisons, by physical considerations of the test results, and by examinations of the standard deviations of the various data pools, it was concluded that a method that uses only load data from the test is the most accurate. For improved accuracy, a modification to this method is suggested that involves only the experimental determination of the bending rigidities of the cracked and uncracked regions and the use of these results in the reduction of data.

Journal ArticleDOI
TL;DR: The CEG-BOR strategy is believed to be a cost-effective methodology for the development of systematic specification-based software test suites and is shown to be very effective in detecting a broad spectrum of fault types.
Abstract: Software testing is a very important phase in the software development life cycle. There are two approaches to software testing. Specification based testing treats a program as a black box and disregards its internal structure. Program based testing, on the other hand, depends on the implementation code. Even though studies indicate that specification based testing is comparable, or even superior, to program based testing, most specification based testing approaches are informal. This thesis presents a specification based testing strategy, called CEG-BOR, which combines the use of cause-effect graphs (CEGs) as a mechanism for representing specifications and the use of the Boolean operator (BOR) strategy for generating tests for a Boolean expression. If all causes of a CEG are independent from each other, a test set for the CEG can be constructed such that all boolean operator faults in the CEG can be detected and the size of this test set grows linearly with the number of nodes in the CEG. Four case studies are conducted to provide empirical data on the performance of CEG-BOR. Empirical results indicate that CEGs can be used to model a large class of software specifications and that CEG-BOR is very effective in detecting a broad spectrum of faults. Also, a BOR test set based on a CEG specification provides better coverage of the implementation code than test sets based on random testing, functional testing, and state-based testing. For a CEG that does not have mutually independent causes, the BOR strategy does not perform well. To remedy this problem, a new test generation strategy is presented, which combines the BOR strategy with the Meaningful Impact (MI) strategy, a recently developed test generation strategy for Boolean expressions. This new strategy, called BOR+MI, decomposes a Boolean expression into mutually independent components, applies the BOR or MI strategy to each component for test generation, and then applies the BOR strategy to combine the test sets for all components. The size and fault detection capability of a BOR+MI test set are investigated. Both analytical and empirical results show that the BOR+MI strategy generates a smaller test set than the MI strategy and provides fault detection ability comparable to that of the MI strategy. The BOR strategy is also refined to improve the detection of several types of parenthesis faults in a Boolean expression. The BOR+MI approach is further extended to CEGs with causes being relational expressions. This extension, called BRO+MI, detects incorrect relational operators in relational expressions and also accounts for user-defined or implicit restrictions on the causes of a CEG. Use of constraint logic programming techniques to automatically generate test data to satisfy BRO+MI test cases is proposed. Heuristics for incremental constraint propagation and deferred constraint satisfaction to make test data generation more efficient are presented. Empirical results indicate that the application of BOR based test generation strategies to specifications represented as predicates is a practical, scalable and cost-effective approach for development of effective software test suites across a wide set of application areas.
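The BOR/MI composition rules themselves are not reproduced here; as a rough, brute-force illustration of the adequacy criterion they target, the sketch below mutates the Boolean operators of a small predicate and checks whether a candidate test set distinguishes every mutant from the original.

```python
# Brute-force illustration of boolean-operator fault detection for a small
# cause-effect predicate: mutate each operator/negation and check whether a
# candidate test set distinguishes every mutant from the original. This is
# NOT the BOR/MI construction; it only illustrates the adequacy criterion
# those strategies are designed to meet with small test sets.
from itertools import product

def original(a, b, c):
    return (a and b) or c

MUTANTS = [
    lambda a, b, c: (a or b) or c,        # and -> or
    lambda a, b, c: (a and b) and c,      # or -> and
    lambda a, b, c: (not a and b) or c,   # negate a cause
]

def detects_all(test_set):
    return all(any(m(*t) != original(*t) for t in test_set) for m in MUTANTS)

if __name__ == "__main__":
    exhaustive = list(product([True, False], repeat=3))       # all 8 combinations
    small = [(True, True, False), (True, False, False), (False, True, False)]
    print("exhaustive detects all mutants:", detects_all(exhaustive))
    print("3-test candidate detects all:  ", detects_all(small))
```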

Proceedings ArticleDOI
03 Oct 1996
TL;DR: Results show how performance varies with the length of the test utterance and with whether or not the training data has been transcribed; the dominant factor in performance appears to be channel or handset mismatch between training and testing data.
Abstract: The authors present a study of a speaker verification system for telephone data based on large-vocabulary speech recognition. After describing the recognition engine, they give details of the verification algorithm and draw comparisons with other systems. The system has been tested on a test set taken from the Switchboard corpus of conversational telephone speech, and they present results showing how performance varies with length of test utterance, and whether or not the training data has been transcribed. The dominant factor in performance appears to be channel or handset mismatch between training and testing data.

01 Sep 1996
TL;DR: In this article, an F/A-18 aircraft was modified to perform flight research at high angles of attack (AOA) using thrust vectoring and advanced control law concepts for agility and performance enhancement and to provide a testbed for the computational fluid dynamics community.
Abstract: An F/A-18 aircraft was modified to perform flight research at high angles of attack (AOA) using thrust vectoring and advanced control law concepts for agility and performance enhancement and to provide a testbed for the computational fluid dynamics community. Aeroservoelastic (ASE) characteristics had changed considerably from the baseline F/A-18 aircraft because of structural and flight control system amendments, so analyses and flight tests were performed to verify structural stability at high AOA. Detailed actuator models that consider the physical, electrical, and mechanical elements of actuation and its installation on the airframe were employed in the analysis to accurately model the coupled dynamics of the airframe, actuators, and control surfaces. This report describes the ASE modeling procedure, ground test validation, flight test clearance, and test data analysis for the reconfigured F/A-18 aircraft. Multivariable ASE stability margins are calculated from flight data and compared to analytical margins. Because this thrust-vectoring configuration uses exhaust vanes to vector the thrust, the modeling issues are nearly identical for modern multi-axis nozzle configurations. This report correlates analysis results with flight test data and makes observations concerning the application of the linear predictions to thrust-vectoring and high-AOA flight.

Patent
12 Nov 1996
TL;DR: In this article, a record-keeping method for serialized portable memory devices is proposed, which consists of assigning serialized devices to individual lots of material and manufacturing process batches, writing identification and requirements data into the devices, writing manufacturing and test data into devices during the process and writing the test data from one device into another device if the comparison is favorable.
Abstract: A recordkeeping method comprises assigning serialized portable memory devices to individual lots of material and manufacturing process batches, writing identification and requirements data into the portable memory devices, writing manufacturing and test data into the portable memory devices during the process and writing the manufacturing and test data from one portable memory device into another by first reading the identification data from both portable memory devices, comparing the identification data read from the two portable memory devices and writing the manufacturing and test data into the other portable memory device if the comparison is favorable.

Patent
Yan Xu, Murtuza Ali Lakhani
15 Aug 1996
TL;DR: In this paper, the authors present a system for interactive built-in self-testing with user-programmable test patterns, which operates in the context of an integrated circuit (IC) including BIST logic and test interface circuit resident on the IC.
Abstract: Methods and apparatus for interactive built-in self-testing with user-programmable test patterns are disclosed. The present invention operates in the context of an integrated circuit (IC) including built-in self-test (BIST) logic and a test interface circuit resident on the IC. The BIST logic executes a BIST routine for testing the IC, and the test interface achieves the inputting of an external test pattern into the BIST logic from an external logic circuit. The test interface includes a first flag storage element accessible to the BIST logic. The first flag storage element stores a first flag that indicates whether the test pattern will be provided to the IC from the external logic. A test data storage element in the test interface stores the external test pattern, and is also accessible to the BIST logic. A second flag storage element accessible to the BIST logic stores a second flag to indicate whether the test pattern is available in the test data storage element. Test control logic receives a first instruction from the external logic, and executes the first instruction to set the first flag. The test control logic reads the test pattern and sets the second flag after the test pattern is stored in the test data storage element. If the first flag is not set, the BIST logic executes the BIST routine using a test pattern internally generated on the IC. On the other hand, if the first flag is set, then the BIST logic executes the BIST routine using the test patterns stored in the test data storage element.

Journal ArticleDOI
TL;DR: This paper presents a classification method that tries to anticipate slight deviations between the (statistical properties of the) training data and the test data (i.e. the data to which the classifier is applied in the operational phase), based on a generalised likelihood function for determining the kernel width of the Parzen classifier.
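A minimal sketch of a Parzen (kernel density) classifier whose kernel width is chosen by maximizing a cross-validated log-likelihood, assuming scikit-learn; the plain likelihood criterion and synthetic data stand in for the paper's generalised likelihood function.

```python
# Minimal sketch of a Parzen (kernel density) classifier with one density per
# class and a kernel width chosen by cross-validated log-likelihood. The data,
# bandwidth grid, and plain (non-generalised) likelihood criterion are
# illustrative; the paper's generalised likelihood for anticipating
# train/test deviations is not reproduced here.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0, 0], scale=1.0, size=(150, 2))
X1 = rng.normal(loc=[2, 2], scale=1.0, size=(150, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 150 + [1] * 150)

def fit_parzen(Xc):
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": np.logspace(-1, 0.5, 15)}, cv=5)
    return grid.fit(Xc).best_estimator_        # kernel width by held-out log-likelihood

models = [fit_parzen(X[y == k]) for k in (0, 1)]
priors = [np.mean(y == k) for k in (0, 1)]

def predict(x):
    scores = [m.score_samples(x) + np.log(p) for m, p in zip(models, priors)]
    return np.argmax(np.stack(scores), axis=0)

X_test = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([2, 2], 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)
print("test accuracy:", np.mean(predict(X_test) == y_test))
print("chosen bandwidths:", [m.bandwidth for m in models])
```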

01 Sep 1996
TL;DR: In this article, a commercial-type bypass engine with aviation fuel was used in this test series and the test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.
Abstract: NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and, hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the need of AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine with aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.

Book ChapterDOI
01 Jan 1996
TL;DR: It is demonstrated, using an expertly classified test data set, that ‘naive’ Bayesian models and multi-layered perceptrons can significantly out-perform the traditional methods.
Abstract: Biological methods of monitoring river water quality have enormous potential but this is not presently being realised owing to inadequacies in methods of data interpretation and classification. This paper describes the development and testing of several classification models based on Bayesian, neural and machine learning techniques, and compares their performance with two traditional models. It is demonstrated, using an expertly classified test data set, that ‘naive’ Bayesian models and multi-layered perceptrons can significantly out-perform the traditional methods. It is concluded that these two techniques presently provide the most promising means of realising the full potential of bio-monitoring, either acting separately or jointly as complementary ’experts’.
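A minimal sketch of the comparison, assuming scikit-learn and synthetic "taxon abundance" features in place of the real biomonitoring data: a naive Bayes model and a small multi-layer perceptron are trained and scored on a held-out, expert-labelled test set.

```python
# Minimal sketch of the comparison described above: a 'naive' Bayesian model
# and a multi-layer perceptron trained on biological monitoring features and
# scored on a held-out test set. The synthetic taxon-abundance features and
# quality classes are placeholders for the real biomonitoring data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 600
quality = rng.integers(0, 3, size=n)                  # 3 water-quality classes
# Taxon abundances loosely tied to the quality class, plus noise.
X = rng.poisson(lam=5 + 10 * quality[:, None] * np.array([1.0, 0.5, 0.1, 0.8]),
                size=(n, 4)).astype(float)
X += rng.normal(scale=2.0, size=X.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X, quality, test_size=0.3, random_state=1)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                        random_state=1))]:
    clf.fit(X_tr, y_tr)
    print(f"{name:12s} test accuracy: {clf.score(X_te, y_te):.2f}")
```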

Journal ArticleDOI
TL;DR: Results validate the accuracy and utility of physiological parametric models and multivariate statistical classification in SHA test interpretation; a test group consisting of patients with possible partial unilateral deficits was classified using the same classification function as the normal and full unilateral learning sets.
Abstract: The usefulness of vestibular testing is directly related to the accuracy of the test interpretations. Two factors, subjective analysis of large test data sets and failure to make appropriate age corrections, tend to reduce test accuracy. Correction of these problems can be accomplished by application of physiologically based models of vestibular function and multivariate classification techniques to the test data, thereby creating a more objective test interpretation procedure. Herein we report our results on the use of this strategy for analysis of sinusoidal harmonic acceleration (SHA) test interpretation. For each patient, models reduce the large set of SHA test variables to three key parameters: asymptotic gain, vestibulo-ocular reflex time constant, and bias. In addition, the new technique objectively adjusts these parameters for the patient's age. Finally, each patient's set of parameters is statistically classified as either normal or as unilateral peripheral deficit. Based on learning sets of 57 ...

Journal ArticleDOI
TL;DR: In this paper, the design, performance, and analysis of slug interference tests for two field test examples are presented, and compared with standard constant-rate pumping tests for the determination of transmissivity, storativity, and vertical anisotropy.
Abstract: Slug interference testing may be particularly useful for characterizing hydraulic properties of aquifer sites where disposal of contaminated ground water makes pumping tests undesirable. The design, performance, and analysis of slug interference tests for two field test examples are presented. Results were compared with standard constant-rate pumping tests. The comparison indicates that slug interference tests provide estimates comparable to those obtained from short duration pumping tests for the determination of transmissivity, storativity, and vertical anisotropy. The close agreement in hydraulic property values obtained using the two test methods suggests that slug interference testing, under favorable test conditions (for example, observation well distances less than or equal to 30 m), can provide representative aquifer characterization results. The quality and extent of test data obtained also indicate the potential use of slug interference testing for three-dimensional hydrologic characterization investigations, when conducted using multilevel monitoring facilities.

Journal Article
TL;DR: In this article, an assessment is made of recent test data from France, Japan, and Russia and of earlier test data in the United States in relation to the safety analysis performed for reactivity-initiated accidents in power reactors.
Abstract: An assessment is made of recent test data from France, Japan, and Russia and of earlier test data from the United States in relation to the safety analysis performed for reactivity-initiated accidents in power reactors in the United States. Considerations include mode of cladding failure, oxidation, hydriding, and pulse-width effects. From the data trend and from these considerations, we conclude that the cladding failure threshold for fuel rods with moderate-to-high burnup is roughly 100-cal/g fuel peak fuel-rod enthalpy for boiling-water reactors and pressurized-water reactors. Realistic plant calculations suggest that cladding failure would not occur for rod-ejection or rod-drop accidents and therefore that pellet fragmentation and enhanced fission-product release from fuel pellets should not have to be considered in the safety analysis for these reactivity accidents. The data base, however, is sparse and contains much uncertainty. 32 refs., 11 figs., 8 tabs.