
Showing papers on "Test data" published in 2002


Journal ArticleDOI
TL;DR: Development work in moving from laboratory read speech data to real-world or 'found' speech data in preparation for the DARPA evaluations on this task from 1996 to 1999 is described.

493 citations


Proceedings ArticleDOI
07 Oct 2002
TL;DR: Embedded deterministic test technology is introduced, which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper introduces embedded deterministic test (EDT) technology, which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The EDT architecture, the compression algorithm, design flow, experimental results, and silicon implementation are presented.

430 citations


Journal ArticleDOI
TL;DR: A fast transient testing methodology for predicting the performance parameters of analog circuits showed a ten-times speedup in production testing, accurate prediction of the performance parameters, and a simpler test configuration.
Abstract: In this paper, a fast transient testing methodology for predicting the performance parameters of analog circuits is presented. A transient test signal is applied to the circuit under test (CUT) and the transient response of the circuit is sampled and analyzed to predict the circuit's performance parameters. An algorithm for generating the optimum transient test signal is presented. The methodology is demonstrated in a production environment using a low-power opamp. Results from production test data showed: 1) a ten times speedup in production testing; 2) accurate prediction of the performance parameters; and 3) a simpler test configuration.

286 citations


Journal ArticleDOI
TL;DR: This investigation seeks to confirm several key factors in computer-based versus paper-based assessment, including content familiarity, computer familiarity, competitiveness, and gender, and finds that higher-attaining students benefited most from computer-based assessment relative to higher-attaining students under paper-based testing.
Abstract: This investigation seeks to confirm several key factors in computer-based versus paper-based assessment. Based on earlier research, the factors considered here include content familiarity, computer familiarity, competitiveness, and gender. Following classroom instruction, freshman business undergraduates (N = 105) were randomly assigned to either a computer-based or identical paper-based test. ANOVA of test data showed that the computer-based test group outperformed the paper-based test group. Gender, competitiveness, and computer familiarity were NOT related to this performance difference, though content familiarity was. Higher-attaining students benefited most from computer-based assessment relative to higher-attaining students under paper-based testing. With the current increase in computer-based assessment, instructors and institutions must be aware of and plan for possible test mode effects.
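The analysis step above lends itself to a brief sketch; the scores below are invented placeholders, and SciPy's one-way ANOVA merely stands in for whatever statistics package the study actually used.

```python
# Hypothetical sketch of the analysis described above: a one-way ANOVA
# comparing scores from the computer-based and paper-based groups.
# The score lists are invented placeholder data.
from scipy import stats

computer_scores = [78, 85, 92, 74, 88, 81, 79, 90]
paper_scores = [70, 76, 83, 68, 80, 72, 75, 79]

f_stat, p_value = stats.f_oneway(computer_scores, paper_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value here would indicate a test-mode effect on mean scores.
```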

274 citations


Journal Article
TL;DR: A new approach that makes writing unit tests easier is presented, which uses a formal specification language's runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles.
Abstract: Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language's runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., pre- and postconditions). This makes the programmer's task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools.
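A minimal sketch of the idea in Python, assuming nothing about the paper's JML/JUnit tooling: runtime-checked pre- and postconditions serve as the test oracle, so the test cases only supply data. The function, tolerance, and test values are hypothetical.

```python
# Minimal Python analogue of specification-based oracles (the paper itself
# uses JML with JUnit): assertions encode the pre- and postconditions, and
# hand-written test data is simply pushed through them.
def sqrt_approx(x: float) -> float:
    assert x >= 0.0, "precondition: x must be non-negative"
    result = x ** 0.5
    assert abs(result * result - x) <= 1e-6 * max(1.0, x), "postcondition violated"
    return result

# Hand-written test data; the runtime checks decide pass/fail.
for value in [0.0, 1.0, 2.0, 144.0, 1e6]:
    sqrt_approx(value)
print("all oracle checks passed")
```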

261 citations


Proceedings ArticleDOI
Subhasish Mitra, Kee Sup Kim
07 Oct 2002
TL;DR: This work presents a technique for compacting test response data using combinational logic circuits that enables up to an exponential reduction in the number of pins required to collect test response from a chip.
Abstract: We present a technique for compacting test response data using combinational logic circuits. Our compaction technique enables up to an exponential reduction in the number of pins required to collect test response from a chip. The combinational circuits require negligible area, do not add any extra delay during normal operation, guarantee detection of defective chips even in the presence of sources of unknown logic values (often referred to as Xs) and preserve diagnosis capabilities for all practical scenarios. The technique has minimum impact on current design and test flows, and can be used to reduce test time, test data volume, test-I/O pins and tester channels, and also to improve test quality.
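As a rough illustration only (not the paper's actual construction or its handling of unknown X values), a combinational compactor can be modeled as an XOR spread matrix that maps many scan-chain outputs onto a few pins.

```python
# Behavioral model of a simple XOR-matrix space compactor: scan chain i
# drives compactor output j wherever matrix[i][j] == 1.  The matrix below
# is an arbitrary illustration, not the construction from the paper.
def compact(slice_bits, matrix):
    outputs = [0] * len(matrix[0])
    for chain, bit in enumerate(slice_bits):
        for out, connected in enumerate(matrix[chain]):
            if connected:
                outputs[out] ^= bit
    return outputs

spread = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]  # 4 chains -> 3 pins
print(compact([0, 0, 0, 0], spread))  # fault-free response
print(compact([1, 0, 0, 0], spread))  # a single error shows up on two pins
```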

243 citations


01 Jan 2002
TL;DR: This article presents a survey on automatic test data generation techniques that can be found in current literature, and the focus of this article is program-based generation, where the generation starts from the actual programs.
Abstract: In order to reduce the high cost of manual software testing and at the same time to increase the reliability of the testing processes, researchers and practitioners have tried to automate it. One of the most important components in a testing environment is an automatic test data generator — a system that automatically generates test data for a given program. Through the years, several attempts at automatic test data generation have been made. The focus of this article is program-based generation, where the generation starts from the actual programs. Thus, techniques such as GUI-based and syntax-based test data generation are not an issue in this article. In this article I present a survey on automatic test data generation techniques that can be found in current literature. Basic concepts and notions of test data generation as well as how a test data generator system works are described. Problems of automatic generation are identified and explained. Finally, important and challenging future research topics are presented.

227 citations


Patent
01 May 2002
TL;DR: A system and method of diagnosing diseases from biological data is disclosed in this article, where a system for automated disease diagnostics prediction can be generated using a database of clinical test data.
Abstract: A system and method of diagnosing diseases from biological data is disclosed. A system for automated disease diagnostics prediction can be generated using a database of clinical test data. The diagnostics prediction can also be used to develop screening tests to screen for one or more inapparent diseases. The prediction method can be implemented with Bayesian probability estimation techniques. The system and method permit clinical test data to be analyzed and mined for improved disease diagnosis.
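The patent only says that Bayesian probability estimation can be used; a minimal sketch of that flavor of prediction, with invented clinical features and values, might look like the following.

```python
# Hedged sketch of Bayesian diagnostics prediction from clinical test data,
# here with a Gaussian naive Bayes model.  Features, values, and labels are
# invented for illustration; the patent does not specify this exact model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# columns: [glucose, white_cell_count]; labels: 1 = disease, 0 = healthy
X_train = np.array([[180.0, 12.0], [95.0, 6.5], [200.0, 14.0], [88.0, 5.9]])
y_train = np.array([1, 0, 1, 0])

model = GaussianNB().fit(X_train, y_train)
print(model.predict_proba(np.array([[150.0, 10.0]])))  # posterior per class
```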

218 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of the response of silicon carbide to high velocity impact, including a wide range of loading conditions that produce large strains, high strain rates, and high pressures.
Abstract: This article presents an analysis of the response of silicon carbide to high velocity impact. This includes a wide range of loading conditions that produce large strains, high strain rates, and high pressures. Experimental data from the literature are used to determine constants for the Johnson–Holmquist constitutive model for brittle materials (JH-1). It is possible to directly determine the strength and pressure response of the intact material from test data in the literature. After the ceramic has failed, however, there are not adequate experimental data to directly determine the response of the failed material. Instead, the response is inferred from a comparison of computational results to ballistic penetration test results. After the constants have been obtained for the JH-1 model, a wide range of computational results are compared to experimental data in the literature. Generally, the computational results are in good agreement with the experimental results. Included are computational results that m...

203 citations


Journal ArticleDOI
TL;DR: In this article, a vibration waveform normalisation approach is presented, which enables the use of the pseudo-Wigner-Ville distribution to indicate deteriorating fault conditions under fluctuating load conditions.

139 citations


Proceedings ArticleDOI
10 Jun 2002
TL;DR: Experimental results for the larger ISCAS-89 benchmarks and an IBM production circuit show that reduced test data volume, test application time and low power scan testing can indeed be achieved in all cases.
Abstract: We present a test resource partitioning (TRP) technique that simultaneously reduces test data volume, test application time and scan power. The proposed approach is based on the use of alternating run-length codes for test data compression. Experimental results for the larger ISCAS-89 benchmarks and an IBM production circuit show that reduced test data volume, test application time and low power scan testing can indeed be achieved in all cases.
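The core idea, stripped of the specific alternating run-length code used in the paper, is to store a test stream as alternating runs of 0s and 1s; a minimal sketch follows.

```python
# Sketch of run-length extraction, the first step of run-length-based test
# data compression.  How each run length is then encoded (e.g. with the
# alternating run-length code of the paper) is abstracted away here.
def runs(bits):
    """Yield (symbol, length) pairs for maximal runs of identical bits."""
    prev, count = bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            yield prev, count
            prev, count = b, 1
    yield prev, count

stream = "000000110000000111110"
print(list(runs(stream)))  # [('0', 6), ('1', 2), ('0', 7), ('1', 5), ('0', 1)]
```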

Journal ArticleDOI
TL;DR: A server, NETASA, is implemented for predicting solvent accessibility of amino acids using a newly optimized neural network algorithm, and the applicability of neural networks for ASA prediction has been confirmed with a larger data set and wider range of state thresholds.
Abstract: Motivation: Prediction of the tertiary structure of a protein from its amino acid sequence is one of the most important problems in molecular biology. The successful prediction of solvent accessibility will be very helpful to achieve this goal. In the present work, we have implemented a server, NETASA, for predicting solvent accessibility of amino acids using our newly optimized neural network algorithm. Several new features in the neural network architecture and training method have been introduced, and the network learns faster to provide accuracy values which are comparable to or better than those of other methods of ASA prediction. Results: Predictions in two- and three-state classification systems with several thresholds are provided. Our prediction method achieved accuracy levels of up to 90% for training and 88% for test data sets. Three-state prediction results provide a maximum 65% accuracy for training and 63% for the test data. Applicability of neural networks for ASA prediction has been confirmed with a larger data set and wider range of state thresholds. Salient differences between a linear and exponential network for ASA prediction have been analysed.

Proceedings ArticleDOI
28 Apr 2002
TL;DR: This paper proposes a technique based on the reconfiguration of scan chains to reduce test time and test data volume for Illinois Scan Architecture (ILS) based designs.
Abstract: As the complexity of VLSI circuits is increasing due to the exponential rise in transistor count per chip, testing cost is becoming an important factor in the overall integrated circuit (IC) manufacturing cost. This paper addresses the issue of decreasing test cost by lowering the test data bits and the number of clock cycles required to test a chip. We propose a technique based on the reconfiguration of scan chains to reduce test time and test data volume for Illinois Scan Architecture (ILS) based designs. This technique is presented with details of hardware implementation as well as the test generation and test application procedures. The reduction in test time and test data volume achieved using this technique is quite significant in most circuits.

Patent
07 Aug 2002
TL;DR: In this article, a data collection device having a memory is fixedly connected to a component, and a test device communicates with the data collector to store test data concerning the component in the data collection devices.
Abstract: The present invention provides systems and methods for testing and storage of information related to a component. A data collection device having a memory is fixedly connected to the component. A test device communicates with the data collection device to store test data concerning the component in the data collection device. The test device also performs analysis of the test data and provides information concerning the health and maintenance history of the component. The present invention also provides systems and methods for determining the current drawn or supplied by electrical components connected in parallel in an electrical system. A current sensor located between the electrical components determines the current supply or draw of one of the electrical components, while a current sensor between the electrical components and the remainder of the electrical system determines a cumulative current draw or supply by both the electrical components.

Patent
Ming Jin, Myint Ngwe, David Loh, Quek Leong Choo, Mingyou Hu
31 Jan 2002
TL;DR: In this article, a method for detecting defects in a recordable medium such as a hard disc drive based on error energy is presented, which may include the steps of writing test data to the medium and reading back the test data.
Abstract: A method is disclosed for detecting defects in a recordable medium such as a hard disc drive based on error energy. The method may include the steps of writing test data to the medium and reading back the test data. The method may also include the steps of computing an error energy based on the square of the difference between the read back data and an ideal version of the test data and comparing the error energy with an energy threshold. The method generates a defect signal when the error energy exceeds the energy threshold. The method may also be used to identify the media defect according to its error energy profile. An apparatus for detecting defects in a recordable medium is also disclosed.
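The error-energy computation is simple enough to sketch directly; the threshold and sample values below are arbitrary.

```python
# Sketch of the defect check described above: square the difference between
# read-back samples and the ideal test data, sum it, and flag a defect when
# the energy exceeds a threshold.  Threshold and data are arbitrary here.
def defect_detected(read_back, ideal, threshold=0.5):
    error_energy = sum((r - i) ** 2 for r, i in zip(read_back, ideal))
    return error_energy > threshold

ideal = [1.0, -1.0, 1.0, -1.0, 1.0]
read_back = [0.9, -1.1, 0.2, -0.9, 1.0]  # third sample badly corrupted
print(defect_detected(read_back, ideal))  # True
```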

Journal ArticleDOI
TL;DR: Test data compression is used to reduce both test data volume and scan power and shows that Golomb coding of precomputed test sets leads to significant savings in peak and average power, without requiring either a slower scan clock or blocking logic in the scan cells.
Abstract: Test data volume and power consumption for scan vectors are two major problems in system-on-a-chip testing. Since static compaction of scan vectors invariably leads to higher power for scan testing, the conflicting goals of low-power scan testing and reduced test data volume appear to be irreconcilable. We tackle this problem by using test data compression to reduce both test data volume and scan power. In particular, we show that Golomb coding of precomputed test sets leads to significant savings in peak and average power, without requiring either a slower scan clock or blocking logic in the scan cells. We also improve upon prior work on Golomb coding by showing that a separate cyclical scan register is not necessary for pattern decompression. Experimental results for the larger ISCAS 89 benchmarks show that reduced test data volume and low power scan testing can indeed be achieved in all cases.
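Golomb coding itself is standard; a compact sketch of encoding run lengths with a power-of-two parameter (the Golomb-Rice case) is shown below, independent of how the paper applies it to scan vectors.

```python
# Sketch of Golomb coding of run lengths with code parameter m.  Using a
# power of two makes the remainder a fixed-width field (Golomb-Rice case).
def golomb_encode(run_length, m=4):
    q, r = divmod(run_length, m)
    prefix = "1" * q + "0"                                # unary quotient
    tail = format(r, "0{}b".format(m.bit_length() - 1))   # log2(m)-bit remainder
    return prefix + tail

for n in [0, 3, 4, 9]:
    print(n, golomb_encode(n))   # 000, 011, 1000, 11001
```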

Journal ArticleDOI
TL;DR: In this article, a model for the damage behavior of polymer matrix composite laminates is presented, which predicts the inelastic effects as reduction of stiffness and increments of damage and unrecoverable deformation.
Abstract: A new model for damage behavior of polymer matrix composite laminates is presented. The model is developed for an individual lamina, and then assembled to describe the nonlinear behavior of the laminate. The model predicts the inelastic effects as reduction of stiffness and increments of damage and unrecoverable deformation. The model is defined using Continuous Damage Mechanics coupled with Classical Thermodynamic Theory. Unrecoverable deformations and damage are coupled by the concept of effective stress. New expressions of damage and unrecoverable deformation domains are presented so that the number of model parameters is small. Furthermore, model parameters are obtained from existing test data for unidirectional laminae, supplemented by cyclic shear stress-strain data. Comparisons with lamina and laminate test data are presented to demonstrate the ability of the model to predict the observed behavior.
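For reference, the effective stress concept mentioned above is conventionally written as follows in continuum damage mechanics; this is the textbook relation, not necessarily the paper's exact expression.

```latex
% Standard effective-stress relation of continuum damage mechanics
% (illustrative only; the paper's own expressions may differ in detail):
\[
  \tilde{\sigma} = \frac{\sigma}{1 - D}, \qquad 0 \le D < 1,
\]
% where D is the scalar damage variable and the effective stress
% \tilde{\sigma} replaces the nominal stress \sigma in the law governing
% unrecoverable deformation.
```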

Proceedings ArticleDOI
28 Apr 2002
TL;DR: A metric that can be used to evaluate the effectiveness of procedures for reducing the scan data volume is proposed that compares the achieved compression to the compression which is intrinsic to the use of multiple scan chains.
Abstract: We consider issues related to the reduction of scan test data in designs with multiple scan chains. We propose a metric that can be used to evaluate the effectiveness of procedures for reducing the scan data volume. The metric compares the achieved compression to the compression which is intrinsic to the use of multiple scan chains. We also propose a procedure for modifying a given test set so as to achieve reductions in test data volume assuming a combinational decompressor circuit.

Proceedings ArticleDOI
28 Jul 2002
TL;DR: A simple approach for combining global search with local optimization to discover improved hypotheses in general machine learning problems, and considers example-reweighting strategies that are reminiscent of boosting and other ensemble learning methods, but applied in a different way with a different goal.
Abstract: Almost all machine learning algorithms--be they for regression, classification or density estimation--seek hypotheses that optimize a score on training data. In most interesting cases, however, full global optimization is not feasible and local search techniques are used to discover reasonable solutions. Unfortunately, the quality of the local maxima reached depends on initialization and is often weaker than the global maximum. In this paper, we present a simple approach for combining global search with local optimization to discover improved hypotheses in general machine learning problems. The main idea is to escape local maxima by perturbing the training data to create plausible new ascent directions, rather than perturbing hypotheses directly. Specifically, we consider example-reweighting strategies that are reminiscent of boosting and other ensemble learning methods, but applied in a different way with a different goal: to produce a single hypothesis that achieves a good score on training and test data. To evaluate the performance of our algorithms we consider a number of problems in learning Bayesian networks from data, including discrete training problems (structure search), continuous training problems (parametric EM, non-linear logistic regression), and mixed training problems (Structural EM)- on both synthetic and real-world data. In each case, we obtain state of the art performance on both training and test data.
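A generic sketch of the perturbation loop described above, with a deliberately trivial learner standing in for the Bayesian-network and regression learners used in the paper; all names and numbers are illustrative.

```python
# Sketch of data-perturbation search: reweight the training examples, re-run
# the local fit on the perturbed objective, and keep whichever hypothesis
# scores best on the original (unweighted) data.  The "learner" here is a
# toy weighted least-squares slope, not the learners used in the paper.
import random

data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(1, 21)]

def fit(weights):
    num = sum(w * x * y for (x, y), w in zip(data, weights))
    den = sum(w * x * x for (x, y), w in zip(data, weights))
    return num / den

def score(slope):                      # unweighted training score (negative SSE)
    return -sum((y - slope * x) ** 2 for x, y in data)

best = fit([1.0] * len(data))
for _ in range(10):                    # perturb weights, refit, keep the best
    weights = [random.expovariate(1.0) for _ in data]
    candidate = fit(weights)
    if score(candidate) > score(best):
        best = candidate
print("best slope found:", round(best, 3))
```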

01 Jun 2002
TL;DR: The HART-II (Higher harmonic control Aeroacoustics Rotor Test) as discussed by the authors is a major cooperative program within the existing US-German and US-French Memoranda of Understanding/Agreements (MOU/MOA).
Abstract: In a major cooperative program within the existing US-German and US-French Memoranda of Understanding/Agreements (MOU/MOA), researchers from German DLR, French ONERA, NASA Langley, and the US Army Aeroflightdynamics Directorate (AFDD) conducted a comprehensive experimental program in October 2001 with a 40%-geometrically and aeroelastically scaled model of a BO-105 main rotor in the open-jet anechoic test section of the German-Dutch Windtunnel (DNW). This international cooperative program carries the acronym HART-II (Higher harmonic control Aeroacoustics Rotor Test). The main objective of the program is to improve the basic understanding and the analytical modeling capabilities of rotor blade-vortex interaction noise with and without higher harmonic pitch control (HHC) inputs, particularly the effect of rotor wakes on rotor noise and vibration. Comprehensive acoustic, rotor wakes, aerodynamic, and blade deformation data were obtained with pressure-instrumented blades. The test plan has been concentrated on measuring extensive rotor wakes with a 3-component Particle Image Velocimetry (PIV) technique, along with measurements of acoustics, blade surface pressures, and blade deformations. The prediction team with researchers from DLR, ONERA, NASA-Langley and AFDD was actively involved with the pre-test activities to formulate a test plan and measurement areas of the PIV technique. The prediction team predicted all the test results in advance before performing the wind tunnel test. This was done to obtain the best quality of test data, to improve the speed of measurements, and to determine the necessary measurement information for code validation. In this paper, an overview of the HART-II program and some representative measured and predicted results are presented.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: It is demonstrated that higher test data compression can be achieved by encoding both runs of 0's and 1's, and an extension to the FDR code is proposed whose effectiveness in achieving a higher compression ratio is shown experimentally.
Abstract: One of the major challenges in testing a system-on-a-chip (SOC) is dealing with the large test data size. To reduce the volume of test data, several test data compression techniques have been proposed. The frequency-directed run-length (FDR) code is a variable-to-variable run length code based on encoding runs of 0's. In this work, we demonstrate that higher test data compression can be achieved based on encoding both runs of 0's and 1's. We propose an extension to the FDR code and demonstrate by experimental results its effectiveness in achieving higher compression ratio.


Proceedings Article
09 Jul 2002
TL;DR: This paper presents an evolutionary test environment, which performs fully automatic test data generation for most structural test methods and reports on the results gained from the testing of real-world software modules.
Abstract: Testing is the most important analytic quality assurance measure for software. The systematic design of test cases is crucial for test quality. Structure-oriented test methods, which define test cases on the basis of the internal program structures, are widely used. Evolutionary testing is a promising approach for the automation of structural test case design which searches test data that fulfil given structural test criteria by means of evolutionary computation. In this paper we present our evolutionary test environment, which performs fully automatic test data generation for most structural test methods. We shall report on the results gained from the testing of real-world software modules. For most modules we reached full coverage for the structural test criteria.
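A toy sketch of the search loop behind evolutionary structural testing, with an invented function under test and arbitrary GA parameters; real tools use instrumented branch distances and richer genetic operators.

```python
# Toy evolutionary test data generation: evolve integer inputs toward
# covering a target branch, guided by a branch-distance style fitness.
# The function under test and all parameters are illustrative only.
import random

def under_test(x):                 # target branch: x == 4711
    return "branch hit" if x == 4711 else "branch missed"

def fitness(x):                    # distance to satisfying the condition
    return abs(x - 4711)

population = [random.randint(0, 10000) for _ in range(20)]
for _ in range(300):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:
        break
    parents = population[:10]
    population = parents + [p + random.randint(-200, 200) for p in parents]

best = min(population, key=fitness)
print("best input:", best, "->", under_test(best))
```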

Proceedings ArticleDOI
04 Mar 2002
TL;DR: A new compression algorithm geared to reduce the time needed to test scan-based designs compresses the test vector set by encoding the bits that need to be flipped in the current test data slice in order to obtain the mutated subsequent test data slice.
Abstract: In this paper we propose a new compression algorithm geared to reduce the time needed to test scan-based designs. Our scheme compresses the test vector set by encoding the bits that need to be flipped in the current test data slice in order to obtain the mutated subsequent test data slice. Exploitation of the overlap in the encoded data by effective traversal search algorithms results in drastic overall compression. The technique we propose can be utilized as not only a stand-alone technique but also can be utilized on test data already compressed, extracting even further compression. The performance of the algorithm is mathematically analyzed and its merits experimentally confirmed on the larger examples of the ISCAS '89 benchmark circuits.
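The slice-difference idea can be sketched compactly; how the flip positions are subsequently entropy-coded (the heart of the paper) is left out.

```python
# Sketch of the core idea above: each test data slice is stored as the list
# of bit positions that must be flipped to turn the previous slice into it.
# The entropy coding of those positions is omitted here.
def flip_positions(prev_slice, next_slice):
    return [i for i, (a, b) in enumerate(zip(prev_slice, next_slice)) if a != b]

slices = ["10110010", "10110110", "00110110"]
encoded = [slices[0]] + [flip_positions(slices[i], slices[i + 1])
                         for i in range(len(slices) - 1)]
print(encoded)  # ['10110010', [5], [0]]
```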

Journal ArticleDOI
TL;DR: In this article, the performance of unidirectional (UD) and laminated composites was compared with test data; the intact and degraded models provided good agreement with the measured results for several stress/strain curves.

Proceedings ArticleDOI
07 Oct 2002
TL;DR: This paper presents a test input data compression technique, which can be used to reduce input test data volume, test time, and the number of required tester channels.
Abstract: This paper presents a test input data compression technique, which can be used to reduce input test data volume, test time, and the number of required tester channels. The technique is based on grouping data packets and applying various binary encoding techniques, such as Huffman codes and Golomb-Rice codes. Experiments on actual industrial designs and benchmark circuits show an input vector data reduction ranging from 17× to 70×.

Patent
22 Aug 2002
TL;DR: In this article, a system, method and computer program product for determining whether a test sample is in a first or a second class of data (for example: cancerous or normal), comprising: extracting a plurality of emerging patterns from a training data set, creating a fIrst and a second list containing respectively, a frequency of occurrence of each emerging pattern that has a non-zero occurrence in the first and in the second classes of data.
Abstract: A system, method and computer program product for determining whether a test sample is in a first or a second class of data (for example: cancerous or normal), comprising: extracting a plurality of emerging patterns from a training data set, creating a fIrst and a second list containing respectively, a frequency of occurrence of each emerging pattern that has a non-zero occurrence in the first and in the second class of data; using a fixed number of emerging patterns, calculating a first and second score derived respectively from the frequencies of emerging patterns in the first list that also occur in the test data, and from the frequencies of emerging patterns in the second list that also occur in the test data; and deducing whether the test sample is categorized in the first or the second class of data by selecting the higher of the first and the second score.
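The scoring rule in the abstract translates almost directly into code; the patterns, frequencies, and feature items below are invented for illustration.

```python
# Sketch of the emerging-pattern scoring described above: each class keeps
# (pattern, frequency) pairs; a class's score for a test sample is the summed
# frequency of its top-k patterns that occur in the sample, and the class
# with the higher score is chosen.  All data here is invented.
def classify(sample, class_a_patterns, class_b_patterns, top_k=3):
    def score(pattern_list):
        ranked = sorted(pattern_list, key=lambda pf: pf[1], reverse=True)[:top_k]
        return sum(freq for pattern, freq in ranked if pattern <= sample)
    score_a, score_b = score(class_a_patterns), score(class_b_patterns)
    return ("first class", score_a) if score_a >= score_b else ("second class", score_b)

# patterns are sets of discretized expression items, e.g. "gene1 high"
class_a = [({"g1_high", "g3_low"}, 0.62), ({"g2_high"}, 0.40)]
class_b = [({"g1_low"}, 0.55), ({"g2_high", "g4_low"}, 0.33)]
print(classify({"g1_high", "g2_high", "g3_low"}, class_a, class_b))
```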

Proceedings ArticleDOI
10 Jun 2002
TL;DR: An integrated framework for plug-and-play SOC test automation based on a new approach for wrapper/TAM co optimization based on rectangle packing is described, which incorporates precedence and power constraints in the test schedule, while allowing the SOC integrator to designate a group of tests as preemptable.
Abstract: This paper describes an integrated framework for plug-and-play SOC test automation. This framework is based on a new approach for wrapper/TAM co optimization based on rectangle packing. We first tailor TAM widths to each core's test data needs. We then use rectangle packing to develop an integrated scheduling algorithm that incorporates precedence and power constraints in the test schedule, while allowing the SOC integrator to designate a group of tests as preemptable. Finally, we study the relationship between TAM width and tester data volume to identify an effective TAM width for the SOC. We present experimental results for non-preemptive, preemptive, and power-constrained test scheduling, as well as for effective TAM width identification for an academic benchmark SOC and three industrial SOCs.

Journal ArticleDOI
TL;DR: In this paper, an ANN was developed and used to estimate aquifer parameter values, namely transmissivity and storage coefficient, from pumping test data for a large diameter well, based upon a pre-specified range of aquifer parameters.

Patent
25 Sep 2002
TL;DR: In this article, a system and method for software test is presented that allows user management and adaptation of test procedures and the resulting test data and provides a cross-platform user interface which allows testing to be conducted on a plurality of platforms, i.e. it is not integrated with a single platform or language.
Abstract: A system and method for software test is disclosed that allows user management and adaptation of test procedures and the resulting test data. In an embodiment, the system and method provide a cross-platform user interface which allows testing to be conducted on a plurality of platforms, i.e. it is not integrated with a single platform or language (e.g., C++, Visual Basic, Java, or the like). The method further allows a user to customize a predetermined set of system characteristics relating to the stored test procedure. It is emphasized that this abstract is provided to comply with the rules requiring an abstract which will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope of meaning of the claims.