
Showing papers on "Test data" published in 1998


Journal ArticleDOI
TL;DR: The results support the feasibility of using EEG-based methods for monitoring cognitive load during human-computer interaction with neural network pattern recognition applied to EEG spectral features.
Abstract: We assessed working memory load during computer use with neural network pattern recognition applied to EEG spectral features. Eight participants performed high-, moderate-, and low-load working memory tasks. Frontal theta EEG activity increased and alpha activity decreased with increasing load. These changes probably reflect task difficulty-related increases in mental effort and the proportion of cortical resources allocated to task performance. In network analyses, test data segments from high and low load levels were discriminated with better than 95% accuracy. More than 80% of test data segments associated with a moderate load could be discriminated from high- or low-load data segments. Statistically significant classification was also achieved when applying networks trained with data from one day to data from another day, when applying networks trained with data from one task to data from another task, and when applying networks trained with data from a group of participants to data from new participants. These results support the feasibility of using EEG-based methods for monitoring cognitive load during human-computer interaction.
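As a rough illustration of the kind of pipeline described (a network classifying load level from EEG spectral features), the sketch below trains a small feed-forward classifier on synthetic theta/alpha band-power features; the feature layout, network size, and data are assumptions for illustration, not the authors' setup.

```python
# Minimal sketch (not the authors' pipeline): classify workload level from
# EEG band-power features with a small feed-forward network.
# The synthetic features and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
# Assumed feature layout: [frontal theta power, parietal alpha power].
# High load: theta up, alpha down (as reported); low load: the reverse.
low  = np.column_stack([rng.normal(1.0, 0.3, n), rng.normal(2.0, 0.3, n)])
high = np.column_stack([rng.normal(2.0, 0.3, n), rng.normal(1.0, 0.3, n)])
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)          # 0 = low load, 1 = high load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```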

522 citations


Journal ArticleDOI
01 Dec 1998
TL;DR: A split-and-merge expectation-maximization algorithm is proposed to overcome the local maxima problem in parameter estimation of finite mixture models; it is applied to the training of gaussian mixtures and mixtures of factor analyzers, and its practical usefulness is shown on image compression and pattern recognition problems.
Abstract: We present a split-and-merge expectation-maximization (SMEM) algorithm to overcome the local maxima problem in parameter estimation of finite mixture models. In the case of mixture models, local maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations, we repeatedly perform simultaneous split-and-merge operations using a new criterion for efficiently selecting the split-and-merge candidates. We apply the proposed algorithm to the training of gaussian mixtures and mixtures of factor analyzers using synthetic and real data and show the effectiveness of using the split- and-merge operations to improve the likelihood of both the training data and of held-out test data. We also show the practical usefulness of the proposed algorithm by applying it to image compression and pattern recognition problems.
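The abstract hinges on a criterion for picking split-and-merge candidates. The sketch below is only an illustration of the merge side of that idea: components whose posterior responsibilities overlap strongly are natural merge candidates. The inner-product criterion, the sklearn mixture, and the data are assumptions; the paper's exact criterion and the split step are not reproduced.

```python
# Illustrative sketch only: rank merge candidates for a Gaussian mixture by the
# overlap of per-component responsibilities (inner product of responsibility
# vectors). This mirrors the idea of a merge-candidate criterion; the paper's
# exact criterion and the split step are not reproduced here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two well-separated clusters, deliberately over-fitted with 4 components.
X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

R = gmm.predict_proba(X)                      # responsibilities, shape (n, 4)
k = R.shape[1]
scores = {(i, j): float(R[:, i] @ R[:, j])    # large overlap -> merge candidate
          for i in range(k) for j in range(i + 1, k)}
best = max(scores, key=scores.get)
print("best merge candidate pair:", best, "score:", round(scores[best], 2))
```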

422 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This paper presents the concept of a structured test access mechanism for embedded cores: test data access from chip pins to TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM).
Abstract: The main objective of core-based IC design is improvement of design efficiency and time-to-market. In order to prevent test development from becoming the bottleneck in the entire development trajectory, reuse of pre-computed tests for the reusable pre-designed cores is mandatory. The core user is responsible for translating the test at core level into a test at chip level. A standardized test access mechanism eases this task, therefore contributing to the plug-n-play character of core-based design. This paper presents the concept of a structured test access mechanism for embedded cores. Reusable IP modules are wrapped in a TESTSHELL. Test data access from chip pins to TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM). Both TESTRAIL as well as TCM are standardized, but open for extensions.

338 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design.
Abstract: A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design. The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred instead. This reduces the amount of test data that must be stored on the tester and hence reduces the total amount of test time required for transferring the data with a given test data bandwidth.
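The abstract does not spell out the coding scheme, so the sketch below only illustrates the lossless round trip it describes, using simple run-length coding of a fully specified test vector as a stand-in; the real decompressor would be the small on-chip circuit implementing the inverse mapping.

```python
# Hedged illustration of the lossless compress/store/decompress round trip the
# abstract describes, using plain run-length coding; the paper's actual coding
# scheme is not specified here and may differ.
def rle_compress(bits: str):
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))          # (symbol, run length)
        i = j
    return runs

def rle_decompress(runs):
    return "".join(sym * length for sym, length in runs)

vector = "0000001111111100000011110000000011"   # a fully specified test vector
stored = rle_compress(vector)                   # what the tester would store
assert rle_decompress(stored) == vector         # lossless: the core sees the original
print(len(vector), "bits ->", len(stored), "runs")
```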

310 citations


Proceedings ArticleDOI
01 Mar 1998
TL;DR: This paper statically transforms a procedure into a constraint system by using the well-known "Static Single Assignment" form and control-dependencies, then solves this system to check whether at least one feasible control flow path going through the selected point exists and to generate test data that correspond to one of these paths.
Abstract: Automatic test data generation aims at identifying input values on which a selected point in a procedure is executed. This paper introduces a new method for this problem based on constraint solving techniques. First, we statically transform a procedure into a constraint system by using the well-known "Static Single Assignment" form and control-dependencies. Second, we solve this system to check whether at least one feasible control flow path going through the selected point exists and to generate test data that correspond to one of these paths. The key point of our approach is to take advantage of current advances in constraint techniques when solving the generated constraint system. Global constraints are used in a preliminary step to detect some of the non-feasible paths. Partial consistency techniques are employed to reduce the domains of possible values of the test data. A prototype implementation has been developed on a restricted subset of the C language. Advantages of our approach are illustrated on a non-trivial example.
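As a toy illustration of the goal (not the paper's solver), the sketch below collects the branch predicates of a path as constraints and searches a small integer domain for a satisfying input; the paper instead uses global constraints and partial consistency techniques on a system derived from SSA form.

```python
# Toy illustration only: the paper derives a constraint system from SSA form and
# solves it with constraint techniques; here the path conditions for reaching a
# selected point are simply enumerated over a small integer domain.
from itertools import product

# Hypothetical procedure: the selected point is reached when both branch
# predicates below hold on the same input (x, y).
path_conditions = [
    lambda x, y: x + y > 10,       # predicate on the first branch of the path
    lambda x, y: x - y == 3,       # predicate on the second branch
]

domain = range(-20, 21)
solution = next(((x, y) for x, y in product(domain, domain)
                 if all(c(x, y) for c in path_conditions)), None)
print("path feasible, test data:" if solution else "path infeasible", solution)
```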

267 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed autoprogressive training for training neural networks to learn complex stress-strain behavior of materials using global load-deflection response measured in a structural test.
Abstract: A new method, termed autoprogressive training, for training neural networks to learn complex stress–strain behaviour of materials using global load–deflection response measured in a structural test is described. The richness of the constitutive information that is generally implicitly contained in the results of structural tests may in many cases make it possible to train a neural network material model from only a small number of such tests, thus overcoming one of the perceived limitations of a neural network approach to modelling of material behaviour; namely, that a voluminous amount of material test data is required. The method uses the partially-trained neural network in a central way in an iterative non-linear finite element analysis of the test specimen in order to extract approximate, but gradually improving, stress–strain information with which to train the neural network. An example is presented in which a simple neural network constitutive model of a T300/976 graphite/epoxy unidirectional lamina is trained, using the load–deflection response recorded during a destructive compressive test of a [(±45)_6]_S laminated structural plate containing an open hole. The results of a subsequent forward analysis are also presented, in which the trained material model is used to simulate the response of a compressively loaded [(±30)_6]_S structural laminate containing an open hole. Avenues for further improvement of the neural network model are also suggested. The proposed autoprogressive algorithm appears to have wide application in the general area of Non-Destructive Evaluation (NDE) and damage detection. Most NDE experiments can be viewed as structural tests and the proposed methodology can be used to determine certain damage indices, similar to the way in which constitutive models are determined. © 1998 John Wiley & Sons, Ltd.

263 citations


Journal ArticleDOI
TL;DR: Empirical studies on a particular safe regression test selection technique are reported, in which the technique is compared to the alternative regression testing strategy of running all tests, and indicate that it can be cost-effective, but that its costs and benefits vary widely based on a number of factors.
Abstract: Regression testing is an expensive testing procedure utilized to validate modified software. Regression test selection techniques attempt to reduce the cost of regression testing by selecting a subset of a program's existing test suite. Safe regression test selection techniques select subsets that, under certain well-defined conditions, exclude no tests (from the original test suite) that if executed would reveal faults in the modified software. Many regression test selection techniques, including several safe techniques, have been proposed, but few have been subjected to empirical validation. This paper reports empirical studies on a particular safe regression test selection technique, in which the technique is compared to the alternative regression testing strategy of running all tests. The results indicate that safe regression test selection can be cost-effective, but that its costs and benefits vary widely based on a number of factors. In particular, test suite design can significantly affect the effectiveness of test selection, and coverage-based test suites may provide test selection results superior to those provided by test suites that are not coverage-based.
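A minimal sketch of the selection idea behind such techniques, under the simplifying assumption that coverage is tracked per function: only tests whose coverage intersects the modified entities are re-run. The data and granularity are illustrative, not the studied technique.

```python
# Simplified sketch of coverage-based regression test selection: retain only the
# tests whose coverage intersects the modified program entities. The studied
# technique relies on finer-grained program analysis; this shows the selection idea.
coverage = {                      # test -> set of covered functions (assumed data)
    "test_login":   {"parse", "auth"},
    "test_report":  {"parse", "render"},
    "test_cleanup": {"gc"},
}
modified = {"auth"}               # entities changed in the new version

selected = [t for t, covered in coverage.items() if covered & modified]
print("tests selected for re-execution:", selected)   # -> ['test_login']
```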

253 citations


Journal ArticleDOI
TL;DR: In this article, the problem of detecting the location and extent of structural damage from measured vibration test data is examined, based upon a mathematical model representing the undamaged vibrating structure and a local description of the damage, e.g. a finite element for a cracked beam.

232 citations


Proceedings ArticleDOI
13 Oct 1998
TL;DR: The results of preliminary experiments using this technique and the prototype tool-set are presented and show the efficiency and effectiveness of this approach to automate the generation of test data.
Abstract: Structural testing criteria are mandated in many software development standards and guidelines. The process of generating test data to achieve 100% coverage of a given structural coverage metric is labour-intensive and expensive. This paper presents an approach to automate the generation of such test data. The test-data generation is based on the application of a dynamic optimisation-based search for the required test data. The same approach can be generalised to solve other test-data generation problems. Three such applications are discussed: boundary value analysis, assertion/run-time exception testing, and component re-use testing. A prototype tool-set has been developed to facilitate the automatic generation of test data for these structural testing problems. The results of preliminary experiments using this technique and the prototype tool-set are presented and show the efficiency and effectiveness of this approach.
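A small sketch of what a dynamic optimisation-based search for test data can look like: a branch-distance style objective is minimised by hill climbing until an input taking the target branch is found. The program under test, the fitness, and the neighbourhood are assumptions for illustration.

```python
# Hedged sketch of optimisation-based test-data generation: minimise a
# branch-distance objective by simple hill climbing until the target branch is
# taken. The program under test and the fitness are illustrative assumptions.
import random

def branch_distance(x):
    # Target branch: "if x == 4321". Distance is 0 exactly when the branch is taken.
    return abs(x - 4321)

def hill_climb(start, steps=10_000):
    current = start
    for _ in range(steps):
        if branch_distance(current) == 0:
            return current                      # test datum that covers the branch
        neighbour = current + random.choice([-100, -10, -1, 1, 10, 100])
        if branch_distance(neighbour) <= branch_distance(current):
            current = neighbour
    return None

random.seed(0)
print("generated test input:", hill_climb(start=0))
```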

215 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: This paper presents a novel program execution based approach using an iterative relaxation method to address the generation of test data that causes a program to follow a given path.
Abstract: An important problem that arises in path oriented testing is the generation of test data that causes a program to follow a given path. In this paper, we present a novel program execution based approach using an iterative relaxation method to address the above problem. In this method, test data generation is initiated with an arbitrarily chosen input from a given domain. This input is then iteratively refined to obtain an input on which all the branch predicates on the given path evaluate to the desired outcome. In each iteration the program statements relevant to the evaluation of each branch predicate on the path are executed, and a set of linear constraints is derived. The constraints are then solved to obtain the increments for the input. These increments are added to the current input to obtain the input for the next iteration. The relaxation technique used in deriving the constraints provides feedback on the amount by which each input variable should be adjusted for the branches on the path to evaluate to the desired outcome. When the branch conditions on a path are linear functions of input variables, our technique either finds a solution for such paths in one iteration or it guarantees that the path is infeasible. In contrast, existing execution based approaches may require an unacceptably large number of iterations for relatively long paths because they consider only one input variable and one branch predicate at a time and use backtracking. When the branch conditions on a path are nonlinear functions of input variables, though it may take more than one iteration to derive a desired input, the set of constraints to be solved in each iteration is linear and is solved using Gaussian elimination. This makes our technique practical and suitable for automation.
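A minimal sketch of the one-iteration linear case described above, under the assumption that the branch predicates are linear equalities in the inputs: the increments are obtained by solving a linear system (numpy's solver stands in for Gaussian elimination). Inequalities and the nonlinear iterative case are not shown.

```python
# Minimal sketch of the one-iteration linear case: branch predicates that are
# linear equalities in the inputs are turned into a linear system A*dx = b and
# solved directly. Inequalities and the nonlinear iterative case are omitted.
import numpy as np

x0 = np.array([0.0, 0.0])          # arbitrary starting input (x, y)

# Illustrative path predicates: x + 2y == 10  and  3x - y == 2.
def residuals(x):
    return np.array([10 - (x[0] + 2 * x[1]), 2 - (3 * x[0] - x[1])])

A = np.array([[1.0, 2.0],          # coefficients of the predicates w.r.t. inputs
              [3.0, -1.0]])
dx = np.linalg.solve(A, residuals(x0))   # input increments
x1 = x0 + dx
print("generated input:", x1, "residuals now:", residuals(x1))
```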

177 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from breakthrough curves, which are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time.
Abstract: The single-well, "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advective-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully-penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9% with most errors less than 5%.
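A small sketch of the regression idea, assuming the first-order relation that the reactant concentration normalised by the conservative tracer decays as exp(-kt): the slope of a regression of the log ratio against elapsed time gives -k. The synthetic data and exact normalisation are assumptions; the paper's treatment of injection time may differ.

```python
# Hedged sketch of the simplified rate estimate: assuming first-order decay, the
# reactant concentration normalised by the conservative tracer behaves like
# exp(-k * t), so a regression of its log against elapsed time has slope -k.
import numpy as np

k_true = 0.05                          # per hour, used only to make synthetic data
t = np.linspace(1, 48, 25)             # elapsed time since injection (hours)
rng = np.random.default_rng(0)
dilution = rng.uniform(0.2, 1.0, t.size)         # mixing with ground water
tracer   = 1.0 * dilution                        # conservative: dilution only
reactant = 1.0 * dilution * np.exp(-k_true * t)  # dilution + first-order reaction

y = np.log(reactant / tracer) + rng.normal(0, 0.05, t.size)  # normalisation cancels dilution
slope, _ = np.polyfit(t, y, 1)
print("estimated k:", -slope)          # ~0.05
```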

Patent
19 Jun 1998
TL;DR: In this paper, a neural network or algorithm is used to assess a microcycle sequence of microload/microcharge tests utilizing one of a series of battery parameters including impedance as well as voltage characteristics to effect classification.
Abstract: Method and apparatus for battery evaluation and classification applies transient microcharge and/or microload pulses to an automotive battery. Classification is made on the basis of analysis of the resultant voltage profile or portions or dimensions thereof. In one embodiment the analysis utilizes a neural network or algorithm to assess a microcycle sequence of microload/microcharge tests utilizing one of a series of battery parameters including impedance as well as voltage characteristics to effect classification. Another embodiment adopts an optimized (not maximum) level of prior test-based data-training for a self-organizing neural network. A third embodiment utilizes prior test data correlation to enable algorithm-based classification without use of a neural network.

Journal ArticleDOI
TL;DR: Tabu search (TS) as discussed by the authors was used as an alternative to backpropagation for neural network optimization in the context of forecasting out-of-sample data, and the results showed that TS derived solutions that were significantly superior to the backpropagation solutions for in-sample, interpolation, and extrapolation test data for all seven test functions.

Journal ArticleDOI
TL;DR: Experiments using evolutionary testing on a number of programs with up to 1511 LOC and 5000 input parameters have successfully identified new longer and shorter execution times than had been found using other testing techniques, and evolutionary testing seems to be a promising approach for the verification of timing constraints.
Abstract: Many industrial products are based on the use of embedded computer systems. Usually, these systems have to fulfil real-time requirements, and correct system functionality depends on their logical correctness as well as on their temporal correctness. In order to verify the temporal behavior of real-time systems, previous scientific work has, to a large extent, concentrated on static analysis techniques. Although these techniques offer the possibility of providing safe estimates of temporal behavior for certain cases, there are a number of cases in practice for which static analysis cannot be easily applied. Furthermore, no commercial tools for timing analysis of real-world programs are available. Therefore, the developed systems have to be thoroughly tested in order to detect existing deficiencies in temporal behavior, as well as to strengthen the confidence in temporal correctness. An investigation of existing test methods shows that they mostly concentrate on testing for logical correctness. They are not specialised in the examination of temporal correctness which is also essential to real-time systems. For this reason, existing test procedures must be supplemented by new methods which concentrate on determining whether the system violates its specified timing constraints. Normally, a violation means that outputs are produced too early, or their computation takes too long. The task of the tester therefore is to find the input situations with the longest or shortest execution times, in order to check whether they produce a temporal error. If the search for such inputs is interpreted as a problem of optimization, evolutionary computation can be used to automatically find the inputs with the longest or shortest execution times. This automatic search for accurate test data by means of evolutionary computation is called evolutionary testing. Experiments using evolutionary testing on a number of programs with up to 1511 LOC and 5000 input parameters have successfully identified new longer and shorter execution times than had been found using other testing techniques. Evolutionary testing, therefore, seems to be a promising approach for the verification of timing constraints. A combination of evolutionary testing and systematic testing offers further opportunities to improve the test quality, and could lead to an effective test strategy for real-time systems.
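A compact sketch of the evolutionary-testing loop described above: candidate inputs are evolved with selection, crossover, and mutation, with measured execution time as the fitness. The program under test, the encoding, and the GA parameters are placeholders, not the tool used in the experiments.

```python
# Compact sketch of evolutionary testing: evolve inputs whose measured execution
# time is maximal. The program under test, encoding, and GA parameters are
# illustrative placeholders.
import random, time

def program_under_test(x: int) -> None:
    # Stand-in task whose running time grows with the input value.
    total = 0
    for i in range(x):
        total += i * i

def fitness(x: int) -> float:
    start = time.perf_counter()
    program_under_test(x)
    return time.perf_counter() - start          # longer execution = fitter

random.seed(0)
population = [random.randint(0, 50_000) for _ in range(20)]
for _ in range(15):                             # a few generations
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a + b) // 2                    # crossover: average of parents
        if random.random() < 0.3:
            child += random.randint(-5_000, 5_000)   # mutation
        children.append(max(child, 0))
    population = parents + children

print("input with longest observed execution time:", max(population, key=fitness))
```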

Journal ArticleDOI
TL;DR: An efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains while achieving a complete fault coverage for any fault model for which test cubes are obtainable is presented.
Abstract: The paper presents an efficient scheme to compress and decompress in parallel deterministic test patterns for circuits with multiple scan chains. It employs a boundary-scan-based environment for high quality testing with flexible trade-offs between test data volume and test application time while achieving a complete fault coverage for any fault model for which test cubes are obtainable. It also reduces bandwidth requirements, as all test cube transfers involve compressed data. The test patterns are generated by the reseeding of a two-dimensional hardware structure which is comprised of a linear feedback shift register (LFSR), a network of exclusive-or (XOR) gates used to scramble the bits of test vectors, and extra feedbacks which allow including internal scan flip-flops into the decompressor structure to minimize the area overhead. The test data decompressor operates in two modes: pseudorandom and deterministic. In the first mode, the pseudorandom pattern generator (PRPG) is used purely as a generator of test vectors. In the latter case, variable-length seeds are serially scanned through the boundary-scan interface into the PRPG and parts of internal scan chains and, subsequently, a decompression is performed in parallel by means of the PRPG and selected scan flip-flops interconnected to form the decompression device. Extensive experiments with the largest ISCAS' 89 benchmarks show that the proposed technique greatly reduces the amount of test data in a cost effective manner.
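The decompressor above is built around an LFSR; the sketch below shows only that basic ingredient, a software LFSR expanding a short seed into a longer pseudorandom pattern. The tap positions and seed are arbitrary, and the XOR scrambling network, reuse of scan flip-flops, and seed computation from test cubes are not reproduced.

```python
# Minimal illustration of the basic ingredient of such decompressors: a linear
# feedback shift register expanding a short seed into a longer pseudorandom bit
# stream. Tap positions and seed are arbitrary; the rest of the scheme is omitted.
def lfsr_stream(seed_bits, taps, length):
    """Fibonacci LFSR: 'taps' are 0-based stage indices XORed into the feedback
    bit; yields 'length' output bits."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])                         # output the last stage
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]               # shift, insert feedback
    return out

seed = [1, 0, 0, 1, 1, 0, 1, 0]                       # compressed data: a short seed
pattern = lfsr_stream(seed, taps=(0, 2, 3, 7), length=32)
print("".join(map(str, pattern)))                     # expanded test pattern
```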

Journal ArticleDOI
TL;DR: Genetic algorithms have been used successfully to generate software test data automatically to give 100% branch coverage in up to two orders of magnitude fewer tests than random testing.
Abstract: Genetic algorithms have been used successfully to generate software test data automatically; all branches were covered with substantially fewer generated tests than simple random testing. We generated test sets which executed all branches in a variety of programs including a quadratic equation solver, remainder, linear and binary search procedures, and a triangle classifier comprising a system of five procedures. We regard the generation of test sets as a search through the input domain for appropriate inputs. The genetic algorithms generated test data to give 100% branch coverage in up to two orders of magnitude fewer tests than random testing. Whilst some of this benefit is offset by increased computation effort, the adequacy of the test data is improved by the genetic algorithm's ability to generate test sets which are at or close to the input subdomain boundaries. Genetic algorithms may be used for fault-based testing where faults associated with mistakes in branch predicates are revealed. The software has been deliberately seeded with faults in the branch predicates (i.e. mutation testing), and our system successfully killed 97% of the mutants.

Proceedings ArticleDOI
J. Wu, G. Song, C.-P. Yeh, K. Wyatt
27 May 1998
TL;DR: In this article, two drop simulation and test validation cases in ASMR are reported in detail; the models are created with HYPERMESH and the analysis is carried out with LS-DYNA3D, focusing on housing breakage, LCD cracking and structural disconnection under drop/impact shock.
Abstract: Portable communication devices suffer impact-induced failure in usage. The products must pass drop/impact tests before shipment. The drop/impact performance is an important concern in product design. Due to the small size of this kind of electronic products, it is very expensive, time-consuming, and difficult to conduct drop tests to detect the failure mechanism and identify the drop behaviour. Finite element analysis provides a vital, powerful vehicle to solve this problem. The methodology of computer modeling, finite element method simulation and test validation techniques developed in ASMR at Motorola over the last two years are introduced in this paper. Two drop simulation and test validation cases in ASMR are reported in detail. The models are created with HYPERMESH, and the analysis is carried out with LS-DYNA3D. The analysis focuses on housing breakage, LCD cracking and structural disconnection under drop/impact shock. Apart from the computer simulation, a drop laboratory has been built in ASMR. With a customized drop tester, the drop orientation of the specimen can be controlled. The impact force relation to barrier, acceleration and strain inside the specimen during drop can be recorded in terms of time history curves. The test device, drop test and correlation of analysis and test data are illustrated in the paper. The simulation and test technology are applied to reliability identification and design support to Motorola's products.

Journal ArticleDOI
TL;DR: In this paper, the modal flexibility and its derivative, uniform load surface (ULS) are analyzed for their truncation effect and sensitivity to experimental error, and the ULS is found to have much less truncation effects and is least sensitive to experimental errors.
Abstract: Using erroneous test data can be misleading in nondestructive evaluation practice. The objective of this paper is to discuss what experimental data to use and how to mitigate experimental error when a modal test is used. In this paper, the modal flexibility and its derivative, uniform load surface (ULS) are analyzed for their truncation effect and sensitivity to experimental error. The ULS is found to have much less truncation effect and is least sensitive to experimental error. These features make it a critical experimental index in the structural identification as needed in nondestructive evaluation. This paper also discusses how to utilize pretest analysis to determine the test frequency band that will lead to the least truncation error. The ultimate usefulness of the approach presented in this paper is that it can lead to effective and accurate nondestructive evaluation. A numerical example and a real structure, the Cross-Country Highway Bridge in the Cincinnati, Ohio, area, are analyzed for the truncation effect.
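For orientation, the two indices discussed can be written compactly: with mass-normalised mode shapes phi_i and natural frequencies omega_i, the modal flexibility is approximately F = sum_i phi_i phi_i^T / omega_i^2, and the ULS is the deflection of F under a unit load at every degree of freedom, i.e. its row sums. The sketch below just evaluates these formulas on made-up modal data; it is not the paper's pretest analysis.

```python
# Sketch of the two indices discussed: modal flexibility from mass-normalised
# mode shapes and frequencies, F ~ sum_i (phi_i phi_i^T / omega_i^2), and the
# uniform load surface (ULS) as the deflection under a unit load at every DOF
# (the row sums of F). Synthetic modal data; not the paper's pretest analysis.
import numpy as np

omega = np.array([10.0, 35.0, 80.0])      # natural frequencies (rad/s), truncated set
phi = np.array([[0.3, 0.6, 0.9],          # columns: mass-normalised mode shapes
                [0.6, 0.4, -0.5],
                [0.9, -0.6, 0.2]])

F = sum(np.outer(phi[:, i], phi[:, i]) / omega[i] ** 2 for i in range(omega.size))
uls = F @ np.ones(F.shape[0])             # uniform load surface at each DOF
print("modal flexibility:\n", F)
print("ULS:", uls)
```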

Proceedings ArticleDOI
01 Mar 1998
TL;DR: This paper presents a novel approach to automated regression test generation in which every generated test case uncovers an error; the approach is used to test the common functionality of the original program and its modified version, i.e., programs whose functionality is unchanged after modifications.
Abstract: Regression testing involves testing the modified program in order to establish the confidence in the modifications. Existing regression testing methods generate test cases to satisfy selected testing criteria in the hope that this process may reveal faults in the modified program. In this paper we present a novel approach of automated regression test generation in which all generated test cases uncover an error(s). This approach is used to test the common functionality of the original program and its modified version, i.e., it is used for programs whose functionality is unchanged after modifications. The goal in this approach is to identify test cases for which the original program and the modified program produce different outputs. If such a test is found, then this test uncovers an error. The problem of finding such a test case may be reduced to the problem of finding program input on which a selected statement is executed. As a result, existing methods of automated test data generation for white-box testing may be used to generate these tests. Our experiments have shown that our approach may improve the chances of finding software errors as compared to the existing methods of regression testing. The advantage of our approach is that it is fully automated and that all generated test cases reveal an error(s).
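A toy sketch of the core idea: search the input domain for a test on which the original and modified versions disagree; by construction any such test reveals an error. The two example versions and the plain random search are stand-ins for the paper's automated white-box generation.

```python
# Toy sketch of the idea: look for an input on which the original and the
# modified program produce different outputs; any such input reveals an error.
# The example programs and the random search stand in for the paper's white-box
# test-data generation.
import random

def original(x):
    return 0 if x < 100 else 1

def modified(x):
    return 0 if x <= 100 else 1        # boundary changed: differs only at x == 100

random.seed(0)
revealing = next((x for x in (random.randint(0, 200) for _ in range(10_000))
                  if original(x) != modified(x)), None)
print("error-revealing test input:", revealing)    # expect 100
```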

Patent
29 Sep 1998
TL;DR: In this paper, a method of testing high speed interconnectivity of circuit boards having components operable at a high speed system clock, employing an IEEE 1149.1 standard test method in which test data is shifted into and from the components at the rate of a test clock during Shift_In and Shift_Out operations, and having an Update operation and a Capture operation between the Shift-Out and Shift-In operations, was proposed.
Abstract: A method of testing high speed interconnectivity of circuit boards having components operable at a high speed system clock, employing an IEEE 1149.1 standard test method in which test data is shifted into and from the components at the rate of a test clock during Shift_In and Shift_Out operations, and having an Update operation and a Capture operation between the Shift_In and Shift_Out operations, the components including a first group of components capable of performing the Update and Capture operations at the rate of the Test Clock only and a second group of components capable of performing the Update and Capture operations at the rate of the system clock, the method comprising the steps of performing the Shift_In operation in all of the components concurrently at the rate of the Test Clock; performing the Update and Capture Operations in the first group of components at the rate of the Test Clock; and performing the Update and Capture operations in the second group of components at the rate of the system Clock. The method employs a novel integrated circuit, test controller and boundary scan cells.

Book ChapterDOI
16 Aug 1998
TL;DR: Evaluation of this technique with artificial students indicates that it can deliver highly accurate assessments and can be used to initialize the student model of an ITS.
Abstract: Although conventional tests are often used for determining a student's overall competence, they are seldom used for determining a fine-grained model. However, this problem does arise occasionally, such as when a conventional test is used to initialize the student model of an ITS. Existing psychometric techniques for solving this problem are intractable. Straightforward Bayesian techniques are also inapplicable because they depend too strongly on the priors, which are often not available. Our solution is to base the assessment on the difference between the prior and posterior probabilities. If the test data raise the posterior probability of mastery of a piece of knowledge even slightly above its prior probability, then that is interpreted as evidence that the student has mastered that piece of knowledge. Evaluation of this technique with artificial students indicates that it can deliver highly accurate assessments.
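A small sketch of the decision rule described, assuming a simple slip/guess observation model for the test items: the mastery probability is Bayes-updated from the item outcomes and mastery is reported whenever the posterior exceeds the prior, even slightly. The noise parameters and items are assumptions.

```python
# Sketch of the assessment rule described: Bayes-update the probability of
# mastery from item outcomes and report mastery when the posterior rises above
# the prior, however slightly. Slip/guess probabilities and items are assumed.
def posterior_mastery(prior, outcomes, p_slip=0.10, p_guess=0.25):
    p = prior
    for correct in outcomes:
        like_m  = (1 - p_slip) if correct else p_slip          # P(outcome | mastered)
        like_nm = p_guess      if correct else (1 - p_guess)   # P(outcome | not mastered)
        p = like_m * p / (like_m * p + like_nm * (1 - p))      # Bayes' rule
    return p

prior = 0.5                                   # weakly informative prior
outcomes = [True, True, False, True]          # results of items exercising the skill
post = posterior_mastery(prior, outcomes)
print("posterior:", round(post, 3), "-> mastered?", post > prior)
```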

Patent
27 Mar 1998
TL;DR: In this paper, an approach for testing memory locations containing both test data and test check bits is presented, where the memory controller determines whether a correspondence exists between the test check bits that were written and the test check bits that were read.
Abstract: Apparatus and method for testing of memory locations containing both test data and test check bits are provided. The apparatus includes a memory controller that communicates with memory devices. In a test mode of operation using a test mode control bit, the memory controller receives test data, together with test check bits that have values corresponding to at least some of the values of the test data. The test data and test check bits are written to desired memory locations of the memory devices. The memory controller is involved in a subsequent read of these same memory locations and receives the test data and test check bits from those previously written memory locations. The memory controller determines whether a correspondence exists between the test check bits that were written and the test check bits that were read. Any lack of correspondence is indicative of one or more memory location faults. Both the test data and the test check bits are checked for accuracy during single transfer operations, and the checking of the test check bits is conducted using at least some of the values of the associated test data.

Journal ArticleDOI
TL;DR: BIST circuitry has been designed and evaluated using complementary metal-oxide-semiconductor (CMOS) 1.2 μm technology and the proposed BIST structure presents a compromise between test cost, area overhead, and test time.
Abstract: In this paper, a built-in self-test (BIST) approach has been applied to test digital-to-analog (D/A) and analog-to-digital (A/D) converters. Offset, gain, integral nonlinearity (INL), and differential nonlinearity (DNL) errors and monotonicity are tested without using mixed-mode or logic test equipment. An off-line calibrating technique has been used to ensure the accuracy of BIST circuitry and to reduce area overhead by avoiding the use of high quality analog blocks. The proposed BIST structure presents a compromise between test cost, area overhead, and test time. By a minor modification the test structure would be able to localize the fail situation. The same approach may be used to construct a fast low cost off-chip D/A converter tester. The BIST circuitry has been designed and evaluated using complementary metal-oxide-semiconductor (CMOS) 1.2 μm technology.

Journal ArticleDOI
TL;DR: Four different approaches to solving the trilinear three-way factor analysis problem are compared, and their performance with 'difficult' (i.e., ill-conditioned) data is tested.

Journal ArticleDOI
TL;DR: Several methods are available to interpret slug tests; however, when applied to the same test data, they usually yield very different results as mentioned in this paper, and the methods are classified into three categories depen...
Abstract: Several methods are available to interpret slug tests; however, when applied to the same test data, they usually yield very different results. The methods are classified into three categories depen...

Proceedings ArticleDOI
26 Apr 1998
TL;DR: A new test data compression method, COMPACT, is presented; it improves on the authors' previous approach based on the Burrows-Wheeler transformation of the sequence of test patterns and run-length coding, which already outperformed other methods for compressing test data.
Abstract: The overall throughput of automatic test equipment (ATE) is sensitive to the download time of test data. An effective approach to the reduction of the download time is to compress test data before the download. The authors introduced a test data compression method which outperforms other methods for compressing test data. Our previous method was based on the Burrows-Wheeler transformation on the sequence of test patterns and run-length coding. In this paper, we present a new method, called COMPACT, which further improves our previous method. The key idea of COMPACT is to employ two data compression schemes, run-length coding for data with low activity and GZIP for data with high activity. COMPACT increases the compression ratio of test data, on average, by 1.9 times compared with our previous method.
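A rough sketch of the per-block scheme selection described, with assumed block size and activity threshold: low-activity blocks (few bit transitions) are run-length coded, high-activity blocks go to a GZIP-style coder (zlib here).

```python
# Rough sketch of the per-block scheme selection described: low-activity blocks
# (few bit transitions) go to run-length coding, high-activity blocks go to a
# GZIP-style coder (zlib here). Block size and activity threshold are assumed.
import zlib

def transitions(bits: str) -> int:
    return sum(a != b for a, b in zip(bits, bits[1:]))

def rle(bits: str) -> bytes:
    out, i = bytearray(), 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i] and j - i < 255:
            j += 1
        out += bytes([int(bits[i]), j - i])     # (bit value, run length) pairs
        i = j
    return bytes(out)

def compress_block(bits: str, threshold=8):
    if transitions(bits) <= threshold:
        return ("RLE", rle(bits))
    return ("GZIP", zlib.compress(bits.encode()))

test_data = "0" * 50 + "1" * 30 + "0110101001011101001101010010110100101101"
for start in range(0, len(test_data), 40):
    block = test_data[start:start + 40]
    scheme, payload = compress_block(block)
    print(scheme, len(block), "bits ->", len(payload), "bytes")
```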

Patent
09 Dec 1998
TL;DR: In this article, a method and apparatus for performing an operability test on a communications system device such as a cable modem is provided, which comprises the steps of providing a set of output test data to the communication system device being tested, and generating an output signal with the device in a manner that is responsive to the output test test data.
Abstract: A method and apparatus for performing an operability test on a communications system device such as a cable modem is provided. The testing method comprises the steps of providing a set of output test data to the communications system device being tested, and generating an output signal with the device in a manner that is responsive to the output test data. In the case of a cable modem, the output signal is generated by the modem's modulator. The output signal is then provided as an input to a reflective mixer which generates a reflected signal in response to the output signal and directs it back into the communications system device being tested. The communications device can then use the reflected signal to generate a set of input test data that can be compared to the output test data to check the accuracy of the device's operation. In the case of a cable modem, the reflected signal is demodulated by a demodulator at one or more frequencies, and this demodulated data can then be compared to the output test data. In one case, the testing method is controlled by a computer device running a program of instructions to carry out the aforementioned steps. The computer device can send command signals to the modulator and the demodulator to place them in a test mode. Within this mode, the computer device can command the modulator to generate a particular output signal and command the demodulator to demodulate at certain frequencies.

Journal ArticleDOI
26 May 1998
TL;DR: This paper recalls the principles of test data selection from algebraic data type specifications and transposes them to basic and full LOTOS, and suggests a new integrated approach to test derivation from full LOTOS specifications, where both behavioural properties and data type properties are taken into account when dealing with processes.
Abstract: There is now a lot of interest in program testing based on formal specifications. However, most of the works in this area focus on one formalized aspect of the software under test. For instance, some previous works of the first author consider abstract data type specifications. Other works are based on behavioural descriptions, such as finite state machines or finite labelled transition systems. This paper begins by briefly recalling the principles of test data selection from algebraic data type specifications. Then, it transposes them to basic and full LOTOS. Finally, it exploits this uniform framework and suggests a new integrated approach to test derivation from full LOTOS specifications, where both behavioural properties and data type properties are taken into account when dealing with processes.

Patent
23 Oct 1998
TL;DR: In this article, a multi-function test set is programmed to automate testing of the radios used in a telemetry system in conjunction with other test software embedded in the host telemetry devices.
Abstract: Railroad telemetry radios are tested by an automated method for in situ testing, so that only those units requiring adjustment and maintenance are removed. A multi-function test set is programmed to automate testing of the radios used in a telemetry system in conjunction with other test software embedded in the host telemetry devices. The radios contain both a transmitter and a receiver. Both are individually tested to verify proper performance. Receiver sensitivity of the radio is tested by bit error rate (BER) measurement with test software and a dedicated BER modulator. A known low amplitude message comprised of a short pseudorandom pattern continuously repeated by the test set BER modulator is demodulated by the radio receiver. The test software processes the received data and counts the number of errorless messages received over a specific period of time. The receiver sensitivity is known to be acceptable if the number of correct messages received is higher than a predetermined minimum value. The transmitter performance is tested by measuring radio frequency (RF) carrier frequency, modulation frequency, deviation and RF output power. The test set is programmed to automatically measure these parameters, determine whether they meet minimum requirements, prompt the technician as to pass/fail status, and optionally display measured test data for use in radio repair, if required, or for statistical purposes.

Patent
James R. Kunz
05 Feb 1998
TL;DR: In this article, the authors present a system and method to determine a data transfer rate from a server to a client as follows: a server transfers a test program or a reference to the test program to the client and the client executes the test programs.
Abstract: A system and method determines a data transfer rate from a server to a client as follows. A server transfers a test program or a reference to the test program to the client and the client executes the test program. The test program requests test data from the server, measures the elapsed time to obtain the test data, and calculates a transfer rate based on the length of the test data and the elapsed time. Either the client or the server selects data corresponding to the client request, to send to the client based on the calculated transfer rate. The system and method can be used in a WWW environment where the data is a web page and the test program is an applet.
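A minimal sketch of the client-side measurement described: fetch a block of test data, time the transfer, and compute bytes per second. The URL is a placeholder and urllib stands in for the patent's applet setting.

```python
# Minimal sketch of the client-side measurement described: download a block of
# test data, time it, and compute the transfer rate. The URL is a placeholder;
# the patent's setting is an applet served alongside the web page.
import time
import urllib.request

TEST_DATA_URL = "http://example.com/"           # placeholder test-data endpoint

def measure_transfer_rate(url: str) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()                   # the test data
    elapsed = time.monotonic() - start
    return len(payload) / elapsed               # bytes per second

rate = measure_transfer_rate(TEST_DATA_URL)
print(f"measured transfer rate: {rate:.0f} B/s")
# The server (or client) would then pick the page variant suited to this rate.
```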