
Showing papers on "Test data" published in 1997


Journal ArticleDOI
Laura E. Ray1
TL;DR: Extended Kalman-Bucy filtering and Bayesian hypothesis selection are applied to estimate motion, tire forces, and road coefficient of friction of vehicles on asphalt surfaces to select the most likely μ from a set of hypothesized values.
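The hypothesis-selection step lends itself to a short sketch: each candidate μ is scored by the likelihood of the observed measurement under that hypothesis's prediction, and the posterior-maximizing value is selected. The Gaussian residual model, candidate values, and predicted forces below are illustrative assumptions, not the paper's vehicle or tire model.

```python
import math

def select_mu(candidates, predicted, measured, sigma, priors=None):
    """Pick the candidate mu whose model prediction best explains the
    measurement, under a Gaussian residual assumption (illustrative)."""
    if priors is None:
        priors = [1.0 / len(candidates)] * len(candidates)
    posteriors = []
    for pred, prior in zip(predicted, priors):
        resid = measured - pred
        likelihood = math.exp(-resid * resid / (2.0 * sigma * sigma))
        posteriors.append(prior * likelihood)
    return candidates[posteriors.index(max(posteriors))]

# Hypothetical: three mu hypotheses, each with a predicted tire force,
# against one measured force of 260 N.
best = select_mu([0.2, 0.5, 0.9], [100.0, 250.0, 450.0], 260.0, sigma=50.0)
```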

472 citations


Journal ArticleDOI
TL;DR: The effectiveness of the mutation testing and all-uses test data adequacy criteria at various coverage levels, for randomly generated test sets, was mixed: at the highest coverage levels considered, mutation was more effective than all-uses for five of the nine subjects, all-uses was more effective than mutation for two subjects, and there was no clear winner for the remaining two subjects.

235 citations


Patent
14 Aug 1997
TL;DR: In this article, decision support systems are employed to evaluate specific observation values and test results, to guide the development of biochemical or other diagnostic tests, to assess a course of treatment, to identify new diagnostic tests and disease markers, to identify useful therapies, and to provide the decision-support functionality for the test.
Abstract: Methods are provided for developing medical diagnostic tests using decision-support systems, such as neural networks. Patient data or information, typically patient history or clinical data, are analyzed by the decision-support systems to identify important or relevant variables, and decision-support systems are trained on the patient data. Patient data are augmented by biochemical test data, or results, where available, to refine performance. The resulting decision-support systems are employed to evaluate specific observation values and test results, to guide the development of biochemical or other diagnostic tests, to assess a course of treatment, to identify new diagnostic tests and disease markers, to identify useful therapies, and to provide the decision-support functionality for the test. Methods are provided for identification of important input variables for a medical diagnostic test for use in training the decision-support systems, to guide the development of the tests, for improving the sensitivity and specificity of such tests, and for selecting diagnostic tests that improve overall diagnosis of, or potential for, a disease state and that permit the effectiveness of a selected therapeutic protocol to be assessed. The methods for identification can be applied in any field in which statistics are used to determine outcomes. A method for evaluating the effectiveness of any given diagnostic test is also provided.

228 citations


ReportDOI
01 Dec 1997
TL;DR: In this article, a detailed analysis of the results from fatigue studies of wind turbine blade composite materials carried out at Montana State University (MSU) over the last seven years is presented, which is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database.
Abstract: This report presents a detailed analysis of the results from fatigue studies of wind turbine blade composite materials carried out at Montana State University (MSU) over the last seven years. It is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. The fatigue testing of composite materials requires the adaptation of standard test methods to the particular composite structure of concern. The stranded fabric E-glass reinforcement used by many blade manufacturers has required the development of several test modifications to obtain valid test data for materials with particular reinforcement details, over the required range of tensile and compressive loadings. Additionally, a novel testing approach to high frequency (100 Hz) testing for high cycle fatigue using minicoupons has been developed and validated. The database for standard coupon tests now includes over 4,100 data points for over 110 materials systems. The report analyzes the database for trends and transitions in static and fatigue behavior with various materials parameters. Parameters explored are reinforcement fabric architecture, fiber content, content of fibers oriented in the load direction, matrix material, and loading parameters (tension, compression, and reversed loading). Significant transitions from good fatigue resistance to poor fatigue resistance are evident in the range of materials currently used in many blades. A preliminary evaluation of knockdowns for selected structural details is also presented. The high frequency database provides a significant set of data for various loading conditions in the longitudinal and transverse directions of unidirectional composites out to 10^8 cycles. The results are expressed in stress and strain based Goodman Diagrams suitable for design. A discussion is provided to guide the user of the database in its application to blade design.

190 citations


Journal ArticleDOI
TL;DR: The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
Abstract: A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
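The consensual combination step can be sketched in a few lines: stage outputs are treated as class-probability rows, weighted, and averaged before taking the arg-max. The stage outputs and weights below are made up for illustration; the paper derives its weights by optimization.

```python
import numpy as np

def consensual_decision(stage_outputs, weights):
    """stage_outputs: (n_stages, n_classes) class-probability rows from the
    stage networks; weights: (n_stages,) nonnegative, summing to 1."""
    stage_outputs = np.asarray(stage_outputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    consensus = weights @ stage_outputs  # weighted average of probabilities
    return int(np.argmax(consensus)), consensus

# Hypothetical: three stage networks, three classes; stage 2 weighted highest.
outputs = [[0.6, 0.3, 0.1],
           [0.2, 0.5, 0.3],
           [0.1, 0.2, 0.7]]
label, probs = consensual_decision(outputs, [0.2, 0.5, 0.3])
```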

187 citations


Patent
David N. Hinckley1
26 Feb 1997
TL;DR: A test automation system for performing functional tests of a software program is described in this paper, which includes a plurality of test functions each configured to test a discrete component of the software program.
Abstract: A test automation system for performing functional tests of a software program. The system includes a plurality of test functions, each configured to test a discrete component of the software program. A user-defined test specification associated with the program provides state definitions that specify a desired test approach for each type of test procedure to be performed on the program. A test engine creates all test cases appropriate for a user-selected test type and controls the software program, applying the test functions and state definitions in accordance with the test specification. All test-specific and software program-specific data are located in the user-defined test functions and specifications, while all generic test system processing resides in the test engine. The test specifications are preferably implemented in modifiable text files to maintain concurrency with an evolving software program. The test engine creates all possible permutations and combinations for performing a desired test. The test specifications include such items as the states that the software program may possess, the test functions required to transfer between one state and other possible states, information pertaining to the values that specific inputs may have, etc. During operation, the test engine generates test histories indicating the results of the tests performed in accordance with one of the test specifications. The contents of the test histories include determinations made by the test functions executed in accordance with an associated test specification.
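A minimal sketch of the permutation-expansion idea, with hypothetical states and input values (none of these names come from the patent): every combination of start state and input values becomes one test case.

```python
from itertools import product

# Hypothetical state definitions and input value sets for a program under test.
states = ["closed", "open"]
inputs = {"mode": ["read", "write"], "buffered": [True, False]}

# Expand every (state, input values) combination into a test case, in the
# spirit of the test engine's permutation generation.
test_cases = [
    {"start_state": s, **dict(zip(inputs, vals))}
    for s, vals in product(states, product(*inputs.values()))
]
```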

168 citations


Journal ArticleDOI
TL;DR: In this paper, a general regression model is proposed to evaluate covariate effects on ROCs. The method is illustrated on data from a study of multiformat photographic images used for scintigraphy.
Abstract: Receiver operating characteristic curves (ROCs) are used to evaluate diagnostic tests when test results are not binary. They describe the inherent capacity of the test for distinguishing between truly diseased and nondiseased subjects. Although methodology for estimating and for comparing ROCs is well developed, to date no general framework exists for evaluating covariate effects on ROCs. We formulate a general regression model which allows the effects of covariates on test accuracy to be succinctly summarised. Such covariates might include, for example, characteristics of the patient or test environment, test type, or severity of disease. The regression models are shown to arise naturally from some classic models for continuous or ordinal test data. Regression parameters are fitted using an estimating equation approach. The method is illustrated on data from a study of multiformat photographic images used for scintigraphy.
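For reference, the empirical ROC summary that such models act on can be computed directly. The Mann-Whitney form of the AUC below uses made-up scores and is a generic sketch, not the paper's estimating-equation method.

```python
def empirical_auc(diseased, nondiseased):
    """Empirical AUC = P(score_diseased > score_nondiseased),
    with ties counted 1/2 (Mann-Whitney form)."""
    wins = 0.0
    for d in diseased:
        for n in nondiseased:
            if d > n:
                wins += 1.0
            elif d == n:
                wins += 0.5
    return wins / (len(diseased) * len(nondiseased))

# Illustrative continuous test scores for diseased and nondiseased subjects.
auc = empirical_auc([3, 4, 5], [1, 2, 4])
```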

141 citations


Journal ArticleDOI
TL;DR: This paper presents a novel functional analysis of the weight matrix based on a technique developed for determining the behavioral significance of hidden neurons, compared with the application of the same technique to the training and test data.
Abstract: The problem of data encoding and feature selection for training back-propagation neural networks is well known. The basic principles are to avoid encrypting the underlying structure of the data, and to avoid using irrelevant inputs. This is not easy in the real world, where we often receive data which has been processed by at least one previous user. The data may contain too many instances of some class, and too few instances of other classes. Real data sets often include many irrelevant or redundant input fields. This paper examines the use of weight matrix analysis techniques and functional measures using two real (and hence noisy) data sets. The first part of this paper examines the use of the weight matrix of the trained neural network itself to determine which inputs are significant. A new technique is introduced and compared with two other techniques from the literature. We present our experience and results on some satellite data augmented by a terrain model. The task was to predict the forest supra-type based on the available information. A brute force technique eliminating randomly selected inputs was used to validate our approach. The second part of this paper examines the use of measures to determine the functional contribution of inputs to outputs. Inputs which contribute minor but unique information to the network are more significant than inputs with a higher magnitude contribution but providing redundant information, which is also provided by another input. A comparison is made to sensitivity analysis, where the sensitivity of outputs to input perturbation is used as a measure of the significance of inputs. This paper presents a novel functional analysis of the weight matrix based on a technique developed for determining the behavioral significance of hidden neurons. This is compared with the application of the same technique to the training and test data. Finally, a novel aggregation technique is introduced.

141 citations


Journal ArticleDOI
TL;DR: In this article, the results of experiments on a small-scale steel frame model are presented to support the "displacement equation error function," "displacement output error function," and "strain output error function" methods of structural parameter estimation using static nondestructive test data.
Abstract: The results of experiments on a small-scale steel frame model are presented to support the "displacement equation error function," "displacement output error function," and "strain output error function" methods of structural parameter estimation using static nondestructive test data. Both static displacement and static strain measurements are used to successfully evaluate the unknown stiffness parameters of the structural components. Weight factors calculated from the variance of the measured data are applied to reduce error in the parameter estimates.

132 citations


Patent
08 Sep 1997
TL;DR: In this paper, a sensor receives a print image from an authorized person (21) to form a template, and from a candidate (11), to form test data, and the test data are bandpassed and normalized and expressed as local sinusoids for comparison.
Abstract: A sensor receives a print image from an authorized person (21) to form a template, and from a candidate (11) to form test data. Noise variance (12) is estimated from the test data as a function of position in the image, and used to weight the importance of comparison with the template at each position. Test data are multilevel, and are bandpassed and normalized (13) and expressed as local sinusoids for comparison. A ridge spacing and direction map (28) of the template is stored as vector wavenumber fields, which are later used to refine comparison. Global dilation (34) and also differential distortions (45) of the test image are estimated, and taken into account in the comparison. Comparison yields a test statistic (52) that is the ratio, or log of the ratio, of the likelihoods of obtaining the test image assuming that it respectively was, and was not, formed by an authorized user. The test statistic is compared with a threshold value, preselected for a desired level of certainty, to make the verification decision.
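The decision rule reduces to comparing a log-likelihood ratio against a preselected threshold. The sketch below assumes simple Gaussian score models for the authorized-user and impostor cases, an illustrative stand-in for the patent's image-based likelihoods.

```python
import math

def log_likelihood_ratio(score, mu_auth, sigma_auth, mu_imp, sigma_imp):
    """Log of P(score | authorized) / P(score | impostor), with both
    likelihoods modeled as Gaussians (an illustrative assumption)."""
    def log_gauss(x, mu, s):
        return -math.log(s * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * s * s)
    return log_gauss(score, mu_auth, sigma_auth) - log_gauss(score, mu_imp, sigma_imp)

def verify(score, threshold=0.0, **models):
    """Accept iff the log-likelihood ratio clears the preselected threshold."""
    return log_likelihood_ratio(score, **models) > threshold
```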

125 citations


Journal ArticleDOI
TL;DR: The design of the software system ADTEST (ADa TESTing), for generating test data for programs developed in Ada83, is presented; the system is found to reduce the effort required to test programs while increasing test coverage.
Abstract: Presents the design of the software system ADTEST (ADa TESTing), for generating test data for programs developed in Ada83. The key feature of this system is that the problem of test data generation is treated entirely as a numerical optimization problem and, as a consequence, this method does not suffer from the difficulties commonly found in symbolic execution systems, such as those associated with input variable-dependent loops, array references and module calls. Instead, program instrumentation is used to solve a set of path constraints without explicitly knowing their form. The system supports not only the generation of integer and real data types, but also non-numerical discrete types such as characters and enumerated types. The system has been tested on large Ada programs (60,000 lines of code) and found to reduce the effort required to test programs as well as providing an increase in test coverage.
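The core reformulation, treating test-data generation as numerical minimization of a nonnegative distance function that reaches zero when the target path is taken, can be shown with a toy stand-in for instrumented Ada code. The target branch and the brute-force integer search below are assumptions for illustration.

```python
def branch_distance(x):
    """Nonnegative distance to covering the (hypothetical) branch
    `if x*x - 10*x + 21 = 0` (satisfied at x = 3 and x = 7); zero
    exactly when the branch condition holds."""
    return abs(x * x - 10 * x + 21)

def minimize_int(f, start=0, span=50):
    """Brute-force stand-in for the numerical search: scan a small
    integer range and return the minimizer."""
    return min(range(start - span, start + span + 1), key=f)

# The search drives the branch distance to zero, yielding a covering input.
x = minimize_int(branch_distance)
```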

Journal ArticleDOI
TL;DR: Model robustness is shown to be significantly improved as a direct consequence of using multiple neural network representations and confidence bands for the neural network model predictions also result directly from the application of the bootstrap technique.

Proceedings ArticleDOI
01 Nov 1997
TL;DR: A design for testability and symbolic test generation technique for testing such core-based systems on a chip is presented; experiments show that the proposed scheme has significantly lower area overhead, delay overhead, and test application time compared to FScan-BScan and FScan-TBus, without any compromise in the system fault coverage.
Abstract: In a fundamental paradigm shift in system design, entire systems are being built on a single chip, using multiple embedded cores. Though the newest system design methodology has several advantages in terms of time-to-market and system cost, testing such core-based systems is difficult due to the problem of justifying test sequences at the inputs of a core embedded deep in the system, and propagating test responses from the core outputs. In this paper, we present a design for testability and symbolic test generation technique for testing such core-based systems on a chip. The proposed method consists of two parts: (i) core-level DFT to make each core testable and transparent, the latter needed to propagate test data through the cores, and (ii) system-level DFT and test generation to ensure the justification and propagation of the precomputed test sequences and test responses of the core. Since the hierarchical testability analysis technique used to tackle the above problem is symbolic, the system test generation method is independent of the bit-width of the cores. The system-level test set is obtained as a by-product of the testability analysis and insertion method without further search. Besides the proposed test method, the two methods that are currently used in the industry were also evaluated on two example systems: (i) FScan-BScan, where each core is full-scanned, and system test is performed using boundary scan, and (ii) FScan-TBus, where each core is full-scanned, and system test is performed using a test bus. The experiments show that the proposed scheme has significantly lower area overhead, delay overhead, and test application time compared to FScan-BScan and FScan-TBus, without any compromise in the system fault coverage.

Patent
30 Sep 1997
TL;DR: In this article, the authors define a private LUN as a data storage area known and accessible to all controllers in the system and used by them for diagnostic purposes, where the test data preferably include a data portion and a redundancy portion to enable testing of redundancy computations within the controllers.
Abstract: Methods and associated apparatus within a RAID subsystem having redundant controllers define a private LUN as a data storage area known and accessible to all controllers in the system and used by them for diagnostic purposes. The methods involve sending a diagnostic write command to a first controller with instructions for it to write test data to the private LUN. This first controller writes this test data to the private LUN. A second controller, in response to another diagnostic command, then reads this test data from the private LUN and compares it to expected values provided in the diagnostic command. Using the results, it can then be determined which controller, if any, failed. If the first controller fails, then the second controller takes over ownership of portions of the data storage area assigned to the first controller. The private LUN is preferably striped across all channels used by the controllers to communicate to commonly attached disk drives. This allows the diagnostic process to test disk channel data paths in determining whether a controller has failed. The test data preferably include a data portion and a redundancy portion to enable testing of redundancy computations within the controllers. In an alternate embodiment, a host computer attached via an interface in common with the redundant controllers initiates and controls the diagnostic process to enable testing of the host/controller communication paths. Timed event messages (e.g., watchdog timer features) may be used in conjunction with other methods of the invention to further enhance failure detection.
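The write-then-verify exchange can be sketched as follows. The in-memory dictionary standing in for the private LUN, and the byte-sum redundancy, are hypothetical simplifications of the patent's disk-striped storage and redundancy computation.

```python
# Stand-in for the shared diagnostic storage area (the "private LUN").
private_lun = {}

def diag_write(controller_id, block, data):
    """First controller writes test data plus a redundancy portion."""
    checksum = sum(data) % 256  # toy redundancy computation
    private_lun[block] = (controller_id, data, checksum)

def diag_verify(block, expected_data):
    """Second controller reads the block back and compares both the data
    portion and the redundancy portion against expected values."""
    _, data, checksum = private_lun[block]
    return data == expected_data and checksum == sum(expected_data) % 256

# One diagnostic round trip between two (hypothetical) controllers.
diag_write("controller_A", 7, [10, 20, 30])
ok = diag_verify(7, [10, 20, 30])
```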

Journal ArticleDOI
TL;DR: It is shown that the ANN approach is more accurate than other methods and that the use of principal components analysis on the inputs can improve the model.
Abstract: Artificial neural networks (ANNs) are used to model the interactions that occur between ozone pollution, climatic conditions, and the sensitivity of crops and other plants to ozone. A number of generic methods for analysis and modeling are presented. These methods are applicable to the modeling and analysis of any data where an effect (in this case damage to plants) is caused by a number of variables that have a nonlinear influence. Multilayer perceptron ANNs are used to model data from a number of sources, and analysis of the trained optimized models determines the accuracy of the model's predictions. The models are sufficiently general and accurate to be employed as decision support systems by the United Nations Economic Commission for Europe (UNECE) in determining the critical acceptable levels of ozone in Europe. Comparison is made of the accuracy of predictions for a number of modeling approaches. It is shown that the ANN approach is more accurate than other methods and that the use of principal components analysis on the inputs can improve the model. The validation of the models relies on more than simply an error measure on the test data. The relative importance of the causal agents in the model is established in the first instance by summing absolute weight values. This indicates whether the model is consistent with domain knowledge. The application of a range of conditions to the model then allows predictions to be made about the nonlinear influences of the individual principal inputs and of combinations of two inputs viewed as a three-dimensional graph. Equations are synthesized from the ANN to represent the model in an explicit mathematical form. Models are formed with essential parameters, and other inputs are added as necessary, in order of decreasing priority, until an acceptable error level is reached. Secondary indicators substituting for primary indicators with which they are strongly correlated can be removed. From the synthesized equations, both known and novel aspects of the process modeled can be identified. Known effects validate the model. Novel effects form the basis of hypotheses which can then be tested.
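The principal-components preprocessing mentioned above can be sketched with synthetic data: two of the three inputs below are nearly collinear, so the two leading components carry almost all of the variance. The data and the choice of two retained components are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
# Three inputs: x1, a near-copy of x1 (strongly correlated), and noise.
X = np.column_stack([x1,
                     2 * x1 + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])

Xc = X - X.mean(axis=0)
# Principal axes via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance fraction per component
scores = Xc @ Vt[:2].T            # keep the two leading components
```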

Patent
08 Dec 1997
TL;DR: In this paper, a method and apparatus for sharing integrated testing services with a plurality of autonomous remote clients is disclosed, in which in response to an access request message, a process controller transmits an access enabling message to the remote client.
Abstract: A method and apparatus for sharing integrated testing services with a plurality of autonomous remote clients is disclosed. In the disclosed method, in response to an access request message, a process controller transmits an access enabling message to the remote client. The access enabling message includes instructions performable by a remote client to generate test equipment commands. A process controller interprets and transforms these commands into automated test instrument suite commands, which are provided to laboratory modules to perform the indicated tests. Test data results are then obtained and transmitted to the remote client.

Journal ArticleDOI
TL;DR: In this article, the authors compared three modeling approaches, viz. the conventional mechanistic approach, formulations based on different artificial neural network (ANN) topologies and a hybrid mechanistic-ANN structure.

Journal ArticleDOI
TL;DR: Methods for routinely monitoring probe test data at the wafer map level to detect the presence of spatial clustering are developed under various null and alternative situations of interest.
Abstract: Quality control in integrated circuit (IC) fabrication has traditionally been based on overall summary data such as lot or wafer yield. These measures are adequate if the defective ICs are distributed randomly both within and across wafers in a lot. In practice, however, the defects often occur in clusters or display other systematic patterns. In general, these spatially clustered defects have assignable causes that can be traced to individual machines or to a series of process steps that did not meet specified requirements. In this article, we develop methods for routinely monitoring probe test data at the wafer map level to detect the presence of spatial clustering. The statistical properties of a family of monitoring statistics are developed under various null and alternative situations of interest, and the resulting methodology is applied to manufacturing data.
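One simple clustering-sensitive statistic in this spirit is a join count: the number of adjacent defective-die pairs on the wafer map, which inflates when defects cluster. This is a generic sketch, not the paper's specific family of monitoring statistics.

```python
import numpy as np

def adjacent_defect_pairs(wafer):
    """Count horizontally/vertically adjacent defective-die pairs on a
    0/1 wafer map (1 = failing die). Under random defect placement this
    count stays near its expectation; spatial clusters inflate it."""
    w = np.asarray(wafer)
    horiz = np.sum(w[:, :-1] & w[:, 1:])
    vert = np.sum(w[:-1, :] & w[1:, :])
    return int(horiz + vert)

# Illustrative maps: a 2x2 defect cluster vs. the same four defects scattered.
clustered = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
scattered = [[1, 0, 1], [0, 0, 0], [1, 0, 1]]
```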

Patent
21 Aug 1997
TL;DR: In this paper, a self-test architecture for testing one or more integrated circuits is presented, where each circuit is provided with an interface compatible with IEEE standard 1149.1 and a scan register containing scan cells for supplying input test data to and receiving output test data from the internal circuitry of the integrated circuits.
Abstract: A built-in self test architecture for testing one or more integrated circuits. Each circuit is provided with an interface compatible with IEEE standard 1149.1 and one or more scan registers containing scan cells for supplying input test data to, and receiving output test data from, the internal circuitry of the integrated circuits, a pseudo-random pattern generator for supplying patterns of test data to the boundary scan register, and a pattern compressor for compressing the output test data into a signature. The architecture also includes a single clock multiplexer, located external to the integrated circuits, for selectively supplying a system clock or a test clock to the testing components of each integrated circuit.
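The two on-chip components named above, a pseudo-random pattern generator and a response compressor, can be modeled in software. The Fibonacci LFSR taps and the simple fold-in signature below are illustrative choices, not the patent's circuits.

```python
def lfsr_patterns(seed, taps, width, count):
    """Yield `count` pseudo-random test patterns from a Fibonacci LFSR:
    shift left, feed back the XOR of the tapped bit positions."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def compress_signature(responses, width=16):
    """Fold output responses into one signature word (a toy stand-in
    for a multiple-input signature register)."""
    sig = 0
    for r in responses:
        sig = ((sig << 1) ^ (sig >> (width - 1)) ^ r) & ((1 << width) - 1)
    return sig

# Taps (3, 0) give a maximal-length sequence for a 4-bit register.
patterns = list(lfsr_patterns(0b1001, taps=(3, 0), width=4, count=3))
```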

Journal ArticleDOI
TL;DR: In this article, the point load test is shown to be useful for weak/saturated rocks, which are commonly encountered in mechanical dredging operations; a comparative testing program was initiated, which included more than four hundred rock tests.

Journal ArticleDOI
TL;DR: Although sensitivity, specificity, and predictive value have long been used as indices of test accuracy, newer methods such as receiver operating characteristic curve (ROC) analysis, logistic regression analysis and likelihood ratios are more robust indicators that overcome many limitations of the traditional indices.
Abstract: Various approaches have been proposed for evaluating the diagnostic value of biochemical markers. Careful design of experimental protocol is key in carrying out any evaluation of clinical diagnostic value. A prospective cohort study is the best clinical trial design and should include an appropriate reference (gold) standard applied in every patient, the results of which are assessed blindly. The spectrum of patients evaluated should reflect the population in which the test will be used, be appropriately broad to avoid bias, and include both symptomatic and asymptomatic patients. The handling of indeterminate results and the eligibility criteria for inclusion in the study should be carefully defined. Although sensitivity, specificity, and predictive value have long been used as indices of test accuracy, newer methods such as receiver operating characteristic (ROC) curve analysis, logistic regression analysis, and likelihood ratios are more robust indicators that overcome many limitations of the traditional indices. The area under the ROC curve (AUC) is the best global indicator of test accuracy, but comparisons of AUC for different tests must take correlation between the tests into account if they have been performed in the same patients. Logistic regression analysis allows the diagnostic information from several tests to be evaluated multivariately, provides a probability estimate for a given outcome, and requires few assumptions regarding the underlying distributions of test data. Logistic regression also provides a straightforward method for calculating likelihood ratios. Likelihood ratios are useful for interpreting test results in the individual patient because they provide a convenient means to directly determine predictive value without having to calculate sensitivity and specificity for a given decision limit. Application of these methods is demonstrated using specific examples.
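The likelihood-ratio arithmetic the abstract refers to is a short two-step computation: LR+ from sensitivity and specificity, then pre-test odds scaled into post-test odds. The numerical values below are illustrative.

```python
def positive_lr(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive result
    multiplies the odds of disease."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, scale by the likelihood ratio,
    convert back to a probability."""
    odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

# Illustrative: sensitivity 0.90, specificity 0.80, 20% pre-test probability.
lr_plus = positive_lr(0.90, 0.80)
p_post = post_test_probability(0.20, lr_plus)
```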

Proceedings ArticleDOI
01 Nov 1997
TL;DR: The key idea of the proposed method is to perform the Burrows-Wheeler transformation on the sequence of test patterns, and then to apply run-length coding, and the experimental results show that the compression method performs better than six other methods for compressing test data.
Abstract: The overall throughput of automatic test equipment (ATE) is sensitive to the download time of test data. An effective approach to the reduction of the download time is to compress test data before the download. A compression algorithm for test data should meet two requirements: lossless and simple decompression. In this paper we propose a new test data compression method that aims to fully utilize the unique characteristics of test data compression. The key idea of the proposed method is to perform the Burrows-Wheeler transformation on the sequence of test patterns, and then to apply run-length coding. The experimental results show that our compression method performs better than six other methods for compressing test data. The average compression ratio of the proposed method performed on five test data sets is 315, while that for the next best one, the LZW method, is 21.
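The two stages named in the abstract can be sketched directly. This naive rotation-sort BWT is fine for small inputs, though practical implementations use suffix structures, and the example string below is made up.

```python
def bwt(s, end="\0"):
    """Burrows-Wheeler transform via sorted rotations. Appends a unique
    terminator, so `s` must not already contain `end`."""
    s = s + end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def run_length_encode(s):
    """Run-length coding of the transformed sequence: (symbol, count) pairs."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

# The BWT groups equal symbols together, which run-length coding exploits.
encoded = run_length_encode(bwt("banana"))
```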

Journal ArticleDOI
TL;DR: In this article, the axial strains of the truss elements measured in static tests are used in the identification process, and the identification problem is formulated as an optimization program in which the error norm of the equilibrium equation is minimized.
Abstract: A method for identifying the element properties of a truss is developed in this paper. The axial strains of the truss elements measured in static tests are used in the identification process. The finite-element method is used to derive the equilibrium equation of the truss. The identification problem is then formulated as an optimization program in which the error norm of the equilibrium equation is minimized. It is shown that given sufficient test data the element properties can be directly attained without iterations. Furthermore, the solution is unique and globally minimal. The proposed method can be applied to total truss identification as well as substructure identification. The identifiability of the inverse problem is then studied in depth. The perturbation method is adopted to investigate the influence of measurement errors on the identification results. A numerical example is presented to demonstrate the proposed method. Finally, a model test is performed to examine the effectiveness of the method in real applications.
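The non-iterative solution follows because the equilibrium equations are linear in the unknown element stiffnesses: K(p)u = f can be rearranged as A(u)p = f and solved in one least-squares step. The two-spring system below is an illustrative stand-in for a truss model.

```python
import numpy as np

# Two springs in series, fixed at one end; nodal displacements are "measured".
u1, u2 = 1.0, 3.0
f1, f2 = 0.0, 4.0  # applied loads at the two free nodes

# Equilibrium, rearranged to be linear in the unknown stiffnesses (k1, k2):
#   node 1: k1*u1 - k2*(u2 - u1) = f1
#   node 2:          k2*(u2 - u1) = f2
A = np.array([[u1, -(u2 - u1)],
              [0.0,  u2 - u1]])
f = np.array([f1, f2])

# One least-squares solve recovers the element stiffnesses, no iteration.
stiffness, *_ = np.linalg.lstsq(A, f, rcond=None)
```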

Dissertation
01 Jan 1997
TL;DR: In this paper, a multi-linear representation of response has been adopted, where the stiffness of the connection is measured as elements enter the plastic range and the response of the component parts.
Abstract: Observation of fire damaged structures and recent fire tests at the Cardington LBTF have suggested that even nominally 'simple' connections are capable of providing significant restraint at elevated-temperatures. As most frames are designed assuming pinned response at ambient-temperature, with no account being taken of the reduction in mid-span moments, this is an aspect of connectivity which may be utilised in the assessment of the fire resistance of steel framed buildings, without necessitating changes in the approach adopted in ambient-temperature design or construction. To date the assessment of the influence of connection response on frame behaviour has been limited by the quantity of available test data, although initial studies based on postulated moment-rotation-temperature characteristics concluded that the failure temperatures for beams are increased due to the rigidity of 'simple' connections. Moment-rotation relationships have been measured for a flush end-plate connection, both as bare-steel and as composite with a concrete slab across a range of temperatures. To define accurately the full moment-rotation-temperature response a series of tests have been conducted for each arrangement, where specimens were subject to varying constant levels of load and increasing temperatures. Observed failure mechanisms have been compared with those for a nominally identical specimen tested at ambient-temperature, and initial recommendations presented for the degradation of ambient-temperature connection characteristics. A mathematical expression is proposed in order to represent the test data at a number of temperatures. It is clearly unrealistic to expect that many such tests can be anticipated in the future, and as such a spring-stiffness model has been presented for both bare-steel and composite flush end-plate connections.
The use of a spring-stiffness model compares favourably with other forms of modelling due to the combination of efficient solution and the ability to follow accurately the full non-linear range of connection response, based on an understanding of the response of the component parts. A multi-linear representation of response has been adopted, where the stiffness of the connection is revised as elements enter the plastic range of response. Comparison has been made between the response predicted and that recorded experimentally. Experimentally derived connection characteristics have been incorporated within analysis of typical sub-frames, with parameters including connection stiffness, capacity and temperature being varied. Further studies are presented considering the sensitivity of overall frame behaviour to inaccuracies in the representation of connection response and the use of simplified models to generate elevated-temperature connection characteristics. Based on postulated elevated-temperature moment-rotation characteristics for the connections contained within the Cardington test frame, predictions have been presented for the response of the structure subject to a series of full scale fire tests, with semi-rigid behaviour being compared with the common assumptions of pinned and rigid characteristics.
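The multi-linear spring-stiffness idea described above can be sketched as a piecewise-linear moment-rotation curve whose tangent stiffness drops as successive components reach their plastic limit. The function name and all segment values below are illustrative assumptions, not figures taken from the work itself.

```python
# Hypothetical sketch of a multi-linear spring-stiffness connection model:
# the tangent stiffness is revised downward each time a component (bolt row,
# end plate, column flange) is assumed to enter the plastic range.

def moment_from_rotation(theta, segments):
    """Piecewise-linear moment-rotation curve.

    segments: list of (rotation_limit, stiffness) pairs ordered by
    increasing rotation_limit; each stiffness applies up to its limit.
    """
    moment = 0.0
    prev_limit = 0.0
    for limit, stiffness in segments:
        if theta <= limit:
            return moment + stiffness * (theta - prev_limit)
        moment += stiffness * (limit - prev_limit)
        prev_limit = limit
    # Beyond the last defined segment: hold the accumulated plateau moment.
    return moment

# Illustrative segments: elastic stiffness, then two reduced stiffnesses as
# components yield (rotations in rad, stiffnesses in kNm/rad).
segments = [(0.005, 20000.0), (0.015, 8000.0), (0.030, 2000.0)]

print(moment_from_rotation(0.004, segments))  # elastic branch
print(moment_from_rotation(0.010, segments))  # second branch
```

A degraded (elevated-temperature) curve would simply use reduced stiffness values in `segments`, which is consistent with the multi-linear representation the abstract describes.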

Journal ArticleDOI
01 Nov 1997
TL;DR: In this article, the results from a single test at ambient temperature and five tests at elevated temperature of a bare steel flush endplate beam-to-column connection are presented and discussed.
Abstract: This paper summarises the results from the first phase of a collaborative project to investigate the moment-rotation behaviour of commonly used connections at elevated temperatures. The results from a single test at ambient temperature and five tests at elevated temperature of a bare steel flush endplate beam-to-column connection are presented and discussed. Failure mechanisms are compared with those calculated according to existing design codes. A mathematical expression is proposed in order to represent the test data at a number of temperatures. Resultant connection characteristics are incorporated within existing software in order to assess the influence of realistic connection characteristics on frame response. Recommendations are made for future tests and for numerical analysis to be carried out based on the work conducted to date.
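Fitting a mathematical expression to elevated-temperature test data, as the abstract describes, can be sketched as an ordinary least-squares fit of retained connection stiffness against temperature. The data points and the linear form below are invented for illustration; the paper's actual expression is not reproduced here.

```python
# Hedged sketch: fit a straight-line degradation of retained stiffness
# fraction vs. temperature by ordinary least squares (pure Python).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented data: stiffness retained relative to ambient at each temperature.
temps = [20, 200, 400, 600]          # deg C
retained = [1.00, 0.90, 0.60, 0.30]  # fraction of ambient stiffness

slope, intercept = fit_line(temps, retained)
print(f"retained ~= {intercept:.3f} + {slope:.5f} * T")
```

A real fit would likely use a nonlinear expression, but the least-squares machinery is the same shape.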

Patent
Robert Warren1
20 Oct 1997
TL;DR: In this article, a test access port controller for effecting communications across a chip boundary is disclosed, having a test mode and a diagnostic mode of operation, wherein in the test mode the test data is resultant data from a test operation having an expected and time delayed relationship, and in the diagnostic mode diagnostic data is conveyed both on and off chip in the form of respective independent input and output serial bit streams simultaneously through the test access port controller.
Abstract: There is disclosed a test access port controller for effecting communications across a chip boundary having a test mode and a diagnostic mode of operation, wherein in the test mode of operation the test data is resultant data from a test operation having an expected and time delayed relationship, and in the diagnostic mode of operation diagnostic data is conveyed both on and off chip in the form of respective independent input and output serial bit streams simultaneously through the test access port controller.

Patent
17 Oct 1997
TL;DR: In this paper, a client application is provided to measure the performance, reliability or security of a system under test, based on user-defined loads to be applied to the system under test.
Abstract: The present invention provides a client application to measure the performance, reliability or security of a system under test, based on user-defined loads to be applied to the system under test. In the present invention, a test may be performed simultaneously on several servers and applications. As the test progresses, results are compiled during run time and visual feedback is provided. By allowing a user to define the test, and by providing run time compilation of results, the present invention can be used for capacity planning. Stopped or truncated tests still provide relevant results. The application also may allow acceptance criteria to be analyzed during the run time of the test. Finally, the number of users simulated may be regulated by the application.
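The load-generation idea above — a configurable number of simulated users issuing requests concurrently while results are compiled as the test runs — can be sketched as follows. The "request" is a stand-in function rather than a real server call, and names such as `run_load_test` are illustrative assumptions.

```python
# Minimal sketch of a load-test client: N simulated users each issue a
# number of requests; latencies are compiled during the run under a lock,
# so a stopped or truncated test still yields usable partial results.

import threading
import time

def run_load_test(request_fn, users, requests_per_user):
    latencies = []
    lock = threading.Lock()

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()
            elapsed = time.perf_counter() - start
            with lock:  # compile results while the test is in progress
                latencies.append(elapsed)

    threads = [threading.Thread(target=user_session) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

def fake_request():
    time.sleep(0.001)  # stand-in for a server round trip

results = run_load_test(fake_request, users=5, requests_per_user=4)
print(len(results), "requests completed")
```

Regulating the number of simulated users, as the abstract mentions, amounts to varying the `users` argument between runs.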

Patent
30 Dec 1997
TL;DR: In this paper, a self-test controller for internally generating test data patterns and expected resulting data and for comparing the expected resulting data with actual resulting data, test interface circuitry for loading the test data patterns into the memory and reading back the actual resulting data from the memory, means for selectively programming a voltage level to be applied to a selected cell plate of the memory according to predetermined test requirements, and means for storing an address of a defective memory cell.
Abstract: A semiconductor device having a self test circuit including an embedded dynamic random access memory array for storing data, a self test controller for internally generating test data patterns and expected resulting data and for comparing the expected resulting data with actual resulting data, test interface circuitry for loading the test data patterns into the memory and reading back the actual resulting data from the memory, means for selectively programming a voltage level to be applied to a selected cell plate of the memory according to predetermined test requirements and means for storing an address of a defective memory cell. In addition, the semiconductor device includes means for repairing a defective memory row or column in response to a signal received from the self test controller.
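The self-test loop the abstract describes — generate a pattern, write it into the memory, read it back, compare against the expected data, and record defective addresses for repair — can be sketched in software. The checkerboard pattern and the injected stuck-at fault below are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of a memory built-in self-test (BIST) loop: write a
# generated pattern, read it back, and collect addresses that mismatch
# the expected data as candidates for row/column repair.

def checkerboard(size):
    # Alternating 0x55/0xAA bytes, a common DRAM test pattern.
    return [0x55 if i % 2 == 0 else 0xAA for i in range(size)]

def self_test(memory_write, memory_read, size):
    pattern = checkerboard(size)
    for addr, value in enumerate(pattern):
        memory_write(addr, value)
    defects = []
    for addr, expected in enumerate(pattern):
        if memory_read(addr) != expected:
            defects.append(addr)  # stored for later repair
    return defects

# Simulated memory array with one stuck-at-zero fault at address 3.
mem = [0] * 16
def write(addr, value):
    mem[addr] = value
def read(addr):
    return 0x00 if addr == 3 else mem[addr]

print(self_test(write, read, 16))  # -> [3]
```

In the device itself this comparison happens in hardware; the sketch only shows the control flow of generate, load, read back, and compare.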

Patent
13 Jun 1997
TL;DR: In this paper, a memory device includes an output data path that uses single-ended data in conjunction with a flag signal, and the output buffer outputs a tri-state condition on the data bus.
Abstract: A memory device includes an output data path that uses single-ended data in conjunction with a flag signal. The output data path transfers data from an I/O circuit coupled to a memory array to an output tri-state buffer. A comparing circuit compares data from the I/O circuit to a desired data pattern and, if the data does not match the desired pattern, outputs the flag signal. The flag signal is input to the output buffer and the output buffer outputs a tri-state condition on the data bus. Since the flag signal corresponds to more than one data bit, the tri-state condition of the output buffer is held for more than one tick of the data clock, rather than only a single tick. Consequently, the tri-state condition remains on the bus for sufficiently long that a test system can detect the tri-state condition even at very high clock frequencies.
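The compare-and-flag behaviour above can be sketched as follows: output data is checked word-by-word against a desired pattern, and on a mismatch a tri-state marker is held for the whole word rather than a single data-clock tick. The word width and the `"Z"` marker are illustrative assumptions.

```python
# Hedged sketch: compare output data against a desired pattern in words of
# WORD_BITS bits; a mismatch drives tri-state ("Z") for the entire word,
# which a tester can detect even at very high clock rates.

WORD_BITS = 4  # the flag covers this many data bits

def output_stream(data_bits, desired_bits):
    out = []
    for i in range(0, len(data_bits), WORD_BITS):
        word = data_bits[i:i + WORD_BITS]
        expected = desired_bits[i:i + WORD_BITS]
        if word == expected:
            out.extend(word)
        else:
            out.extend(["Z"] * len(word))  # tri-state held for the word
    return out

data    = [1, 0, 1, 1,  0, 0, 1, 0]
desired = [1, 0, 1, 1,  0, 1, 1, 0]  # mismatch in the second word
print(output_stream(data, desired))
```

The key point the abstract makes is the duration: because the flag spans a multi-bit word, the tri-state condition persists long enough to be observable.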

Proceedings ArticleDOI
04 Aug 1997
TL;DR: A novel feed forward neural network is used to classify hyperspectral data from the AVIRIS sensor using an alternating direction singular value decomposition technique to achieve rapid training times.
Abstract: A novel feed forward neural network is used to classify hyperspectral data from the AVIRIS sensor. The network applies an alternating direction singular value decomposition technique to achieve rapid training times (a few seconds per class). Very few samples (10-12) are required for training. 100% accurate classification is obtained using test data sets. The methodology combines this rapid training neural network together with data reduction and maximal feature separation techniques such as principal component analysis and simultaneous diagonalization of covariance matrices, for rapid and accurate classification of large hyperspectral images. The results are compared to those of standard statistical classifiers.
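The shape of the pipeline above — reduce high-dimensional spectra with PCA, then classify in the reduced space from only about ten training samples per class — can be sketched with a nearest-class-mean rule in place of the paper's network. The synthetic "spectra" and every name below are illustrative; the actual alternating-direction SVD network is not reproduced here.

```python
# Hedged sketch: PCA (via SVD of the mean-centred training matrix) followed
# by nearest-class-mean classification, trained on ~10 samples per class.

import numpy as np

rng = np.random.default_rng(0)

def make_class(center, n, bands=50):
    # Synthetic "spectra": a class center plus small per-band noise.
    return center + 0.05 * rng.standard_normal((n, bands))

centers = [np.linspace(0, 1, 50), np.linspace(1, 0, 50)]
train = [make_class(c, 10) for c in centers]  # ~10 samples per class

# PCA: principal directions are right singular vectors of the centred data.
X = np.vstack(train)
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
components = vt[:3]  # keep 3 components

def project(samples):
    return (samples - mean) @ components.T

class_means = [project(t).mean(axis=0) for t in train]

def classify(sample):
    z = project(sample[None, :])[0]
    dists = [np.linalg.norm(z - m) for m in class_means]
    return int(np.argmin(dists))

test = make_class(centers[1], 5)
print([classify(s) for s in test])  # all samples drawn from class 1
```

Simultaneous diagonalization of covariance matrices, which the abstract also mentions, would replace the plain PCA step with a class-separation-aware projection, but the train-reduce-classify flow is the same.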