
Showing papers on "Test data" published in 1993


Journal ArticleDOI
TL;DR: A technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite by identifying, and then eliminating, the redundant and obsolete test cases in the test suite is presented.
Abstract: This paper presents a technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite. This selection is performed by identifying, and then eliminating, the redundant and obsolete test cases in the test suite. The representative set replaces the original test suite and thus, potentially produces a smaller test suite. The representative set can also be used to identify those test cases that should be rerun to test the program after it has been changed. Our technique is independent of the testing methodology and only requires an association between a testing requirement and the test cases that satisfy the requirement. We illustrate the technique using the data flow testing methodology. The reduction that is possible with our technique is illustrated by experimental results.
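
As a minimal sketch of the selection idea (a greedy covering heuristic over the requirement-to-test-case association; not necessarily the paper's exact algorithm, and the requirement/test names are hypothetical):

```python
# Minimal sketch of representative-set selection: given only the association
# between testing requirements and the test cases that satisfy them, pick a
# subset of tests that covers every satisfiable requirement. This greedy
# covering heuristic illustrates the idea; it is not the paper's exact algorithm.

def reduce_suite(requirements):
    """requirements: dict mapping requirement id -> set of test case ids."""
    uncovered = {r: set(tests) for r, tests in requirements.items() if tests}
    representative = set()
    while uncovered:
        # Count how many still-uncovered requirements each test would satisfy.
        counts = {}
        for tests in uncovered.values():
            for t in tests:
                counts[t] = counts.get(t, 0) + 1
        best = max(counts, key=counts.get)   # most "useful" remaining test
        representative.add(best)
        uncovered = {r: ts for r, ts in uncovered.items() if best not in ts}
    return representative

if __name__ == "__main__":
    # Hypothetical def-use associations (requirements) and the tests covering them.
    reqs = {"du1": {"t1", "t3"}, "du2": {"t2"}, "du3": {"t2", "t3"}, "du4": {"t3"}}
    print(reduce_suite(reqs))   # e.g. {'t3', 't2'} -- smaller than the original suite
```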

630 citations


Journal ArticleDOI
TL;DR: In this article, the effectiveness of the all-uses and all-edges test data adequacy criteria is compared experimentally: a large number of test sets was randomly generated for each of nine subject programs with subtle errors, and for each test set it was determined whether it exposed an error.
Abstract: An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error exposing ability was shown to be strongly positively correlated to percentage of covered definition-use associations in only four of the nine subjects. Error exposing ability was also shown to be positively correlated to the percentage of covered edges in four different subjects, but the relationship was weaker.
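
A hedged sketch of the logistic-regression part of such an analysis, run on synthetic data (the library, sample size, and coverage-to-exposure model below are assumptions, not the study's materials):

```python
# Sketch of the logistic-regression analysis described above: model the
# probability that a randomly generated test set exposes an error as a
# function of the percentage of definition-use associations it covers.
# The data here are synthetic; the original study used nine real programs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
coverage = rng.uniform(40, 100, size=500)             # % def-use associations covered
# Synthetic ground truth: higher coverage -> higher chance of exposing the error.
p_expose = 1 / (1 + np.exp(-(coverage - 75) / 8))
exposed = rng.random(500) < p_expose                   # did the test set expose an error?

model = LogisticRegression().fit(coverage.reshape(-1, 1), exposed)
print("slope:", model.coef_[0][0])                     # a positive slope supports the hypothesis
print("P(expose | 90% coverage):", model.predict_proba([[90.0]])[0, 1])
```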

275 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a detailed, concise account and analysis of the design attributes which led to dynamic stability in F-1 developmental injectors, combining all available full-scale component and engine test data into a single data base.
Abstract: disparity in time and length scales exist in close proximity to one another. Although advances in the field have been made, the largest and most reliable source of information to date applicable to the design of improved combustion devices is the store of experimental data from full-scale engine tests. Consequently, the motivation behind the present work was to gain further insight into the mechanisms associated with combustion instability by providing a detailed, concise account and analyses of the design attributes which led to dynamic stability in F-1 developmental injectors. Objectives were 1) to preserve the experience gained through development of the F-1 engine; 2) to merge full-scale test results with corresponding theories and experiments; and 3) to analyze the effect of proposed solutions. To facilitate analysis, all available full-scale component and engine test data have been combined into a single data base. This compilation provides a complete genealogy of F-1 developmental injector design configurations, and contains all available measured and observed test results. Table 1 lists the injector design parameters and test results acquired. The complete data base is available as an appendix to a separate technical report prepared by the authors.1 These data have been assembled from a variety of sources.2-6 Reference 2 is a chronological tabulation of full-scale injector component test results recorded at the test site. This document lists the injectors tested along with the date, chamber pressure, thrust, run time, mixture ratio, bomb size, and damp times, as well as observations made during various tests. Reference 3 contains a set of 16 reports (four volumes of four reports each) which present a somewhat chronological account of the methodology leading to a dynamically stable injector design. Full-scale engine and component test results are discussed throughout this set of reports. Reference 4 provides a broad overview of the problems and solutions encountered with combustion instability in the F-1 engine. Finally, Refs. 5 and 6 are weekly

269 citations


Journal ArticleDOI
TL;DR: In this paper, a path-independent multiaxial fatigue damage criterion is proposed based on critical plane concepts: fatigue crack growth is controlled by the maximum shear strain, and an important secondary effect is due to the normal strain excursion over one reversal of the maximum shear strain.
Abstract: A path-independent multiaxial fatigue damage criterion is proposed based on critical plane concepts: fatigue crack growth is controlled by the maximum shear strain, and an important secondary effect is due to the normal strain excursion over one reversal of the maximum shear strain. The effect of loading path on fatigue endurance is quantified by the normal strain excursion. Only one multiaxial material constant is required in the model, which can be determined from uniaxial test data plus one torsional result. The parameter can be easily integrated with a shear strain-life relationship to predict low cycle fatigue lifetime. Experimental data of four different materials: En15R steel, 1% Cr-Mo-V steel, 304 stainless steel, and 316 stainless steel at two temperatures were used to verify the criterion. It is shown that the proposed parameter can satisfactorily correlate test results for various proportional and non-proportional straining paths.
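
The paper defines its own parameter; purely as a hedged illustration of the critical-plane form described (maximum shear strain as the primary term, normal strain excursion as the secondary term, one weighting constant S), criteria of this family are commonly written as:

```latex
% Hedged sketch only: a plausible critical-plane form consistent with the
% abstract, not necessarily the authors' exact parameter. S is the single
% multiaxial material constant; the right-hand side is a shear strain-life law.
\[
  \frac{\Delta\gamma_{\max}}{2} \;+\; S\,\delta\varepsilon_{n}
  \;=\; \frac{\tau_f'}{G}\,\bigl(2N_f\bigr)^{b_0} \;+\; \gamma_f'\,\bigl(2N_f\bigr)^{c_0}
\]
% \Delta\gamma_{\max}/2: maximum shear strain amplitude on the critical plane;
% \delta\varepsilon_{n}: normal strain excursion on that plane over one reversal
% of the maximum shear strain; N_f: cycles to failure.
```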

267 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: Experiments with a recognizer trained on clean speech and test data degraded by both convolutional and additive noise show that doing RASTA processing in the new domain yields results comparable with those obtained by training the recognizer on known noise.
Abstract: RASTA (relative spectral) processing is studied in a spectral domain which is linear-like for small spectral values and logarithmic-like for large spectral values. Experiments with a recognizer trained on clean speech and test data degraded by both convolutional and additive noise show that doing RASTA processing in the new domain yields results comparable with those obtained by training the recognizer on known noise.
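
As a rough illustration of the processing described above, the sketch below applies a conventional RASTA band-pass filter to spectral trajectories in a lin-log domain y = ln(1 + J*x); the constant J, the filter coefficients, and the pole value are illustrative assumptions, not the paper's tuned values:

```python
# Sketch of RASTA filtering in a lin-log spectral domain: y = ln(1 + J*x) is
# roughly linear for small x and logarithmic for large x, so the same band-pass
# filter can address additive noise (small values) and convolutional distortion
# (large values). J and the filter pole are illustrative; the paper tunes them.
import numpy as np
from scipy.signal import lfilter

def lin_log_rasta(spectrogram, J=1e-6, pole=0.98):
    """spectrogram: array of shape (frames, critical bands), non-negative."""
    y = np.log(1.0 + J * spectrogram)                  # lin-log compression
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])    # conventional RASTA numerator
    a = np.array([1.0, -pole])                         # leaky-integrator pole
    filtered = lfilter(b, a, y, axis=0)                # filter each band's time trajectory
    return (np.exp(filtered) - 1.0) / J                # back toward the linear domain
                                                       # (negative values may be clipped in practice)

# Example (synthetic): spec = np.abs(np.random.rand(200, 20))  # 200 frames x 20 bands
# features = lin_log_rasta(spec)
```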

266 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: Different methods of optimizing the classification process of terminological representation systems are considered, and their effect on three different types of test data is evaluated.
Abstract: We consider different methods of optimizing the classification process of terminological representation systems, and evaluate their effect on three different types of test data. Though these techniques can probably be found in many existing systems, until now there has been no coherent description of these techniques and their impact on the performance of a system. One goal of this paper is to make such a description available for future implementors of terminological systems. Building the optimizations that came off best into the KRIS system greatly enhanced its efficiency.

250 citations


Proceedings ArticleDOI
01 Mar 1993
TL;DR: This paper defines adequacy criteria based on the program dependence graph, and proposes techniques based on program slicing to identify components of the modified program that can be tested using files from the old test suite, and components that have been affected by the modification.
Abstract: Program dependence graphs have been proposed for use in optimizing, vectorizing, and parallelizing compilers, and for program integration. This paper proposes their use as the basis for incremental program testing when using test data adequacy criteria. Test data adequacy is commonly used to provide some confidence that a particular test suite does a reasonable job of testing a program. Incremental program testing using test data adequacy criteria addresses the problem of testing a modified program given an adequate test suite for the original program. Ideally, one would like to create an adequate test suite for the modified program that reuses as many files from the old test suite as possible. Furthermore, one would like to know, for every file that is in both the old and the new test suites, whether the program components exercised by that file have been affected by the program modification; if no components have been affected, then it is not necessary to rerun the program using that file. In this paper we define adequacy criteria based on the program dependence graph, and propose techniques based on program slicing to identify components of the modified program that can be tested using files from the old test suite, and components that have been affected by the modification. This information can be used to reduce the time required to create new test files, and to avoid unproductive retesting of unaffected components. Although exact identification of the components listed above is, in general, undecidable, we demonstrate that our techniques provide safe approximations.
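
A minimal sketch of one safe approximation of this kind (assuming the dependence graph is given as an adjacency map; node names are hypothetical): components reachable from the modified components along dependence edges are conservatively treated as affected, and only tests exercising them need to be rerun.

```python
# Minimal sketch of the slicing idea: model the program dependence graph as an
# adjacency map and take the set of components reachable (by dependence edges)
# from the modified components as a safe over-approximation of what is
# "affected" -- tests exercising only unaffected components need not be rerun.
from collections import deque

def affected_components(pdg, modified):
    """pdg: dict node -> iterable of dependence successors; modified: iterable of nodes."""
    seen = set(modified)
    work = deque(modified)
    while work:
        n = work.popleft()
        for succ in pdg.get(n, ()):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return seen

# Hypothetical PDG: statement 2 was modified, so 2, 4 and 5 are affected.
pdg = {1: [2, 3], 2: [4], 3: [5], 4: [5], 5: []}
print(affected_components(pdg, [2]))   # {2, 4, 5}
```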

245 citations


Journal ArticleDOI
TL;DR: Experimental results from using Godzilla show that the technique can produce test data that is very close in terms of mutation adequacy to test data that is produced manually, and at substantially reduced cost.
Abstract: Constraint-based testing is a novel way of generating test data to detect specific types of common programming faults. The conditions under which faults will be detected are encoded as mathematical systems of constraints in terms of program symbols. A set of tools, collectively called Godzilla, has been implemented that automatically generates constraint systems and solves them to create test cases for use by the Mothra testing system. Experimental results from using Godzilla show that the technique can produce test data that is very close in terms of mutation adequacy to test data that is produced manually, and at substantially reduced cost. Additionally, these experiments have suggested a new procedure for unit testing, where test cases are viewed as throw-away items rather than scarce resources.
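
A toy sketch of the constraint-based idea follows, with a naive random search standing in for Godzilla's constraint generator and solver; the predicate, the mutant, and the value ranges are hypothetical:

```python
# Sketch of the constraint-based idea: the condition under which a particular
# mutant is killed is written as a constraint on the inputs, and test data are
# found by searching for a solution. Here the "solver" is a naive random
# search; Godzilla uses a dedicated constraint-system generator and solver.
import random

def solve(constraint, n_vars, lo=-100, hi=100, tries=100000):
    """Search for integer inputs satisfying the constraint; None if none found."""
    for _ in range(tries):
        candidate = [random.randint(lo, hi) for _ in range(n_vars)]
        if constraint(*candidate):
            return candidate
    return None

# Original predicate: x > y.  Mutant: x >= y.  They differ exactly when x == y,
# so the necessity constraint for killing the mutant is x == y (plus reachability).
test = solve(lambda x, y: x == y, 2)
print("test data killing the relational-operator mutant:", test)
```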

135 citations


Patent
01 Jul 1993
TL;DR: In this paper, a data processing system for generating projected data for variables is described; it includes a digital computer that processes input data to calculate projected data for a plurality of variables and generates output including the projected data.
Abstract: The invention covers a computer system, a method for making the system, and a method for using the system, and includes providing a data processing system to generate projected data for variables. The data processing system includes a digital computer performing steps of processing the input data to calculate projected data respectively for a plurality of variables, and generating output including the projected data; wherein the processing was first tested for accuracy by preprocessing input test data to calculate projected test data for the variables and by preprocessing the projected test data to derive a portion of the input test data from the projected test data.

131 citations


Proceedings ArticleDOI
07 Nov 1993
TL;DR: An optimized BIST scheme is described, based on reseeding of multiple polynomial Linear Feedback Shift Registers (LFSRs), that allows an excellent trade-off between test data storage and test application time (number of test patterns) with a very small hardware overhead.
Abstract: In this paper we describe an optimized BIST scheme based on reseeding of multiple polynomial Linear Feedback Shift Registers (LFSRs). The same LFSR that is used to generate pseudo-random patterns is loaded with seeds from which it produces vectors that cover the test cubes of difficult-to-test faults. The scheme is compatible with scan design and achieves full coverage as it is based on random patterns combined with a deterministic test set. A method for processing the test set to allow for efficient encoding by the scheme is described. Algorithms for calculating LFSR seeds from the test set and for the selection and ordering of polynomials are described. Experimental results are provided for ISCAS-89 benchmark circuits to demonstrate the effectiveness of the scheme. The scheme allows an excellent trade-off between test data storage and test application time (number of test patterns) with a very small hardware overhead. We show the trade-off between test data storage and number of test patterns under the scheme.
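
A toy sketch of the reseeding idea is shown below; the LFSR width, feedback taps, seed, and test cube are illustrative assumptions (real seeds are computed offline by solving the linear equations relating seed bits to the cube's specified bits):

```python
# Sketch of the reseeding idea: the same LFSR that produces pseudo-random
# patterns is occasionally loaded with a precomputed seed so that one of the
# next few patterns matches the test cube (pattern with don't-cares) of a
# hard-to-test fault. Polynomial, width, and seed here are illustrative only.
def lfsr_states(seed, taps, width, count):
    """Fibonacci-style LFSR: yields `count` successive states as bit-strings."""
    state = seed
    for _ in range(count):
        yield format(state, f"0{width}b")
        fb = 0
        for t in taps:                       # XOR of tapped bits gives the feedback
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def matches(pattern, cube):
    """cube uses '0'/'1'/'x' (don't-care)."""
    return all(c in ("x", p) for p, c in zip(pattern, cube))

cube = "1x0x1x1x"                            # hypothetical test cube for a hard fault
seed = 0b10011010                            # here the seed itself already covers the cube
hit = any(matches(s, cube) for s in lfsr_states(seed, taps=[7, 5, 4, 3], width=8, count=16))
print("cube covered within 16 patterns:", hit)   # True
```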

113 citations


Journal ArticleDOI
TL;DR: This model of fault detection provides a framework within which other testing criteria's capabilities can be evaluated and shows that none of these criteria is capable of guaranteeing detection for these fault classes and points out two major weaknesses.
Abstract: RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which other testing criteria's capabilities can be evaluated. Three test data selection criteria that detect faults in six fault classes are analyzed. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes and points out two major weaknesses of these criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules. Each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. Their second weakness is failure to integrate their proposed rules.

Book ChapterDOI
01 Jan 1993
TL;DR: A portable/in-situ stress-strain microprobe system was developed to evaluate nondestructively in situ the integrity of metallic components (including base metal, welds, and heat-affected zones (HAZs)) as mentioned in this paper.
Abstract: Determination of the integrity of any metallic structure is required either to ensure that failure will not occur during the service life of the components (particularly following any weld repair) or to evaluate the lifetime extension of the structure. A portable/in-situ stress-strain microprobe system was developed to evaluate nondestructively in situ the integrity of metallic components (including base metal, welds, and heat-affected zones (HAZs)). The microprobe system utilizes an innovative automated ball indentation (ABI) technique to determine several key mechanical properties (yield strength, true-stress/true-plastic-strain curve, strain-hardening exponent, Luders strain, elastic modulus, and an estimate of the local fracture toughness). This paper presents ABI test results from several metallic samples. The microprobe system was used successfully to nondestructively test in-situ a circumferentially welded Type 347 stainless steel pipe. Four V-blocks were used to mount the testing head of the microprobe system, allowing a 360° inspection of property gradients in the weld and its HAZ. The ABI test is based on strain-controlled multiple indentations (at the same penetration location) of a polished surface by a spherical indenter (0.25 to 1.57-mm diameter). The microprobe system and test methods (1) are based on well demonstrated and accepted physical and mathematical relationships which govern metal behavior under multiaxial indentation loading. A summary of the ABI test technique is presented here, and more details are given elsewhere (1-14). The microprobe system currently utilizes an electromechanically driven indenter, high-resolution penetration transducer and load cell, a personal computer (PC), a 16-bit data acquisition/control unit, and copyrighted ABI software. Automation of the test, where a 486 PC and a test controller were used in innovative ways to control the test (including a real-time graphic and digital display of load-depth test data) as well as to analyze test data (including tabulated summary and macro-generated plots), makes it simple, rapid (less than 10 min for a complete ABI test), accurate, economical, and highly reproducible. Results of ABI tests (at several strain rates) on various base metals, welds, and HAZs at different metallurgical conditions are presented and discussed in this paper.

Journal ArticleDOI
TL;DR: In this paper, the average length of a demonstration test is derived and inferences based on demonstration test data for the probability p of a successful start-up are discussed, and the results on startup demonstration tests are obtained.
Abstract: New results on start-up demonstration tests are obtained. In particular, the average length of a demonstration test is derived and inferences based on demonstration test data for the probability p of a successful start-up are discussed. Practical recomm..
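
For context, a hedged sketch: assuming the classical start-up demonstration formulation in which testing stops after r consecutive successful start-ups, each attempt succeeding independently with probability p, the expected test length takes the standard run-length form:

```latex
% Hedged sketch: assuming the classical formulation (test ends after r
% consecutive successful start-ups, each independent with success probability p),
% the expected number of start-up attempts is
\[
  \mathbb{E}[N] \;=\; \frac{1 - p^{r}}{(1-p)\,p^{r}},
\]
% e.g. r = 10 and p = 0.95 give E[N] \approx 13.4 attempts.
```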

Journal ArticleDOI
TL;DR: In this paper, a comparison of dynamic characteristics of five buildings determined from recorded strong-motion response data and from low-amplitude (ambient vibration) tests, and a description of the PC-based data-acquisition approach that is integrated with the permanent strong motion instrumentation in the five buildings.
Abstract: The objectives of this paper are to present (1) a comparison of dynamic characteristics of five buildings determined from recorded strong-motion response data and from low-amplitude (ambient vibration) tests, and (2) a description of the low-amplitude ambient testing and PC-based data-acquisition approach that is integrated with the permanent strong-motion instrumentation in the five buildings. All five buildings are within the San Francisco Bay area and the strong-motion dynamic characteristics are extracted from the October 17, 1989 Loma Prieta earthquake response records. Ambient vibration tests on the same five buildings were conducted in September 1990. Analyses of strong-motion response and low-amplitude test data have been performed by many investigators. The present study differs from numerous previous investigations because (1) in this study, accelerometers in the five permanently-instrumented buildings were used during the low-amplitude testing, and (2) rapid screening of the strong-motion response data was achieved with a concerted use of system identification software. The results show for all cases that the fundamental periods and corresponding percentages of critical damping determined from low-amplitude tests are appreciably lower than those determined from strong-motion response records. The data set collected during this study is a useful contribution to the data base of dynamic characteristics of engineered structures and reconfirms the differences between the dynamic characteristics identified from strong-motion records and from low-amplitude tests.
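
As a minimal illustration of extracting a fundamental period from a vibration record (the study itself used system identification software and the buildings' permanently installed accelerometers), a simple spectral peak-picking sketch on synthetic data is shown below; the sampling rate, record length, and mode frequency are assumptions:

```python
# Sketch of estimating a building's fundamental period from an ambient (or
# strong-motion) acceleration record by locating the dominant spectral peak.
# This is only the simplest peak-picking approach, not the study's method.
import numpy as np

def fundamental_period(accel, dt):
    """accel: acceleration samples; dt: sampling interval in seconds."""
    accel = accel - np.mean(accel)                 # remove DC offset
    spec = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), dt)
    peak = freqs[1:][np.argmax(spec[1:])]          # ignore the zero-frequency bin
    return 1.0 / peak

# Synthetic ambient record: a 0.5 Hz mode buried in noise -> period ~ 2.0 s.
t = np.arange(0, 120, 0.02)
record = 0.01 * np.sin(2 * np.pi * 0.5 * t) + 0.005 * np.random.randn(len(t))
print("estimated fundamental period (s):", round(fundamental_period(record, 0.02), 2))
```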

Patent
08 Dec 1993
TL;DR: In this article, a data interface (508) which is capable of receiving test information from a variety of biological fluid testing instruments (530) employs first and second processes (512, 522) to determine the type of data provided by analyzing the format of the test information using a master rules file.
Abstract: A data interface (508) which is capable of receiving test information from a variety of biological fluid testing instruments (530) employs first and second processes (512, 522). The first process determines the type of data which is provided by analyzing the format of the test information using a master rules file. Once the type of the source is identified, the first process passes the data and a set of rules to use to parse the data to the second process. The second process extracts the test data from the information provided by the testing instruments using the set of rules identified by the first process. Furthermore, if the test information is received in pieces the present invention can marry the pieces to provide a complete data set to be stored in the database.
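
A minimal sketch of the two-process idea is shown below; the master-rules structure, format names, and field patterns are hypothetical, not taken from the patent:

```python
# Sketch of the two-process idea: a first process identifies the instrument
# format by matching the incoming record against a "master rules file", then
# hands the matching parse rules to a second process that extracts the test
# data. Rule format and field names here are hypothetical.
import re

MASTER_RULES = {
    # format id -> (signature regex, field-extraction regex)
    "analyzer_A": (r"^#A\b", r"ID=(?P<sample>\w+);GLU=(?P<glucose>[\d.]+)"),
    "analyzer_B": (r"^B\|", r"B\|(?P<sample>\w+)\|(?P<glucose>[\d.]+)"),
}

def identify(record):                          # "first process"
    for fmt, (signature, rule) in MASTER_RULES.items():
        if re.search(signature, record):
            return fmt, rule
    raise ValueError("unknown instrument format")

def extract(record, rule):                     # "second process"
    m = re.search(rule, record)
    return m.groupdict() if m else None

fmt, rule = identify("#A ID=S123;GLU=5.4")
print(fmt, extract("#A ID=S123;GLU=5.4", rule))   # analyzer_A {'sample': 'S123', 'glucose': '5.4'}
```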

Proceedings ArticleDOI
21 May 1993
TL;DR: Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles, as well as formally defines test data sets and their relation to the operations in a specification and other test data set.
Abstract: Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications. It is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and other test data sets, providing structure to the testing process. Flexibility is also preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.

Journal ArticleDOI
TL;DR: DERIV as mentioned in this paper is a program for converting slug and pumping test data and associated type curves to derivative format, which has features that permit smoothing of noisy test data, accounts for pressure derivative end-effects, and can be used to convert slug test data to equivalent pumping test responses.
Abstract: Hydrologic test analysis based on the derivative of pressure (i.e., rate of pressure change) with respect to the natural logarithm of time has been shown to significantly improve the diagnostic and quantitative analysis of slug and constant-rate pumping tests. The improvement in test analysis is attributed to the sensitivity of the derivative to small variations in the pressure change that occurs during testing, which would otherwise be less obvious with standard pressure change versus time analysis. The sensitivity of the pressure derivative to pressure change facilitates its use in identifying the effects of wellbore storage, boundaries, and establishment of radial flow conditions on the test. DERIV is a program for converting slug and pumping test data and associated type curves to derivative format. The program has features that permit the smoothing of noisy test data, accounts for pressure derivative end-effects, and can be used to convert slug test data to equivalent pumping test responses. Test examples that demonstrate the use of pressure derivatives also are provided.
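
As a rough sketch of the derivative calculation described above (not DERIV's actual algorithm), the pressure derivative with respect to ln t can be evaluated with a central difference over a smoothing window; the window length L and the synthetic data below are illustrative assumptions:

```python
# Sketch: dP/d(ln t) by a central difference over a smoothing window of +/- L in
# log time (a common Bourdet-style scheme). End points fall back to one-sided
# differences, which is one simple way to handle derivative end-effects.
import numpy as np

def pressure_derivative(t, p, L=0.2):
    """t: times (>0, increasing); p: pressure change; L: window in ln-time units."""
    lnt = np.log(t)
    n = len(t)
    deriv = np.full(n, np.nan)
    for i in range(n):
        # nearest points at least L (in ln t) to the left and right of point i
        j = np.searchsorted(lnt, lnt[i] - L, side="right") - 1
        k = np.searchsorted(lnt, lnt[i] + L, side="left")
        j = max(min(j, i - 1), 0)
        k = min(max(k, i + 1), n - 1)
        if lnt[k] > lnt[j]:
            deriv[i] = (p[k] - p[j]) / (lnt[k] - lnt[j])
    return deriv

t = np.logspace(-2, 2, 50)
p = 10.0 * np.log(t) + 3.0                     # synthetic radial-flow response
print(pressure_derivative(t, p)[5:10])         # ~10 everywhere (constant derivative)
```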

Patent
02 Nov 1993
TL;DR: Disclosed is a test method and system for boundary testing a circuit network; the network, made up of individual integrated circuit chips mounted on a printed circuit card or board, has at least one integrated circuit that is testable by IEEE 1149.1 Standard boundary testing, and at least one second integrated circuit that is testable by Level Sensitive Scan Design boundary testing but not by IEEE 1149.1 Standard boundary testing.
Abstract: Disclosed is a test method and system for boundary testing a circuit network. The network, made up of individual integrated circuit chips mounted on a printed circuit card or board, has at least one integrated circuit that is testable by IEEE 1149.1 Standard boundary testing, and at least one second integrated circuit that is testable by Level Sensitive Scan Design boundary testing but not by IEEE 1149.1 Standard boundary testing. The test system has a test access port interface with a test access port controller with Test Clock, Test Data In, Test Data Out, Test Mode Select, and Test Reset I/O. The test access port also has an instruction register, a bypass register, a test clock generator, and a Level Sensitive Scan Device boundary scan register.

Journal ArticleDOI
TL;DR: An experiment is described that used a formal specification of a piece of critical software as a basis to derive test data sets, and that established that the approach is practicable for industrial examples.

Patent
05 Mar 1993
TL;DR: In this paper, a scan test architecture includes first and second serial scan paths for transferring test data to and from an integrated circuit's logic, and a first clock controls information on the first scan path and a second clock controls transfer of data on the second scan path.
Abstract: A scan test architecture includes first and second serial scan paths for transferring test data to and from an integrated circuit's logic. A first clock controls transfer of information on the first scan path and a second clock controls transfer of data on the second scan path. The first and second clocks are alternately enabled by a control signal initiated under program control of the external test system.

Proceedings ArticleDOI
22 Jun 1993
TL;DR: A taxonomy of test data compaction functions for BIST that uses space, time, memory, linearity, and circuit (functional) specificity as attributes is developed, together with a probabilistic characterization of faulty CUT behaviors.
Abstract: The authors address space compaction for built-in self-test (BIST). High quality BIST of modern complex VLSI circuits requires that large numbers of internal nodes be monitored during test. Unfortunately, area limitations typically preclude the association of observation latches for all the nodes of interest in the case where these are very numerous. Consequently, the process known as space compaction is increasingly required. A first contribution here is a taxonomy of test data compaction functions for BIST that uses space, time, memory, linearity, and circuit (functional) specificity as attributes. The major contribution is the introduction of a general class of space compactors called programmable space compactors (PSCs). Programmability enables highly effective space compactors to be designed for circuits under test (CUTs) subjected to a specific set of test patterns. The measures of effectiveness used to assess the PSCs are realizability, e.g., area, and error propagation performance. The authors develop a probabilistic error propagation model and propose a technique for selecting an effective PSC given the expected test data from a CUT and a probabilistic characterization of faulty CUT behaviors.
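
A toy sketch of space compaction and of estimating error propagation through a compactor is given below; the parity-tree structure, output partition, and independent bit-error model are illustrative assumptions, not the paper's programmable compactors or its analytical model:

```python
# Sketch of the space-compaction idea: many CUT outputs are combined into a few
# signature bits (here, parity trees over a fixed partition of the outputs), and
# a compactor can be judged by the probability that an erroneous output pattern
# still propagates to (changes) the compacted outputs.
import random

def compact(outputs, partition):
    """XOR-tree compaction: one parity bit per output group."""
    return tuple(sum(outputs[i] for i in group) % 2 for group in partition)

def propagation_estimate(partition, n_outputs, trials=20000):
    """Monte Carlo estimate of P(error pattern still visible after compaction)."""
    detected = erroneous = 0
    for _ in range(trials):
        good = [random.randint(0, 1) for _ in range(n_outputs)]
        error = [random.random() < 0.1 for _ in range(n_outputs)]  # independent bit errors
        if not any(error):
            continue
        erroneous += 1
        bad = [g ^ e for g, e in zip(good, error)]
        if compact(good, partition) != compact(bad, partition):
            detected += 1
    return detected / erroneous

partition = [(0, 1, 2, 3), (4, 5, 6, 7)]       # 8 outputs -> 2 compacted bits
print("approx. propagation probability:", propagation_estimate(partition, 8))
```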

Patent
06 Sep 1993
TL;DR: An apparatus and technique for fluid level determination in automatic transmissions is presented, employing an electronic control unit in association with an automatic transmission for a vehicle, the control unit receiving data corresponding to transmission oil level, transmission oil temperature, engine speed, transmission speed, and transmission selected range.
Abstract: An apparatus and technique for fluid level determination in automatic transmissions. The invention employs an electronic control unit in association with an automatic transmission for a vehicle, the electronic control unit receiving data corresponding to transmission oil level, transmission oil temperature, engine speed, transmission speed, and transmission selected range. Upon determining that such data satisfies certain diagnostic tests, the level of transmission fluid within the transmission is determined and adjusted or otherwise normalized to ideal test conditions. The electronic control unit compensates an oil level signal for deviations in oil temperature from an optimum test temperature, and makes further compensation for deviations in engine speed and settling time at the time that the test data is acquired. Additionally, variations in oil density as a function of oil temperature are also compensated.

Journal ArticleDOI
01 Apr 1993
TL;DR: A new two-step approach is proposed that operationalizes the generator formula by translating it into a sequence of operators and then executes it to construct the test database, introducing two powerful operators: the generation operator and the test-and-repair operator.
Abstract: To address the problem of generating test data for a set of general consistency constraints, we propose a new two-step approach: First the interdependencies between consistency constraints are explored and a generator formula is derived on their basis. During its creation, the user may exert control. In essence, the generator formula contains information to restrict the search for consistent test databases. In the second step, the test database is generated. Here, two different approaches are proposed. The first adapts an already published approach to generating finite models by enhancing it with requirements imposed by test data generation. The second, a new approach, operationalizes the generator formula by translating it into a sequence of operators, and then executes it to construct the test database. For this purpose, we introduce two powerful operators: the generation operator and the test-and-repair operator. This approach also allows for enhancing the generation operators with heuristics for generating facts in a goal-directed fashion. It avoids the generation of test data that may contradict the consistency constraints, and limits the search space for the test data. This article concludes with a careful evaluation and comparison of the performance of the two approaches and their variants by describing a number of benchmarks and their results.
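
A toy sketch of the generate-then-test-and-repair step is shown below; the schema, constraints, and repair actions are hypothetical illustrations of the idea rather than the paper's operators:

```python
# Sketch of the "generate, then test-and-repair" idea: tuples are generated,
# each consistency constraint is tested, and violations are repaired by adding
# or adjusting facts rather than discarding the candidate database.
import random

def generate(n):
    """Generate n candidate employee tuples (toy schema)."""
    return [{"emp": i, "dept": random.choice(["A", "B", "C"]), "salary": random.randint(20, 200)}
            for i in range(n)]

def test_and_repair(db, depts):
    # Constraint 1: every employee's dept must exist in the dept table (repair: insert it).
    for row in db:
        if row["dept"] not in depts:
            depts.add(row["dept"])
    # Constraint 2: salary must not exceed 150 (repair: clamp the offending value).
    for row in db:
        if row["salary"] > 150:
            row["salary"] = 150
    return db, depts

db, depts = test_and_repair(generate(5), {"A", "B"})
print(depts, db)   # a consistent test database plus the repaired dept table
```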

Patent
20 Sep 1993
TL;DR: In this paper, a method for checking the test logic contained in a computer memory system during POST such that any errors can be determined and made available to the system software prior to beginning processing operations is presented.
Abstract: A system and method for checking the test logic contained in a computer memory system during POST such that any errors can be determined and made available to the system software prior to beginning processing operations. Single and double bit errors are induced which the ECC logic must identify and correct. The CPU compares the data that is written to memory with the data that is read back. Thus, since it is known that an error occurred, due to the induced error provided by the present invention, identical data will verify that the ECC correction logic is working properly. More specifically, a multiplexer is provided in the data write path which substitutes a constant set of identical bits for the actual data generated by the CPU. ECC bits are generated based on the actual generated test data, rather than the inserted identical bits. The substituted data bits and generated ECC bits are then stored in memory. An error condition is identified when the data and ECC code is read back from memory. The correction logic then corrects the data, in the case of a single bit error, such that the data read by the CPU is the same as the originally generated data.

Patent
17 May 1993
TL;DR: In this article, a boundary scan control circuit discriminates a category code at the top of data inputted from a serial input terminal to control a pair of switching circuits, when the category code represents a test mode, predetermined terminals of the switching circuits are selected so that input data are sent out to boundary scan cells.
Abstract: A method of testing an electronic apparatus which eliminates a control signal line for setting an integrated circuit to a test mode and a test mode select terminal of an external terminal section, and wherein fetching of test data and transfer of the thus fetched test data are performed in an integrated operation. In each of the integrated circuits, a boundary scan control circuit discriminates a category code at the top of data inputted from a serial input terminal to control a pair of switching circuits. When the category code represents a test mode, predetermined terminals of the switching circuits are selected so that input data are sent out to boundary scan cells. Fetching of parallel data from parallel input terminals and transfer to the boundary scan cells are performed in a single operation.

Proceedings ArticleDOI
07 Nov 1993
TL;DR: A novel methodology is presented, based on reconfiguring a single scan chain to minimize the shifting time in applying test patterns to a device; implementation results demonstrate test time reductions as large as 75% over traditional test schemes at the expense of 1-3 multiplexers.
Abstract: A major drawback in using scan techniques is the long test application times needed to shift test data in and out of a device. This paper presents a novel methodology based on reconfiguring a single scan chain to minimize the shifting time in applying test patterns to a device. The main idea is to employ multiplexers to bypass registers that are not frequently accessed in the test process and hence reduce the overall test time. For partitioned scan designs, we describe two different modes of test application which can be used to efficiently trade off the logic and routing overheads of the reconfiguration strategy with the test application time. In each case, we provide optimization techniques to minimize the number of added multiplexers and the corresponding test time. Implementation results demonstrate test time reductions as large as 75% over traditional test schemes at the expense of 1-3 multiplexers.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new method of estimating parameter values of the exact equivalent circuit of a three-phase induction motor, which requires the nameplate data, the ratio of starting to full-load torque, and the efficiency and power factor values at half and full load.

Proceedings ArticleDOI
17 Oct 1993
TL;DR: Basic requirements for a standard bus for testing analog interconnects are described; the basic conception is of a mixed-signal version of boundary-scan that is compatible with, indeed built upon, ANSI/IEEE Std 1149.1.
Abstract: This paper describes basic requirements for a standard bus for testing analog interconnects. The viewpoint is that of prospective users. An architecture for such a standard bus is proposed. The basic conception is of a mixed-signal version of boundary-scan and is compatible with, indeed built upon, ANSI/IEEE Std 1149.1. The goals in mind are the detection of faults and the measurement of analog interconnect parameters. Among the desired benefits are test and test data commonality throughout an assembly hierarchy, from manufacturing to field service.

01 Dec 1993
TL;DR: In this paper, the authors presented a comprehensive study of continuous welded rail (CWR) track buckling strength as influenced by the range of all key parameters such as the lateral, torsional and longitudinal resistance, vehicle loads, etc.
Abstract: The report presents a comprehensive study of continuous welded rail (CWR) track buckling strength as influenced by the range of all key parameters such as the lateral, torsional and longitudinal resistance, vehicle loads, etc. The parametric study presented here is based on the computer program jointly developed by Volpe National Transportation Systems Center (VNTSC) and Foster-Miller. The computer program is based on the dynamic buckling theory developed and validated by previous research efforts of VNTSC. On the basis of test data, the practical range of each of the parameters involved has been identified and computer runs have been made over this range to yield the buckling strength variations and the sensitivity with respect to the parameters. Critical parameters and their ranges have been evaluated through this process. Several conclusions of practical interest are drawn from the study.

Journal ArticleDOI
TL;DR: In this article, a regression analysis was performed to assess the use of the liquid limit, the plasticity index, the percent clay, percent colloids, and activity of a soil as single variables to estimate the swell potential under a 1-psi (6.9-kPa) pressure of a specimen compacted to maximum dry unit weight based on the standard AASHTO test at optimum water content.
Abstract: This paper presents a laboratory study aimed at developing a reliable method to predict the expansion degree of clays. The work herein is intended to complement, not replace, existing research in this area. Utilizing test data for 128 natural soils, a regression analysis was performed to assess the use of the liquid limit, the plasticity index, the percent clay, the percent colloids, and activity of a soil as single variables to estimate the swell potential under a 1-psi (6.9-kPa) pressure of a specimen compacted to maximum dry unit weight based on the standard AASHTO test at optimum water content. Through multiple regression analysis, combinations of the aforementioned variables were also used to evaluate the swelling potential and to predict the degree of expansion. These relationships were used to establish nomographic charts for quantitative and qualitative evaluation of swell characteristics. The results of the proposed charts are shown to be in good agreement with swell test data provided by various researchers.
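
As an illustration of the kind of regression described above (with synthetic placeholder numbers, not the paper's 128-soil data set or its fitted coefficients), swell potential can be regressed on index properties by ordinary least squares:

```python
# Sketch of a multiple linear regression of swell potential on index properties
# (liquid limit, plasticity index, clay fraction). The numbers below are
# synthetic placeholders used only to show the calculation.
import numpy as np

# columns: liquid limit (%), plasticity index (%), clay fraction (%)
X = np.array([[40, 18, 25], [55, 30, 40], [70, 45, 55], [35, 12, 20], [62, 38, 48]], dtype=float)
swell = np.array([1.2, 3.5, 7.8, 0.8, 5.9])           # swell potential (%), synthetic

A = np.column_stack([np.ones(len(X)), X])             # add intercept term
coeffs, *_ = np.linalg.lstsq(A, swell, rcond=None)    # multiple linear regression
print("intercept and coefficients:", coeffs)
print("predicted swell for LL=50, PI=25, clay=35:", np.array([1, 50, 25, 35]) @ coeffs)
```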