
Showing papers presented at "AUTOTESTCON in 1997"


Proceedings Article•DOI•
22 Sep 1997
TL;DR: An overview of the QSI Integrated Toolset is presented, with examples of its real-world applications in model-based TPS development, real-time process monitoring, and PIMA.
Abstract: The QSI integrated tool set, consisting of TEAMS, TEAMS-RT, TEAMATE and HARVESTER, offers a comprehensive solution to integrated diagnosis of systems with many components (modules, boards, replaceable units, etc.) that are subject to failure. The software tool set automates the DFT, FMECA, on-line monitoring, off-line diagnosis, and maintenance data management tasks. Integration is achieved via a common model-based approach wherein a consistent model is used across different tools at various stages of a system's life-cycle. In this paper, we present an overview of the Integrated Toolset, with examples of its real-world applications in model-based TPS development, real-time process monitoring, and PIMA.

60 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this article, the authors focus on the contribution of prognostics and health management (PHM) and its relationship to autonomic logistics in the Joint Strike Fighter (JSF) program.
Abstract: The Joint Strike Fighter (JSF) program is the focal point for defining the next generation of strike aircraft weapon systems for the Navy, Marines, Air Force, and allies. The focus of the program is affordability: reducing the costs of development, production, and ownership of the JSF family of aircraft. In addition to affordability as a "pillar" of the JSF program, three additional pillars have been established: survivability, lethality, and supportability/deployability. These four pillars have established the foundation for the design and development of the JSF weapon system. One of the keys to providing an affordable approach to supportability and deployability lies in the strategy of prognostics and health management (PHM) and how it supports the concept of autonomic logistics. This paper focuses on the contribution of PHM and its relationship to autonomic logistics.

45 citations


Proceedings Article•DOI•
P. Turley1, M. Wright•
22 Sep 1997
TL;DR: This paper discusses the advantages gained in using LabVIEW 4.0.1, a graphical programming language, rather than a conventional programming language as a software development environment, and details how it was able to take advantage of LabVIEW's instrument control capabilities to optimize the VXI data acquisition process.
Abstract: CACI International Inc. is on contract with SAALC/LDAD, the Air Force engine tester program management office, to build an Engine Test/Trim Automated System II (ETTAS II) using Commercial Off The Shelf (COTS) hardware and software. This tester will ultimately replace the three aircraft engine test systems currently used by the Air Force, all of which are becoming increasingly difficult to maintain due to hardware/software obsolescence problems. In keeping with the COTS requirement, we chose to develop our data acquisition and test program software in LabVIEW 4.0.1 for Windows NT/95. This paper discusses the advantages we have gained in using LabVIEW 4.0.1, a graphical programming language, rather than a conventional programming language as our software development environment. We detail how we were able to take advantage of LabVIEW's instrument control capabilities to optimize our VXI data acquisition process. We then discuss how LabVIEW can be used not only as an instrument control language, but also as a general purpose programming language. We discuss how we used LabVIEW for test program set (TPS) development and for rapidly prototyping user interfaces and program features for immediate operator/customer feedback. The paper also details how LabVIEW enabled us to readily establish a core of "generic" VIs (virtual instruments) for subsequent reuse in developing additional TPSs for other aircraft engine types/variants.

18 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this paper, a solution of the instrumentation problems in measuring spacecraft magnetic parameters is reported, and the development basics of both static and dynamic magnetic measuring systems are presented, and brief descriptions are also given to analyse and compare the systems.
Abstract: We report a solution of the instrumentation problems in measuring spacecraft magnetic parameters. A list of the main magnetic parameters is considered. The development basics of both static and dynamic magnetic measuring systems are presented. Brief descriptions are also given to analyse and compare the systems.

18 citations


Proceedings Article•DOI•
S. Pateras1, P. McHugh•
22 Sep 1997
TL;DR: A test and diagnosis methodology based on built-in self-test (BIST) is defined and described, along with a BIST solution based on a maintainable system architecture that includes the technology and tools needed for the development of chip, board, and system BIST.
Abstract: A test and diagnosis methodology that is based on built-in self-test (BIST) is defined and described. A BIST solution based on a maintainable system architecture is described that includes the technology and tools needed for the development of chip, board, and system BIST. This architecture is based on the IEEE 1149.5 MTM-Bus at the backplane level and the IEEE 1149.1 (JTAG) Boundary Scan Architecture at the chip level.

15 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: A new approach to data collection at Boeing Defense and Space Group manufacturing centers is described, which uses a wireless, precision data collector that incorporates an integral laser bar-code reader and interfaces to industry standard gauges.
Abstract: Precision physical dimensional measurements are required for many aerospace structures to assure adherence to demanding tolerances. Data collection devices have typically interfaced to specific dimensional gauges to measure physical attributes of a manufactured part. This data is usually transferred to a computer via cable. This paper describes a new approach to data collection at Boeing Defense and Space Group manufacturing centers. This new approach uses a wireless, precision data collector that incorporates an integral laser bar-code reader and interfaces to industry standard gauges. Data is transferred by an internal 2.4 GHz radio to the statistical process control (SPC) database used throughout The Boeing Company.

14 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: This paper gives an overview of the field of software testing, including terminology; the role of software test; testing-related statistics; descriptions of functional, structural, static, and dynamic test techniques; and discussion of test management issues including the test implications of alternative software development models.
Abstract: This paper gives an overview of the field of software testing. Some of the topics covered include: terminology; the role of software test; testing-related statistics; descriptions of functional, structural, static, and dynamic test techniques; and discussion of test management issues including the test implications of alternative software development models, test process improvement, and how much testing is enough. The paper ends with resources and references for further study.

13 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: This paper reports on a research project whose aim is the use of Java and related web tools for building object-oriented portable, open and re-configurable distributed measurement systems.
Abstract: Java is rapidly emerging as a powerful language for web programming. This paper reports on a research project whose aim is the use of Java and related web tools for building object-oriented, portable, open, and re-configurable distributed measurement systems. Different architectural patterns are possible. The paper discusses some useful patterns and exemplifies them by a developed example.

12 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: This paper describes a data acquisition system which was designed to acquire performance data on the Minuteman III MK12/W62 Re-entry Vehicle Nuclear Weapon Sub-System, as the Sub- system is functionally tested in a simulated flight test environment.
Abstract: The collection of high speed, multi-channel test data in a VXI environment can be challenging, especially when the number of input channels is large and data processing is required for analysis of the test data. This paper describes a data acquisition system which was designed to acquire performance data on the Minuteman III MK12/W62 Re-entry Vehicle Nuclear Weapon Sub-System, as the Sub-System is functionally tested in a simulated flight test environment. The testing requires the simultaneous measurement of 53 channels of digital and analog data with a cumulative data capture rate of 100 kHz. The data is used to evaluate the reliability of the test specimen, using a post-processing algorithm. To further complicate the acquisition and post-test processing, the signals under measurement are interrelated in that some signals act as triggering events for other signals. The solution presented in this paper is a VXI-based data acquisition instrument operating with a PC running a LabVIEW® application while using an MXI interface for data transfer between the VXI and PC buses. The software element of this system uses a uniquely developed data analysis tool. The analysis tool, named Data Score™, is programmed in a scripting language to allow test engineers to define signal interrelations and data processing algorithms. The graphical user interface provides a flexible ability to view either post-test or archived data.

11 citations


Proceedings Article•DOI•
22 Sep 1997
TL;DR: This paper addresses the aspects of transitioning from flow-chart intensive knowledge representation to a format which provides the benefits described above.
Abstract: Technical Manuals used for field maintenance of US Army systems rely heavily on troubleshooting procedures which are presented in "flow chart" format. These flow charts guide the technician through test procedures to isolate the cause of an equipment malfunction. These procedures are static; that is, they are highly structured around a pre-determined sequence of tests, do not become "smarter" over time with historical maintenance data, and they only take into account those symptoms and faults which the original developer considered. They are often incomplete, sometimes wrong, and are very difficult to update and maintain. As the Army moves to computer-assisted methods of maintenance, the opportunity exists to significantly enhance the basic logic and knowledge representation underlying troubleshooting procedures. The enhancements include knowledge-based reasoning about faults related to symptoms, the ability to dynamically relate faults to symptoms, the ability to use historical maintenance data to continuously improve maintenance capability, and the ability to house "expert" diagnostics information in a form that becomes usable and available to novice technicians. However, can tree-based troubleshooting logic from legacy systems be efficiently re-engineered to a knowledge-based system with the same benefits? This paper addresses the aspects of transitioning from flow-chart intensive knowledge representation to a format which provides the benefits described above.

10 citations
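The adaptive fault/symptom reasoning this abstract contrasts with static flow charts can be sketched in a few lines of Python. Everything here (class, method, and symptom/fault names) is hypothetical illustration, not code from the paper; the point is only that recorded maintenance outcomes re-rank candidate faults over time, which a fixed flow chart cannot do.

```python
from collections import defaultdict

class DiagnosticKnowledgeBase:
    """Toy fault/symptom knowledge base that learns from maintenance history."""

    def __init__(self):
        # symptom -> {fault: number of times this fault explained the symptom}
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_repair(self, symptoms, confirmed_fault):
        """Fold a completed repair action back into the knowledge base."""
        for s in symptoms:
            self.counts[s][confirmed_fault] += 1

    def rank_faults(self, symptoms):
        """Rank candidate faults by how often they explained these symptoms."""
        score = defaultdict(int)
        for s in symptoms:
            for fault, n in self.counts[s].items():
                score[fault] += n
        return sorted(score, key=score.get, reverse=True)

kb = DiagnosticKnowledgeBase()
kb.record_repair(["no_power"], "blown_fuse")
kb.record_repair(["no_power", "burnt_smell"], "power_supply")
kb.record_repair(["no_power", "burnt_smell"], "power_supply")
print(kb.rank_faults(["no_power", "burnt_smell"]))  # power_supply ranked first
```

A real system would weight the counts by test cost and prior failure rates, but even this sketch shows why historical data makes the procedure "smarter" with use.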


Proceedings Article•DOI•
22 Sep 1997
TL;DR: This new approach can revolutionize the implementation of health management for fault tolerant systems by developing a deterministic model-based diagnostic capability that is adaptive to a vast number of dynamic reconfiguration states.
Abstract: Safety, sustainability and mission criticality considerations often predicate the requirement for built-in fault tolerance in aerospace systems. Existing approaches to accomplishing fault tolerance typically focus on "brute-force" hardware redundancy and extensive, complex control logic developed as a "point solution" to effect reconfiguration actions. This paper describes the principal concepts and design implementation of an innovative approach for embedding an adaptive model-based diagnostic reasoning capability into a Fault Tolerant Remote Power Controller (FTRPC) to provide rapid fault diagnostics and reconfiguration of powerflow to critical users. A key aspect of this approach is that a systems engineering process was used to develop the reasoning capability that could be embedded in the system to accomplish fault detection, isolation, reconfiguration and recovery. The system engineering process, applied through an automated tool set, is generic in nature and can be applied to any system, as opposed to a "point solution" developed by intensive engineering efforts. The extensibility and applicability of the overall approach is a key technology accomplishment of the program. This paper describes the underlying concepts and implementation of embedding Diagnostician-on-a-Chip technology into a state-of-the-art remote power controller. This design was recently implemented in an Integrated Product Development environment under a NASA Phase II SBIR Program conducted under the auspices of Marshall Space Flight Center (MSFC). This new approach can revolutionize the implementation of health management for fault tolerant systems by developing a deterministic model-based diagnostic capability that is adaptive to a vast number of dynamic reconfiguration states.

Proceedings Article•DOI•
P. Hansen1•
22 Sep 1997
TL;DR: This paper will describe how an integrated TPS development and execution environment can capitalize on these new technologies to improve test programming efficiency.
Abstract: New software technologies, including the World Wide Web, may seem far removed from the tasks facing test program set (TPS) developers, but they promise to revolutionize the way TPS data is organized, presented, and used. This paper will describe how an integrated TPS development and execution environment can capitalize on these new technologies to improve test programming efficiency.

Proceedings Article•DOI•
K. Fertitta1, B. Meacham•
22 Sep 1997
TL;DR: A method using hardware configuration tables to effectively defer binding of test resources until program execution is described, which allows the TPS to compensate for minor changes in hardware configuration without having to edit or recompile any LabVIEW code.
Abstract: This paper describes techniques for reducing test station hardware dependence in test programs implemented in National Instruments' LabVIEW development environment. Hardware dependence is reduced by a combination of design strategies and by the definition of a Hardware Abstraction Layer (HAL). The HAL reduces hardware dependence by insulating the developer from the test station resources, encapsulating the hardware drivers supplied by the equipment manufacturer with wrapper functions. The HAL allows the TPS to be partitioned into hardware-dependent and hardware-independent components, localizing the hardware dependencies in the HAL wrapper VIs. This paper also describes a method using hardware configuration tables to effectively defer binding of test resources until program execution. This technique allows the TPS to compensate for minor changes in hardware configuration without having to edit or recompile any LabVIEW code.
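The configuration-table idea generalizes beyond LabVIEW; a minimal Python sketch follows, with all class and table names invented for illustration. The test code names only a logical resource; the table resolves it to a concrete driver wrapper at execution time, so a hardware swap means editing one table entry rather than the test program.

```python
# Hypothetical driver wrappers standing in for vendor-supplied drivers.
class GPIBDMMWrapper:
    def measure_voltage(self):
        return 5.01  # stand-in for a real GPIB driver call

class VXIDMMWrapper:
    def measure_voltage(self):
        return 4.99  # stand-in for a real VXI driver call

# Hardware configuration table: logical resource name -> wrapper class.
HARDWARE_CONFIG = {
    "DC_VOLTMETER": GPIBDMMWrapper,
}

def bind(resource_name):
    """Defer binding of a test resource until program execution."""
    return HARDWARE_CONFIG[resource_name]()

def test_power_rail():
    dmm = bind("DC_VOLTMETER")  # the TPS never names a specific instrument
    return abs(dmm.measure_voltage() - 5.0) < 0.05

print(test_power_rail())
```

Swapping `GPIBDMMWrapper` for `VXIDMMWrapper` in the table changes the hardware without touching (or recompiling) the test itself, which is the portability property the paper claims for its HAL.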

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The Benefield Anechoic Facility (BAF) as discussed by the authors is an ideal ground test facility to investigate and evaluate anomalies associated with EW systems; avionics, tactical missiles, and their host platform.
Abstract: This paper discusses the test capabilities of the Benefield Anechoic Facility (BAF) and its mission to support avionics and electronic warfare (EW) test and evaluation (T&E) of platforms such as the F-3 Tornado, F-16, F-15, C-130, C-17, bombers, attack aircraft, and trainers and their associated EW systems. The BAF provides a quiet, secure, and controlled electromagnetic environment to test installed/integrated systems, their associated weapons, avionics and EW systems. This testing is accomplished within a very large anechoic chamber, providing a realistic free-space and controllable radio frequency (RF) environment. The BAF is an ideal ground test facility to investigate and evaluate anomalies associated with EW systems, avionics, tactical missiles, and their host platform. The BAF experience in EW evaluation includes, but is not limited to: electromagnetic interference (EMI)/electromagnetic compatibility (EMC), antenna pattern measurement, angle-of-arrival (AOA) measurement, electronic countermeasures (ECM) response and EW avionics integration.

Proceedings Article•DOI•
R.P. Oblad1•
22 Sep 1997
TL;DR: This paper introduces a new concept and model for building complex ATE systems that provided answers to the following three questions: How to achieve asset interchangeability in complex test and measurement systems.
Abstract: This paper introduces a new concept and model for building complex ATE systems. The underlying principles were discovered over several years but were driven home by the difficulty found in modernizing a legacy test system. The result of this effort produced a solution that provided answers to the following three questions: (1) How to achieve asset interchangeability in complex test and measurement systems, (2) How to place test system software in a modular component form that can be reused in different ATE or desktop environments, and (3) How to apply new software technologies in distributed computing to ATE systems.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: A diagnostic system that traverses a fault tree to find the shortest path to a result, which differs from many systems using If /Then/ Else statements placed in the test software to direct troubleshooting, and has several unique advantages.
Abstract: This paper describes a diagnostic system that traverses a fault tree to find the shortest path to a result. This differs from many systems using If /Then/ Else statements placed in the test software to direct troubleshooting, and has several unique advantages. Expert system test selection and result evaluation strategies allow diagnostics to be started at any time, even if the previous tests have been executed out of sequence. A simple text tree format captures the fault isolation data. This simplicity eases the entry of troubleshooting information, and increases the effectiveness of the entire system.
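The "shortest path to a result" traversal described here is, in essence, a cheapest-path search over a tree whose edges are tests weighted by cost. The sketch below uses a uniform-cost (Dijkstra-style) search over an invented toy fault tree; the tree contents and node names are hypothetical, not from the paper.

```python
import heapq

# Hypothetical fault tree: node -> list of (test_cost, child).
# Nodes absent from the table are leaves, i.e. diagnostic results.
FAULT_TREE = {
    "start":       [(1.0, "check_power"), (3.0, "check_comms")],
    "check_power": [(2.0, "FAIL: PSU"), (1.0, "check_fuse")],
    "check_fuse":  [(1.0, "FAIL: fuse")],
    "check_comms": [(1.0, "FAIL: cable")],
}

def shortest_path_to_result(root):
    """Uniform-cost search: cheapest sequence of tests reaching a result."""
    heap = [(0.0, root, [root])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node not in FAULT_TREE:      # leaf reached: a diagnostic result
            return cost, path
        for edge_cost, child in FAULT_TREE[node]:
            heapq.heappush(heap, (cost + edge_cost, child, path + [child]))

print(shortest_path_to_result("start"))
```

Because the search can be seeded from any node, diagnostics can resume mid-tree after out-of-sequence tests, which matches the advantage the abstract claims over hard-coded If/Then/Else troubleshooting.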

Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this paper, the authors present technical solutions and economic analyses showing to what extent such solutions provide a sufficient return on investment, and the role of these tools is to minimize dependence on the skills, knowledge and experience of individuals, and thus overcome costs of inaccurate, inefficient and incomplete diagnostics.
Abstract: Detecting the existence of a fault in complex systems is neither sufficient nor economical without diagnostics assisting in fault isolation and cost-effective repairs. This work attempts to put in economical terms the technical decisions involving diagnostics. It looks at the cost factors of poor diagnostics in terms of the accuracy and completeness of fault identification and the time and effort it takes to come to a final (accurate) repair decision. No Problems Found (NPF), Retest OK (RTOK), False Alarms, Cannot Duplicates (CND) and other diagnostic deficiencies can range from 30% to 60% of all repair actions. According to a 1995 survey run by the IEEE Reliability Society, the Air Transport Association (ATA) has determined that 4500 NPF events cost ATA member airlines $100 million annually. A U.S. Army study has shown that maintenance costs can be reduced by 25% if 70-80% of the items it had been repairing were to be discarded. Many of these situations can be overcome by investing in emerging technologies, such as Built-in (Self) Test (BIST) and expert diagnostic tools. The role of these tools is to minimize dependence on the skills, knowledge and experience of individuals, and thus overcome costs of inaccurate, inefficient, and incomplete diagnostics. Use of BIST can also directly reduce costs. This paper presents technical solutions and economic analyses showing to what extent such solutions provide a sufficient return on investment.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The TPS must be capable not only of accommodating UUTs that are BIT (and software) intensive, but also of integrating internal (BIT oriented) test features with external test capability from the earliest stages of prime system (UUT) design.
Abstract: The TPS (Test Program Set) was initially conceptualized and structured to address external UUT (unit under test) testing. As the TPS evolved, it began to incorporate some interaction with UUT BIT (built in test). In view of current trends in prime system technology, it is now becoming increasingly clear that the TPS must be capable not only of accommodating UUTs that are BIT (and software) intensive, but also of integrating internal (BIT oriented) test features with external (test system oriented) test capability from the earliest stages of prime system (UUT) design. That is, the internal and external elements of the TPS, as well as the quantitative metrics through which test performance is measured, must be consolidated not only from the technical standpoint but also from the standpoint of acquisition philosophy.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: Enhanced C-17 Globemaster III propulsion data reporting using personal computer relational database (RDB) software contributes to improved avionics BIT, and introduces many other base-level advantages for the maintenance technician.
Abstract: C-17 avionics built-in test (BIT) improvement is critical to mission readiness and capability. Enhanced C-17 Globemaster III propulsion data reporting using personal computer relational database (RDB) software contributes to improved avionics BIT, and introduces many other base-level advantages for the maintenance technician; a corresponding benefit is the transformation of raw recorded aircraft data into useful maintenance information.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The process outlined in this paper describes the system developed to meet the goals of the Next Generation Test Generator program, funded by the Office of Naval Research, which takes advantage of an unsupervised pattern classification algorithm (Adaptive Resonance Theory) and a Genetic Algorithm that is combined to form an optimizing control system.
Abstract: The process outlined in this paper describes the system developed to meet the goals of the Next Generation Test Generator program, funded by the Office of Naval Research. This system takes advantage of an unsupervised pattern classification algorithm (Adaptive Resonance Theory (ART)) and a Genetic Algorithm (GA) that are combined to form an optimizing control system. The GA generates a population of test patterns (individuals). Each individual is provided as a set of timed inputs to behavior based simulations representing good and faulty systems. The response of each model (good and faulty) is recombined in the form of an image matrix with each row representing a signature of each of the different circuits. FuzzyART (Fuzzy Logic Based ART) provides a method of image recognition, extracting those images that are distinctly different from any other. Each individual generated by the GA is provided as input to the list of models, then evaluated by FuzzyART, and a fitness representing the number of separate classes is formed. New test sequences evolve with increasing fault isolation and detection. The process is repeated until a maximum number of models have been identified and separated. A selective breeding algorithm was included to reduce the need for large populations, thus increasing the speed of convergence to the "best test". The process was demonstrated using a commercial simulator based on Verilog HDL with a simple master/slave flip-flop and a moderately complex digital circuit (real UUT).
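The core loop the abstract describes — a GA whose fitness is the number of distinct response classes the test pattern induces across good and faulty models — can be sketched compactly. The models, mutation scheme, and set-based class count below are simplified stand-ins (a set of distinct signatures replaces the FuzzyART clustering step), so this is an illustration of the idea, not the paper's system.

```python
import random
random.seed(1)

# Hypothetical stand-ins for good/faulty circuit simulations: each maps
# a 3-bit test pattern to a response signature.
def good(p):    return (p[0] ^ p[1], p[2])
def fault_a(p): return (0, p[2])             # output stuck at 0
def fault_b(p): return (p[0] ^ p[1], 0)      # second output stuck at 0
MODELS = [good, fault_a, fault_b]

def fitness(pattern):
    """Number of distinct response classes the pattern separates --
    a crude surrogate for the FuzzyART class count."""
    return len({m(pattern) for m in MODELS})

def mutate(p):
    i = random.randrange(len(p))
    return p[:i] + (1 - p[i],) + p[i + 1:]

# Minimal GA: keep the fittest patterns, mutate them into the next generation.
pop = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(8)]
for _ in range(20):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:4] + [mutate(p) for p in pop[:4]]

best = max(pop, key=fitness)
print(best, fitness(best))
```

A pattern with fitness 3 distinguishes the good circuit from both faults, i.e. it both detects and isolates; the real system extends this to sequences of timed inputs and many fault models.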

Proceedings Article•DOI•
R.M. Mahoney1•
22 Sep 1997
TL;DR: In this article, the authors present a test strategy for high-mix, low-volume manufacturing environments, which can offer competitive advantages in cost, quality, responsiveness and delivery performance.
Abstract: The foundation of what is known as agile competition is beginning to unfold. Rather than offering the customer a plethora of different options from which to choose, the customer works with the producer to arrive at solutions to the customer's specific problem. Information and services become a significant part of the product sold. Agile competition represents a significant departure from the lean manufacturing environments that exist today. The economics of production will no longer be defined in terms of being the low cost producer. What has emerged is an inescapable evolution to high-mix, low-volume manufacturing. An effective and efficient test strategy is critical to profitably meeting the responsiveness and delivery performance challenges of a dynamic, agile, high-mix, low-volume electronics manufacturing environment. Test strategy objectives must align in an integrated way with the overall objectives established at the highest level of a manufacturing organization. In this regard, a manufacturing operations model is a competitive imperative. A manufacturing operations model will greatly assist management in gaining a better understanding of problems and serve as a focal point for systematic discussion of objectives and alternatives. Fundamental to the establishment of a sound manufacturing test strategy is the measurement and understanding of the relevant dimensions of flexibility and complexity. Reductions in complexity coupled with increased flexibility (e.g., mix, volume) can offer competitive advantages in cost, quality, responsiveness and delivery performance for a high-mix, low-volume electronics manufacturer. Various test strategy choices and the relevant issues pertaining to their use will be presented for the different types of manufacturing environments that can be encountered.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The Fault Modeling method is presented, which has been field-proven over the past decade as flexible enough to meet the challenges of different lifecycle tasks, as well as lending itself to learning-self-improvement over time, even when starting with no knowledge.
Abstract: This paper focuses on the use of commercial off-the-shelf (COTS) expert systems in integrated diagnostics (ID) for military applications. Expert systems have developed and matured over the past several years to become a viable tool capable of functioning as a procedural tool for identifying diagnostic requirements, analyzing test system capabilities, and providing seamless diagnostic data transfer from requirement to analysis to operations. The most important differentiating characteristics of expert systems are their modeling methods and their architecture. The modeling method drastically affects the time required to build a model, and the architecture must be open enough to integrate with the many tools used in engineering, deployment, and maintenance of the supported equipment throughout its life cycle. In this article, we present the Fault Modeling method, which has been field-proven over the past decade as flexible enough to meet the challenges of different lifecycle tasks, as well as lending itself to learning/self-improvement over time, even when starting with no knowledge. Expert systems using this model feature rapid deployment, and are able to cover the entire ID process including: capture of existing data, analysis of fault detection and isolation capabilities of the unit under test, and a means to assess diagnostic system designs early in the development phase. The systems integrate easily with simulators, automatic test equipment (ATE), and portable maintenance aid (PMA) equipment.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: A system that automatically generates tests for an analog Unit Under Test (UUT) in learning mode, and then deploys the system for fault detection and isolation in the production mode is described.
Abstract: This paper describes a system that automatically generates tests for an analog Unit Under Test (UUT) in learning mode, and then deploys the system for fault detection and isolation in the production mode. The NGTG consists of the following main components: a) a minimized input test pattern generator, b) a UUT simulator, and c) an evaluation system. The NGTG is a process that utilizes a Fuzzy ARTMAP neural network for fault diagnostics and detection and a genetic algorithm for test generation and fault coverage optimization.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: Definitive standards for test definition will support not only demonstrable TPS quality but also TPS documentation that is both descriptive and indicative of the levels of quality that are being achieved.
Abstract: One of the chronic problems associated with Test Program Set (TPS) development has been the lack of unambiguous standards for test definition. Elements of the definition include, but are not necessarily limited to, the questions of what constitutes a test, what aspect or failure mode of the UUT is being addressed by the test, why the test is being performed at that particular point in the overall test flow, and how the test relates to other tests within the main performance verification path or any of the diagnostic branches. Moreover, test definition standards are very closely related to the way in which unambiguous fault detection and fault isolation metrics are implemented, as well as the way in which overall (TPS) test strategy is documented. Definitive standards for test definition will support not only demonstrable TPS quality but also TPS documentation that is both descriptive and indicative of the levels of quality that are being achieved.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this paper, the authors provide guidelines to assist ETM/IETM developers in achieving the most cost-effective, value-added, and user-friendly Electronic Technical Manuals (ETMs) and Interactive Electronic Technical Manuals (IETMs), along with conversion procedures developed toward those goals.
Abstract: The Department of Defense is in the process of digitizing paper technical manuals (TMs). This consists, mainly, of direct copy to disk, with only a few being upgraded to Electronic Technical Manual (ETM) or Interactive Electronic Technical Manual (IETM). However, there is no clear-cut methodology for cost-effectively converting paper manuals to various levels of upgrade. The Advanced Technology Office of the US Army Test, Measurement, and Diagnostics Equipment Activity, US Army Missile Command, has investigated the conversion technologies/tools and developed conversion procedures that can achieve these goals: the most cost effective, value-added, and user-friendly ETM/IETM. The authors provide some guidelines to assist ETM/IETM developers to achieve these goals.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The model development and simulation approach emphasizes the use of commercial tools such as OrCAD®'s Capture™, Simucad's SILOS® III and Intusoft™'s ICAP/4.
Abstract: Next Generation Test Generator (NGTG) processes require the existence of circuit models, both good and faulty, as well as the simulation of these models in order to develop tests in an effective manner. This paper addresses the issues associated with model development and simulation within an NGTG framework. The emphasis of the model development and simulation methods is on the use of commercial tools. If test automation processes are to be successfully used throughout the design and test community, the tools that designers are familiar with and use everyday must be incorporated into the NGTG processes. This paper describes the overall system briefly and the functional elements related to model development in detail. The model development portion of the system is composed of three basic functional elements: Netlist Generator (NG), Component Model Library (CML) and Automatic Model Builder (AMB). The simulation of the models is an integral part of the Automatic Test Generation (ATG) system and is presented in detail. A circuit model development process is described which allows for the creation of the good circuit model as well as the fault models necessary as part of the ATG systems. The model development and simulation approach emphasizes the use of commercial tools such as OrCAD®'s Capture™, Simucad's SILOS® III and Intusoft™'s ICAP/4.

Proceedings Article•DOI•
T. Jurcak1•
22 Sep 1997
TL;DR: The ABBET framework supports a highly abstract test language that has allowed us to create generalized test methods that have greatly eased the authors' effort to reuse test algorithms among their programs.
Abstract: We are now reaping the benefits of an Object-Oriented software implementation of an ABBET (IEEE-1226) Signal-Oriented Test Framework. In our factory, we have created test software that is devoid of any knowledge of the instruments in our test set. Because of this, we can now choose whether we want a voltage measurement to be made by IEEE-488 Digital Multimeter (DMM) or VXI Oscilloscope instruments or even a hand-held DMM just prior to run-time. When an instrument fails or becomes obsolete, we can execute our test software on different testers or with different instrument types without impacting the test software. We can change the instruments in our Multiple Missile Factory standard test set without affecting the several configurations of test software that are controlled by separate organizations. One test suite, developed using the framework on a test set comprised of IEEE-488 instruments, was effortlessly moved to the VXI-based standard test set. In addition to the hardware independence, the framework supports a highly abstract test language that has allowed us to create generalized test methods that have greatly eased our effort to reuse test algorithms among our programs.
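The instrument independence described above rests on a familiar object-oriented pattern: test code requests a capability ("measure a DC voltage") rather than a specific instrument, and the binding to hardware is deferred. The class and method names below are assumptions for illustration only, not the IEEE 1226 signal-oriented API.

```python
from abc import ABC, abstractmethod

class VoltageMeter(ABC):
    """Abstract capability the test program depends on."""
    @abstractmethod
    def measure_dc_volts(self) -> float: ...

class GpibDmm(VoltageMeter):
    def measure_dc_volts(self) -> float:
        return 5.01   # a real driver would issue IEEE-488 commands here

class VxiScope(VoltageMeter):
    def measure_dc_volts(self) -> float:
        return 4.99   # a real driver would average a digitized waveform

def check_supply(meter: VoltageMeter, nominal=5.0, tol=0.05) -> bool:
    """The test program: an instrument-independent pass/fail decision."""
    return abs(meter.measure_dc_volts() - nominal) <= tol
```

Because `check_supply` knows nothing about DMMs or oscilloscopes, swapping the IEEE-488 tester for the VXI-based one changes only which `VoltageMeter` subclass is instantiated just prior to run time.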

Proceedings Article•DOI•
22 Sep 1997
TL;DR: The application of temporal logic and STeP to delay fault testability modeling and analysis is presented.
Abstract: To ensure the quality of manufactured integrated circuits, it is important that designs be delay fault testable. A formal verification technique such as temporal logic can help avoid the large cost of dynamic simulation. Temporal logic is a formalism for evaluating the temporal behavior of systems. STeP, the Stanford Temporal Prover, is a system developed at Stanford University to support computer-aided formal verification of concurrent and reactive systems based on temporal logic specifications. The application of temporal logic and STeP to delay fault testability modeling and analysis is presented.
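To give a flavor of the kind of property temporal logic expresses, the toy checker below evaluates a response formula of the shape G(trigger -> F response) over a finite trace: every state where the trigger holds must be followed, at or after that state, by a state where the response holds. This is an assumption-laden sketch over explicit traces, far simpler than STeP's deductive verification of temporal specifications.

```python
def always_eventually(trace, trigger, response):
    """True iff every state satisfying `trigger` is followed
    (at or after that state) by one satisfying `response`."""
    for i, state in enumerate(trace):
        if trigger(state) and not any(response(s) for s in trace[i:]):
            return False
    return True

# A three-state trace of a signal that rises and later settles
trace = [{"rise": True,  "stable": False},
         {"rise": False, "stable": False},
         {"rise": False, "stable": True}]
ok = always_eventually(trace, lambda s: s["rise"], lambda s: s["stable"])
```

A delay fault testability condition can be phrased in this style: every launching transition must eventually propagate to an observable output within the allotted clock period.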

Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this article, the authors used a questionnaire distributed among CASS TPS developers and managers aimed at identifying the various methods used for training and the advantages or disadvantages of each method.
Abstract: The lack of formal test education has brought about a need to examine how test engineers developing test program sets (TPSs) are able to perform their tasks. A test engineer's skill in developing TPSs usually comes from one or more of three sources: training on using the ATE; continuing education courses, offered at some universities, on-site, and at conferences; and on-the-job training. The effectiveness of each of these approaches is in question. While ATE training is necessary, it is usually not intended to teach TPS development. Continuing education is only sparsely available and, because it is taught by individual consultants, is not standardized. On-the-job training is the least efficient and probably the least cost-effective, but appears to be the most common. This paper focuses on TPS development for the US Navy's CASS ATE, but the issues may apply throughout the test community. The authors distributed a questionnaire among CASS TPS developers and managers to identify the various training methods used and the advantages and disadvantages of each. The questionnaire also sought to identify problem areas and to find solutions that will enable TPS developers to create better TPSs in less time. One goal of this effort was to identify the curriculum that will best prepare TPS developers for their jobs. The paper also explains how the expected savings will outweigh the training costs.

Proceedings Article•DOI•
22 Sep 1997
TL;DR: In this paper, the authors describe the fault definition, simulation, test strategy development and validation activities that were accomplished during beta testing of a new CAE tool, Test Designer, addressing the issues of simulation convergence, component fault models, circuit model implementation, simulation run times and test strategy accuracy.
Abstract: This paper describes the fault definition, simulation, test strategy development and validation activities that were accomplished during beta testing of a new CAE tool, Test Designer. It addresses the issues of simulation convergence, component fault models, circuit model implementation, simulation run times and test strategy accuracy. To ensure a realistic test of Test Designer's capabilities, diagnostics were developed for a moderately complex analog and mixed signal UUT which was selected from the Navy's list of CASS TPS offload candidates. These diagnostics were evaluated on the CASS to measure the diagnostic sequence accuracy and to assess the ability of Test Designer to predict the nominal measurement values and fault detection characteristics of each test.
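Test strategy development of the kind evaluated here typically starts from a fault dictionary (which faults each test detects) and orders the tests to maximize coverage. One common heuristic, sketched below with an invented fault dictionary, is to greedily pick the test that detects the most not-yet-covered faults; this is an illustrative technique, not necessarily the algorithm Test Designer uses.

```python
def greedy_test_sequence(fault_dict):
    """Order tests so each one covers the most faults still undetected."""
    remaining = set().union(*fault_dict.values())
    order = []
    while remaining:
        best = max(fault_dict, key=lambda t: len(fault_dict[t] & remaining))
        covered = fault_dict[best] & remaining
        if not covered:          # no test detects anything further
            break
        order.append(best)
        remaining -= covered
    return order, remaining

# Invented fault dictionary: test name -> set of faults it detects
tests = {
    "T1": {"R1_open", "C1_short"},
    "T2": {"R1_open"},
    "T3": {"Q1_open", "C1_short"},
}
order, undetected = greedy_test_sequence(tests)
```

T2 is never scheduled because T1 already detects its only fault, which is exactly the kind of redundancy a diagnostic sequence evaluation on the tester would confirm.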