
Showing papers presented at "AUTOTESTCON in 2004"


Journal Article•DOI•
20 Sep 2004
TL;DR: A method of encapsulating IP datagrams within MIL-STD-1553B data messages is implemented that allows for transparent use of Internet Protocol (IP) APIs at the application level, and allows legacy 1553 messages to take a higher transmission priority over IP traffic.
Abstract: Over the past several decades, the MIL-STD-1553 networking technology has found use in a number of military and aerospace platforms including applications on aircraft, ships, tanks, missiles, satellites, and even the International Space Station. In developing software applications for these platforms, the use of modern, open networking standards such as TCP/IP is often a preferable solution. The Internet Protocol (IP) provides communications routing, and the Transmission Control Protocol (TCP) provides reliable delivery to the application level. Furthermore, higher-level protocols such as HTTP, FTP, etc can be utilized in a TCP/IP environment. Though these open communications standards are preferable for many situations, the MIL-STD-1553B standard does not immediately lend itself to TCP/IP communications. One of the reasons for this is the fundamental difference between the MIL-STD-1553B networking standard, which relies on a bus controller to control communications, and other data link layer networking protocols such as IEEE 802.3 (Ethernet) which are CSMA (Carrier Sense Multiple Access) networks, and are thus decentralized. Despite differences in MIL-STD-1553B networking and more traditional data link layer networking protocols, there is nothing fundamentally preventing IP communication over a 1553 network. We have implemented a method of encapsulating IP datagrams within MIL-STD-1553B data messages that allows for transparent use of Internet Protocol (IP) APIs at the application level. Our system allows legacy 1553 messages to also be transported over the network, and even allows legacy messages to take a higher transmission priority over IP traffic. We analyze the advantages of such a system and the performance level we have achieved with our implementation of this concept.

62 citations
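The fragmentation idea behind the encapsulation method described above can be sketched in a few lines. A MIL-STD-1553B data message carries at most 32 16-bit data words (64 bytes), so an IP datagram must be split across messages; the 2-byte fragment header used below is an invented illustration, not the authors' actual wire format.

```python
# Hypothetical sketch of IP-over-1553 encapsulation: split an IP
# datagram across 1553 data messages. The 2-byte header (fragment
# index + last-fragment flag) is an assumption for illustration.

MAX_1553_PAYLOAD = 64  # bytes: 32 data words x 16 bits

def encapsulate(ip_datagram: bytes) -> list[bytes]:
    """Split a datagram into 1553-sized fragments, each prefixed with
    a 2-byte header: fragment index and a last-fragment flag."""
    chunk = MAX_1553_PAYLOAD - 2  # leave room for the header
    pieces = [ip_datagram[i:i + chunk]
              for i in range(0, len(ip_datagram), chunk)] or [b""]
    frags = []
    for idx, piece in enumerate(pieces):
        last = 0x80 if idx == len(pieces) - 1 else 0x00
        frags.append(bytes([idx & 0x7F, last]) + piece)
    return frags

def reassemble(fragments: list[bytes]) -> bytes:
    """Inverse of encapsulate(): order by index, strip headers, join."""
    return b"".join(f[2:] for f in sorted(fragments, key=lambda f: f[0]))

datagram = bytes(range(200))  # a dummy 200-byte datagram
frags = encapsulate(datagram)
assert all(len(f) <= MAX_1553_PAYLOAD for f in frags)
assert reassemble(frags) == datagram
```

With a scheme like this, the bus controller can schedule these messages alongside legacy 1553 traffic, which is how legacy messages can be given priority over IP traffic.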


Proceedings Article•DOI•
20 Sep 2004
TL;DR: In this article, the authors presented a methodology to analyze raw vibration data provided to Georgia Tech from the carrier testing, which consists of selection and extraction of appropriate features from vibration data indicative of the fault condition and the construction of an optimum feature vector. Test cell data sampled at 100 kHz for torque cases ranging from 20% to 100% were processed.
Abstract: Failure of flight-critical components on board a helicopter could cause an accident resulting in loss of life and/or the aircraft. It is imperative, therefore, that precursors of such failure modes be monitored continuously and remedial action be taken as soon as feasible in order to avoid catastrophic events. A crack in the planetary carrier of a UH-60 Blackhawk main transmission has recently received extensive evaluation through analysis of vibration data (J. Keller et al., 2003). The rotorcraft main transmission includes a planetary gear train comprising an inner "sun" gear surrounded by five rotating "planets". Torque is transmitted through the sun gear to the planets, which ride on a planetary carrier. The planetary carrier plate, in turn, transmits torque to the main rotor shaft and blades. The US Army Aviation Engineering Directorate conducted a series of experimental tests with faulted and unfaulted carrier plates on a test cell and also on-aircraft to determine if a fault (a plate crack) can be detected via vibration monitoring. This paper introduces a methodology to analyze raw vibration data provided to Georgia Tech from the carrier testing. The analysis approach consists of selection and extraction of appropriate features from vibration data indicative of the fault condition and the construction of an optimum feature vector. Test cell data sampled at 100 kHz for torque cases ranging from 20% to 100% were processed. On-aircraft data covered a limited torque range up to 30% due to safety considerations. Both raw and time-synchronous data were considered, and features in the time, frequency and wavelet domains were investigated. The analysis results indicate that a selected subset of features clearly distinguishes between the faulted and unfaulted cases.

51 citations
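Two of the classic time-domain features used in this kind of vibration analysis are RMS energy and kurtosis; impulsive gear-fault signatures tend to raise kurtosis well above the value of a smooth healthy signal. The sketch below is illustrative only; the paper's actual feature set, sampling, and thresholds are not reproduced here.

```python
# Illustrative time-domain vibration features (RMS, kurtosis) of the
# kind selected in fault-feature extraction. The "healthy" and
# "faulted" signals are synthetic stand-ins, not test-cell data.
import math

def rms(signal):
    """Root-mean-square amplitude of the signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def kurtosis(signal):
    """Sample kurtosis (m4 / var^2); periodic impacts from a cracked
    or chipped gear drive it above the smooth-signal baseline."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    if var == 0:
        return 0.0
    m4 = sum((x - mean) ** 4 for x in signal) / n
    return m4 / var ** 2

# A smooth sine ("healthy") vs. the same sine with periodic impacts.
healthy = [math.sin(2 * math.pi * t / 50) for t in range(1000)]
faulted = [x + (5.0 if t % 100 == 0 else 0.0)
           for t, x in enumerate(healthy)]
assert kurtosis(faulted) > kurtosis(healthy)
```

A feature vector is then assembled from such scalars (plus frequency- and wavelet-domain features) and fed to a classifier that separates faulted from unfaulted cases.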


Proceedings Article•DOI•
20 Sep 2004
TL;DR: This paper examines issues associated with the application of HDM to hierarchical systems, including: the types of diagnostic inference used to interpret the relationships between functions and failure modes, the correlation of functional and failure-based reliability data, and diagnostic assessment using hybrid diagnostic models.
Abstract: Hybrid diagnostic modeling (HDM) is an extension of diagnostic dependency modeling that allows the inter-relationships between a system or device's tests, functions and failure modes to be captured in a single representation (earlier dependency modeling approaches could represent the relationships between tests and either functions or failure modes). With hybrid diagnostic modeling, the same model can be used for early evaluations of a design's diagnostic capability, creation of hierarchical FMECAs, prediction of diagnostic performance, and generation of actual runtime diagnostics. This paper examines issues associated with the application of HDM to hierarchical systems, including: the types of diagnostic inference used to interpret the relationships between functions and failure modes, the correlation of functional and failure-based reliability data, and diagnostic assessment using hybrid diagnostic models.

26 citations
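The dependency-modeling foundation that HDM extends can be illustrated with a tiny dependency matrix: each failure mode maps to the set of tests it causes to fail, and inference matches observed outcomes against those signatures. The system, tests, and single-fault/perfect-test assumptions below are hypothetical simplifications, far short of a real hybrid model.

```python
# Minimal dependency-matrix inference sketch (hypothetical system).
# Each failure mode is associated with the set of tests it fails.
DEPENDENCY = {
    "power_supply": {"t_voltage", "t_comms", "t_signal"},
    "transmitter":  {"t_comms", "t_signal"},
    "antenna":      {"t_signal"},
}

def candidates(failed_tests: set[str]) -> list[str]:
    """Failure modes whose signature matches the observed failed tests
    exactly (single-fault assumption, perfect tests)."""
    return [fm for fm, sig in DEPENDENCY.items() if sig == failed_tests]

assert candidates({"t_signal"}) == ["antenna"]
assert candidates({"t_comms", "t_signal"}) == ["transmitter"]
```

A hybrid model enriches this picture by carrying tests, functions, and failure modes in one representation, so the same matrix-style data can also drive FMECA generation and diagnostic assessment.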


Proceedings Article•DOI•
20 Sep 2004
TL;DR: The different modular elements of the ground and flight test computers, as well as the hardware and software tuning and performance analysis tools, which have been developed around these computers are described, showing that AFDX tools are a real concept, which can be reused for other programs.
Abstract: Commercial aircraft are in the process of defining new standards with the introduction of the Airbus A380, which involves the latest digital information techniques, such as the AFDX onboard real-time network. The AFDX standard, a major innovation in aircraft technology first used on the Airbus A380, introduces telecom Ethernet-based technology, as well as a switched connection topology, rather than point-to-point links or buses. Airbus and CES partnered to develop a general-purpose building block, which allows the simulation, test or connection of any AFDX-connected equipment. It has been integrated into different packages, ranging from the small equipment tester, through the complete aircraft integration test bench, up to the full flight test computer; hence the name "AFDX General-Purpose Test Platform". In flight test applications, it provides the interface between the AFDX avionics world and commercial Ethernet switches, through a router with multiple AFDX inputs and twin Ethernet outputs. Redundancy and precise time control of the data transmission have been incorporated in the specification. Also of special interest is a very advanced source-synchronized time-stamping system able to guarantee a perfect time alignment of all data directly at entry into the flight test computer. This paper describes the different modular elements of the ground and flight test computers, as well as the hardware and software tuning and performance analysis tools that have been developed around these computers. All of these elements are now in operation and have shown that the AFDX tools are a proven concept that can be reused for other programs.

24 citations


Proceedings Article•DOI•
20 Sep 2004
TL;DR: In this paper, a data-driven approach for real-time fault detection and isolation (FDI) in the chillers in HVAC systems is proposed to diagnose a number of faults belonging to both gradual degradation and abrupt fault classes.
Abstract: Failures in HVAC systems occur frequently and lead to loss of comfort, degradation in operational efficiency, and increased wear and tear on the system equipment. Faulty HVAC systems seriously affect the energy efficiency of commercial buildings; they are oftentimes the causes for exceeding the allocated demand margins, resulting in steep monetary penalties. A real-time fault detection and isolation (FDI) system can ensure uninterrupted and energy-efficient operation of the HVAC systems, and thus enhance the quality of service in modern buildings. In this paper, we propose a data-driven approach for real-time fault detection and isolation (FDI) in the chillers in HVAC systems. Our techniques diagnose a number of faults belonging to both gradual degradation and abrupt fault classes.

16 citations
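One simple data-driven FDI pattern for distinguishing abrupt faults from noise is residual thresholding with a persistence check: declare a fault only when the deviation from nominal behavior persists over several samples. The chiller variable, values, and thresholds below are invented for illustration and are not the paper's method.

```python
# Toy residual-threshold fault detector (illustrative values only).
def detect(measurements, nominal, threshold=2.0, persistence=3):
    """Return the sample index at which a fault is declared (residual
    exceeds threshold for `persistence` consecutive samples), or None."""
    run = 0
    for i, (m, n) in enumerate(zip(measurements, nominal)):
        run = run + 1 if abs(m - n) > threshold else 0
        if run >= persistence:
            return i
    return None

nominal = [6.5] * 10  # hypothetical nominal chilled-water temp (degC)
healthy = [6.4, 6.6, 6.5, 6.7, 6.3, 6.5, 6.6, 6.4, 6.5, 6.6]
faulted = [6.5, 6.5, 6.5, 9.1, 9.4, 9.8, 10.2, 10.5, 10.9, 11.2]
assert detect(healthy, nominal) is None
assert detect(faulted, nominal) == 5
```

Gradual-degradation faults need a different treatment (e.g. trending a slowly drifting residual), which is one reason the paper addresses the two fault classes separately.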


Proceedings Article•DOI•
20 Sep 2004
TL;DR: The smart TPS toolset described is directly applicable to any military or commercial testing system that employs multiple levels of maintenance.
Abstract: Smart test program set (TPS) is a research and development project implementing a key element of a network-centric support concept. Network-centric support utilizes information technology to provide key personnel with the network connectivity and processed information necessary to make decisions that accelerate support, increase quality and lower maintenance costs. This paper outlines the smart TPS high-level architecture as well as implementation and demonstration results based on the current Navy implementation. The smart TPS toolset described is directly applicable to any military or commercial testing system that employs multiple levels of maintenance.

14 citations


Proceedings Article•DOI•
20 Sep 2004
TL;DR: This paper decomposes the dMFD problem into a series of decoupled sub-problems, and develops a successive Lagrangian relaxation algorithm (SLRA) with backtracking to obtain a near-optimal solution for the problem.
Abstract: Fault diagnosis is the process of identifying the failure sources of a malfunctioning system by observing their effects at various test points. It has a number of applications in engineering and medicine. In this paper, we present a near-optimal algorithm for dynamic multiple fault diagnosis in complex systems. This problem involves on-board diagnosis of the most likely set of faults and their time-evolution based on blocks of unreliable test outcomes over time. The dynamic multiple fault diagnosis (dMFD) problem is an intractable NP-hard combinatorial optimization problem. Consequently, we decompose the dMFD problem into a series of decoupled sub-problems, and develop a successive Lagrangian relaxation algorithm (SLRA) with backtracking to obtain a near-optimal solution for the problem. SLRA solves the sub-problems at each sample point by a Lagrangian relaxation method, and shares Lagrange multipliers at successive time points to speed up convergence. In addition, we apply a backtracking technique to further maximize the likelihood of obtaining the most likely evolution of failure sources and to minimize the effects of imperfect tests.

12 citations


Proceedings Article•DOI•
20 Sep 2004
TL;DR: This paper provides an overview of a fibre channel avionics network and protocols being used for avionics, and discusses a practical implementation of avionics level testing and testing challenges associated with these applications.
Abstract: Fibre channel is being implemented as an avionics communication architecture for a variety of new military aircraft and upgrades to existing aircraft. The fibre channel standard defines various network topologies and multiple data protocols. Some of the topologies and protocols (ASM, 1553, RDMA) are suited for avionics applications, where the movement of data between devices must take place in a deterministic fashion and needs to be delivered very reliably. All aircraft flight hardware needs to be tested to be sure that it communicates information properly in the fibre channel network. The airframe manufacturer needs to test the integrated network to verify that all flight hardware is communicating properly. Continuous maintenance testing is required to ensure that all communication is deterministic and reliable. This paper provides an overview of a fibre channel avionics network and protocols being used for avionics. The paper also discusses a practical implementation of avionics-level testing and testing challenges associated with these applications.

12 citations


Journal Article•DOI•
20 Sep 2004
TL;DR: This paper explores the process of implementing and integrating the system driver and instrument drivers for a PXI-based test station for the TOW2 optical sight sensor.
Abstract: New software technologies, such as VISA and IVI, continue to move the industry toward greater standardization. The benefit to the integrator is reduced cost through reuse of the same hardware and software. The benefit to the customer end-user is lower cost through reduced modification and support over the test station's life-cycle. However, while we position ourselves for the future with PXI and these software technologies, we must still provide support for VXI, GPIB, and instrument drivers that use current software technologies. Using a number of additional tools, such as National Instruments' Measurement and Automation Explorer and Geotest's ATEasy, we can have the power of these tools today while waiting for wider acceptance and support of the newer VISA and IVI technologies. We are just now seeing the development of IVI drivers, and the ink is still wet on the VISA specification for PXI. ATEasy provided the structure necessary to use these technologies alongside the current technology. This paper explores the process of implementing and integrating the system driver and instrument drivers for a PXI-based test station for the TOW2 optical sight sensor.

11 citations


Proceedings Article•DOI•
20 Sep 2004
TL;DR: An at-wing modular application for portable maintenance aids is described, building upon open architecture designs and utilizing reusable, modular components to enhance diagnosis and reduce ambiguity.
Abstract: Current avionics maintenance and repair is a complex process that presents many opportunities for improved diagnostic methods and better capture and retention of on-board and at-wing data to be incorporated into the maintenance and logistics chain. High rates of built-in-test (BIT) false alarms, cannot-duplicate (CND), and no-fault-found (NFF) events indicate the need for improvements in the maintenance process. Capture and preservation of fault and maintenance data with situational context can support off-board repair processes and provide opportunities for data mining to identify rogue units and emerging or otherwise undetected patterns. Previous papers by the authors have described open-systems architectures and innovative reasoning processes to capitalize on evidence sources and decrease diagnostic ambiguity while preserving information continuity through the logistics chain. In this paper, the authors describe an at-wing modular application for portable maintenance aids, building upon open architecture designs and utilizing reusable, modular components to enhance diagnosis and reduce ambiguity. ReasonPro - at Wing/spl trade/ presents a direct opportunity for increased diagnostic accuracy and ambiguity reduction through a better understanding of system dependencies and interactions. The technology is being embedded into a personal data assistant (PDA) to facilitate multiple elements of the maintenance process. ReasonPro - at Wing/spl trade/ draws on onboard information sources and automated reasoning techniques that extend BIT with environmental data and data maturation processing through the support of automated data warehousing and mining.

9 citations


Proceedings Article•DOI•
01 Jan 2004
TL;DR: An intelligent maintenance system for supporting crew operations (SCOPE) that supports the astronauts onboard the ISS and helps them to maximize the availability of complex payload systems.
Abstract: This paper describes an intelligent maintenance system for supporting crew operations (SCOPE). SCOPE supports the astronauts onboard the ISS and helps them to maximize the availability of complex payload systems. SCOPE detects system failures, guides the isolation of the root causes of failure, and presents the relevant repair procedures in textual and graphical formats. The diagnosis process is a joint astronaut-SCOPE activity: when needed, the system asks the astronaut to perform additional measurements in order to help resolve uncertainties, ambiguities or conflicts in the current payload status model. Usability tests show good user performance and satisfaction. The current SCOPE prototype has been applied to a portable payload for medical experimentation.

Journal Article•DOI•
G. Drenkow1•
20 Sep 2004
TL;DR: The strengths and weaknesses of the existing test system architectures including rack and stack systems with GPIB instruments and modular systems like VXI and PXI are shown and an emerging new architecture; LAN-based test systems are provided.
Abstract: The expanding number of test system architectural choices has caused confusion in the test engineering community. This paper will show the strengths and weaknesses of the existing test system architectures, including rack-and-stack systems with GPIB instruments and modular systems like VXI and PXI. It will also provide a glimpse into an emerging new architecture: LAN-based test systems. The paper will review key concerns such as cost, channel count, footprint, I/O speed, ease of integration, and flexibility. The objective of the paper is to give engineers insight into the most effective test systems for their future applications.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: Diagnostic decision trees are generated for systems modeled by dynamic fault trees in order to perform reliability analysis, and an approximate diagnostic importance factor measure for components is derived from the Markov chain solution of the dynamic fault tree.
Abstract: In this paper we generate diagnostic decision trees for systems that are modeled by dynamic fault trees in order to perform reliability analysis. We base the proposed testing sequence within the diagnostic decision trees on diagnostic importance factors. We derive an approximate diagnostic importance factor measure for components from the Markov chain solution of the dynamic fault tree.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: The LM STS technical approach and practical experience in applying ATML into an existing system software architecture are described and how ATE test data results are collected and analyzed are explained to deliver improved diagnostics and reduced test time.
Abstract: The purpose of the Automated Test Markup Language (ATML) consortium is to define a collection of extensible Markup Language (XML) schemas that will allow the exchange of automated test equipment (ATE) and test information between compliant test environments. This paper describes the LM STS technical approach and practical experience in applying ATML into an existing system software architecture. The current system utilizes the BAE Systems TPS Wizard/spl trade/ product to generate test sequences that are executed using the National Instruments TestStand/spl trade/ test executive. The TPS Wizard/spl trade/ will have the ability to consume an XML representation of a test program that can then be used to generate a hierarchy of TestStand/spl trade/ compliant sequence files. In addition, this paper addresses the LM STS approach to data mining schemas and diagnostic ontology. It explains how ATE test data results, based on the Test Results Markup Language schema (TRML), are collected and analyzed to deliver improved diagnostics and reduced test time. The goal of ATML is to obtain interoperability between different test systems. The ATML group is developing standards for the interfaces to ATE for common software components. Utilizing ATML will result in test data being shared between all levels of maintenance (O, I, D, OEM).
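The data-exchange idea is easy to picture with a much-simplified XML result document. The element and attribute names below are hypothetical illustrations and do not follow the actual ATML/TRML schemas; they only show how XML-encoded test results become machine-readable across tools.

```python
# Parsing a simplified XML test-result document (hypothetical schema,
# not real TRML) to pull out failed tests for diagnostic analysis.
import xml.etree.ElementTree as ET

doc = """<TestResults uut="LRU-42">
  <Test name="supply_voltage" outcome="Passed" value="5.02"/>
  <Test name="rx_sensitivity" outcome="Failed" value="-91.0"/>
</TestResults>"""

root = ET.fromstring(doc)
failed = [t.get("name") for t in root.iter("Test")
          if t.get("outcome") == "Failed"]
assert failed == ["rx_sensitivity"]
```

Because the format is schema-validated XML rather than a tool-specific binary, any compliant test environment or data-mining stage can consume the same results, which is the interoperability ATML is after.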

Proceedings Article•DOI•
20 Sep 2004
TL;DR: The bond graph modeling approach is discussed and its application to aircraft fuel systems and reconfigurable control that has demonstrated such capabilities as fault diagnosis, isolation, and system reconfiguration as well as system prognostics are presented.
Abstract: Repairable, complex systems, whose components tend to degrade with use, require skilled professionals, supported by digital processing and decision aids to monitor and manage system maintenance. The complexity of present day aircraft systems and the increased demands on their reliability motivate the need for more capable diagnostic and control systems that not only detect component/system degradation but can estimate the capabilities of the degraded system and adapt system controls to maximize the overall performance. This paper discusses the bond graph modeling approach and its application to aircraft fuel systems and reconfigurable control that has demonstrated such capabilities. Our approach enables fault diagnosis, isolation, and system reconfiguration as well as system prognostics. The system described has the ability to dynamically update the system or component model and continue to track the system characteristics, providing important feedback information to reconfigurable control processes, and predictive performance estimators. Example aircraft fuel systems that were analyzed and evaluated with our technique are presented.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: In this paper, the authors provide a brief overview of the characteristics of RF/Microwave frequency Translation Devices and associated Frequency Synthesis technologies which are commonly employed to implement their functionality, and address the importance for such frequency translation devices to embrace the emerging digital modulation paradigm in order to satisfy both current and future SI user needs in support of the Defense, Signal Intelligence, and Telecom communities.
Abstract: This paper provides a brief overview of the characteristics of RF/microwave frequency translation devices and the associated frequency synthesis technologies commonly employed to implement their functionality. The paper then briefly reviews the concept of synthetic instruments (SI) and the role of the up converter in the context of the SI paradigm. The authors then characterise and compare traditional approaches to RF/MW stimulus generation and modulation vs. a modern-day up converter/frequency synthesizer architecture employing a versatile modulation capability, and provide insight on the need for a new breed of frequency translation device: the Synthesized Up Converter (SUC). The authors also address the importance for such frequency translation devices to embrace the emerging digital modulation paradigm in order to satisfy both current and future SI user needs in support of the defense, signal intelligence, and telecom communities. The paper then introduces the reader to some critical Synthesized Up Converter functions and specifications that should be considered when satisfying a broad array of RF/microwave CW and modulation user needs. The paper concludes with a summary statement by the authors about this critical synthetic instrument technology.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: A high-level concurrent design error simulator that can handle various design error/fault models is presented and is able to detect all detectable and modeled design errors/faults for a given test sequence and was able to reveal valuable information about the behavior of erroneous designs.
Abstract: A high-level concurrent design error simulator that can handle various design error/fault models is presented. The simulator is a vital building block of a new promising method of high-level testing and design validation that aims at explicit design error/fault modeling, design error simulation, and model-directed test pattern generation. We first describe how signals are represented in our concurrent fault simulation and the method of performing operations on these signals. We then describe how to handle the challenges in executing conditional statements when the signals used by the statements are augmented by an error/fault list. We further describe the method in which the error models are embedded into the simulator such that the result of a concurrent simulation matches that of a sequence of HDL simulations with the set of errors/faults inserted manually one by one. We finally demonstrate the application of our concurrent design error simulator on a typical Motorola microprocessor. Our simulator was able to detect all detectable and modeled design errors/faults for a given test sequence and was able to reveal valuable information about the behavior of erroneous designs.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: The overall function of integrated diagnostic, prognostic and autonomic logistics systems depends on a consistent definition of anomalies, faults, failure modes and performance failures as process drivers within the context of a concurrent intra/extra-vehicular design process as mentioned in this paper.
Abstract: The overall function of integrated diagnostic, prognostic and autonomic logistics systems depends on a consistent definition of anomalies, faults, failure modes and performance failures as process drivers. Within the context of a concurrent intra/extra-vehicular design process, these drivers plus concomitant data can be used as feed-forward to autonomic logistics, and as the basis for diagnostic knowledge feedback to the vehicle.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: This paper focus on software tools for C compilers since C is the most error prone language in use today and the McCabe cyclomatic complexity metric and the Halstead complexity measures are just two of the ways to measure "software quality".
Abstract: Automatic test equipment (ATE) software is often written by test equipment engineers without professional software training. This may lead to poor designs and an excessive number of defects. The Naval Surface Warfare Center (NSWC), Corona Division, as the US Navy's recognized authority on test equipment assessment, has reviewed a large number of test software programs. As an aid in the review process, various software tools have been used, such as PC-lint/sup /spl trade// or Understand for C++/sup /spl trade//. This paper focuses on software tools for C compilers, since C is the most error-prone language in use today. The McCabe cyclomatic complexity metric and the Halstead complexity measures are just two of the ways to measure "software quality". Applying the best practices of industry, including coding standards, software tools, configuration management and other practices, produces better-quality code in less time. Good-quality code is also easier to write, understand, maintain and upgrade.
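The McCabe metric mentioned above is defined on the control-flow graph as M = E - N + 2P (edges, nodes, connected components); for a single function this reduces to the number of decision points plus one. The crude keyword counter below illustrates that reduced form for C-like source; it is an approximation for demonstration, not a substitute for a real tool such as PC-lint.

```python
# Approximate cyclomatic complexity for C-like source: count decision
# points (if/for/while/case and short-circuit operators) and add one.
import re

def cyclomatic_complexity(source: str) -> int:
    decisions = re.findall(r"\b(if|for|while|case)\b|&&|\|\|", source)
    return len(decisions) + 1

c_function = """
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""
assert cyclomatic_complexity(c_function) == 3
```

Reviewers typically flag functions whose metric exceeds some threshold (10 is a commonly cited rule of thumb) as candidates for refactoring or extra test coverage.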

Proceedings Article•DOI•
20 Sep 2004
TL;DR: A software architecture that integrates multiple software products through standard interfaces that relies on modern software technologies such as XML and COM and offers unique and powerful features for TPS development is described.
Abstract: The implementation of the emerging IEEE P1641 standard requires software architectures that enable the development and execution of instrument-independent TPSs. The signal interface standard developed by the IVI Foundation provides advanced instrument interchangeability capabilities, while using a signal-oriented instrument model. These characteristics make it an excellent choice for software solutions that implement the new IEEE standard. This paper describes a software architecture that integrates multiple software products through standard interfaces. The integration relies on modern software technologies such as XML and COM. The proposed software solution offers unique and powerful features for TPS development. In addition, its advanced instrument interchangeability capabilities can provide significant cost savings to organizations that must maintain test equipment over long periods of time.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: The enabling photonic switch technology and a couple generic test architectures that can be applied in a variety of automated applications to increase test equipment usage and efficiency, thus lowering end costs for deployable fiber optic components and systems are explored.
Abstract: The proliferation of fiber optic systems in military and avionics platforms is driven by the ever-increasing need for higher data rates to support multi-sensor data fusion. Traditionally, the test systems to support these optical deployments are manual and inefficient. Increasingly fast optical components require optical test equipment that is very expensive. To make cost-effective test suites, it is essential that these high-value resources be used efficiently. This is most effectively accomplished through test architectures that are remotely controlled and automatically scheduled. These test architectures also enable a diverse set of testing applications to be simultaneously executed within an optical test lab or manufacturing environment. The advent of optical matrix switching technology with sub-1 dB insertion loss and repeatability measured in milli-dB opens new doors for highly efficient, remotely controlled, automated test systems. The ultra-low-loss aspects of these switches enable distributed test architectures that were previously unrealizable. Distributed test architectures create a test environment where expensive test equipment can be leveraged over a greater number of test samples in a more timely and automated fashion. This allows the lab manager to prioritize and schedule tests across many users, DUTs, and test equipment bays in an operation that can run 24/7. This paper explores the enabling photonic switch technology and a couple of generic test architectures that can be applied in a variety of automated applications to increase test equipment usage and efficiency, thus lowering end costs for deployable fiber optic components and systems.

Proceedings Article•DOI•
L.F. Wang1, S. Liao1•
20 Sep 2004
TL;DR: Various issues in designing an effective SCADA system for industrial condition monitoring and fault diagnostics are discussed in detail, with emphasis on the crucial issues and the state of the art over the recent decade.
Abstract: Internet-enabled supervisory control and data acquisition (SCADA) for condition monitoring and fault diagnosis has been widely used in modern industrial manufacturing, as it is capable of providing more efficient and reliable decision-making support. More recently, with the appearance of inexpensive sensors, microprocessors, and actuators, distributed and embedded systems are under rapid development, where real-time constraints should be carefully considered when designing such networked safety-critical systems. In this paper, various issues in designing an effective SCADA system for industrial condition monitoring and fault diagnostics are discussed in detail, with emphasis on the crucial issues and the state of the art over the recent decade.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: In this paper, a variable reflection standard was built to evaluate ORLM models from three different manufacturers over a return loss range of 10 dB to 45 dB with an accuracy of +/-0.5 dB.
Abstract: This paper discusses methods to evaluate optical time domain reflectometers (OTDRs) and optical return loss meters (ORLMs) for field applications. Variable reflectance references for multimode and single-mode fibers were built to evaluate the attenuation dead zones of OTDRs. Evaluation of the OTDR attenuation dead zone against a reflectance event of -40 dB with recovery to within 0.5 dB of the backscatter level is discussed. Methods of measurement related to the IEC 61746 standard are discussed. Optical return loss is another important parameter in determining the quality of a single-mode fiber network. A variable reflection standard was built to evaluate ORLM models from three different manufacturers over a return loss range of 10 dB to 45 dB with an accuracy of +/-0.5 dB. The standard was verified down to -45 dB return loss within +/-0.5 dB. Testing showed that equipment adapter and fiber connector geometries, fiber contact cleanliness, and operator skill are also critical to the evaluation. Schematics depicting the setups are presented. Data collection, data analysis, required skills, and difficulties in the evaluation process are discussed.
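The dB figures quoted above come straight from the standard definition of return loss as a log ratio of reflected to incident power, RL = -10 log10(P_reflected / P_incident); a 40 dB return loss corresponds to 0.01% of the launched power being reflected. A one-line helper makes the arithmetic explicit:

```python
# Return loss in dB from reflected and incident optical power.
# RL = -10 * log10(P_reflected / P_incident); larger RL means a
# smaller (better) reflection.
import math

def return_loss_db(p_reflected: float, p_incident: float) -> float:
    return -10.0 * math.log10(p_reflected / p_incident)

# 0.01% of the power reflected -> 40 dB return loss.
assert abs(return_loss_db(1e-4, 1.0) - 40.0) < 1e-9
```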

Proceedings Article•DOI•
20 Sep 2004
TL;DR: The development and management procedure for a PATS TPS is discussed in detail, based on the TPS development of a missile PATS, which uses the COTS software TestStand™ to manage parallel testing tasks easily.
Abstract: With current- and next-generation test systems focusing on testing efficiency, it is critical to develop test strategies that maximize testing throughput, make better use of the increasingly expensive instruments in test stations, and drive down test costs. Parallel test, in which multiple units under test (UUTs) undergo testing simultaneously, improves test strategy by enhancing product flow, reducing aggregate test times, and improving instrument usage. The research and development of parallel automatic test systems (PATS) discussed in this paper builds on common automatic test systems (CATS): given that a PATS possesses all the testing resources of a CATS, how can it make better use of those instruments to increase throughput and cut testing costs? TPS (test program set) development is regarded as the key issue in this paper, and comprises four main themes. The first is how to analyze test requirements. The second is how to design a test unit adapter that supports parallel test. The third, the core of PATS, is how to design and develop test programs supporting multiple tasks and threads. The last is how to manage the multiple tests of a PATS; we use the COTS software TestStand™ to manage parallel testing tasks easily. The development and management procedure for a PATS TPS is discussed in detail, based on the TPS development of a missile PATS.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: Traditional instrument developers are now faced with the challenge of determining how their existing and future modules, for example a digitizer or waveform generator, satisfy a portion of the requirements of a synthetic instrument.
Abstract: A current focus of many instrumentation and automatic test equipment (ATE) designers is the area of synthetic instrumentation. This instrumentation, which consists of physically separate building blocks or modules, provides the opportunity to reduce the system's overall hardware content by eliminating common hardware functionality located within each traditional instrument. This modular design provides opportunities for lower lifecycle cost, smaller physical packages and increased capabilities through module upgrades. However, the design of synthetic instrumentation and its integration into an ATE system poses technical challenges not typically seen with traditional instrumentation. Traditional instrument developers are now faced with the challenge of determining how their existing and future modules, for example a digitizer or waveform generator, satisfy a portion of the requirements of a synthetic instrument. ATE developers are challenged to identify the proper set of modules, primarily commercial off the shelf (COTS), that together satisfy the system level requirements. In addition, they must address instrumentation concurrency, synchronization, switching, and develop a software and hardware architecture that supports future upgrades as technology advances. Each of these issues requires a strong system engineering discipline that can develop a robust system architecture.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: Each of these challenges emphasizes the need for an open test software architecture in next-generation test systems that is capable of supporting tests developed in any test language, delivering a highly adaptable test architecture for multiple configurations, supporting concurrent test development across multiple suppliers, and enabling rapid insertion of new technologies such as ATML.
Abstract: Nearly all of the automated test systems developed today are required to integrate with, or at a minimum support, the existing tests and test systems deployed in the depot and field. This involves supporting a wide range of tests developed in a variety of modern and legacy test development languages, including LabWindows/CVI, LabVIEW, C/C++, Visual Basic, ATLAS, TCL, and HT-BASIC. The convergence of functionality across products and organizations for joint development and feature sets is also driving the movement towards highly reconfigurable test systems that are capable of adapting to multiple configurations. In addition, the convergence of products and functionality is changing the development landscape to large contracts involving multiple suppliers working together to concurrently develop tests. Lastly, new test schemas, such as Automatic Test Markup Language (ATML), for defining test routines and results data in XML format are of growing concern to many test engineers regarding the integration of this technology among existing and new test systems. Each of these challenges emphasizes the need for an open test software architecture in next-generation test systems that is capable of supporting tests developed in any test language, delivering a highly adaptable test architecture for multiple configurations, supporting concurrent test development across multiple suppliers, and enabling rapid insertion of new technologies such as ATML. This paper discusses the benefits of creating an open test software architecture and the latest test management software tools for designing one to meet each of these needs. The paper also features a case study of the recent Lockheed Martin Simulation, Training, & Support (LM STS) LM-STAR® open test software architecture designed for supporting military avionics, including the F-35 Joint Strike Fighter aircraft.
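The ATML idea mentioned above is that test results become self-describing XML rather than tool-specific binary logs. The record below is a deliberately simplified, hypothetical sketch in that spirit; the real ATML schemas are considerably richer, and the element and attribute names here are ours:

```python
import xml.etree.ElementTree as ET

# Build a minimal ATML-flavored result record: one test, one measurement.
result = ET.Element("TestResult", name="supply_5v", outcome="Passed")
meas = ET.SubElement(result, "Measurement", units="V")
meas.text = "5.02"

# Serialize to a string any ATML-aware consumer could parse.
xml_text = ET.tostring(result, encoding="unicode")
```

Because the result is plain XML, a depot-level analysis tool from a different supplier can parse it with any standard XML library, which is the interoperability argument behind ATML.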

Journal Article•DOI•
K. Ham1•
20 Sep 2004
TL;DR: This paper describes one alternative to rehosting TPS software to newer computer platforms, and describes the solution that Southwest Research Institute (SwRI) used to solve a problem with re-engineering legacy software into a modern object-oriented language.
Abstract: This paper describes one alternative to rehosting TPS software to newer computer platforms. It describes the solution that Southwest Research Institute (SwRI) used to solve a problem with re-engineering legacy software into a modern object-oriented language. The advantages gained by this method are more advanced instruction to the operator (such as pictures and movies), a flexible reporting scheme to diagnose system problems, and ease of software maintenance. This solution uses commercially available products, including National Instruments™ TestStand™ and LabVIEW™ and Microsoft® Visual Basic®. The system is a four-tiered architecture that drives all test execution. The user interface is written in Visual Basic and allows the user to interact with the test execution when needed. This user interface in turn calls tests that are written in TestStand, and finally the individual tests call driver functions written in LabVIEW. A database serves as a repository for all test results, displays, and test limits. The use of this database allows for easy querying of measurement data to analyze trends in failures that help diagnose specific problems. By using this database, all test limits and test displays are contained in one central location, which enables engineers to change these displays and limits on the fly without needing to change, or even understand, any test code. One other benefit of this data containment is that all government-classified data is separated out of the code and stored as one file, instead of in various locations as it was in the original source code.
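The database-driven limit handling described above can be sketched with an in-memory SQLite table. The table layout and test names are illustrative only, not SwRI's actual schema; the point is that pass/fail limits live in data, so they can be retuned without touching test code:

```python
import sqlite3

# In-memory stand-in for the central limits/results database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE limits (test_name TEXT PRIMARY KEY, lo REAL, hi REAL)")
db.execute("INSERT INTO limits VALUES ('supply_5v', 4.75, 5.25)")

def check(test_name: str, measurement: float) -> bool:
    """Look up the stored limits for a test and return a pass/fail verdict.

    Because limits live in the database, engineers can change them on the
    fly without modifying or recompiling any test code.
    """
    lo, hi = db.execute(
        "SELECT lo, hi FROM limits WHERE test_name = ?", (test_name,)
    ).fetchone()
    return lo <= measurement <= hi

verdict = check("supply_5v", 5.1)  # a reading within the stored limits
```

Widening or tightening the 5 V rail tolerance is then a single SQL UPDATE, which mirrors the abstract's claim about changing limits without understanding the test code.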

Journal Article•DOI•
20 Sep 2004
TL;DR: This paper examines how the ATS testing philosophy impacts the ease of TPS transportability from legacy ATE to modern-day platforms, and what SEI has done to address the issues that arise out of TPS transportability.
Abstract: Sustainment of legacy automatic test systems (ATS) saves cost through the reuse of software and hardware. The ATS consists of the automatic test equipment (ATE), the test program sets (TPSs), and associated software. The associated software includes the architecture the TPSs run on, known as the control software or test station test executive. In some cases, to sustain a legacy ATS it is more practical to develop a replacement ATE with the latest instrumentation, often in the form of commercial off-the-shelf (COTS) hardware and software. The existing TPSs, including their hardware and test programs, then need to be transported, or translated, to the new test station. To understand how to sustain a legacy ATS by translating TPSs, one must understand the full architecture of the legacy ATS being replaced. TPS transportability does not only include translating the original TPS from an existing language (such as ATLAS) to a new language (such as 'C') to run on a new test station; it also includes transporting the run-time environment created by the legacy ATS. This paper examines the similarities and differences of legacy ATE and modern COTS ATE architectures, how the ATS testing philosophy impacts the ease of TPS transportability from legacy ATE to modern-day platforms, and what SEI has done to address the issues that arise out of TPS transportability.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: Integrating the many facets of ATE, ranging from instrumentation, switching, test station descriptions, test program definition, unit under test (UUT) data, and UUT test requirements all the way to built-in test of the platform, improves life-cycle support and reduces the cost and maintenance burden of the UUT and the supporting test equipment.
Abstract: As a large-scale integrator, The Boeing Company is always looking for ways to incorporate open systems into new and existing products. Automatic test equipment (ATE) is no exception; our military customers are demanding the incorporation of open standards into many of the systems associated with ATE. This means integrating the many facets of ATE, ranging from instrumentation, switching, test station descriptions, test program definition, unit under test (UUT) data, and UUT test requirements all the way to built-in test of the platform. Integrating all these items into a test system improves life-cycle support and reduces the cost and maintenance burden of the UUT and the supporting test equipment.

Proceedings Article•DOI•
20 Sep 2004
TL;DR: In this paper, the authors describe research and development efforts in the application of advanced optical techniques for prognostic analysis of printed circuit boards and their components, and present the results of their investigation into the combined use of these techniques for fault diagnosis, as well as their relative potential in the electronics test industry.
Abstract: This paper describes research and development efforts in the application of advanced optical techniques for prognostic analysis of printed circuit boards and their components Current methods of automated electronic testing require development of costly unique test program sets (TPSs) for each type of board, and only return information related to the current performance characteristics The use of laser diagnostics in circuit board testing can eliminate the need for TPSs, while identifying compromises in material integrity that leads to hard and soft component failures Additionally, they may be applied to solve instances of retest OK (RTOK) and no-fault-found events Our investigation has focused on terahertz (T-Ray) imaging, laser acoustics, and near-infrared (NIR) laser imaging T-Ray imaging is an emerging laser-based technology characterized by the ability to "see through" layers of plastic to the embedded metal traces of a circuit board or to the die of an encapsulated microchip Laser acoustics may be applied to monitor the integrity of solder joints, and NIR laser imaging may be used to identify damage within an integrated circuit (IC) We present the results of our investigation into the combined use of these techniques for fault diagnosis, as well as their relative potential in the electronics test industry