Showing papers presented at "AUTOTESTCON in 2008"


Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this paper, a data-driven approach is proposed to estimate three critical characteristics of the battery (SOC, SOH, and RUL), based on an equivalent circuit battery model consisting of resistors, a capacitor, and a Warburg impedance.
Abstract: A battery management system (BMS) is an integral part of an automobile. It protects the battery from damage, predicts battery life and maintains the battery in an operational condition. The BMS performs these tasks by integrating one or more functions, such as protecting the cell, controlling the charge, determining the state of charge (SOC), the state of health (SOH), and the remaining useful life (RUL) of the battery, cell balancing, as well as monitoring and storing historical data. In this paper, we propose a BMS that estimates three critical characteristics of the battery (SOC, SOH, and RUL) using a data-driven approach. Our estimation procedure is based on an equivalent circuit battery model consisting of resistors, a capacitor, and a Warburg impedance. The resistors usually characterize the self-discharge and internal resistance of the battery, the capacitor generally represents the charge stored in the battery, and the Warburg impedance represents the diffusion phenomenon. We investigate the use of support vector machines to predict the capacity fade and power fade, which characterize the SOH of a battery, as well as to estimate the SOC of the battery. The circuit parameters are estimated from electrochemical impedance spectroscopy (EIS) test data using nonlinear least squares estimation techniques. Predictions of the remaining useful life (RUL) of the battery are obtained by support vector regression of the power fade and capacity fade estimates.
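
As a concrete illustration of the pipeline this abstract outlines (nonlinear least squares fitting of an equivalent circuit to EIS data, then support vector regression over fade trends), a minimal sketch follows. The circuit topology, parameter values, synthetic data, and the use of SciPy/scikit-learn are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): fit a Randles-type equivalent
# circuit (series resistance, charge-transfer resistance in parallel with a
# capacitor, plus a Warburg element) to EIS data with nonlinear least
# squares, then regress capacity fade with an SVR to project end of life.
import numpy as np
from scipy.optimize import least_squares
from sklearn.svm import SVR

def circuit_impedance(params, w):
    """Complex impedance of Rs + (Rct || C) + Warburg at angular freqs w."""
    Rs, Rct, C, sigma = params
    zc = Rct / (1 + 1j * w * Rct * C)        # parallel RC branch
    zw = sigma * (1 - 1j) / np.sqrt(w)       # semi-infinite Warburg term
    return Rs + zc + zw

def residuals(params, w, z_meas):
    z = circuit_impedance(params, w)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

w = 2 * np.pi * np.logspace(-1, 4, 60)       # 0.1 Hz .. 10 kHz sweep
true_params = np.array([0.05, 0.02, 10.0, 1e-3])   # hypothetical cell
z_meas = circuit_impedance(true_params, w)   # stand-in for EIS test data
fit = least_squares(residuals, x0=[0.1, 0.1, 1.0, 1e-2],
                    args=(w, z_meas), bounds=(0, np.inf))
print("estimated [Rs, Rct, C, sigma]:", fit.x)

# RUL from capacity fade: train an SVR on cycle -> capacity, then find the
# first future cycle whose prediction crosses an end-of-life limit (80%).
cycles = np.arange(0, 500, 25)
capacity = 1.0 - 4e-4 * cycles               # synthetic fade trend
svr = SVR(kernel="linear", C=100, epsilon=1e-3)
svr.fit(cycles.reshape(-1, 1), capacity)
future = np.arange(500, 2000, 25)
below = future[svr.predict(future.reshape(-1, 1)) < 0.8]
print("predicted end of life near cycle:",
      int(below[0]) if len(below) else None)
```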

98 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: A platform for the aging, characterization, and scenario simulation of gate-controlled power transistors is presented; it includes an acquisition and aging hardware system, an agile software architecture for experiment control, and a collection of industry-developed test equipment.
Abstract: To advance the field of electronics prognostics, the study of transistor fault modes and their precursors is essential. This paper reports on a platform for the aging, characterization, and scenario simulation of gate-controlled power transistors. The platform supports thermal cycling, dielectric over-voltage, acute/chronic thermal stress, current overstress and application-specific scenario simulation. In addition, the platform supports in-situ transistor state monitoring, including measurements of the steady-state voltages and currents, measurements of electrical transient response, measurement of thermal transients, and extrapolated semiconductor impedances, all conducted at varying gate and drain voltage levels. The aging and characterization platform consists of an acquisition and aging hardware system, an agile software architecture for experiment control and a collection of industry-developed test equipment.
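
A hedged sketch of how such in-situ monitoring data might be used downstream: the snippet tracks a hypothetical on-state resistance drift across aging cycles and flags a precursor when it departs from an early-life baseline. The measurand, trend, and threshold are invented, not the platform's actual values:

```python
# Sketch: flag a degradation precursor from in-situ aging measurements.
import numpy as np

rng = np.random.default_rng(0)
cycles = np.arange(200)
# Synthetic on-state resistance trace: slow superlinear drift plus noise.
r_ds_on = 0.050 + 0.00004 * cycles**1.3 + rng.normal(0, 0.0005, 200)

baseline = r_ds_on[:20].mean()                  # healthy reference window
drift_pct = 100 * (r_ds_on - baseline) / baseline
PRECURSOR_LIMIT = 15.0                          # % drift treated as precursor
first = np.argmax(drift_pct > PRECURSOR_LIMIT)  # first crossing, if any
if drift_pct[first] > PRECURSOR_LIMIT:
    print("precursor flagged at aging cycle", int(first))
else:
    print("no precursor observed")
```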

94 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: How standards currently under development within the IEEE can be used to support PHM applications is explored, with particular emphasis on the role of PHM and PHM-related standards in Department of Defense automatic test systems-related research.
Abstract: Recently, operators of complex systems such as aircraft, power plants, and networks have been emphasizing the need for online health monitoring for purposes of maximizing operational availability and safety. The discipline of prognostics and health management (PHM) is being formalized to address the information management and prediction requirements for addressing these needs. In this paper, we will explore how standards currently under development within the IEEE can be used to support PHM applications. Particular emphasis will be placed on the role of PHM and PHM-related standards in Department of Defense (DOD) automatic test systems-related research.

93 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the authors describe a tester, called the intermittent fault detection and isolation system (IFDIS), that was specifically designed to detect and isolate intermittent circuits in an electronic box chassis.
Abstract: Aging aircraft electronic boxes often pose a maintenance challenge in that, after malfunctioning during flight in the aircraft, they frequently test good, or "No Fault Found" (NFF), during ground test. The reason many of these boxes behave in this manner is that they have intermittent faults, which are momentary opens in one or more circuits due to a cracked solder joint, corroded contact, sprung connector receptacle, or any number of other reasons. These NFF boxes often account for a substantial number of boxes processed through a maintenance facility, where no repair can be performed because no problem can be detected. Conventional test equipment is designed to test the electronic box for nominal operation, and usually "averages out," and hence hides, any short-term anomalous event. This paper describes a tester that was specifically designed to detect and isolate the intermittent circuits in an electronic box chassis. This new and innovative tester has been designated the intermittent fault detection and isolation system (IFDIS). The IFDIS very effectively complements conventional testers. It includes an environmental chamber and shake table to subject the box to simulated operational conditions, which greatly enhances the probability that the intermittent circuit will manifest itself. The IFDIS also includes an intermittent fault detector which continuously and simultaneously monitors every electrical path in the chassis under test while the box is exposed to a simulated operational environment. To determine the effectiveness of this new tester in detecting and isolating intermittent circuits, several dozen electronic boxes, identified by serial number, that had been to the repair facility and tested NFF multiple times were selected for IFDIS testing. One or more intermittent faults were detected, isolated and repaired in nearly every box. These boxes were then tested on the conventional tester and returned to service. We are currently monitoring their performance to determine their increased service life and reduced number of NFF incidents.
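
The core point (averaging hides a momentary open that a continuously latching monitor catches) can be illustrated numerically; the signal, sample rate, and threshold below are invented for illustration, not IFDIS internals:

```python
# Sketch: a 50-microsecond dropout disappears under averaging but is
# caught by latching every excursion below a continuity threshold.
import numpy as np

fs = 1_000_000                        # 1 MHz sampling of a continuity voltage
t = np.arange(0, 1.0, 1 / fs)
v = np.full_like(t, 5.0)              # healthy path reads ~5 V
v[500_000:500_050] = 0.0              # 50 us intermittent open

print("averaged reading: %.5f V" % v.mean())   # ~4.99975 V, looks healthy
THRESHOLD = 4.0
print("latched event:", bool((v < THRESHOLD).any()))  # continuous monitor
first = np.argmax(v < THRESHOLD) / fs
print("first dropout at t = %.6f s" % first)
```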

58 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: The proposed method, based on fuzzy theory and an ant colony algorithm for test point selection of analog circuits, was applied in the fault diagnosis system of a time-delay circuit board used in the remote-control system of a marine engine.
Abstract: Optimum selection of test points can reduce the test cost by eliminating redundant measurements and is also important for reducing the computation cost. In order to select the optimum test points of analog circuits, a method based on fuzzy theory and an ant colony algorithm is proposed. The traditional ant colony algorithm model was adapted for test point selection of analog circuits. The basic model is given, and the general procedure for using this method to select the optimum test points of analog circuits is described in detail. The proposed method was applied in the fault diagnosis system of a time-delay circuit board used in the remote-control system of a marine engine. The results show that this method can derive the optimum test points and has high practical value.
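
A simplified sketch of ant-colony test-point selection over a fault dictionary follows; the fuzzy component of the paper's method is omitted, and the dictionary, parameters, and fitness rule are invented for illustration:

```python
# Simplified ant-colony sketch: D[f][p] is the pass/fail signature of
# fault f at candidate test point p; ants grow small point subsets until
# every fault pair is distinguishable, and pheromone accumulates on
# points that appear in the best (smallest) isolating subset.
import random

D = [  # hypothetical dictionary: 5 faults x 6 candidate test points
    [0, 1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 1, 0, 1, 1],
]
N_POINTS = len(D[0])

def isolates(points):
    """True if every fault pair differs on at least one selected point."""
    sigs = [tuple(row[p] for p in points) for row in D]
    return len(set(sigs)) == len(D)

def ant_colony(n_ants=20, n_iters=50, rho=0.1, seed=1):
    random.seed(seed)
    tau = [1.0] * N_POINTS                 # pheromone per test point
    best = list(range(N_POINTS))           # trivially isolating full set
    for _ in range(n_iters):
        for _ in range(n_ants):
            pts, remaining = [], list(range(N_POINTS))
            while remaining and not isolates(pts):
                weights = [tau[p] for p in remaining]
                p = random.choices(remaining, weights)[0]
                pts.append(p)
                remaining.remove(p)
            if isolates(pts) and len(pts) < len(best):
                best = sorted(pts)
        for p in range(N_POINTS):          # evaporate, then reinforce best
            tau[p] = (1 - rho) * tau[p] + (1.0 if p in best else 0.0)
    return best

print("selected test points:", ant_colony())
```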

22 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the advantages and disadvantages of JTAG testing are discussed and advanced JTAG test methodologies, including remote testing and diagnostics, are proposed, alongside the evolution of JTAG standards, boundary-scan fundamentals, and board testability using boundary-scan and system-level testing.
Abstract: Today's complex printed circuit boards and high-density ball-grid array and other chip-size package ICs have led to the standardization and widespread use of JTAG (Joint Test Action Group) boundary-scan technology for test and debug. Topics include the evolution of JTAG standards, basic fundamentals of boundary-scan architecture, board testability using boundary-scan, and system-level testing. Additionally, this paper will address the advantages and disadvantages of JTAG testing and propose advanced JTAG test methodologies, including remote testing and diagnostics.
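
Boundary-scan access ultimately reduces to driving the JTAG TAP state machine over TMS/TCK/TDI/TDO. The sketch below bit-bangs a data-register scan; set_pins and read_tdo are hypothetical stand-ins for real adapter I/O, and TDO sampling is only approximate (real adapters sample on the falling TCK edge):

```python
# Hedged sketch of boundary-scan fundamentals: a bit-banged TAP walk.
def set_pins(tck, tms, tdi):           # placeholder: drive adapter GPIOs
    pass

def read_tdo():                        # placeholder: sample the TDO pin
    return 0

def clock(tms, tdi=0):
    """One TCK cycle; the TAP samples TMS/TDI on the rising edge."""
    set_pins(0, tms, tdi)
    set_pins(1, tms, tdi)
    return read_tdo()

def tap_reset():
    for _ in range(5):                 # five TMS=1 clocks reach Test-Logic-Reset
        clock(1)
    clock(0)                           # -> Run-Test/Idle

def shift_dr(bits):
    """Shift a list of bits through the data register, LSB first."""
    clock(1); clock(0); clock(0)       # Select-DR -> Capture-DR -> Shift-DR
    out = []
    for i, b in enumerate(bits):
        last = (i == len(bits) - 1)
        out.append(clock(1 if last else 0, b))  # TMS=1 on final bit exits
    clock(1)                           # Exit1-DR -> Update-DR
    clock(0)                           # -> Run-Test/Idle
    return out

tap_reset()
print("TDO capture:", shift_dr([1, 0, 1, 1]))   # e.g. probe a 4-bit register
```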

18 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: This modified LS-SVM algorithm is a one-class novelty detector which can differentiate between normal and faulty conditions based only on normal samples, which are easily available; comparisons drawn with other contemporary approaches lean favorably towards the viability of the suggested novelty detector.
Abstract: A paradigm shift in the standard operating procedures (SOP) is underway in the reliability and health management industry. As the community transitions from traditional preventive maintenance procedures to modern predictive or health-based management systems, areas such as efficient online monitoring and diagnosis schemes based on real-time observations have emerged as key research subjects for engineers. Most diagnostic systems require data from both healthy and faulty conditions in order to properly train their classification algorithms. However, in many situations, normal signals are acquired easily while fault samples are difficult to obtain. In this paper, we present a diagnosis scheme based on least squares support vector machines (LS-SVM). Our modified LS-SVM algorithm is a one-class novelty detector which can differentiate between a normal and a faulty condition based only on the normal samples, which are easily available. We diagnose a growing crack fault on a planetary gear plate mounted aboard a UH-60 Blackhawk aircraft using this approach. Comparisons drawn with other contemporary approaches lean favorably towards the viability of the suggested novelty detector.
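
The one-class idea (train on healthy data only, flag departures as novel) can be sketched with an off-the-shelf one-class SVM standing in for the paper's modified LS-SVM; the features and parameters below are synthetic placeholders:

```python
# Sketch: one-class novelty detection trained on healthy samples only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 4))   # normal-condition features
faulty = rng.normal(3.0, 1.5, size=(50, 4))     # unseen fault features

scaler = StandardScaler().fit(healthy)           # fit on normal data only
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(healthy))

pred_h = model.predict(scaler.transform(healthy))   # +1 = normal, -1 = novel
pred_f = model.predict(scaler.transform(faulty))
print("false alarms on healthy: %.1f%%" % (100 * (pred_h == -1).mean()))
print("faults flagged as novel: %.1f%%" % (100 * (pred_f == -1).mean()))
```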

14 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: The process involved in achieving the objectives from the point of view of the TPS developer and integrator is discussed, and the complete process, from requirements capture, via TPS integration, to program operation and results verification is covered.
Abstract: This year, new products have been introduced onto a Royal Air Force (RAF) test system, which provide an integrated IEEE 1641 development and run-time system. The program involved the integration of Commercial Off-The-Shelf (COTS) products to provide a facility enabling the creation of a TPS using 1641 signal definitions, through to the running of the test program using established instruments and driver software. This paper discusses the process involved in achieving the objectives from the point of view of the TPS developer and integrator, and covers the complete process, from requirements capture, via TPS integration, to program operation and results verification. The paper also outlines the lessons learned and the resulting feedback to the COTS product developers and the 1641 working groups.

14 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the authors analyzed the changing effects of a wide range of parameter combinations on two different types of defects and determined that there exist optimal reference signal parameters for particular defect types.
Abstract: In order to maintain the integrity and safe operation of a power system, a state-of-the-art wiring diagnostic technique is imperative. Joint time-frequency domain reflectometry (JTFDR) is proposed as an ideal solution due to its customizable reference signal and unique time-frequency cross-correlation function. The reference signal depends on three parameters: center frequency, bandwidth, and time duration. Previously, these parameters were chosen based on the frequency characteristics of the cable under test. This paper fully analyzes the effects of a wide range of parameter combinations on two different types of defects. It is determined that there exist optimal reference signal parameters for particular defect types. With this knowledge, JTFDR is able to detect various defect types more sensitively and over a longer distance than previously possible.
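
A sketch of the JTFDR ingredients: a Gaussian-enveloped chirp reference parameterized by center frequency, bandwidth, and duration, located in a noisy reflection by correlation. The cable model, propagation velocity, and the plain time-domain correlation (a simplification of the paper's time-frequency cross-correlation) are illustrative assumptions:

```python
# Sketch: parameterized chirp reference + correlation-based defect location.
import numpy as np
from scipy.signal import hilbert

fs = 1e9                                     # 1 GS/s acquisition
t = np.arange(0, 2e-6, 1 / fs)

def reference(fc, bw, dur):
    """Gaussian-windowed linear chirp: fc center, bw sweep, dur duration."""
    tt = np.arange(0, dur, 1 / fs)
    chirp = np.cos(2 * np.pi * (fc - bw / 2) * tt
                   + np.pi * (bw / dur) * tt ** 2)
    win = np.exp(-0.5 * ((tt - dur / 2) / (dur / 6)) ** 2)
    return chirp * win

ref = reference(fc=100e6, bw=50e6, dur=200e-9)
delay = int(0.8e-6 * fs)                     # echo from a defect ~0.8 us out
signal = np.zeros_like(t)
signal[delay:delay + len(ref)] += 0.3 * ref  # attenuated reflection
signal += 0.02 * np.random.default_rng(1).normal(size=len(t))

corr = np.abs(hilbert(np.correlate(signal, ref, mode="valid")))
t_defect = np.argmax(corr) / fs
v_prop = 2e8                                 # assumed propagation velocity, m/s
print("round-trip delay %.2f us -> defect at %.1f m"
      % (t_defect * 1e6, v_prop * t_defect / 2))
```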

14 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: The SEAL module includes GPS, a lithium-ion polymer battery, high-density flash memory, a USB interface, and an optional Wi-Fi interface for Internet-enabled data; its XML-based data logs allow interoperable post-processing on every modern computer operating system.
Abstract: This paper presents design details of SEAL, a general-purpose low-cost spatial-temporal data logger. The SEAL module includes GPS, a lithium-ion polymer battery, high-density flash memory and a USB interface. The main features include: low cost, very large XML-based data logs, low power consumption, flexible sensor attachment, and an optional Wi-Fi interface for Internet-enabled data. It is compact, self-contained and lightweight, making it suitable for a UAV (unmanned aerial vehicle) payload. In fact, our general-purpose SEAL module can turn any mobility platform into a mobile sensor. For example, we have interfaced CO2 and NH3 sensors with a SEAL module and attached the pack to a car, which can be driven to map the CO2 and NH3 in a region of interest. The XML-based data logs allow interoperable post-processing on every modern computer operating system and display in desktop mapping applications such as Google Earth and NASA WorldWind.
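
A sketch of what one spatial-temporal XML log record might look like; the element names and schema are invented for illustration, not SEAL's actual format:

```python
# Sketch: emit a GPS-stamped sensor record as XML.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

log = ET.Element("datalog", device="SEAL")
rec = ET.SubElement(log, "record",
                    time=datetime.now(timezone.utc).isoformat())
ET.SubElement(rec, "position", lat="38.8895", lon="-77.0352", alt="120.0")
ET.SubElement(rec, "sensor", type="CO2", units="ppm").text = "412.7"
ET.SubElement(rec, "sensor", type="NH3", units="ppm").text = "0.04"

ET.indent(log)                       # pretty-print (Python 3.9+)
print(ET.tostring(log, encoding="unicode"))
```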

13 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: A compact, high speed, Ethernet-enabled interrogator that consumes less than 10 Watts is developed and the conventions used to convert from the optical domain to a sensor network are described, then integrated system test data acquired from sensors in dynamic temperature and strain environments are presented.
Abstract: Optical fiber Bragg grating (FBG) sensors exhibit specialized sensing characteristics for harsh environments. The most common interrogation methods for FBGs require high-resolution spectrometers that are not well suited to some embedded test situations. We have developed a compact, high-speed, Ethernet-enabled interrogator that consumes less than 10 watts. We describe the conventions used to convert from the optical domain to a sensor network, then present integrated system test data acquired from sensors in dynamic temperature and strain environments. Fiber-optic system and sensor performance signal a level of maturity suitable for mainstream use.
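
Converting from the optical domain to sensor-network quantities amounts to mapping Bragg wavelength shifts to temperature or strain; a sketch using typical silica-FBG sensitivities near 1550 nm (illustrative constants, not the paper's calibration) follows:

```python
# Sketch: Bragg peak shift -> temperature change or strain.
LAMBDA0_NM = 1550.000          # nominal Bragg wavelength
K_TEMP_PM_PER_C = 10.0         # ~10 pm/degC, typical for silica FBGs
K_STRAIN_PM_PER_UE = 1.2       # ~1.2 pm/microstrain, typical near 1550 nm

def shift_pm(measured_nm):
    return (measured_nm - LAMBDA0_NM) * 1000.0

def to_temperature_delta(measured_nm):
    return shift_pm(measured_nm) / K_TEMP_PM_PER_C       # degC change

def to_strain(measured_nm):
    return shift_pm(measured_nm) / K_STRAIN_PM_PER_UE    # microstrain

peak = 1550.132                # e.g. interrogator-reported peak, nm
print("dT  = %.1f degC" % to_temperature_delta(peak))
print("eps = %.0f microstrain" % to_strain(peak))
```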

Proceedings ArticleDOI
31 Oct 2008
TL;DR: This document describes how the various components fit under the overarching IEEE 1671-2006 framework document and how ATML draws on other test standards to provide a comprehensive set of related standards within a common framework.
Abstract: This document provides an overview of the extensible markup language (XML) test information interchange standards known collectively as automatic test markup language, or ATML. The document describes how the various components fit under the overarching IEEE 1671-2006 framework document and how ATML draws on other test standards to provide a comprehensive set of related standards within a common framework.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: Improved regression testing utilizing parametric metadata for large-scale automated measurement systems is proposed, providing engineers, developers and management increased confidence that mission performance is not compromised.
Abstract: Automated measurement systems are dependent upon the successful application of multiple integrated systems to perform measurement analysis on various units under test (UUTs). Proper testing, fault isolation and detection of a UUT are contingent upon accurate measurements by the automated measurement system. This paper extends a previous presentation from 2007 AUTOTESTCON on the applicability of measurement system analysis for automated measurement systems. The motivation for this research was to reduce the risk of transportability issues from legacy measurement systems to emerging systems. Improving regression testing by utilizing parametric metadata for large-scale automated measurement systems, over existing regression testing techniques, provides engineers, developers and management increased confidence that mission performance is not compromised. The utilization of existing statistical software tools such as Minitab provides the necessary statistical techniques to evaluate the measurement capability of automated measurement systems. Measurement system analysis is applied to assess the measurement variability between the US Navy's two prime automated test systems, the Consolidated Automated Support System (CASS) and the Reconfigurable-Transportable Consolidated Automated Support System (RTCASS). The measurement system analysis includes capability analysis between one selected CASS and RTCASS instrument to validate measurement process capability; a general linear model to assess variability between stations; multivariate analysis to analyze measurement variability of UUTs between measurement systems; and gage repeatability and reproducibility analysis to isolate sources of variability at the UUT testing level.
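
A sketch of the gage repeatability and reproducibility calculation (crossed ANOVA method) on synthetic data; the array mimics a parts-by-stations-by-trials study, not actual CASS/RTCASS results, and plain NumPy stands in for Minitab:

```python
# Sketch: variance components of a crossed gage R&R study (ANOVA method).
import numpy as np

rng = np.random.default_rng(2)
P, O, R = 10, 2, 3                        # parts x stations x trials
part_effect = rng.normal(0, 1.0, P)       # true part-to-part variation
op_effect = rng.normal(0, 0.1, O)         # station-to-station bias
x = (part_effect[:, None, None] + op_effect[None, :, None]
     + rng.normal(0, 0.2, (P, O, R)))     # repeatability noise

grand = x.mean()
ss_part = O * R * ((x.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_op = P * R * ((x.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_rep = ((x - x.mean(axis=2, keepdims=True)) ** 2).sum()
ss_int = ((x - grand) ** 2).sum() - ss_part - ss_op - ss_rep

ms_part = ss_part / (P - 1)
ms_op = ss_op / (O - 1)
ms_int = ss_int / ((P - 1) * (O - 1))
ms_rep = ss_rep / (P * O * (R - 1))

var_rep = ms_rep                                   # repeatability
var_int = max((ms_int - ms_rep) / R, 0)
var_op = max((ms_op - ms_int) / (P * R), 0)        # reproducibility parts
var_part = max((ms_part - ms_int) / (O * R), 0)
grr = var_rep + var_op + var_int
print("%%GRR (of total variation): %.1f%%"
      % (100 * np.sqrt(grr / (grr + var_part))))
```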

Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the authors highlight a number of failure modes that can create CND and RTOK, and examine tools that can be used instead of or in conjunction with ATE to help diagnose and repair units that experience such phenomenon.
Abstract: Automatic Test Equipment (ATE) has been traditionally tasked with supporting field returns. This task includes verification of proper operation and, in the alternative, diagnosis and provision of repair instructions. While ATE has done reasonably well to certify a unit under test as ready for issue (RFI), it has been hampered by diagnostic complications in many instances. Two such complications, Cannot Duplicates (CNDs) and Retest OKs (RTOKs), are particularly troublesome because of the ATE's limited vocabulary of Pass/Fail or Good/Bad determination. When a unit under test (UUT) is returned from the field labeled faulty, yet passes on the ATE (a RTOK), additional testing and diagnosis is required, despite the "Pass" result. Similarly, when multiple runs on an ATE provide different results (a CND), the ATE, in conjunction with the test program set (TPS), is not sufficient to unravel the conflict. In such instances, the ATE must be supplemented with various tools that characterize the physics of the circuit in order to assist in diagnosis. This paper will highlight a number of failure modes that can create CNDs and RTOKs, and examine tools that can be used instead of or in conjunction with ATE to help diagnose and repair units that experience such phenomena.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: How HIL simulation is being used today is shown and a general architecture for building a HIL simulator is discussed; HIL simulation allows the engineer to begin testing their electronic controller earlier in the development process and with greater flexibility compared to physical testing alone.
Abstract: Customer expectations and governmental requirements have changed the way engineers develop products. From automobiles and airplanes to industrial equipment and national defense systems, designers are adding intelligence to their products in the form of electronic controllers. As a direct result of this evolution in product design, the testing complexity for these products is growing at an exponential rate. To address this challenge, many engineers have turned to a technique called hardware-in-the-loop (HIL) simulation. HIL simulation is a test technique that allows the engineer to begin testing their electronic controller earlier in the development process and with greater flexibility compared to physical testing alone. While not a replacement for physical testing, in many situations, such as the testing of a flight control system, HIL simulation is the only viable option for development testing due to the potential consequences that may result from a failed test. In this paper, we will show how HIL simulation is being used today and discuss a general architecture for building a HIL simulator. We will also comment on many considerations that should be taken into account when specifying a HIL simulator.
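
A minimal sketch of the general HIL architecture: a fixed-step loop in which a simulated plant replaces the physical system while the controller under test closes the loop. The plant model, controller, and rates below are invented placeholders:

```python
# Sketch: controller under test closing the loop against a simulated plant.
DT = 0.001                                   # 1 kHz simulation step

class PlantModel:
    """First-order actuator model standing in for real hardware."""
    def __init__(self):
        self.position = 0.0
    def step(self, command):
        self.position += DT * (command - self.position) / 0.05
        return self.position                 # simulated sensor reading

class ControllerUnderTest:
    """Proportional controller standing in for the real ECU code."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def update(self, sensor):
        return 50.0 * (self.setpoint - sensor)   # actuator command

plant = PlantModel()
ecu = ControllerUnderTest(setpoint=1.0)
sensor = 0.0
for _ in range(3000):                        # 3 s of simulated time
    command = ecu.update(sensor)             # ECU reads simulated sensor
    sensor = plant.step(command)             # HIL rig simulates the plant
# Settles near the setpoint (pure P control leaves a small offset).
print("position after 3 s: %.3f" % sensor)
```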

Proceedings ArticleDOI
M. Santoro
31 Oct 2008
TL;DR: In this paper, the authors explore the fundamentals of NTFs and NFFs and show developments in several areas that will allow depots to dramatically reduce these types of errors (results) with innovative solutions.
Abstract: This paper presents new methodologies for eliminating No Trouble Found (NTF), No Fault Found (NFF) and other non-repeatable failures in depot (or other) repair settings. Trying to find NTFs or NFFs has been as elusive as catching a leprechaun (and with the price of gold these days, who wouldn't want to catch a leprechaun and capture his pot of gold!). In fact, in some instances getting to the root cause has become the largest area of investment for a test strategy. In this paper we explore the fundamentals of NTFs and NFFs and show developments in several areas that will allow depots to dramatically reduce these types of errors (results) with innovative solutions.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: Laser Doppler vibrometry has been demonstrated to have significant potential to detect a wide range of faults in printed circuit boards and components, such as loose or aging solder joints, which can change the resonant frequency of the unit under test as discussed by the authors.
Abstract: Laser Doppler vibrometry has been demonstrated to have significant potential to detect a wide range of faults in printed circuit boards and components. Large-scale structural defects, such as loose or aging solder joints, will change the resonant frequency of the unit under test. The physical changes in components or the PC board due to thermal overstress will also change the frequency response of the encapsulating material. Impulse stimulation excites a broad range of frequencies simultaneously and allows observation of "ring down" behavior. Optical impulse generation is a completely non-contact test method with a high degree of spatial resolution. Laser vibrometry is applicable to passive and active mode testing. The information obtained from vibrometric inspection provides complementary data suitable for integration into a large suite of automated test systems. A number of hardware parameters must be tuned to achieve consistent results that correlate actual faults with statistically significant data.
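
The ring-down analysis can be sketched as follows: FFT a decaying impulse response and compare the dominant resonance to a known-good baseline, modeling a degraded joint as a lowered resonant frequency. All signals and thresholds are illustrative:

```python
# Sketch: resonance shift detection from impulse ring-down records.
import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)

def ring_down(f0, q=80):
    """Exponentially decaying sinusoid following an impulse hit."""
    return np.exp(-np.pi * f0 * t / q) * np.sin(2 * np.pi * f0 * t)

def dominant_freq(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

baseline = dominant_freq(ring_down(f0=4200.0))   # known-good board
suspect = dominant_freq(ring_down(f0=4050.0))    # aged solder joint
shift = abs(suspect - baseline)
print("resonance shift: %.0f Hz -> %s" %
      (shift, "FLAG for inspection" if shift > 50 else "within tolerance"))
```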

Proceedings ArticleDOI
31 Oct 2008
TL;DR: A versatile block of signal conditioning options is described that processes raw data samples from the A/D converter and delivers a filtered and resampled version of those data samples to a conventional FFT-based spectrum analyzer.
Abstract: Spectrum analysis of an input signal is one of the most common processing tasks of a Synthetic Instrument (SI). Instruments that perform spectrum analysis have evolved over the years from parallel filter bank implementations through sequential swept frequency implementations to modern windowed Fast Fourier Transform (FFT) implementations. Frequency range, desired bandwidths, dynamic range, and other considerations dictate which of the three techniques or combination of techniques are best suited to a particular application. The parameters of a spectral analysis for acceptance testing are usually specified and at minimum include resolution bandwidth, spectral span, and dynamic range. A system designer can select various combinations of anti-alias filters, A-to-D converter sample clocks, and firmware-based FFT size to precisely meet these requirements. More often than not, strange design decisions or compromises are invoked. Examples include selecting sample rates which are multiples of a power of 2 (4.096 MHz for instance) to achieve a specified spectral resolution in a 4096 point FFT. An engineering compromise may be that 1024 Hz is close enough to 1000 Hz that it does not matter because no one will notice the difference. This is incorrect. A more cost effective and versatile option, free from capricious design numbers and questionable engineering compromises, is based on the flexibility and capability of embedded DSP engines. These engines are applied to the task of performing arbitrary sample rate changes in the DSP domain, thus obtaining precise matching of system parameters to specified parameters.
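
The arbitrary-rate-change approach can be sketched with a polyphase rational resampler: rather than forcing a 4.096 MHz converter clock, a convenient ADC rate is resampled so that the 4096-point FFT bins land exactly on 1000 Hz. The rates and test tone below are illustrative:

```python
# Sketch: DSP-domain rate change for exact spectral bin spacing.
import numpy as np
from scipy.signal import get_window, resample_poly

fs_adc = 5_000_000                      # actual converter clock, 5 MHz
fs_target = 4_096_000                   # rate giving fs/N = exactly 1000 Hz
N = 4096

t = np.arange(N * 4) / fs_adc
x = np.sin(2 * np.pi * 123_000 * t)     # 123 kHz test tone

# 4_096_000 / 5_000_000 reduces to 512/625, so a polyphase rational
# resampler performs the conversion exactly.
y = resample_poly(x, up=512, down=625)[:N]

spec = np.abs(np.fft.rfft(y * get_window("hann", N)))
bin_hz = fs_target / N                  # exactly 1000 Hz per bin
print("bin spacing: %.1f Hz" % bin_hz)
print("peak at %.0f Hz" % (np.argmax(spec) * bin_hz))
```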

Proceedings ArticleDOI
31 Oct 2008
TL;DR: The challenge of implementing multiple legacy instruments and arbitrating multiple legacy instrument calls within a single synthetic platform is addressed.
Abstract: Modern test systems employ dynamically configurable synthetic instruments to meet the measurement requirements of legacy test systems. A synthetic instrument contains all of the hardware and software building blocks and components required to implement the functionality of multiple legacy instruments, as well as the associated measurements. This paper addresses the challenge of implementing multiple legacy instruments and arbitrating multiple legacy instrument calls within a single synthetic platform. The approach discussed has been successfully implemented and fielded in the Aeroflex SMART^E family of products. This paper starts with a discussion of the requirements associated with emulating legacy instruments within a synthetic test environment. The software architecture associated with this environment can successfully support function calls to legacy instruments. The software architecture description is then expanded to show how this single architecture also supports a signals and measurement based approach to synthetic instrument control. This approach ensures the most efficient use of the system's hardware and software resources and allows the overall test environment to realize the full potential of a synthetic architecture. Some examples of both legacy instrument emulation and signals based control of a synthetic test environment are also presented.
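
One way to picture the arbitration challenge: several legacy driver facades share a single synthetic resource behind a lock that serializes conflicting calls. The class names and interfaces below are illustrative stand-ins, not the SMART^E architecture:

```python
# Sketch: legacy instrument facades arbitrated over one synthetic core.
import threading

class SyntheticCore:
    """Single shared digitizer/synthesizer resource."""
    def __init__(self):
        self._lock = threading.Lock()
    def measure(self, function, **settings):
        with self._lock:              # one legacy "instrument" at a time
            return f"{function} measured with {settings}"

class LegacyDMM:
    def __init__(self, core):
        self.core = core
    def read_dc_volts(self, range_v):
        return self.core.measure("dc_volts", range_v=range_v)

class LegacyCounter:
    def __init__(self, core):
        self.core = core
    def read_frequency(self, gate_s):
        return self.core.measure("frequency", gate_s=gate_s)

core = SyntheticCore()
dmm, counter = LegacyDMM(core), LegacyCounter(core)
print(dmm.read_dc_volts(range_v=10))
print(counter.read_frequency(gate_s=0.1))
```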

Proceedings ArticleDOI
31 Oct 2008
TL;DR: Techniques for running tests in parallel for different test tasks are discussed and the factors that affect each type of task's performance are covered.
Abstract: Reducing test time continues to be a priority for test program developers as the complexity of next generation products and devices increases. Test program developers must provide complete test coverage while maintaining or reducing the test time of previous versions as complexity and feature concentration increases. Developers can use different techniques for reducing products' test times, such as running tests in parallel, which reduces test time without sacrificing test coverage or quality. Other methods include decreasing test coverage by omitting certain lower priority tests or decreasing the quality of tests by only covering a subset of the ranges across which functionality is tested. This paper will discuss techniques for running tests in parallel for different test tasks and cover the factors that affect each type of task's performance.
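
A sketch of the parallel technique for instrument-bound (I/O-dominated) tests, using a thread pool as a stand-in for whatever parallel executor a real test sequencer provides; the tests themselves are placeholders:

```python
# Sketch: independent test tasks dispatched in parallel.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_test(name, duration):
    """Placeholder for an instrument-bound test (I/O wait dominates)."""
    time.sleep(duration)                 # e.g. waiting on a measurement
    return name, "PASS"

tests = [("dc_levels", 0.5), ("ripple", 0.7), ("freq_response", 1.0),
         ("crosstalk", 0.6)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_test, n, d) for n, d in tests]
    for fut in as_completed(futures):
        name, verdict = fut.result()
        print(f"{name}: {verdict}")
print("wall time: %.2f s (vs %.2f s serial)"
      % (time.perf_counter() - start, sum(d for _, d in tests)))
```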

Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the authors presented the design, manufacture, and testing of a battery-free, wireless threshold accelerometer based on a fully compliant bistable mechanism (FCBM), which stores threshold acceleration measurements mechanically, eliminating the need for electrical power.
Abstract: This paper presents the design, manufacture, and testing of a battery-free, wireless threshold accelerometer based on a fully compliant bistable mechanism (FCBM). The FCBM stores threshold acceleration measurements mechanically, eliminating the need for electrical power. The sensor's state can be read wirelessly via a passive RFID tag. Because information can be stored in these tags for over 25 years, the sensor can be left unattended for long periods of time. The FCBM portion of the sensor is laser cut from a single sheet of plastic (Delrin) and integrated with an Atmel ATA5570 RFID chip. Both elements can be manufactured at low cost. The G-force needed to exceed the shock threshold can be varied by changing the mass of the FCBM. Multiple sensors were tested using three different methods. The first method was a centrifuge, providing a constant force input. The second method was a drop test that gave an impulse input to the sensor. The final method used a shaker table to provide a sinusoidal input. In each of these tests, it was found that the FCBM sensed the correct acceleration and retained its mechanical state. A number of prototype sensors were constructed with different masses, resulting in threshold accelerations between 15 and 180 G's. The overall size of these sensors was approximately 28 mm x 26 mm. The RFID tags operate at 150 kHz and were read using a commercial off-the-shelf reader with a range of approximately 3 cm. Longer range readers are readily available at higher operating frequencies.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: The significance of 1641 to the Policy and the work undertaken by UK MOD in promoting the standard to UK Industry is explained by describing the function of the working groups and Industrial Liaison Groups.
Abstract: The UK MOD has closely monitored and supported the development of IEEE Std 1641-2004 [3] through its Standards Liaison Group for Automatic Test and Standards Technical Working Group for Automatic Test, which meet regularly with UK industry. Feedback from those open meetings is provided as technical input directly to IEEE Standards Coordinating Committee 20 (SCC20). IEEE 1641 and related test standards are key in the pan-MOD Policy for ATS, and the MOD continues to sponsor trial use of the Standard through a number of demonstrator projects. These projects encourage industry participation and lead to feedback that is fed into the aforementioned working groups to create proposals which enhance 1641 and related standards. This paper discusses the main features of the UK MOD Policy for ATS. It explains the significance of 1641 to the Policy and the work undertaken by UK MOD in promoting the standard to UK industry by describing the function of the working groups and Industrial Liaison Groups. A summary is provided of the demonstrator projects undertaken on behalf of the MOD, their outcomes and the proposals in store for future work.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, two types of optimized diagnostic strategy generation methods with unreliable tests are introduced, emphasizing test cost and diagnostic accuracy respectively; the proposed strategies can be applied to scientifically guide maintenance engineers in choosing diagnostic strategies.
Abstract: With increased recognition of the importance of design for testability, there is an increasing trend toward the development of efficient test strategies for system maintenance. An important issue in practical situations is the imperfect nature of tests, i.e., tests can have missed detections and false alarms. An erroneous diagnosis may result in false removal and missed repair of components, and thus increases the cost of maintenance. Previous works have studied the sequential test strategy problem but commonly lack a scientific evaluation system for diagnostic strategies. Furthermore, the expected testing cost is treated as the sole criterion to evaluate test strategies, but diagnostic accuracy is another important criterion when considering test uncertainty (unreliable tests). This paper addresses these problems: it presents several evaluation functions for sequential test strategies and introduces two types of optimized diagnostic strategy generation methods with unreliable tests, emphasizing test cost and diagnostic accuracy respectively. Simulation results validate the reasonableness and practicability of the approach. The proposed strategies can be applied to scientifically guide maintenance engineers in choosing diagnostic strategies.
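
Why diagnostic accuracy must accompany expected cost as a criterion can be shown with a single Bayes update under missed-detection and false-alarm probabilities; all probabilities below are illustrative:

```python
# Sketch: posterior fault probability from one imperfect test outcome.
def posterior_faulty(prior, p_detect, p_false_alarm, test_says_fail):
    """P(faulty | test outcome) for a single unreliable test."""
    if test_says_fail:
        num = p_detect * prior
        den = num + p_false_alarm * (1 - prior)
    else:
        num = (1 - p_detect) * prior
        den = num + (1 - p_false_alarm) * (1 - prior)
    return num / den

prior = 0.10                        # 10% of units arrive actually faulty
perfect = posterior_faulty(prior, 1.00, 0.00, True)
cheap = posterior_faulty(prior, 0.90, 0.08, True)   # cheap unreliable test
print("P(faulty | FAIL), perfect test: %.2f" % perfect)
print("P(faulty | FAIL), cheap test  : %.2f" % cheap)
# A strategy optimized on cost alone might stop here and pull the unit;
# with the cheap test, roughly 44% of FAIL verdicts would be false removals.
```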

Proceedings ArticleDOI
31 Oct 2008
TL;DR: This paper models the revenue generation capability of a product through its life cycle and demonstrates that the impact on profits of poor quality products can be much greater.
Abstract: Design for testability (DFT) has been accepted as a cost-saving approach in many applications. The rationale has been demonstrated in several papers by calculations showing how the benefits derived from DFT outweigh its costs. In those cases the benefits dealt with cost savings in producing, testing and deploying a product. Such benefits are bound by an upper limit, as the savings cannot be greater than the cost of making the product. In this paper, however, we model the revenue generation capability of a product through its life cycle and demonstrate that the impact on profits of poor quality products can be much greater. DFT can mitigate many of the causes of poor quality products. We provide examples of recent economic harm and disaster caused by defective electronics. For these examples we postulate what DFT could have done to mitigate the defects and how much money the companies could have saved if DFT had prevented the problems. The benefits derived from DFT over the product life cycle combine with the traditionally demonstrated DFT benefits in production, making DFT even more compelling.
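
The life-cycle argument can be made concrete with a toy revenue model: production-side DFT savings are capped by unit cost, but quality escapes erode revenue over the whole life cycle. Every figure below is invented for illustration:

```python
# Sketch: life-cycle profit with and without DFT (all numbers invented).
UNITS = 100_000
UNIT_COST = 200.0
UNIT_PRICE = 300.0
DFT_COST_PER_UNIT = 2.0
ESCAPE_RATE_NO_DFT = 0.02          # field-failure escapes without DFT
ESCAPE_RATE_DFT = 0.002
RECALL_COST_PER_ESCAPE = 400.0     # repair, logistics, warranty
LOST_SALES_FACTOR = 0.05           # reputation-driven future revenue loss

def lifecycle_profit(escape_rate, dft_cost):
    revenue = UNITS * UNIT_PRICE
    recall = UNITS * escape_rate * RECALL_COST_PER_ESCAPE
    lost = revenue * LOST_SALES_FACTOR * (escape_rate / ESCAPE_RATE_NO_DFT)
    return revenue - UNITS * (UNIT_COST + dft_cost) - recall - lost

print("profit without DFT: $%.0f" % lifecycle_profit(ESCAPE_RATE_NO_DFT, 0))
print("profit with DFT   : $%.0f"
      % lifecycle_profit(ESCAPE_RATE_DFT, DFT_COST_PER_UNIT))
```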

Proceedings ArticleDOI
31 Oct 2008
TL;DR: In this article, the authors describe an approach to answer the question of where one should invest to achieve the greatest return in an iterative continuous improvement process and present a common reference model to coordinate partner actions as well as estimate partner return on investment across organization (inter and intra) boundaries.
Abstract: The emergence of technologies that support the implementation of condition-based maintenance and autonomic logistics systems motivates the examination of the support systems associated with existing fleets of military and commercial aircraft, with the goal of improving fleet performance. This paper describes an approach to answering the question of where one should invest to achieve the greatest return in an iterative continuous improvement process. Key activities or process features that may yield return include the implementation of: diagnostic and prognostic strategies for line replaceable units (LRUs) in major platform subsystems; maintenance, logistics and operations planning and management; intermediate/depot test and repair strategies; and infrastructure functions to move data off the platform as well as to capture crew and maintenance personnel input. This paper builds on earlier work in applying the theory of constraints (TOC) as the core of a continuous improvement process to direct investments in legacy support systems. The paper casts fleet support as a process where throughput is fleet readiness; inventory is the platforms, support equipment, spares and maintainers; and work in process is the current or predicted faults/degradations that must be addressed to return the platform to service requirements or opportunistically maintain its serviceability. In comparison to the earlier work, the process model has been improved and is more comprehensive. The fleet simulation tool that is used as the basis for examining potential bottlenecks and for estimating the return of TOC-compliant improvement strategies (those that simultaneously increase throughput while reducing inventory and work in process) has been upgraded. Also, the role and use of an analysis/data-mining tool to identify process bottlenecks based on fleet data are described. A defined and accepted approach for the systematic improvement of fleet support is made all the more crucial by the active involvement of multiple partners (which may include the customer) in ventures such as performance-based logistics. It provides the common reference model to coordinate partner actions as well as to estimate partner return on investment across organizational (inter and intra) boundaries.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: A three-tiered methodology for testing FPGA user designs for space-readiness is described, comprising the standard approach using a particle accelerator as well as two methods using fault injection and modeling.
Abstract: Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been a very active area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-sensitive memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper describes a three-tiered methodology for testing FPGA user designs for space-readiness. We describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and modeling. While accelerator testing is the current "gold standard" for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors earlier in the design process.
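
TMR itself is easy to sketch, which also shows why faulty TMR application matters: a bitwise majority voter masks an upset in one copy but is defeated when two copies are upset in the same bit. The computation and upset positions below are illustrative:

```python
# Sketch: triple-modular redundancy with fault injection.
def module(x):
    return (x * 3 + 1) & 0xFF             # stand-in user circuit

def voter(a, b, c):
    return (a & b) | (a & c) | (b & c)    # bitwise majority of three copies

def tmr_run(x, upset_bits=(None, None, None)):
    """Run triplicated module; flip the given bit in each upset copy."""
    outs = [module(x)] * 3
    outs = [o ^ (1 << b) if b is not None else o
            for o, b in zip(outs, upset_bits)]
    return voter(*outs)

x = 42
good = module(x)
print("no upset         :", tmr_run(x) == good)                  # True
print("one copy upset   :", tmr_run(x, (5, None, None)) == good) # masked
print("same bit, 2 copies:", tmr_run(x, (5, 5, None)) == good)   # defeated
```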

Proceedings ArticleDOI
31 Oct 2008
TL;DR: The proposed framework development approach makes full use of technologies such as the unified modeling language (UML) and design patterns, and can greatly reduce the effort required for developing all kinds of hydraulic simulation test frameworks while achieving higher productivity, reusability, extensibility and maintainability.
Abstract: Aircraft hydraulic systems are used to actuate flight control surfaces, thrust vectoring and reversing mechanisms, landing gear, cargo doors, and in some cases, weapon systems. For hydraulic system simulation testing, a test and measurement system was developed and programmed on a prototype. A distributed test and measurement system is the best option for aircraft hydraulic system simulation testing, since it must collect signals of many types from many channels distributed across different positions. In order to improve the software reusability and development efficiency of test and measurement software, after analyzing the features and requirements of the test object, a framework development approach for aircraft hydraulic simulation testing is presented. This approach makes full use of technologies such as the unified modeling language (UML) and design patterns. The method can greatly reduce the effort required for developing all kinds of hydraulic simulation test frameworks and achieve higher productivity, better reusability, extensibility and maintainability.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: This paper will propose an implementation of ATML in the ATS development and execution workflow in order to motivate the use of this standard in ATS development and generate discussion in the ATML community.
Abstract: The lack of standards for automatic test information has stifled the development of more efficient and interoperable test systems to better meet next generation automatic test challenges. In response, the Naval Air Systems Command led the creation of the automatic test markup language (ATML) to standardize the exchange medium for sharing information between components of an automatic test system. ATML defines component standards that represent the components of an automatic test system (ATS), such as test results, test description and instrument description, and the interoperability between these standards. The ATML specification standardizes how components are documented but doesn't elaborate on how ATS developers should use the standards in the design and execution of an ATS. This paper will propose an implementation of ATML in the ATS development and execution workflow in order to motivate the use of this standard in ATS development and generate discussion in the ATML community.

Proceedings ArticleDOI
31 Oct 2008
TL;DR: A novel approach to printed circuit board (PCB) testing that fuses the products of individual, non-traditional sensors to draw conclusions regarding overall PCB health and performance is described.
Abstract: This paper describes a novel approach to printed circuit board (PCB) testing that fuses the products of individual, non-traditional sensors to draw conclusions regarding overall PCB health and performance. This approach supplements existing parametric test capabilities with the inclusion of sensors for electromagnetic emissions, laser Doppler vibrometry, off-gassing and material parameters, and X-ray and Terahertz spectral images of the PCB. This approach lends itself to the detection and prediction of entire classes of anomalies, degraded performance, and failures that are not detectable using current automatic test equipment (ATE) or other test devices performing end-to-end diagnostic testing of individual signal parameters. This greater performance comes with a smaller price tag in terms of non-recurring development and recurring maintenance costs over currently existing test program sets. The complexities of interfacing diverse and unique sensor technologies with the PCB are discussed from both the hardware and software perspective. Issues pertaining to creating a whole-PCB interface, not just at the card-edge connectors, are addressed. In addition, we discuss methods of integrating and interpreting the unique software inputs obtained from the various sensors to determine the existence of anomalies that may be indicative of existing or pending failures within the PCB. Indications of how these new sensor technologies may comprise future test systems, as well as their retrofit into existing test systems, will also be provided.
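
One plausible fusion rule for such a system is a confidence-weighted average of per-modality anomaly scores; the sensor list, scores, weights, and threshold below are placeholders, not the paper's algorithm:

```python
# Sketch: fusing heterogeneous sensor verdicts into one PCB health score.
def fuse(readings):
    """readings: list of (anomaly_score, confidence) per sensor modality."""
    total_w = sum(w for _, w in readings)
    return sum(s * w for s, w in readings) / total_w

board_scan = [
    (0.10, 0.9),   # electromagnetic emissions: nominal
    (0.75, 0.7),   # laser Doppler vibrometry: shifted resonance
    (0.60, 0.4),   # off-gassing: mildly elevated
    (0.20, 0.8),   # X-ray image classifier: nominal
]
health = fuse(board_scan)
print("fused anomaly score: %.2f ->" % health,
      "inspect board" if health > 0.5 else "board healthy")
```

Note how the weighted fusion tempers a single-modality alarm (the vibrometry score) unless other sensors corroborate it, which is one way a fused verdict can reduce false calls relative to any individual detector.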

Proceedings ArticleDOI
31 Oct 2008
TL;DR: The reasoning behind the successful design of a multiprocessor program, the relationship between multi-core architectures and program performance, and several techniques for implementing synchronization and coordination methods without any special tools or packages are discussed.
Abstract: The computer industry is undergoing a continuing paradigm shift from ever increasingly faster single-core processor systems to the hyper-threaded and multi-core systems that we are seeing today. To continue leveraging the advantage of these systems, the programmers must also undergo a paradigm shift in the way that they design and develop software for these systems. The availability of additional cores and threads does not in itself guarantee increased performance, and in some cases may actually impede it. Concurrency, a software term for using resources at the same time, is the most important factor in achieving optimum performance in today's computing systems. Multi-core systems provide parallelism in addition to concurrency by providing additional processing elements (CPUs) that allow multiple threads to run simultaneously. This comes at a cost though, because the threads must be synchronized with the overall program flow. This paper discusses the reasoning behind the successful design of a multiprocessor program, the relationship between multi-core architectures and program performance, and provides several techniques for implementing synchronization and coordination methods without any special tools or packages.
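
The synchronization-without-special-tools point can be sketched with only standard primitives (a lock plus a condition variable); note that in CPython the GIL limits CPU-bound threads to concurrency rather than true parallelism, so the coordination pattern, not the speedup, is what this illustrates:

```python
# Sketch: worker threads coordinated with a lock and condition variable.
import threading

results = []
lock = threading.Lock()
done = threading.Condition(lock)
remaining = 4

def worker(n):
    global remaining
    partial = sum(i * i for i in range(n * 100_000, (n + 1) * 100_000))
    with lock:                    # synchronize access to shared state
        results.append(partial)
        remaining -= 1
        if remaining == 0:
            done.notify_all()     # wake the coordinator

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for th in threads:
    th.start()
with lock:
    done.wait_for(lambda: remaining == 0)   # predicate avoids lost wakeups
print("combined result:", sum(results))
for th in threads:
    th.join()
```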