
Showing papers presented at AUTOTESTCON 2010


Proceedings ArticleDOI
28 Oct 2010
TL;DR: In this article, an accelerated aging system for gate-controlled power transistors is presented; it enables the study of the effects of failure mechanisms and the identification of leading indicators of failure, which are essential in the development of physics-based degradation models and RUL prediction.
Abstract: Prognostics is an engineering discipline that focuses on estimation of the health state of a component and the prediction of its remaining useful life (RUL) before failure. Health state estimation is based on actual conditions and is fundamental for the prediction of RUL under anticipated future usage. Failure of electronic devices is of great concern as future aircraft will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. Therefore, development of prognostics solutions for electronics is of key importance. This paper presents an accelerated aging system for gate-controlled power transistors. This system allows for the understanding of the effects of failure mechanisms and the identification of leading indicators of failure, which are essential in the development of physics-based degradation models and RUL prediction. In particular, this system isolates electrical overstress from thermal overstress. It also allows precise control of internal temperatures, enabling the exploration of intrinsic failure mechanisms not related to the device packaging. By controlling the temperature within the safe operating limits of the device, accelerated aging is induced by electrical overstress only, avoiding the generation of thermal cycles. The temperature is controlled by active thermo-electric units. Several electrical and thermal signals are measured in situ and recorded for further analysis in the identification of leading indicators of failure. This system therefore provides a unique capability for exploring different failure mechanisms and identifying precursors of failure that can be used to provide a health management solution for electronic devices.
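
As a rough illustration of the aging-loop logic described above, the following Python sketch holds the device temperature at a safe setpoint with a thermo-electric (TEC) drive while electrical overstress is applied and in-situ signals are logged. All names, gains, and readings are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical sketch of the aging loop: hold junction temperature at a safe
# setpoint with a thermo-electric (TEC) drive while electrical overstress is
# applied, and log in-situ signals. All values are illustrative assumptions.

SETPOINT_C = 50.0     # safe junction-temperature target
KP = 2.0              # proportional gain of the TEC controller

def read_temperature():
    # stand-in for a real thermocouple / IR sensor read
    return SETPOINT_C + random.uniform(-3.0, 3.0)

for step in range(10):
    temp = read_temperature()
    tec_drive = KP * (SETPOINT_C - temp)  # cool when hot, heat when cold
    # a gate-overstress pulse would be applied here: electrical stress only,
    # with temperature held in the safe region, so no thermal cycling occurs
    print(f"step={step:2d}  T={temp:6.2f} C  tec_drive={tec_drive:+.2f}")
```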

69 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: In this article, a model-based approach to studying degradation phenomena combines physics-based modeling of the DC-DC converter with physics-of-failure models of capacitor degradation and uses stochastic simulation to predict how system performance deteriorates with time.
Abstract: This paper proposes the experiments and setups for studying diagnosis and prognosis of electrolytic capacitors in DC-DC power converters. Electrolytic capacitors and power MOSFETs have higher failure rates than other components in DC-DC converter systems. Currently, our work focuses on experimental analysis and modeling of electrolytic capacitor degradation and its effects on the output of DC-DC converter systems. Capacitor degradation is typically measured by an increase in equivalent series resistance (ESR) and a decrease in capacitance, which lead to output ripple currents. Typically, the ripple current effects dominate, and they can have adverse effects on downstream components. A model-based approach to studying degradation phenomena enables us to combine physics-based modeling of the DC-DC converter with physics-of-failure models of capacitor degradation and to predict, using stochastic simulation methods, how system performance deteriorates with time. Degradation experiments were conducted in which electrolytic capacitors were subjected to electrical and thermal stress to accelerate the aging of the system. This more systematic analysis may provide a more general and accurate method for computing the remaining useful life (RUL) of the component and the converter system.
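
To make the degradation-to-ripple relationship concrete, here is a minimal sketch, assuming a buck-type converter, of the standard output-ripple approximation Vripple ≈ ΔI·ESR + ΔI/(8·f_sw·C); the ESR growth and capacitance loss trajectories below are invented for illustration, not the paper's model.

```python
# Sketch: buck-converter output ripple as the capacitor ages.
#   v_ripple ≈ di * esr + di / (8 * f_sw * c)
# ESR growth and capacitance loss rates below are assumptions.

F_SW = 100e3   # switching frequency [Hz]
DI = 0.5       # inductor ripple current [A]

def output_ripple(esr_ohm, cap_farad):
    return DI * esr_ohm + DI / (8.0 * F_SW * cap_farad)

esr0, c0 = 0.05, 2200e-6                  # fresh-capacitor values
for hours in (0, 500, 1000, 2000):
    esr = esr0 * (1 + 0.0005 * hours)     # assumed linear ESR growth
    cap = c0 * (1 - 0.00005 * hours)      # assumed capacitance loss
    print(f"{hours:5d} h: ripple = {1e3 * output_ripple(esr, cap):.1f} mV")
```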

55 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: The innovations in test automation in the NCR, the potential adaptation of NCR technology to network-centric system support systems, and the implications for mission readiness are discussed.
Abstract: Network Centric System operation is the core of our military environment today. While much research and development has been accomplished in technology to support creating and exploiting these increasingly complex, interdependent systems, testing technology has not kept pace with the rate of technology advancement. As our dependence on network-centric operation grows, the limitations of our ability to rapidly and accurately test a distributed information system are a key challenge to mission readiness. The National Cyber Range (NCR) is a Defense Advanced Research Projects Agency (DARPA) program that is currently focused on addressing the challenge of testing cyber technologies. The NCR will be a scalable (to thousands of nodes), secure, reconfigurable, high-fidelity test range to rapidly assess emerging cyber technology. Key innovations include automation for test range configuration and validation, test instrumentation, and test data analysis, as well as a scientific testing methodology for large-scale cyber systems. The vision of the NCR program is to create a general-purpose test range that can be quickly repurposed to conduct evaluations of cyber technology or architectures in much the same way that general-purpose automated test systems like USN CASS are used to support the test and diagnostics of a wide range of electronic, electro-optical and electro-mechanical devices. NCR is advancing the mission of automated test beyond the production test and maintenance of fielded weapon systems or equipment, however, by extending the advantages of automated test to the front end of the product lifecycle, where it has been absent but desperately needed as a design aid. The automated cyber range will be used to support experimentation, evaluate early prototypes and directly conduct design verification testing. This paper will discuss the innovations in test automation in the NCR, the potential adaptation of NCR technology to network-centric system support systems and the implications for mission readiness.

24 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: Simulations of the Ad hoc On-Demand Distance Vector (AODV) ZigBee routing protocol with different traffic scenarios (CBR, FTP, and Poisson) and different network topologies have been performed to calculate parameters such as end-to-end packet delay and jitter.
Abstract: In this paper, ZigBee is used as a communication medium in home automation and networking. Simulations of the Ad hoc On-Demand Distance Vector (AODV) ZigBee routing protocol have been performed with different traffic scenarios, such as CBR, FTP, and Poisson, and with different network topologies. Trace analysis is carried out to calculate parameters such as end-to-end packet delay and jitter, which are key parameters in determining quality of service (QoS). Various queue types, such as Drop Tail, Stochastic Fair Queue (SFQ), and Random Early Detection (RED), are used to calculate delay and jitter under the various traffic scenarios.
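
The delay and jitter figures come from trace analysis; a minimal sketch of that computation, with invented (send, receive) timestamp records standing in for parsed simulator trace lines, might look like this:

```python
# Compute per-packet end-to-end delay from (send_time, recv_time) pairs and
# jitter as the mean absolute difference between consecutive delays.
# The sample records are made up; a real run would parse trace lines instead.

records = [  # (packet_id, send_time_s, recv_time_s)
    (1, 0.000, 0.012), (2, 0.020, 0.031), (3, 0.040, 0.055), (4, 0.060, 0.070),
]

delays = [rx - tx for _, tx, rx in records]
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"mean end-to-end delay: {1e3 * sum(delays) / len(delays):.2f} ms")
print(f"mean jitter:           {1e3 * jitter:.2f} ms")
```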

18 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: This paper describes the forthcoming investigation, exemplifying how the data warehouse holding various sources of data about ALRE systems will be utilized to improve the education of maintainers, enhance maintenance practices, understand the causes of component failures, and provide solutions to diagnose these failures.
Abstract: For Aircraft Launch and Recovery Equipment (ALRE), the goal is to get planes in the air and ensure they land safely. Consequently, a high operational availability (Ao) is crucial to ALRE operations. In order to ensure high Ao, it is crucial that the amount of maintenance, both corrective and preventative, is kept to a minimum. Historically, improvements have been reactive in nature to satisfy the Fleet's needs of the moment and are never implemented across the Fleet. One approach to improving maintenance practices is to use historical data in combination with data mining to determine where and how maintenance procedures can be changed or enhanced. For example, if a maintenance manual says to remove three electronics boxes based on a built-in test (BIT) code, but historically the data shows that removing and replacing two of the boxes never fixes the problem, then the maintainer can be directed to first remove and replace the box which the data suggests is the most likely cause of failure. This type of improvement is where data mining can be used to enhance or modify maintenance procedures. The Integrated Support Environment (ISE) team and the Integrated Diagnostics and Automated Test Systems (IDATS) team of NAVAIR Lakehurst are jointly investigating the use of data mining as an important tool to enhance ALRE systems and to potentially decrease preventive maintenance on board Navy vessels, thereby reducing the total cost of ownership. The authors' approach is to use maintenance actions, system performance data, and supply information to draw a clear picture of the failures, diagnoses and repair actions for specific components of ALRE systems. The authors are using a commercial off-the-shelf (COTS) data mining suite, called SPSS Clementine, alongside custom software tools to detect the meaningful, yet hidden, patterns within the mountain of data associated with ALRE systems. SPSS Clementine is one of the data mining industry's premier tools, allowing rapid development of models for data mining. Additionally, ALRE subject matter experts (SMEs) were consulted to ensure the validity of the teams' findings. The combination of modern data mining practices and expert knowledge of ALRE systems will be leveraged to improve the maintenance performed at the O-level and to possibly understand why the failure happened in the first place. This paper will describe the forthcoming investigation, exemplifying how the data warehouse holding various sources of data about ALRE systems will be utilized to improve the education of maintainers and to enhance maintenance practices, to understand the cause of component failures, as well as to provide solutions to diagnose these failures. Utilizing the database systems and data mining expertise that the ISE team provides, combined with SME knowledge, non-trivial solutions to ALRE maintenance practices will be uncovered to improve the maintenance environment on-ship.
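
The box-replacement example above is easy to mechanize; as a toy illustration (not the NAVAIR teams' tools), the following ranks candidate boxes for a BIT code by how often replacing them actually cleared the fault in historical records:

```python
# Toy illustration of the BIT-code example: given historical
# (bit_code, box_replaced, fault_cleared) records, rank candidate boxes by
# how often replacing them actually cleared the fault.

from collections import defaultdict

history = [  # (bit_code, box, fault_cleared) - invented records
    ("BIT-17", "box_A", False), ("BIT-17", "box_B", True),
    ("BIT-17", "box_C", False), ("BIT-17", "box_B", True),
    ("BIT-17", "box_A", False),
]

stats = defaultdict(lambda: [0, 0])          # box -> [fixes, attempts]
for code, box, fixed in history:
    if code == "BIT-17":
        stats[box][1] += 1
        stats[box][0] += int(fixed)

ranking = sorted(stats.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for box, (fixes, attempts) in ranking:
    print(f"{box}: fixed {fixes}/{attempts} times")
```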

16 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: Examples of how FPGAs are being used for sensor simulation to create better, more adaptable HIL test systems are discussed.
Abstract: As embedded control devices become more common in today's electro-mechanical systems, HIL simulation is growing in its importance to the success of these systems. HIL testing provides a simulated environment for the unit under test, simulating the parts of the system that are not physically present. As these systems grow in complexity, traditional HIL simulation techniques are falling short. Fortunately, technologies such as Field Programmable Gate Arrays (FPGAs) are being applied to produce the next generation of HIL simulators. FPGAs enable test system developers to create custom hardware that can be easily reconfigured without physically modifying the device. In addition to being reconfigurable, for certain applications, FPGAs can offer superior performance compared to microprocessors. More specifically for HIL test systems, FPGA-based I/O devices provide superior determinism, on the order of nanoseconds, enabling realistic simulation of plant components not typically realizable with microprocessor-only systems. They are also used to off-load some of the processing that would otherwise be required of the test system microprocessor, thereby increasing total system bandwidth. Because of the ease with which their personalities can be reconfigured, FPGAs are also used in HIL test systems to create custom I/O interfaces as well as I/O interfaces that can adapt to multiple UUT types or to changes in UUT interfaces that evolve during product development. In this paper, we will discuss examples of how FPGAs are being used for sensor simulation to create better, more adaptable HIL test systems.
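
As a behavioral illustration of the sensor simulation role described above, the sketch below generates the A/B states of an incremental quadrature encoder for a commanded speed; on real hardware this logic would run in FPGA fabric with deterministic timing, so the Python here is only a model of the behavior, not an implementation.

```python
# Behavioral model of a sensor an HIL FPGA might simulate: the A/B quadrature
# outputs of an incremental encoder for a commanded shaft speed.

QUAD_STATES = [(0, 0), (1, 0), (1, 1), (0, 1)]   # Gray-coded A/B sequence

def encoder_states(speed_counts_per_s, sample_rate_hz, n_samples):
    position = 0.0
    for _ in range(n_samples):
        position += speed_counts_per_s / sample_rate_hz
        yield QUAD_STATES[int(position) % 4]

for a, b in encoder_states(speed_counts_per_s=4, sample_rate_hz=16, n_samples=8):
    print(f"A={a} B={b}")
```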

16 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: A .NET framework is presented as the integrating software platform linking all constituent modules of a fault diagnosis and failure prognosis architecture; using Bayesian estimation theory, a generic particle-filtering-based framework is integrated for fault diagnosis and failure prognosis.
Abstract: This paper presents a .NET framework as the integrating software platform linking all constituent modules of the fault diagnosis and failure prognosis architecture. The inherent characteristics of the .NET framework provide the proposed system with a generic architecture for fault diagnosis and failure prognosis for a variety of applications. The modules for data processing, feature extraction, fault diagnosis, and failure prognosis are built as .NET components that are developed separately and independently in any of the .NET languages. With the use of Bayesian estimation theory, a generic particle-filtering-based framework is integrated in the system for fault diagnosis and failure prognosis. The system is tested in two different applications: bearing spalling fault diagnosis and failure prognosis, and brushless DC motor turn-to-turn winding fault diagnosis. The results suggest that the system is capable of meeting performance requirements specified by both the developer and the user for a variety of engineering systems.
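
The paper's modules are .NET components; purely to illustrate the particle-filtering idea they share, here is a minimal bootstrap filter for a scalar degradation state with noisy observations. The model, noise levels, and measurements are illustrative assumptions.

```python
import math
import random

# Minimal bootstrap particle filter for a scalar degradation state
#   x_k = x_{k-1} + drift + process noise,  z_k = x_k + measurement noise.
# All parameters and measurements below are invented for illustration.

N = 500
particles = [random.gauss(0.0, 0.1) for _ in range(N)]

def step(particles, z, drift=0.05, q=0.02, r=0.1):
    # propagate each particle through the assumed degradation model
    particles = [x + drift + random.gauss(0.0, q) for x in particles]
    # weight by Gaussian measurement likelihood, then resample
    weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    return random.choices(particles, weights=weights, k=len(particles))

for k, z in enumerate([0.06, 0.11, 0.14, 0.22], start=1):
    particles = step(particles, z)
    print(f"k={k}: estimated state = {sum(particles) / len(particles):.3f}")
```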

13 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: A distributed diagnosis approach for complex systems is introduced based on the TFPG model; a high-level diagnoser integrates the diagnosis results of the local subsystems using an abstract high-level model to obtain a globally consistent diagnosis of the system.
Abstract: Timed failure propagation graph (TFPG) is a directed graph model that represents temporal progression of failure effects in physical systems. In this paper, a distributed diagnosis approach for complex systems is introduced based on the TFPG model settings. In this approach, the system is partitioned into a set of local subsystems each represented by a subgraph of the global system TFPG model. Information flow between subsystems is achieved through special input and output nodes. A high level diagnoser integrates the diagnosis results of the local subsystems using an abstract high level model to obtain a globally consistent diagnosis of the system.
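
A small sketch of the TFPG idea may help: nodes are failure modes and discrepancies, edges carry [min, max] propagation delays, and a failure hypothesis is consistent if observed alarm times fall inside the cumulative delay windows. The graph and times below are invented; the paper's distributed diagnoser is far more general.

```python
# Tiny TFPG consistency check: propagate [min, max] delay windows from a
# hypothesized failure mode and test observed alarm times against them.

edges = {  # parent -> list of (child, t_min, t_max) in seconds
    "FM1": [("D1", 1.0, 2.0)],
    "D1":  [("D2", 0.5, 1.5)],
}

def windows(root, t0=0.0):
    """Earliest/latest activation window for every node reachable from root."""
    win = {root: (t0, t0)}
    stack = [root]
    while stack:
        node = stack.pop()
        lo, hi = win[node]
        for child, tmin, tmax in edges.get(node, []):
            win[child] = (lo + tmin, hi + tmax)
            stack.append(child)
    return win

observed = {"D1": 1.4, "D2": 2.1}   # alarm times relative to hypothesis start
win = windows("FM1")
consistent = all(win[n][0] <= t <= win[n][1] for n, t in observed.items())
print("hypothesis FM1 consistent with alarms:", consistent)
```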

13 citations


Proceedings ArticleDOI
Gerald Emmert1
28 Oct 2010
TL;DR: Testability modeling has been performed for many years, as discussed by the author, and has been used to assess whether a product meets a requirement to achieve a desired level of test coverage, but it has little proactive effect on making the design more testable.
Abstract: Testability modeling has been performed for many years. Unfortunately, the modeling of a design for testability is often performed after the design is complete. This limits the functional use of the testability model to determining what level of test coverage is available in the design. This information may be useful to help assess whether a product meets a requirement to achieve a desired level of test coverage, but has little proactive effect on making the design more testable. This paper will lay out the reasons for adding testability modeling to the design effort and the process by which modeling can be effectively utilized to improve an electrical design's testability. It will cover some of the assumptions that must be made initially about testability at a program's onset, the phases in a program's lifecycle in which testability modeling should be performed, the level of detail that a testability model should contain, the review and other processes that should be used to validate the model and determine whether design modifications/improvements should be performed, as well as the way testability should be incorporated into a program's overall test strategy. The information related in this document is based upon the personal experience of the author as the company he works for attempts to better utilize new and existing design tools and processes to improve the manufacturability of its designs and reduce the overall cost of its products in an ever more challenging defense acquisition environment.

12 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: A verification methodology based on exact filtering and the Monte Carlo method is proposed to verify a user-defined particle-filtering-based prognostic algorithm; it can be extended straightforwardly to the verification of other prognostic algorithms.
Abstract: Prognosis is a fundamental enabling technique for condition-based maintenance (CBM) systems and prognostics and health management (PHM) systems and, therefore, plays a critical role in the successful deployment of these systems. The purpose of prognosis is to predict the remaining useful life of a system/subsystem or a component when a fault is detected. Although different prognostic algorithms have been developed and tentatively applied to various mechanical and electrical systems in the past decade, verification and validation (V&V) remains a challenging open problem. The difficulties lie in the facts that, first, there are usually not statistically sufficient data for V&V and, second, there is no rigorous and general V&V framework available. In this paper, a verification methodology based on exact filtering and the Monte Carlo method is proposed to verify a user-defined particle-filtering-based prognostic algorithm. The methodology is a general one that can be extended straightforwardly to the verification of other prognostic algorithms. When statistically sufficient data are available, validation can be implemented under a similar framework. The verification methodology is demonstrated on the prognosis of a seeded-fault planetary helicopter gearbox carrier.
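
A bare-bones sketch of the Monte Carlo part of such verification: draw many simulated degradation histories with known ground-truth RUL, run the algorithm under test on each, and summarize the error. The linear-degradation model and the naive slope-extrapolation estimator below are stand-ins, not the paper's exact filter or algorithm.

```python
import random

THRESHOLD = 1.0   # failure threshold on the degradation variable

def simulate(rate, noise=0.02):
    """One noisy degradation trajectory, sampled until threshold crossing."""
    x, traj = 0.0, []
    while x < THRESHOLD:
        x += rate + random.gauss(0.0, noise)
        traj.append(max(x, 0.0))
    return traj

def rul_estimate(traj, k):
    """Algorithm under test (stand-in): extrapolate the mean observed slope."""
    slope = max(traj[k] / (k + 1), 1e-9)
    return (THRESHOLD - traj[k]) / slope

errors = []
for _ in range(200):
    traj = simulate(rate=random.uniform(0.02, 0.05))
    k = len(traj) // 2                      # predict from mid-life
    true_rul = (len(traj) - 1) - k          # steps remaining to threshold
    errors.append(rul_estimate(traj, k) - true_rul)

print(f"mean RUL error over 200 runs: {sum(errors) / len(errors):+.2f} steps")
```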

11 citations


Proceedings ArticleDOI
28 Oct 2010
TL;DR: Some of the fundamental design parameters and specifications of modern spectrum analyzers such as dynamic range, instantaneous bandwidth, and image rejection are presented and explored with a focus on maintaining system performance without sacrificing flexibility.
Abstract: Digital Signal Processing (DSP) has revolutionized spectral analysis. Where the swept spectrum analyzer dominated the market in the past, the Fast Fourier Transform (FFT) based spectrum analyzer is now gaining acceptance as the method of choice. This is due in part to the prevalence of high-speed, high-dynamic-range Analog-to-Digital Converters (ADCs) and high-speed signal processing devices such as Field Programmable Gate Arrays (FPGAs). Because the FFT-based spectrum analyzer is readily implemented with a limited set of generic hardware, it is an attractive technique for Synthetic Instruments (SI), where the goal is to form multiple measurement functions from a limited set of generic hardware modules. In this paper, some of the fundamental design parameters and specifications of modern spectrum analyzers, such as dynamic range, instantaneous bandwidth, and image rejection, are presented. These parameters are explored with a focus on maintaining system performance without sacrificing flexibility.
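
A minimal FFT spectrum computation makes these parameters concrete: instantaneous bandwidth is fs/2, bin spacing is fs/N, and a longer FFT lowers the displayed noise floor (processing gain), extending usable dynamic range. The sketch below uses plain NumPy with illustrative numbers.

```python
import numpy as np

fs, n = 1e6, 4096                       # sample rate, FFT length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100e3 * t) + 1e-3 * np.random.randn(n)  # tone + noise

win = np.hanning(n)                     # window to control spectral leakage
spec = np.fft.rfft(x * win)
power_db = 20 * np.log10(np.abs(spec) / (np.sum(win) / 2) + 1e-12)

peak = np.argmax(power_db)
print(f"instantaneous bandwidth: {fs / 2 / 1e3:.0f} kHz, bin width: {fs / n:.1f} Hz")
print(f"peak at {peak * fs / n / 1e3:.1f} kHz, {power_db[peak]:.1f} dBFS")
```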

Proceedings ArticleDOI
28 Oct 2010
TL;DR: A new product family from National Instruments offers commercial off-the-shelf (COTS) analog and digital I/O coupled to high-performance FPGA modules which can be programmed with the common, more abstract NI LabVIEW graphical programming language on the PXI platform.
Abstract: Of all types of test applications, none require higher reliability, greater customization, more application-specific protocols and interfaces, and higher performance for more complete test coverage than military and aerospace test systems. These requirements often dictate custom hardware and HDL-based FPGA programming, which drive higher costs, greater development time, and more maintenance, and require specialized development knowledge. A new product family from National Instruments offers commercial off-the-shelf (COTS) analog and digital I/O coupled to high-performance FPGA modules which can be programmed with the common, more abstract NI LabVIEW graphical programming language on the PXI platform. These NI FlexRIO products bring the prospect of customized hardware to a wider audience through COTS components which reduce costs and decrease long-term support requirements.

Proceedings ArticleDOI
28 Oct 2010
TL;DR: In this article, the authors present an innovative design for a prognostics and health management (PHM) data recorder that will facilitate sense-and-response logistics, and provide a small and inexpensive package.
Abstract: Novel prognostic sensors and reasoner algorithms are the core technology for detecting defects caused by accumulation of fatigue damage in electrical and mechanical systems over time. However, serious technical challenges to implementing a general health management strategy for helicopters and military aircraft still exist. For example, severe heat and vibration make it difficult to distinguish fault signatures from environmental noise. Moreover, bearing loads are very dynamic, making it difficult to distinguish subtle wear-out signatures from normal acoustic patterns. Detection can be improved by increasing the number of sensor locations, but this option is unattractive from the standpoint of added cost, weight, and data overhead of such a system. Our approach is to integrate MEMS sensors with a standard commercial microcontroller and measurement electronics. In this way, prognostic sensors can be positioned closer to the stressed components and provide higher fidelity data with lower cost. We present an innovative design for a prognostics and health management (PHM) data recorder that will facilitate sense-and-response logistics, and provide a small and inexpensive package. This low-cost, low-power, and lightweight solution is based largely on COTS components; it is implemented using a standard low-power lightweight microcontroller core and COTS MEMS sensors to record and process local temperature and vibration data, and status reporting is implemented using a short range wireless transceiver.
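
An illustrative sketch (ours, not the authors' firmware) of the recorder's main loop: sample the MEMS accelerometer and temperature sensor, reduce a vibration burst to an RMS feature, and queue a compact status record for the wireless link. The sensor reads are stand-ins.

```python
import math
import random

def read_accel_burst(n=64):
    return [random.gauss(0.0, 0.5) for _ in range(n)]    # g's (stand-in)

def read_temperature():
    return 25.0 + random.uniform(-0.5, 0.5)              # deg C (stand-in)

def make_record():
    burst = read_accel_burst()
    rms = math.sqrt(sum(a * a for a in burst) / len(burst))
    return {"temp_c": round(read_temperature(), 2), "vib_rms_g": round(rms, 3)}

tx_queue = [make_record() for _ in range(3)]   # radio would drain this queue
for rec in tx_queue:
    print(rec)
```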

Proceedings ArticleDOI
Charles D. Bishop1
28 Oct 2010
TL;DR: DSOs (Digital Sampling Oscilloscopes) generally allow the use of averaging to increase vertical resolution and lower uncorrelated noise.
Abstract: DSOs (Digital Sampling Oscilloscopes) generally allow the use of averaging to increase vertical resolution and lower uncorrelated noise. While averaging is a useful tool, it is important to remember that it is a type of filtering. Applying averaging successfully is easier if the user understands the characteristics of the filters being applied.
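
Averaging-as-filtering is easy to demonstrate: averaging N repeated acquisitions of a repetitive signal leaves the signal intact while reducing uncorrelated noise by roughly √N. A small NumPy sketch with illustrative numbers:

```python
import numpy as np

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)        # the repetitive signal

def noisy_acquisition():
    return clean + 0.3 * np.random.randn(t.size)   # uncorrelated noise

for n_avg in (1, 4, 16, 64):
    avg = np.mean([noisy_acquisition() for _ in range(n_avg)], axis=0)
    rms_noise = np.std(avg - clean)
    print(f"N={n_avg:3d}: residual noise RMS = {rms_noise:.3f}")
```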

Proceedings ArticleDOI
28 Oct 2010
TL;DR: Algorithms and hardware required to make vector network measurements are discussed, and measurements of a simple low-pass filter are compared to measurements taken by a vector network analyzer.
Abstract: This paper is about using sampling oscilloscopes to perform vector network analysis measurements. It discusses the algorithms and hardware required to make vector network measurements and then compares measurements of a simple low-pass filter to measurements taken by a vector network analyzer. Oscilloscopes and digitizers have been steadily increasing in bandwidth. At the same time, Interchangeable Virtual Instrument (IVI) drivers have made porting software between different scopes a simple matter. These technologies make investments in algorithm development more cost-effective: the algorithms have a longer life, cost less to maintain, and port easily to future scopes.
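
A toy version of the measurement idea, with the scope captures replaced by a simulated one-pole low-pass DUT: estimate |S21| by dividing the FFT of the through waveform by the FFT of the stimulus. A real setup would capture both waveforms on the oscilloscope and calibrate out fixturing.

```python
import numpy as np

fs, n = 100e6, 4096
stimulus = np.random.randn(n)            # broadband stimulus waveform

fc = 5e6                                 # simulated one-pole low-pass "DUT"
freqs = np.fft.rfftfreq(n, 1 / fs)
h = 1.0 / (1.0 + 1j * freqs / fc)
through = np.fft.irfft(np.fft.rfft(stimulus) * h, n)   # DUT output waveform

# ratio of spectra gives the transfer-function estimate (|S21| here)
s21 = np.fft.rfft(through) / np.fft.rfft(stimulus)
band = (freqs > 4.5e6) & (freqs < 5.5e6)
mag_db = 20 * np.log10(np.mean(np.abs(s21[band])))
print(f"mean |S21| near {fc / 1e6:.0f} MHz: {mag_db:.1f} dB (about -3 dB expected)")
```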

Proceedings ArticleDOI
28 Oct 2010
TL;DR: The purpose of this paper is to provide information about the benefits of using Commercial Off-the-Shelf (COTS) business intelligence software tools to support aircraft and automated test system maintenance environments.
Abstract: The purpose of this paper is to provide information about the benefits of using Commercial Off-the-Shelf (COTS) business intelligence software tools to support aircraft and automated test system maintenance environments. Aircraft and automated test system parametric and maintenance warehouse-based data can be shared and used for predictive data mining exploitation, which will enable better decision support for War Fighters and back-shop maintenance. When utilizing common industry business intelligence predictive modeling processes, engineering designers can create initial business intelligence aircraft and automated test system maintenance environment engineering cluster models. This is a process of grouping together engineering data that have similar aggregate patterns. These cluster models are then used to develop and build more accurate predictive models, with predictive algorithms making use of the cluster results to improve predictive accuracy. Common industry business intelligence decision tree and neural network models are developed to determine which algorithm produces the most accurate models (as measured by comparing predictions with actual values over the testing set). After an initial mining structure and mining model are built (specifying the input and predictable attributes), the analyst can easily add other mining models. COTS business intelligence software tools provide for a more cost-effective support and predictive role for War Fighter support personnel in a time of decreased defense spending. Having access to applicable engineering data at the time of need will decrease troubleshooting time on production aircraft and in back-shop maintenance, increase the ability of the technical user to better understand the diagnostics, reduce ambiguities which drive false removals of system components, decrease misallocated spares, and maintain or increase knowledge management.
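
A compact sketch of the cluster-then-predict flow, using scikit-learn on synthetic data (the features and labels are invented; the paper's models are built in COTS BI tooling): cluster the records, append the cluster label as a feature, then compare a decision tree and a neural network on held-out data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                  # e.g. BIT counts, hours, temps
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # "component failed" label

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])         # add cluster id as a feature

Xtr, Xte, ytr, yte = train_test_split(X_aug, y, random_state=0)
for model in (DecisionTreeClassifier(random_state=0),
              MLPClassifier(max_iter=1000, random_state=0)):
    acc = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{type(model).__name__}: test accuracy = {acc:.2f}")
```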

Proceedings ArticleDOI
28 Oct 2010
TL;DR: A set of scoring attributes and a methodology to achieve this goal without the need for schematic-level information are proposed.
Abstract: Commercial off-the-shelf (COTS) equipment is generally preferred because of its low acquisition costs, wider user base and implied support from the vendor. Because it is treated as a black box, however, it is often complicated to measure the testability and diagnosability of the COTS equipment and therefore its supportability is difficult to predict. While some metrics for testability, diagnosability and supportability exist, they usually require examination of schematic details — and require design changes to improve these attributes. This is not practical for COTS. The best that a designer of COTS-based systems can expect is to evaluate various competing COTS and assess their testability attributes. We propose a set of scoring attributes and a methodology to achieve this goal without the need for schematic-level information.
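
One possible shape for such a scoring scheme (ours, for illustration; the paper defines its own attributes) is a weighted sum of 0-5 ratings on black-box-observable attributes:

```python
# Illustrative attributes and weights (weights sum to 1); the ratings would
# come from vendor documentation and bench evaluation, not schematics.

WEIGHTS = {
    "bit_coverage": 0.30,        # built-in-test coverage claimed by vendor
    "status_reporting": 0.25,    # richness of health/status interfaces
    "doc_quality": 0.20,         # interface and failure-mode documentation
    "vendor_support": 0.25,      # diagnostics support from the vendor
}

candidates = {
    "COTS_A": {"bit_coverage": 4, "status_reporting": 3,
               "doc_quality": 5, "vendor_support": 2},
    "COTS_B": {"bit_coverage": 3, "status_reporting": 5,
               "doc_quality": 3, "vendor_support": 4},
}

for name, scores in candidates.items():
    total = sum(WEIGHTS[a] * s for a, s in scores.items())
    print(f"{name}: weighted testability score = {total:.2f} / 5")
```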

Journal ArticleDOI
26 Jul 2010
TL;DR: BIST is often used to detect faults before the system is shipped and is potentially a very efficient way to implement on-line testing, and error latency is the elapsed time between the activation of an error and its detection.
Abstract: Due to the high cost of failure, verification and testing now account for more than half of the total lifetime cost of an integrated circuit (IC). Increasing emphasis needs to be placed on finding design errors and physical faults as early as possible in the life of a digital system, new algorithms need to be devised to create tests for logic circuits, and more attention should be paid to synthesis for test and on-line testing. On-line testing requires embedding logic that continuously checks the system for correct operation. Built-in self-test (BIST) is a technique that modifies the IC by embedding test mechanisms directly into it. BIST is often used to detect faults before the system is shipped and is potentially a very efficient way to implement on-line testing. Error latency is the elapsed time between the activation of an error and its detection. Reducing the error latency is often considered a primary goal in on-line testing.
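
A toy BIST loop shows the idea end to end: an LFSR generates test patterns, the circuit under test produces responses, and a compactor folds the responses into a signature compared against the known-good value. The polynomial, circuit, and compactor below are illustrative only.

```python
def lfsr_patterns(seed=0b1001, taps=(3, 2), n=15):
    """4-bit maximal-length LFSR used as the test-pattern generator."""
    state = seed
    for _ in range(n):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & 0xF

def cut(x, fault=False):
    """Circuit under test: XOR-reduce the nibble; 'fault' sticks bit 0 high."""
    if fault:
        x |= 1
    return bin(x).count("1") & 1

def signature(fault):
    sig = 0
    for p in lfsr_patterns():
        sig = ((sig << 1) ^ cut(p, fault)) & 0xFF   # simple response compactor
    return sig

good = signature(fault=False)
print(f"good signature: {good:#04x}")
print("faulty device detected:", signature(fault=True) != good)
```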

Proceedings ArticleDOI
28 Oct 2010
TL;DR: In this paper, an Automated Test System (ATS) for Proximity Sensor Electronic Units (PSEUs) in the aircraft industry required the implementation of a variable inductor, which simulated inductive proximity sensors so the ATS can measure and verify the switch points of the electronics as the sensors move between their near and far states.
Abstract: Our development of an Automated Test System (ATS) for Proximity Sensor Electronic Units (PSEUs) in the aircraft industry required the implementation of a variable inductor. The variable inductor simulates inductive proximity sensors so the ATS can measure and verify the switch points of the electronics as the sensors move between their near and far states. Though developed for PSEU testing, the presented methodology and technology may be applied to other test equipment and applications that require a variable inductor. The paper begins by looking at different techniques for implementing a variable inductor: moving cores, switched decade boxes, gyrator circuitry and saturable core reactors. The paper presents the pros and cons of the different technologies and then focuses on the development of a saturable core reactor as the chosen technology. The paper presents fundamental formulae used during the development of the variable inductor and test results for a number of developed prototypes. The presentation includes the development of a highly-accurate control loop to precisely hold the value of the controlled inductance. Finally, the paper concludes with a brief discussion of the ATS that ultimately housed the variable inductor.
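
As a sketch of the control idea, assuming a monotonic (and here invented) L-versus-bias characteristic for the saturable core: DC bias pushes the core toward saturation, lowering incremental inductance, and an integral controller trims the bias until the measured inductance hits the setpoint.

```python
L_MAX, K_SAT = 10e-3, 2.0      # assumed unsaturated L and saturation rate

def inductance(bias_a):
    # invented stand-in for a measured core characteristic: L falls with bias
    return L_MAX / (1.0 + K_SAT * bias_a)

target_l = 4e-3                # inductance setpoint [H]
bias, ki = 0.0, 50.0           # bias current and integral gain
for step in range(8):
    err = inductance(bias) - target_l     # positive -> need more bias
    bias = max(bias + ki * err, 0.0)
    print(f"step {step}: bias={bias:.3f} A, L={1e3 * inductance(bias):.3f} mH")
```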

Proceedings ArticleDOI
Nathan Yang1
28 Oct 2010
TL;DR: This paper will provide an architectural view of modern structural test and monitoring systems, diving into both COTS hardware and software technologies that enable the most advanced and flexible systems available.
Abstract: With virtual instrumentation and graphical system design, the application requirements ultimately dictate the shape and form of the structural test and monitoring system. Virtual instrumentation offers more complete, more capable, and lower cost structural test and monitoring systems. This paper will provide an architectural view of modern structural test and monitoring systems, diving into both COTS hardware and software technologies that enable the most advanced and flexible systems available. A case study will also be presented.

Proceedings ArticleDOI
28 Oct 2010
TL;DR: This paper details the on-board JTAG programming of a Flash Memory using typical digital test instrumentation found in test systems rather than various boundary scan vendor proprietary hardware.
Abstract: This paper details the on-board JTAG programming of a Flash Memory using typical digital test instrumentation found in test systems rather than various boundary scan vendor proprietary hardware. Advantages to this technique, such as reduction in required equipment and reduced integration and support costs, are discussed.
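
The essence of the technique is driving the TAP state machine from ordinary digital pins. A bare-bones sketch follows, where set_pin() and sample_tdo() are hypothetical wrappers around whatever digital I/O calls the test system actually exposes:

```python
def set_pin(name, value):          # stand-in for the instrument driver call
    pass

def sample_tdo():                  # stand-in: read the TDO line
    return 0

def clock_tms_tdi(tms, tdi):
    """One TCK cycle with the given TMS/TDI values; returns sampled TDO."""
    set_pin("TMS", tms)
    set_pin("TDI", tdi)
    set_pin("TCK", 1)
    tdo = sample_tdo()
    set_pin("TCK", 0)
    return tdo

def shift_dr(bits):
    """From Run-Test/Idle: enter Shift-DR, shift LSB first, return to Idle."""
    for tms in (1, 0, 0):          # Select-DR -> Capture-DR -> Shift-DR
        clock_tms_tdi(tms, 0)
    out = []
    for i, b in enumerate(bits):
        last = i == len(bits) - 1
        out.append(clock_tms_tdi(1 if last else 0, b))  # TMS=1 exits on last bit
    clock_tms_tdi(1, 0)            # Exit1-DR -> Update-DR
    clock_tms_tdi(0, 0)            # back to Run-Test/Idle
    return out

print(shift_dr([1, 0, 1, 1]))      # e.g. shift a 4-bit data word
```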

Proceedings ArticleDOI
28 Oct 2010
TL;DR: In this paper, the authors lay out the PLC/PBL costs of test equipment and walk through a TCO model that can be used for making trade-off decisions between different program options.
Abstract: Cost of ownership is always a hot topic when making a program decision for any new, upgrade or sustainment option. The criteria for developing a Total Cost of Ownership (TCO) model quickly turn into debates with many facets and lots of emotion. When it comes to the cost of acquiring and operating test equipment, the answer is not any easier to determine. However, if looked at from a Product Life Cycle (PLC) cost or a Performance Based Logistics (PBL) viewpoint, a more accurate cost model can be developed. By understanding and using the attributes of direct and indirect costs for acquiring, operating, maintaining, migrating and disposing of these assets, an accurate model of the total cost of ownership can be obtained. This paper will lay out the PLC/PBL costs of test equipment and walk through a TCO model that can be used for making trade-off decisions between different program options.
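
One simple shape for such a roll-up: sum the acquire, operate, maintain, migrate, and dispose costs per year and discount them to present value so options with different timing compare fairly. The categories and figures below are illustrative, not the paper's model.

```python
DISCOUNT = 0.05   # assumed annual discount rate

def npv(cashflows):
    """Present value of a list of yearly costs (year 0 first)."""
    return sum(c / (1 + DISCOUNT) ** yr for yr, c in enumerate(cashflows))

# acquire (yr 0), operate/maintain (yrs 1-9), migrate/dispose (yr 10)
option_a = [1_000_000] + [120_000] * 9 + [50_000]   # buy new, cheap upkeep
option_b = [200_000] + [300_000] * 9 + [80_000]     # sustain legacy gear

for name, flows in (("new system", option_a), ("sustain legacy", option_b)):
    print(f"{name}: life-cycle TCO (NPV) = ${npv(flows):,.0f}")
```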

Proceedings ArticleDOI
28 Oct 2010
TL;DR: Algorithms for the detection of anomalous events that can be identified from the analysis of monochromatic stationary ship surveillance video streams are presented; they are suitable for application in shipboard environments where high-quality color video may not be available.
Abstract: Anomalous indications in monitoring equipment onboard U.S. Navy vessels must be handled in a timely manner to prevent catastrophic system failure. The development of sensor data analysis techniques to assist a ship's crew in monitoring machinery and summon required ship-to-shore assistance is of considerable benefit to the Navy. In addition, the Navy has a large interest in the development of distance support technology in its ongoing efforts to reduce manning on ships. In this paper, we present algorithms for the detection of anomalous events that can be identified from the analysis of monochromatic stationary ship surveillance video streams. The specific anomalies that we have focused on are the presence and growth of smoke and fire events inside the frames of the video stream. The algorithm consists of the following steps. First, a foreground segmentation algorithm based on adaptive Gaussian mixture models is employed to detect the presence of motion in a scene. The algorithm is adapted to emphasize gray-level characteristics related to smoke and fire events in the frame. Next, shape discriminant features in the foreground are enhanced using morphological operations. Following this step, the anomalous indication is tracked between frames using Kalman filtering. Finally, gray-level shape and motion features corresponding to the anomaly are subjected to principal component analysis and classified using a multilayer perceptron neural network. The algorithm is exercised on 68 video streams that include the presence of anomalous events (such as fire and smoke) and benign/nuisance events (such as humans walking through the field of view). Initial results show that the algorithm is successful in detecting anomalies in video streams, and is suitable for application in shipboard environments. One of the principal advantages of this technique is that the method can be applied to monitor legacy shipboard systems and environments where high-quality, color video may not be available.
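
A condensed sketch of the front half of this pipeline using OpenCV: adaptive-GMM background subtraction to detect motion, then morphological cleanup of candidate blobs. The Kalman tracking, PCA, and MLP classification stages are omitted, and "surveillance.mp4" is a placeholder input.

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")        # placeholder video source
bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = bg.apply(gray)                               # adaptive GMM foreground
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel) # clean up blob shapes
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 200]
    if big:
        print(f"candidate anomaly: {len(big)} moving region(s)")
cap.release()
```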

Proceedings ArticleDOI
28 Oct 2010
TL;DR: A survey of diagnostic program managers is described in an attempt to characterize, and suggest remedies for, the time- and budget-constrained fashion in which avionics diagnostics systems are functionally demonstrated today.
Abstract: Demonstrations of avionics system and subsystem diagnostic capability are performed before a system or subsystem is verified. This ordinarily happens during the system design and demonstration phase of a program. In the case of aircraft or ground vehicles, there are several subsystem demonstrations, followed by a single system-level event. By the time a system or subsystem is ready for a functional demonstration of its diagnostic capability, there is already significant programmatic inertia toward achieving the next programmatic or contractual milestone. There is typically not enough available manpower, time-on-system, or even funding to test every possible fault in a given system or subsystem. Indeed, testing only the "relevant" faults, which the system's diagnostics have been built to address, can be a hugely time-consuming effort. Due to these constraints, diagnostic demonstrations are sometimes not conducted in a scientifically robust manner. Sometimes, certain testing techniques are used in an effort to expedite testing. These techniques include: emulating hardware faults and their detection circuitry in software, selecting only those faults which are easy to test or guaranteed to work, and choosing faults which do not significantly stress the diagnostics system. This paper describes a survey of diagnostic program managers in an attempt to characterize, and suggest remedies for, the time- and budget-constrained fashion in which avionics diagnostics systems are functionally demonstrated today.

Proceedings ArticleDOI
S. Narciso1
28 Oct 2010
TL;DR: This paper will give an overview of the architectural elements of AXIe, the compatibility model with AdvancedTCA®, and measured performance of many of theAXIe structures.
Abstract: A new emerging test and measurement standard called AXIe, AdvancedTCA eXtensions for Instrumentation, is expected to find wide acceptance within the Automatic Test Equipment community as it offers many key benefits. It is expected that a large number of stimulus, measurement, signal conditioning, acquisition and processing modules will become available from a range of different suppliers. AXIe uses AdvancedTCA® as its base standard, but then borrows from test and measurement industry standards such as PXI, IVI, and LXI, which were designed to facilitate cooperation and plug-and-play interoperability between instrument suppliers. This enables AXIe systems to easily integrate with other test and measurement equipment. AXIe's large board footprint, available power and efficient cooling to the module payload allow high density in a 19-inch rack space, enabling the development of high-performance instrumentation in a density unmatched by other instrumentation form factors. Channel synchronization between modules is flexible and provided by AXIe's dual triggering structures: a parallel trigger bus, and radially-distributed, time-matched point-to-point trigger lines. Inter-module communication is also provided by a local bus between adjacent modules allowing data transfer rates up to 10 Gbit/s in each direction, for example between front-end digitizer modules and DSP banks. The AXIe form factor provides the power and cooling necessary to embed high-performance computing. A range of compute blades are available today in the AdvancedTCA® form factor that provide low-cost alternatives to the development of custom signal processing modules. The availability of both LAN and PCIe (PCI Express) fabrics allows interconnection between modules, as well as industry-standard high-performance data paths to external host computer systems. AXIe delivers a powerful environment for custom module development for specific and unique applications. As in the case of VXIbus and PXI before it, commercial development kits are expected to be offered by the industry. This paper will give an overview of the architectural elements of AXIe, the compatibility model with AdvancedTCA®, and measured performance of many of the AXIe structures.

Proceedings ArticleDOI
28 Oct 2010
TL;DR: An approach is presented for collecting data from the early stages of compilation and translation of a test program set (TPS) all the way through TPS execution on the ATE.
Abstract: Test forensics is a systematic approach for evaluating execution results of a Unit Under Test (UUT) over a broad set of criteria. Execution results include the traditional test numbers, measured values and limits, but also include timing relationships and configuration information about the UUT and Automatic Test Equipment (ATE) used while conducting the test. Occasionally a test may pass on station (A) but fail on station (B), even though both stations are identical and manufactured on the same production line. Finding the root cause of this type of failure is often tedious and time-consuming. The complexity of solving this issue increases significantly if station (B) is a replacement for station (A), containing a different hardware and software architecture, subject only to the requirement of being functionally equivalent. For either the production case or the transportability case, engineering could benefit from automated tools and data collection. Presented in this paper is an approach for collecting data from the early stages of compilation and translation of a test program set (TPS) all the way through TPS execution on the ATE. Test forensics is a design approach for the runtime system coupled with automated tools for analyzing and presenting the data to the operator. This paper will briefly touch on how the test-forensics implementation would benefit net-centric and distributed-network efforts to achieve TPS transportability.
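
One concrete use of such forensic records: align the per-test measurement logs from station (A) and station (B) and report the first test where results diverge beyond tolerance. The record layout below is an invented example of the kind of data the runtime would collect.

```python
station_a = {"T001": 4.98, "T002": 0.121, "T003": 3.310}   # test id -> value
station_b = {"T001": 5.01, "T002": 0.119, "T003": 2.870}
TOLERANCE = 0.05   # relative tolerance for agreement

for test_id in sorted(station_a):
    a, b = station_a[test_id], station_b.get(test_id)
    if b is None or abs(a - b) > TOLERANCE * abs(a):
        print(f"first divergence at {test_id}: A={a} B={b}")
        break
else:
    print("stations agree within tolerance")
```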

Proceedings ArticleDOI
28 Oct 2010
TL;DR: This paper is a case study based on the requirement for a PXI-based instrument that can generate simple color bar signals in NTSC and PAL formats to support the Mini-Samson/Katlanit Remote Controlled Weapon Station.
Abstract: The theme for Autotestcon 2010 is “45 Years of Support Innovation — Moving Forward at the Speed of Light.” This theme is particularly relevant for military ATE systems because it highlights the dichotomy of striving to maintain state-of-the-art testing capabilities while at the same time needing to support legacy technologies that may be decades old — indeed, as old as Autotestcon itself. The need to support discrete transistor-based electronics, TTL, CMOS and other technologies developed in the 1960s and 1970s, using test systems built around custom ASICs, high-performance FPGAs and logic levels whose peak-to-peak amplitudes were once considered “noise”, presents unique challenges. Systems deployed in the last century used CRT monitors to display information to a technician or operator. These monitors were based on analog video transmission standards such as RS-170, NTSC (National Television System Committee), PAL (Phase Alternating Line) and other similar standards. Today, with the widespread use of DVI and HDMI digital video, it is rare to find CRT monitors in commercial use. But they are still widely used in older deployed systems. This paper is a case study based on the requirement for a PXI-based instrument that can generate simple color bar signals in NTSC and PAL formats to support the Mini-Samson/Katlanit Remote Controlled Weapon Station. By integrating an off-the-shelf PXI FPGA card with an intellectual property (IP) core available in the public domain and a handful of commercially available support components, a cost-effective solution was developed which supports the generation of both analog and digital video signals for testing CRT and LCD monitors. The flexibility of this approach allowed the extension of the original requirement for generating color bar patterns to include more complex test patterns.
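
The color-bar arithmetic the FPGA core implements is simple to show: the classic 75% bars in RGB, with NTSC luma Y = 0.299R + 0.587G + 0.114B. A real generator adds sync, blanking, and chroma modulation around these levels; the sketch below only prints the luma ladder.

```python
BARS = [  # name, (R, G, B) at 75% amplitude, normalized 0..1
    ("white",   (0.75, 0.75, 0.75)), ("yellow", (0.75, 0.75, 0.00)),
    ("cyan",    (0.00, 0.75, 0.75)), ("green",  (0.00, 0.75, 0.00)),
    ("magenta", (0.75, 0.00, 0.75)), ("red",    (0.75, 0.00, 0.00)),
    ("blue",    (0.00, 0.00, 0.75)),
]

for name, (r, g, b) in BARS:
    y = 0.299 * r + 0.587 * g + 0.114 * b   # NTSC luma weighting
    print(f"{name:8s} luma Y = {y:.3f}")
```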

Proceedings ArticleDOI
28 Oct 2010
TL;DR: The two main types of measurement that may be described using IEEE 1641 are explained, with examples showing how they relate to real measurement facilities.
Abstract: A new revision of IEEE Std 1641 has been completed and should be available before the end of the year. This standard, which provides a method of accurately defining signals and tests, has been further developed with many new improvements. The standard provides for libraries of pre-defined signals which may be re-used within a technology or project. There have been many examples of stimulus signals, both as stand-alone signals and in libraries, but fewer examples of measurement models. An exercise has recently been completed to provide a sample measurement library for the CASS system. This has resulted in a library containing many examples of measurement Test Signal Frameworks (TSFs). This paper explains the two main types of measurement that may be described using IEEE 1641 and provides examples of each. This is supported by examples of TSFs for several of the measurement facilities provided in the CASS system. Test Programs for this system are defined using a subset of ATLAS, so the measurement TSFs in the library relate directly to the ATLAS nouns and noun modifiers used by the system. These examples, illustrated with simulations of the measurements described, further clarify the difference between intrinsic and generic measurements in IEEE 1641 and how they relate to real measurement facilities.

Proceedings ArticleDOI
28 Oct 2010
TL;DR: The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs.
Abstract: We describe the conclusions of a technology and communities survey, supported by concurrent and follow-on proof-of-concept prototyping, to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and, when combined, provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful web services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source, standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
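
A thumbnail of the orchestration flow: discover a test-set endpoint with mDNS/zeroconf, then command it and pull results over a RESTful interface. This assumes the python-zeroconf and requests packages; the service type, URL paths, and payload handling are invented for illustration (a real system would exchange ATML documents).

```python
import time
import requests
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class TestSetListener(ServiceListener):
    def __init__(self):
        self.url = None
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            self.url = f"http://{info.parsed_addresses()[0]}:{info.port}"
    def update_service(self, zc, type_, name): pass
    def remove_service(self, zc, type_, name): pass

zc = Zeroconf()
listener = TestSetListener()
ServiceBrowser(zc, "_testset._tcp.local.", listener)   # assumed service type
time.sleep(3)                                          # allow discovery
zc.close()

if listener.url:
    requests.post(f"{listener.url}/tests/selftest/run")          # command
    print(requests.get(f"{listener.url}/results/latest").text)   # results
```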

Proceedings ArticleDOI
28 Oct 2010
TL;DR: This paper is a technology update on the newest vehicle diagnostics system, the Smart Wireless Internal Combustion Engine (SWICE) at-platform test system interface, which can be used as a Mini-Vehicle Control System (Mini-VCS).
Abstract: Our last paper covered the background, current spiral developments, roll-out, and sustainment of the US Army's newest At-Platform Automatic Test Systems (APATS) equipment for TWVs (Tactical Wheeled Vehicles). The equipment, called the SWICE (Smart Wireless Internal Combustion Engine) system, was developed for vehicle diagnostics in at-platform and embedded applications, including prognostics. An overview of the SWICE system operation was described, including the Smart Wireless Diagnostic Sensor (SWDS) device, features of the Vehicle Integrated Diagnostics Software-Field (VIDS-F) implementation, and the vehicle Diagnostics Software (DS) application. We also covered the functions of the Prognostics Client "plug-in" module and integrated support for the Common Logistic Operating Environment (CLOE) implementation, along with the concept of leveraging the SWICE/SWDS as a “Mini-Vehicle Control System (VCS)”. Having built and delivered much of the SWICE system, we have found several practical considerations as we moved from design to true implementation. Beginning with an overview of the SWICE system, this paper focuses on two examples of bridging the gap between design and implementation, namely wireless security and data logging. While the former required the SWICE's wireless networking to be secured to Federal Information Processing Standards, the latter provided a new application for leveraged use of the SWICE/SWDS as a Mini-VCS. The objective is to further enhance Condition Based Maintenance Plus (CBM+) secure diagnostics, data logging, prognostics and sensor integration to support improvement of the US military ground vehicle fleet's uptime and enhance operational readiness. Benefits include increased TWV readiness and operational availability, reduced maintenance costs, lower repair-part inventory levels, reduced cost of consumables, and an overall reduction in maintenance process errors.