
Showing papers presented at AUTOTESTCON 2016


Proceedings ArticleDOI
Jerry Murphree
01 Sep 2016
TL;DR: Describes an approach to anomaly detection using neural networks for the specific problems of efficiently determining system health in large systems.
Abstract: We have a need for methods to efficiently determine the health of a system. Diagnostics and prognostics determine system health through analysis of data from sensors. Anomalies in the data can help us determine if there is a failure or a pending failure. There are common statistical methods to detect anomalies in individual measurements. For systems with many measurements, the anomalies may occur as specific combinations of values. Large systems have various associated states and modes which define the valid measurements. The amount of data to analyze grows very quickly as the system becomes more complex. In recent years, techniques have been developed to address large-scale data analysis. Machine learning encompasses a broad selection of tools to optimize a statistical model of the data. These tools include supervised learning techniques, such as linear regression and logistic regression, in which training data exists to tune the model. Unsupervised learning, such as clustering, is used to explore data which does not have a defined output label associated with input data. Standard approaches to training supervised learning systems require a large sample of positive and negative outcome data. Some uses of machine learning involve data where there are very few cases of negative outcomes. Machine learning algorithms categorized as anomaly detection are designed to deal with this type of data. Simple algorithms include Gaussian distribution analysis, which assumes independence in the distributions of the data. Large systems with anomalies defined by dependent combinations of data require either manual creation of combinations of independent variables or multivariate Gaussian distribution analysis, which does not scale well for large systems. A further complication is the mixture of linear and discrete data. Neural networks are a type of learning system which has been applied to each of the individual needs addressed above. This paper describes an approach to anomaly detection using neural networks for the specific problems in large systems, to efficiently determine system health.
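
For readers who want a concrete starting point, here is a minimal sketch of the independent-Gaussian baseline the abstract contrasts against, in Python with NumPy; all data, feature counts, and the 1% threshold are invented for illustration. The neural-network approach the paper advocates would replace the fit/score functions below with a trained model, e.g. an autoencoder's reconstruction error.

    import numpy as np

    def fit_gaussian(train):                      # train: (n_samples, n_features)
        """Estimate a per-feature (independence-assuming) Gaussian model."""
        return train.mean(axis=0), train.var(axis=0)

    def log_density(x, mu, var):
        """Sum of per-feature log densities; low values flag anomalies."""
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(1000, 8))     # simulated sensor data
    mu, var = fit_gaussian(healthy)
    eps = np.quantile(log_density(healthy, mu, var), 0.01)   # hypothetical threshold
    sample = rng.normal(0.0, 1.0, size=8)
    print("anomaly" if log_density(sample, mu, var) < eps else "normal")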

29 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: Introduces a tool called the JTAG Configuration Manager (JCM) that enables high-speed programmable access to the configuration memory of FPGAs through JTAG and optimizes the speed and timing of JTAG transactions over cables of any length using an automatic speed calibration process.
Abstract: Since most FPGAs use the universal JTAG port to support configuration memory access, hardware and software tools are needed to maximize the speed of FPGA configuration management over JTAG. This paper introduces a tool called the JTAG Configuration Manager (JCM) that enables high-speed programmable access to the configuration memory of FPGAs through JTAG. The tool consists of a Linux-based software library running on an embedded ARM processor paired with a hardware JTAG controller module implemented in programmable logic. This JTAG controller optimizes the speed and timing of JTAG transactions over cables of any length using an automatic speed calibration process, enabling custom configuration sequences to be sent at high speeds. The JCM also has access to all JTAG interfaces of the FPGA, including temperature monitoring and internal boundary scan, making it useful for many testing and verification applications.
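
The paper does not publish its calibration algorithm, but an automatic speed calibration of the kind described can be pictured as stepping TCK up while verifying a known register readback; the sketch below assumes a hypothetical read_idcode(freq_hz) driver hook, and the frequencies and IDCODE value are invented.

    def calibrate_tck(read_idcode, expected, freqs, trials=10):
        """Return the highest TCK frequency at which IDCODE reads back
        correctly in every trial; freqs must be sorted ascending.
        read_idcode(freq_hz) -> int is a hypothetical cable/driver hook."""
        best = None
        for f in freqs:
            if all(read_idcode(f) == expected for _ in range(trials)):
                best = f          # stable at this speed; try faster
            else:
                break             # first unstable speed; stop searching
        return best

    # Toy stand-in for hardware: stable up to 25 MHz on a long cable.
    ok = lambda f: 0x4BA00477 if f <= 25e6 else 0xFFFFFFFF
    print(calibrate_tck(ok, 0x4BA00477, [1e6, 5e6, 10e6, 25e6, 50e6]))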

25 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: The paper describes a cross-layer framework capable of handling soft and hard faults as well as the system's degradation, and discusses the dependability properties of the Fault Management framework itself and related infrastructure.
Abstract: Semiconductor products manufactured with the latest and emerging processes are increasingly prone to wear-out and aging. While the fault occurrence rate in such systems increases, fault tolerance techniques are becoming ever more expensive, and one cannot rely on them alone. The rapid emergence of embedded instrumentation as an industrial paradigm and the adoption of the respective IEEE 1687 standard by key players in the semiconductor industry open up new horizons for developing efficient on-line health monitoring frameworks for prognostics and fault management. The paper describes a cross-layer framework capable of handling soft and hard faults as well as the system's degradation. In addition to mitigating/correcting faults, the system may systematically monitor, detect, localize, diagnose, and classify them (manage faults). As a result of such a fault management approach, the system may continue operating and degrade gracefully even if some of the system's resources become unusable due to intolerable faults. The main focus of this paper, however, is to discuss the dependability properties of the fault management framework itself and its related infrastructure.

15 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, the authors discuss the automation of quality assurance and produced-part testing for additive manufacturing systems, including the process of identifying defects, determining their impact, and potentially taking corrective action.
Abstract: This paper discusses the automation of quality assurance and produced-part testing for additive manufacturing systems. The process of identifying defects, determining their impact, and potentially taking corrective action is described, algorithms for these purposes are presented, and examples of assessment are considered. The correction of both incidental and deliberately introduced defects is also discussed.
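
The paper's algorithms are not reproduced here, but the core of automated defect identification can be illustrated as a thresholded comparison of a produced layer against its reference slice; the threshold, minimum area, and images below are all invented.

    import numpy as np

    def find_defects(layer, reference, diff_thresh=0.2, min_area=5):
        """Flag pixels that deviate from the reference slice and decide
        whether the layer needs corrective action (all limits hypothetical)."""
        defect_mask = np.abs(layer - reference) > diff_thresh
        area = int(defect_mask.sum())
        return defect_mask, area, area >= min_area

    reference = np.zeros((64, 64))          # ideal slice from the build file
    layer = reference.copy()
    layer[30:33, 30:33] = 1.0               # deliberately introduced defect
    mask, area, needs_fix = find_defects(layer, reference)
    print(area, needs_fix)                  # 9 True -> trigger corrective pass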

14 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this article, the effects of using principal component analysis in a vibration-based fault detection process are studied to understand the capability of this maintenance method. The results demonstrate that the proposed method successfully identified healthy, unbalance, and parallel-misalignment conditions of a rotating rotor.
Abstract: Current vibration-based maintenance methods can be improved by using principal component analysis to identify fault patterns in rotating machinery. The intent of this paper is to study the effects of using principal component analysis in a vibration-based fault detection process and to understand the capability of this method of maintenance. Because vibration-based maintenance practices are capable of identifying motor faults based on their respective vibration patterns, principal component analysis applied in the frequency domain can be used to automate the fault detection process. To test this theory, an experiment was set up to compare health conditions of a motor and determine if their patterns could be grouped using principal component analysis. The results from this study demonstrate that the proposed method successfully identified healthy, unbalance, and parallel-misalignment conditions of a rotating rotor. It is therefore capable of detecting faults in early stages and reducing maintenance costs.
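
As a rough illustration of the described pipeline (frequency-domain features, then principal component analysis to group health conditions), here is a self-contained NumPy sketch with simulated vibration signals; the 30 Hz running speed, amplitudes, and noise levels are invented, not the paper's experimental data.

    import numpy as np

    def pca(features, n_components=2):
        """Principal component analysis via SVD on mean-centered features."""
        centered = features - features.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T

    rng = np.random.default_rng(1)
    t = np.arange(0, 1, 1 / 5000)
    def vib(f1x, amp):                      # simulated accelerometer channel
        return amp * np.sin(2 * np.pi * f1x * t) + 0.1 * rng.normal(size=t.size)

    # Healthy vs. unbalance: unbalance boosts the 1x running-speed (30 Hz) line.
    signals = [vib(30, 0.1) for _ in range(10)] + [vib(30, 1.0) for _ in range(10)]
    spectra = np.abs(np.fft.rfft(np.array(signals)))   # frequency-domain features
    scores = pca(spectra)
    print(scores[:10, 0].mean(), scores[10:, 0].mean())  # two separated clusters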

13 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this article, the authors describe a real-world case study of newly developed noncontact NILM sensors installed aboard the USCGC SPENCER, a famous class (270 ft) cutter.
Abstract: Modernization in the U.S. Navy and U.S. Coast Guard includes an emphasis on automation systems to help replace manual tasks and reduce crew sizes. This places a high reliance on monitoring systems to ensure proper operation of equipment and maintain safety at sea. Nonintrusive Load Monitors (NILM) provide low-cost, rugged, and easily installed options for electrical system monitoring. This paper describes a real-world case study of newly developed noncontact NILM sensors installed aboard the USCGC SPENCER, a Famous class (270 ft) cutter. These sensors require no ohmic contacts for voltage measurements and can measure individual currents inside a multi-phase cable bundle. Aboard the SPENCER, these sensors were used to investigate automated testing applications including power system metric reporting, watchstander log generation, and machinery condition monitoring.
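
As an illustration of the kind of power system metric reporting mentioned above, here is a minimal sketch computing real power, apparent power, and power factor from one window of sampled voltage and current; the waveforms are simulated, whereas a real NILM reconstructs them from noncontact field measurements.

    import numpy as np

    def power_metrics(v, i):
        """Basic NILM-style metrics from one window of samples."""
        p = np.mean(v * i)                                    # real power (W)
        s = np.sqrt(np.mean(v ** 2) * np.mean(i ** 2))        # apparent power (VA)
        return {"P_W": p, "S_VA": s, "PF": p / s}

    fs = 8000
    t = np.arange(0, 0.1, 1 / fs)
    v = 120 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)             # 120 Vrms, 60 Hz
    i = 10 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t - np.pi / 6)  # lagging load
    print(power_metrics(v, i))   # PF ~ cos(30 deg) ~ 0.87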

12 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this article, an integrated AR-based framework is presented to monitor an HVAC system (or any cyber-physical system) in real-time, access system health information remotely and act on that information proactively to prevent or minimize the system down time.
Abstract: Maintaining heating, ventilating, and air conditioning (HVAC) systems in buildings and vehicles in superior condition is essential to minimizing energy waste, increasing equipment availability, and improving the thermal comfort of occupants. HVAC systems are complex interconnected systems. Consequently, early detection and diagnosis of incipient faults in such systems using robust methodologies and tools is salient for achieving thermal comfort. The increased complexity, cross-subsystem fault propagation, and associated information propagation delays in networked HVAC systems make fault diagnosis and maintenance a challenging task. This motivates us to incorporate emerging technologies, such as real-time monitoring, remote diagnosis, and augmented reality (AR), for efficient fault diagnosis and troubleshooting in such systems. This paper presents an integrated AR-based framework to monitor an HVAC system (or any cyber-physical system) in real time, access system health information remotely, and act on that information proactively to prevent or minimize system downtime.

10 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this article, the spectral components of the arc in the high frequency band, typically between 1 and 30 MHz, were studied, and a Thevenin generator equivalent to the arc was proposed allowing to determine the disturbing current and voltage spectral density at any point of the power network.
Abstract: High voltage direct current networks are now being implemented in the new generation of civil aircraft. If a short circuit occurs between wires of the power cable, "arc tracking" may happen. Such an arc can persist for one second or more and propagate along the cable. New kinds of intelligent breakers have thus been developed, based either on the time-domain or the low-frequency characteristics of the pulses associated with this arc. In this paper, the spectral components of the arc in the high frequency band, typically between 1 and 30 MHz, are studied. A Thevenin generator equivalent to the arc is proposed, allowing determination of the disturbing current and voltage spectral density at any point of the power network. Such an approach can be used to improve the efficiency of existing breakers and to predict interference between the disturbing signals due to this arc and sensitive control-command systems.
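
A sketch of the measurement idea: estimate the current spectral density in the 1-30 MHz band from a digitized arc current using Welch's method. The sample rate and the crude impulsive arc model are assumptions, not the paper's data.

    import numpy as np
    from scipy.signal import welch

    fs = 100e6                                  # 100 MS/s digitizer (assumed)
    rng = np.random.default_rng(2)
    t = np.arange(0, 1e-3, 1 / fs)
    # Crude stand-in for arc current: impulsive re-strikes on a DC level.
    i_arc = 5 + 0.5 * rng.normal(size=t.size)
    i_arc[::1000] += 20                         # fast re-ignition transients

    f, psd = welch(i_arc, fs=fs, nperseg=4096)  # current spectral density (A^2/Hz)
    band = (f >= 1e6) & (f <= 30e6)             # HF band studied in the paper
    print(f"mean PSD 1-30 MHz: {psd[band].mean():.3e} A^2/Hz")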

6 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, the authors explore the balance of the three fundamental aspects that make up asset management and focus on how to implement strategies to lower the total cost of ownership for test.
Abstract: For most aerospace and defense companies, test and measurement equipment is one of the largest, if not the largest, capital expenses on their balance sheets. With that said, few companies have a comprehensive, corporate-wide program to effectively manage and maximize the utilization of test and measurement equipment over its projected lifetime. Other industries, such as power generation, airlines, and foundries, have been able to master the optimization and utilization of their capital to maximize their return on investment. This paper will explore the balance of the three fundamental aspects that make up asset management and will focus on how to implement strategies to lower the total cost of ownership for test. The three areas addressed in this paper are: 1. Management of the "real" asset profile — the number and capabilities of assets across an enterprise. 2. The ability to maximize the optimization and utilization of the assets on a continuous basis. 3. Schemes to develop and implement life cycle strategies for test and measurement assets. The implementation and usage of an asset management program can have huge positive implications, not only for reducing capital costs, but for faster throughput, lower operational expenses, shorter time to market, and even better quality; all of these allow a company to be more competitive in the new firm-fixed-price contract world.

5 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: Concludes that clock frequency offset and network transmission delay are the main factors influencing synchronization, and demonstrates the feasibility of maintaining sub-microsecond synchronization accuracy within a multi-level switch topology.
Abstract: This paper proposes a universal method for implementing PTP (Precision Time Protocol) on test systems with a 1000M Ethernet interface. To achieve sub-microsecond synchronization accuracy, a configurable real-time clock and a timestamp module were realized in programmable logic, freeing the PHY and MAC in the communication link of timestamp functions. PTPd (Precision Time Protocol daemon, an open-source implementation) was modified and ported to the embedded Linux system to realize the PTP state machine, while an IEEE 1588 IP core device driver was developed to give the application layer access to the accurate timestamps obtained in the link layer by the IEEE 1588 IP core. This project structure lets the porting effort concentrate on the time-adjustment algorithm in the application layer, while precise timestamping is handled in hardware. The proposed method was evaluated on the Xilinx Zynq-7000 SoC platform by outputting PPS (Pulse Per Second) signals, which verify the synchronization accuracy of all nodes (master and slaves) in the network. After quantifying the accuracy and stability of the synchronization offset, we conclude that clock frequency offset and network transmission delay are the main factors influencing synchronization, and we demonstrate the feasibility of maintaining sub-microsecond synchronization accuracy across a multi-level switch topology.
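
The servo at the heart of any such implementation applies the standard IEEE 1588 two-way exchange arithmetic; a minimal sketch follows (nanosecond timestamps invented, symmetric path assumed).

    def ptp_offset_delay(t1, t2, t3, t4):
        """IEEE 1588 delay request-response mechanism.
        t1: master sends Sync      t2: slave receives Sync
        t3: slave sends Delay_Req  t4: master receives Delay_Req
        Assumes a symmetric path; asymmetry appears directly as offset error."""
        offset = ((t2 - t1) - (t4 - t3)) / 2.0
        delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, delay

    # Slave clock 300 ns ahead, 1.2 us one-way network delay:
    t1, t2 = 1_000_000, 1_001_500          # ns; t2 = t1 + delay + offset
    t3, t4 = 1_050_000, 1_050_900          # ns; t4 = t3 + delay - offset
    print(ptp_offset_delay(t1, t2, t3, t4))   # -> (300.0, 1200.0)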

5 citations


Proceedings ArticleDOI
Edward Dou
01 Sep 2016
TL;DR: In this article, the authors present a cost model, Cost Model for Verifying Requirements (CMVR), to assist program managers in quickly assessing the financial impact of verifying requirements as a result of changing (e.g. adding, modifying, and deleting) requirements.
Abstract: Testable requirements are the foundation of any development program. The number of requirements and the technical difficulty of satisfying those requirements are factors that drive program cost and schedule. Being able to quickly assess the scope of requirement verification and cost that activity is essential to the proposal process. For awarded programs, controlling and costing requirements volatility is critical to ensuring sufficient resources to execute the program and meet customer need dates. When considering requirements verification, including regression testing, a balance is often needed between the cost and the coverage provided. These challenges are commonly encountered during program startup and execution. This paper presents a cost model, the Cost Model for Verifying Requirements (CMVR), to assist program managers in quickly assessing the financial impact of verifying requirements as a result of changing (e.g. adding, modifying, and deleting) requirements. Of note, this paper focuses on more formal testing and verification activities and does not address development and integration aspects. For the CMVR model to provide accurate results, the test team should first fully map requirements to test events. In doing so, requirements should be traced from the stakeholder (e.g. customer requirements) through derived requirements to test objectives and ultimately to test scripts/procedures. Each test script and procedure will need to be assessed to determine the cost (man-hours and duration) to complete the test objective. With the linkage between requirements and test events established, programs can then use the cost model for bidding, evaluating requirements volatility, and developing test sets that optimize the cost-benefit ratio. Bidding: During bidding, requirements are often not fully developed. The CMVR model addresses these ambiguities by providing a portfolio mix (easy, moderate, difficult) based on historical data, enabling program managers to select or alter the mix, much as one tailors a 401(k) plan. Requirements volatility: Evaluating the impact of requirements volatility on test costs requires assessing development, test setup, execution, and analysis of potential efficiencies that can be leveraged from overlapping tests. Developing test sets: With limited time and resources, programs may need to identify a subset of tests to execute (such as for regression testing). Programs will need to determine the focus areas of requirements (depth), the test requirement coverage (breadth), and the critical must-test requirements. This paper also includes examples of utilizing the CMVR model and demonstrates how this capability enables quickly assessing cost and schedule impacts due to a change in requirements. In summary, the CMVR cost model provides program managers with an important tool to quickly assess the testing cost of requirements.
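
The CMVR's internal structure is not reproduced here, but the portfolio-mix idea can be sketched as bucketed requirement counts multiplied by historical per-requirement verification effort; all rates, counts, and the labor rate below are hypothetical.

    # Hypothetical historical rates: man-hours to verify one requirement.
    RATES = {"easy": 4.0, "moderate": 16.0, "difficult": 60.0}

    def cmvr_estimate(counts, rates=RATES, labor_rate=120.0):
        """Rough CMVR-style cost roll-up for a requirement portfolio mix."""
        hours = sum(rates[k] * n for k, n in counts.items())
        return hours, hours * labor_rate

    # Bid-time portfolio: requirements not fully developed, so bucket them.
    baseline = {"easy": 200, "moderate": 80, "difficult": 15}
    change   = {"easy": 210, "moderate": 85, "difficult": 15}   # volatility: +15 reqs

    h0, c0 = cmvr_estimate(baseline)
    h1, c1 = cmvr_estimate(change)
    print(f"delta: {h1 - h0:.0f} h, ${c1 - c0:,.0f}")   # impact of the change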

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Presents a plurality of graph-based methods, combined in a novel way, for the automated analysis of a system's alarms (or any other observable discrepancies) to determine the most appropriate maintenance.
Abstract: Large and complex systems such as space vehicles, power plants, manufacturing facilities, oil refineries, and gas delivery systems often have networks of alarms monitoring basic parameters (e.g. high or low temperature, voltage out-of-tolerance, power loss, etc.) which are correlated to failure modes, but not necessarily in a very direct way. In this paper, we present a plurality of graph-based methods which are combined in a novel way for the automated analysis of a system's alarms (or any other observable discrepancies) to determine the most appropriate maintenance. Specifically: (i) Timed Failure Propagation Graphs (TFPG) and/or Bayesian Networks (BN) read alarms as evidence for conducting backward root-cause diagnosis and forward failure effects analysis, while (ii) Influence Diagrams (ID) select optimal maintenance operations considering the likely causes and effects as well as the utility of available maintenance options. Innovative contributions to these individual techniques include an automated BN instantiation methodology and system/sensor TFPG diagnostic algorithms. The overall proposed system then determines optimal maintenance paths suggested to be conducted by personnel.
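
As a toy stand-in for the TFPG/BN backward pass (not the paper's algorithms), one can score candidate root causes by how well their predicted alarm sets match the active alarms; the graph and alarm names below are invented.

    # Hypothetical propagation graph: failure mode -> alarms it can raise.
    PROPAGATES = {
        "pump_bearing_wear": ["vibration_high", "temp_high"],
        "power_supply_sag":  ["voltage_low", "temp_high"],
        "sensor_drift":      ["voltage_low"],
    }

    def rank_root_causes(active_alarms, graph=PROPAGATES):
        """Score each failure mode by the fraction of its predicted alarms
        that are active (a crude stand-in for TFPG/BN backward diagnosis)."""
        scores = {}
        for cause, alarms in graph.items():
            hit = sum(a in active_alarms for a in alarms)
            scores[cause] = hit / len(alarms)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(rank_root_causes({"vibration_high", "temp_high"}))
    # -> pump_bearing_wear first; maintenance selection (the ID step) would
    #    then weigh these likelihoods against repair-option utilities.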

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Presents a supplemental unit that can be added to the VXI chassis in the CASS family of testers to conduct ultra-low latency active tests, acting as hardware-in-the-loop to perform real-time tests, including a new capability to measure jamming response time from DECM avionics.
Abstract: This research project aims to expand the capability of the current US Navy Automated Test Equipment (ATE) family of testers known as the Consolidated Automated Support System (CASS). Industry research is now focused on breaking the historical construct of test equipment. Advances in the field of Synthetic Instruments have opened the door to testing avionics in new ways. Every year new capabilities are developed using core hardware and increasingly capable software modules to create complex waveforms. This research creates a Digital Radio Frequency Memory (DRFM) Synthetic Instrument that can be programmed to perform a wide array of low-latency Radio Frequency (RF) tests. Synthetic Instruments are defined as a concatenation of hardware and software modules used in combination to emulate a traditional piece of electronic instrumentation. This Synthetic Instrument couples high-speed Analog-to-Digital Converters (ADC) to high-speed Digital-to-Analog Converters (DAC), with Field Programmable Gate Arrays (FPGA) in between for digital signal processing. An RF front end is used to down-convert the RF to baseband, where it is sampled, modified, and up-converted back to RF. The FPGA performs Digital Signal Processing (DSP) on the signal to achieve the desired output. Application of this DRFM in automated testing is demonstrated using a Reconfigurable Transportable Consolidated Automated Support System (RTCASS) tester at Naval Air Systems Command (NAVAIR) Jacksonville, FL. The Unit Under Test (UUT) is an ALQ-162 Defensive Electronic Countermeasures (DECM) receiver-transmitter. Ultra-low latency signals are generated to simulate enemy jamming stimulus. As the ALQ-162 detects and responds to the input, the DRFM switches to a new frequency. The time taken by the ALQ-162 to acquire, respond, and re-acquire is measured. This test confirms the internal Yttrium Iron Garnet (YIG) oscillator meets slew specifications. Currently Navy ATE can only test RF units using high-latency steady-state tests. This research project developed a supplemental unit that can be added to the VXI chassis in the CASS family of testers to conduct ultra-low latency active tests. The instrument acts as hardware-in-the-loop to perform real-time tests, including a new capability to measure jamming response time from DECM avionics. Demonstrated performance capabilities include ultra-low latency, SFDR > 80 dBc, input SFDR > 60 dBc, frequency tuning resolution < 2 Hz, and frequency settling time < 0.5 ns. New RF capabilities developed by this effort parallel similar research ongoing for digital test instruments like the Teradyne High Speed Subsystem. Incorporating this Digital RF Memory synthetic instrument into current and future ATE will improve readiness and supportability of the fleet. Improvements demonstrated by this research project will expand the type and quantity of assets able to be tested by current and future ATE.
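
In software terms, the jamming-response-time measurement reduces to finding the first post-hop sample at which the UUT's output at the new frequency crosses a detection threshold; in the real instrument this runs in FPGA fabric at the sample rate. All numbers in this sketch are invented.

    import numpy as np

    def response_time(envelope, fs, threshold, hop_idx):
        """Seconds from the frequency hop until the UUT output at the new
        frequency first crosses the detection threshold."""
        after = envelope[hop_idx:] > threshold
        if not after.any():
            return None                          # UUT never re-acquired
        return int(np.argmax(after)) / fs

    fs = 1e9                                     # 1 GS/s digitizer (assumed)
    env = np.zeros(100_000)
    env[40_000:] = 1.0                           # UUT re-acquires after the hop
    print(response_time(env, fs, 0.5, hop_idx=15_000))   # -> 2.5e-05 s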

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper introduces a new category of software-defined, headless wireless signal analyzer platform that enables a range of new test applications that could not be effectively addressed by legacy benchtop, modular and handheld spectrum analyzers.
Abstract: The past decade has seen an exponential proliferation of wideband radio communication technologies. The drive toward wider bandwidths and increasingly complex modulations presents unique challenges from a test and measurement perspective. At the same time, device manufacturers typically have to contend with decreasing product margins and shrinking test equipment budgets. This trend is going to accelerate with the rapid proliferation of newer Internet of Things and 5G technologies and their unique test requirements. This paper introduces a new category of software-defined, headless wireless signal analyzer. Distinguished by its cost-effectiveness, small form factor, networkability, and enhanced performance specifications, this product enables a range of new test applications that could not be effectively addressed by legacy benchtop, modular, and handheld spectrum analyzers. The software-defined aspect enables the user to take advantage of an external host processor such as that in a laptop or desktop. Additionally, software modules for specific modulation formats can be utilized with a common hardware platform. This paper describes the key attributes and architecture of the wireless signal analyzer platform and how it differs from conventional benchtop, modular, and handheld test equipment.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper details how SAMPLE and BOLDR® can be used without major changes in legacy or new avionics.
Abstract: Design for testability (DFT) should go beyond simply assisting manufacturing test or even beyond fielded unit troubleshooting. Boundary scanned components can be controlled to collect real time snapshots of signals capable of assessing circuit health in situ without interfering with normal operation or flight. Vehicle Health Management (VHM) frameworks can utilize the information gained from SAMPLE instructions gathered by JTAG/IEEE-1149.1 boundary scan compatible ICs using Boeing On-Line Diagnostic Reporting (BOLDR®) techniques to enable this data collection for assessing what maintenance actions should be taken. Boundary scan data at or around the time that failures take place can be collected as historical information and retained as “evidence” during a call for line replaceable unit (LRU) maintenance actions. First, it can help assess whether built-in test (BIT) or embedded test indications are persistent, continuous for certain operational modes, intermittent, or simply spurious. In other words, it can help determine false alarms (FAs). Second, once LRUs are in the repair facility and a No Fault Found (NFF) situation is encountered, the historical evidence can help determine the root cause and direct repair actions. Distributed and Centralized BIT for VHM data acquisition can be enhanced by the information boundary scan data provides. Many LRUs that already have boundary scan hardware can utilize embedded software updates coupled with BOLDR® VHM techniques with minimal if any hardware changes to take advantage of this added information source. This paper details how SAMPLE and BOLDR® can be used without major changes in legacy or new avionics. [1]
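
A toy sketch of the data-collection step: a SAMPLE capture shifted out of the boundary register is decoded into named pin states using a BSDL-derived cell map. The cell map and capture below are invented, not from any Boeing implementation.

    # Hypothetical boundary-register map from a device's BSDL file:
    # cell index in the scanned-out vector -> (pin name, cell function).
    CELL_MAP = {0: ("DATA0", "input"), 1: ("DATA1", "input"),
                2: ("CLK_EN", "output"), 3: ("RESET_N", "input")}

    def decode_sample(bits, cell_map=CELL_MAP):
        """Map a SAMPLE capture (LSB-first bit string) to pin states,
        the kind of snapshot BOLDR-style VHM would timestamp and retain."""
        return {name: int(bits[idx]) for idx, (name, _) in cell_map.items()}

    capture = "1101"                      # one snapshot shifted out over TDO
    print(decode_sample(capture))
    # Persistent pin states across captures around a BIT event would count
    # as retained 'evidence' when the LRU later reports No Fault Found.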

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, a distributed wireless system with optical E-field sensors is designed for collecting and monitoring the electric field under HVDC transmission lines; the system has been used in China's State Grid HVDC test base and power transmission projects.
Abstract: The area covered by high voltage direct current (HVDC) transmission lines is very large, so using cables for an electric field monitoring system is very inconvenient. Wireless sensor networks (WSN) can solve this problem. Compared with traditional communication networks, WSNs have the advantages of small volume, high flexibility, and strong self-organization, making them more suitable for building a distributed electric field monitoring system that spans long distances and requires high mobility. On the other hand, optical E-field sensors are passive devices with advantages that mechanical sensors lack, such as compact structure, wide-band response, and wide measuring range. A distributed wireless system with optical E-field sensors is designed for collecting and monitoring the electric field under HVDC transmission lines. This measurement system has been used in China's State Grid HVDC test base and power transmission projects. Based on the experimental results, this measurement system demonstrates that it can adapt to the complex electromagnetic environment under the transmission lines and can meet the accuracy, flexibility, and stability demands of electric field measurement.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Many aspects associated with running a TPS on a different ATE utilizing the same ITA will be covered, including the ITA transition adapter and translation software, which will convert an existing TPS from one ATE to another ATE.
Abstract: There have been great strides in the open-system plug-and-play concept for Automatic Test Equipment (ATE) in the Department of Defense (DoD). An ideal test system can be thought of as the sum of its parts: measurement and stimulus hardware, signal switching, power supplies, cabling and interconnect system (Interface Test Adapter — ITA), external PC or embedded controller, Operating System (OS), control and support software, and the programming environment. Each part is selected based on parameters such as Unit Under Test (UUT) test parameters, physical dimensions, test times, and cost. UUT test requirements are the crucial aspect of instrument selection and functionality. The open-system plug-and-play concept makes it possible to run a test program on a different ATE, that is, taking your ITA and your Test Program Set (TPS) from its programmed ATE and running the TPS on a different ATE utilizing the same ITA. The main components for running a TPS on a different ATE are an ITA transition adapter and translation software to convert or compile the test program to run on the other ATE. The ITA hardware configuration and the Interface Connection Assembly (ICA) variations between different ATE are critical factors. If instruments have compatible features, then UUT test requirements might not require examination; however, if there are distinct differences in instrument capability between ATEs, then UUT test requirements become a critical factor. There will also be switching variances between ATE designs, so this is a prime consideration. In pursuit of migrating TPSs from their programmed ATE to a different ATE, an ITA transition adapter can be developed. The transition adapter is the hardware between the ITA and the different ATE's ICA. The transition adapter is wired to route signals from one ICA configuration to another. The transition adapter design requires an ICA-to-ITA evaluation that consists of a pin-to-pin comparison between each ATE. Each ICA connection must be traced to the instrument or instruments which can be connected to that pin. In addition, instrument specifications must be evaluated and compared between the different ATE. Instrument driver compatibility is of critical importance. The translation software converts an existing TPS from one ATE to another; that is, it must compile the existing test program to run on a different ATE test executive. At this point, many factors come into focus: the re-compiled test program must be analyzed for its capability to run on the new platform or ATE. Remember, instruments from different manufacturers don't always perform completely interchangeably. Quirks between instruments which theoretically have the same specifications can be a major setback. During ITA hardware development, signal integrity is analyzed not only by signal evaluation and noise measurement but also by actual test program execution. Every design detail is important to minimize signal integrity problems. Analysis of signal degradation factors is vital to signal health and proper UUT testing. This paper will cover many aspects associated with running a TPS on a different ATE utilizing the same ITA.
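
The pin-to-pin comparison lends itself to tabular automation; a minimal sketch, with invented pin and signal names, that derives transition-adapter wiring and surfaces signals with no counterpart on the target ATE:

    # Hypothetical ICA tables: interface pin -> instrument signal it reaches.
    SOURCE_ICA = {"P1-01": "DMM_HI", "P1-02": "DMM_LO", "P1-03": "CH1_50V"}
    TARGET_ICA = {"J2-07": "DMM_HI", "J2-08": "DMM_LO", "J2-11": "PS1_60V"}

    def build_adapter(src, dst):
        """Wire source pins to target pins carrying the same signal; report
        signals with no counterpart (they need a TPS or hardware workaround)."""
        by_signal = {sig: pin for pin, sig in dst.items()}
        wiring, unmatched = {}, []
        for pin, sig in src.items():
            if sig in by_signal:
                wiring[pin] = by_signal[sig]
            else:
                unmatched.append((pin, sig))
        return wiring, unmatched

    wiring, gaps = build_adapter(SOURCE_ICA, TARGET_ICA)
    print(wiring)   # {'P1-01': 'J2-07', 'P1-02': 'J2-08'}
    print(gaps)     # [('P1-03', 'CH1_50V')] -> instrument capability mismatch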

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, a shared self-repair design that uses Content Addressable Memory (CAM) as the operating unit for fault information is proposed to improve the repair rate of RAMs and the resource utilization of redundancies, as well as to reduce the silicon area overhead of BISR circuits.
Abstract: As transistor sizes in embedded memories continue to shrink and silicon area becomes an ever scarcer resource, multi-memory structures are the prevalent trend in current SoC designs for achieving better performance. Imperfect manufacturing processes may introduce faults into these designs. Built-In Self-Test (BIST) and Built-In Self-Repair (BISR) are effective test and repair methods for a single embedded memory; however, dedicating BIST and BISR logic to each memory is unacceptable in multi-memory designs, and the redundancy resources that manufacturers provide in memories are very limited. Rather than using traditional, inefficient redundancy resource allocation algorithms, a more precise BISR structure is needed to improve both the repair rate of RAMs and the resource utilization of redundancies while reducing the silicon area overhead of BISR circuits. To these ends, this paper proposes a shared self-repair design that uses Content Addressable Memory (CAM) as the operating unit for fault information. The paper presents the design's key components and their corresponding working principles. We implemented this structure in real industrial microprocessors. Experimental results demonstrate the effectiveness of the proposed structure.
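
A toy model of the mechanism (not the paper's implementation): a shared CAM holds faulty-address-to-spare-row entries for all memories, so BIST logs faults into it and every access consults it. Sizes and names are invented.

    class RepairCAM:
        """Toy Content Addressable Memory holding (faulty address -> spare row)
        entries shared across memories, as in a shared BISR structure."""
        def __init__(self, n_spares):
            self.entries = {}                 # faulty (mem_id, addr) -> spare id
            self.free = list(range(n_spares))

        def log_fault(self, mem_id, addr):
            """Called by BIST when a failing address is found."""
            if (mem_id, addr) in self.entries:
                return True                   # already repaired
            if not self.free:
                return False                  # out of redundancy: unrepairable
            self.entries[(mem_id, addr)] = self.free.pop(0)
            return True

        def lookup(self, mem_id, addr):
            """Parallel match on every access; returns spare row or None."""
            return self.entries.get((mem_id, addr))

    cam = RepairCAM(n_spares=2)               # spares shared by all memories
    print(cam.log_fault("ram0", 0x1F), cam.lookup("ram0", 0x1F))  # True 0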

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Discusses the fundamentals of AESA radars and trends in radar systems, analyzes the impact of these trends on test system architecture, and explains how advances in PXI modular instrumentation can meet these challenging requirements.
Abstract: Active Electronically Scanned Array (AESA) technology will enable next-generation radars to achieve better jamming resistance and a low probability of intercept by spreading their emissions over a wide frequency range. These radar systems consist of a large number of transmit/receive modules (TRMs) which are electronically scanned in a tightly time-synchronized manner. This requires digital control to move closer to the radio front end on the antennas. Other emerging technologies, such as cognitive radars and MIMO radars, will continue to drive the need for complex timing, synchronization, and high-mix RF and digital measurement requirements. To meet these challenges, radar engineers will need a platform-based approach which delivers capabilities such as multi-channel phase-aligned measurements over wide bandwidths and high-throughput streaming. This paper discusses the fundamentals of AESA radars and trends in radar systems. It analyzes the impact of these trends on test system architecture and explains how advances in PXI modular instrumentation can meet these challenging requirements.
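
One of the capabilities named above, multi-channel phase-aligned measurement, can be illustrated by estimating the relative phase of two digitized channels at the dominant carrier bin; the sample rate, IF, and 12.5 degree offset below are invented.

    import numpy as np

    def phase_offset_deg(ref, ch):
        """Relative phase of ch vs. ref for narrowband captures, via the
        angle of the spectral product at the dominant (carrier) bin."""
        Xr, Xc = np.fft.rfft(ref), np.fft.rfft(ch)
        k = int(np.argmax(np.abs(Xr)))
        return np.degrees(np.angle(Xc[k] * np.conj(Xr[k])))

    fs, f0 = 1e9, 100e6                        # assumed sample rate and IF
    t = np.arange(2048) / fs
    ref = np.sin(2 * np.pi * f0 * t)
    ch = np.sin(2 * np.pi * f0 * t + np.radians(12.5))   # mis-phased TRM
    print(phase_offset_deg(ref, ch))           # ~ 12.5 degrees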

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Presents novel run-time reconfigurable (RTR) instruments, distributed as pre-compiled ready-to-use bitstreams, studies their applicability to board-level test tasks, and shows how they improve the quality of tests for printed circuit board assemblies.
Abstract: In recent years, embedded instrumentation has become a cutting-edge technology in the field of testing and measurement. In this paper, we propose a classification of different implementations of FPGA-based embedded instruments based on the format in which they are delivered to the end user. Until now, only instruments provided as soft-core IPs and hard macro blocks had been proposed. In this work, we present novel run-time reconfigurable (RTR) instruments, which are distributed as pre-compiled ready-to-use bitstreams, and study their applicability to board-level test tasks. These instruments are designed in a special way that allows on-the-fly adaptation of the instrument to test a particular product. With the help of these RTR instruments, one can considerably improve the quality of tests for printed circuit board assemblies as well as reduce test time. Integrated into the test setup, the instruments represent an automated and low-cost complementary solution for testing complex high-performance boards and systems.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, the authors present a web service architecture that enables either a closed or networked system topology and tracks individual items by their part numbers, making it possible to identify obsolescence of the items and to plan future investments to mitigate deficiencies in the equipment.
Abstract: Aircraft maintenance managers encounter significant pressure to maintain the operational readiness of their aircraft fleet. In the commercial domain, the demands result from financial pressure to remain competitive with peers. In the military domain, maintenance managers must meet operational targets to achieve mission success. For daily operations, many managers use printed tabular sheets or manually updated spreadsheets to track aircraft and support equipment status. While this affords expediency to the maintenance managers, the approach limits the immediate communication of status changes to other levels of supervision and to others in the organization who have an interest in the information. The status sheet runs the risk of being lost or being annotated inadvertently. This tracking method adds additional time to the overall maintenance production process because subordinate staff have to exchange the information with the maintenance manager. The approach also discards information because the documentation is destroyed daily or as needed. Improvements to maintenance production could benefit from this data, or the maintenance manager could use the information to identify trends in the fleet. This research describes the initial considerations in developing a maintenance production tool for tracking the status of support equipment. The tool uses a web service architecture to enable either a closed or networked system topology. The system tracks individual items by their part numbers. Reported information for the support equipment includes quantity status (availability and required amount), problem reports, safety violations, etc. The tool provides the ability to identify obsolescence of the items and to plan future investments to mitigate deficiencies in the equipment. A method to numerically aggregate the issues allows the maintenance manager and management to use the data to analytically rank the support equipment that most severely affects maintenance production. The flexible framework with which the tool was developed will allow for extensions to support other facets of maintenance production. Future work could include integration with the tracking process for individual aircraft to monitor configuration and status. As the data for the support equipment will be consolidated in one location, trend-based and predictive health maintenance analysis of the assets will be possible.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper formulates three strategies for dealing with the problem of differentiating between NFFs of good units under test (UUTs) and NFFs of faulty UUTs, and concludes by tabulating the formulas and calculating NFF costs for an example situation.
Abstract: False Alarms (FAs) that occur in a fielded system and No Fault Found (NFF) events that are discovered after line replaceable units (LRUs) have been returned for repair are costly situations whose full impact is difficult to put into monetary terms. For that reason, pragmatic economic models of NFFs are difficult to develop. In this paper, we deal with the problem of having to differentiate between NFFs of good units under test (UUTs) and NFFs of faulty UUTs. While we cannot tell which UUT is good and which is faulty, we can determine using probabilities what percentage of the NFFs are faulty and what percentage are good. Based on these probabilities, we can evaluate various strategies. By assigning cost factors that are knowable, such as the cost of testing a UUT, the cost we incur for good UUTs vs. the cost we incur for faulty UUTs, and various test and repair costs, we can calculate the performance of various strategies and assumptions. In this paper, we formulate three strategies: 1) We assume all NFF UUTs are good and are willing to endure the cost of bad actors (i.e. faulty UUTs) sent back to the aircraft. 2) We assume all NFF UUTs are faulty and we environmentally stress all NFF UUTs, hoping to fix some and avoid bad actors. 3) We rely on the technician to reasonably select some NFF UUTs and perform appropriate repair. We formulate each of these strategies for a case where the NFF rate is 70%. The formulation is similar with any NFF distribution, but the coefficients in each formula will be different. With proper cost data, we can actually decide which strategy works best. We conclude by tabulating the formulas and calculating NFF costs for an example situation. The numbers we picked for this example may be appropriate for some operations, but not for others. As a follow-up to this paper we would like to validate the model with real data, which may be available in some military and commercial avionics maintenance departments.
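
The paper's coefficients are not reproduced here, but the shape of the comparison can be sketched as expected cost per NFF unit under each strategy; every dollar figure, the 30% faulty fraction, the stress-screen fix rate, and the 50% triage split are hypothetical.

    def strategy_costs(p_faulty, c_bad_actor, c_stress, c_repair, fix_rate):
        """Expected cost per NFF unit for the paper's three strategies
        (all dollar figures and the stress-screen fix rate are hypothetical).
        1) ship all as good; 2) stress-screen all; 3) technician triage."""
        s1 = p_faulty * c_bad_actor
        s2 = c_stress + p_faulty * (1 - fix_rate) * c_bad_actor
        s3 = 0.5 * c_repair + 0.5 * p_faulty * c_bad_actor   # triage half
        return {"assume_good": s1, "stress_all": s2, "triage": s3}

    # Example: 30% of NFF units actually faulty, bad actor costs $20k downstream.
    print(strategy_costs(p_faulty=0.30, c_bad_actor=20_000,
                         c_stress=800, c_repair=2_500, fix_rate=0.6))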

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper will focus on implementing application whitelisting software (AWS) to protect aircraft equipment from malware and exploitation by adversaries within the United States Air Force.
Abstract: Within the United States Air Force (USAF), dedicated non-networked computer systems are used to maintain aircraft electronic systems. Traditional security practices like anti-virus (AV) software have been used to protect the maintenance equipment from malware and exploitation by adversaries. Malware sophistication and prevalence from well financed digital adversaries is rising. New layers of digital security must be applied to these computer systems so that both maintenance equipment and aircraft are protected. This paper will focus on implementing application whitelisting software (AWS).
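
The core AWS mechanism is deny-by-default execution keyed on cryptographic hashes of approved binaries, in contrast to AV's known-bad signatures; a minimal sketch (allowlist contents invented):

    import hashlib

    # Hypothetical allowlist: SHA-256 digests of approved maintenance binaries.
    ALLOWLIST = {hashlib.sha256(b"approved-binary-contents").hexdigest()}

    def is_allowed(contents: bytes) -> bool:
        """Permit execution only on an exact hash match; unlike AV signatures,
        anything unknown is denied by default."""
        return hashlib.sha256(contents).hexdigest() in ALLOWLIST

    print(is_allowed(b"approved-binary-contents"))   # True  -> may run
    print(is_allowed(b"approved-binary-content!"))   # False -> blocked, logged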

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The proposed method uses multi-carrier reflectometry, MCTDR (Multi-Carrier Time Domain Reflectometry), to detect and localize defects with good accuracy; the tool can be used in maintenance mode or embedded mode, allowing preventive maintenance and avoiding aircraft-on-ground (AOG) situations.
Abstract: In most aircraft, hot air leak detection loops are formed by thermo-sensitive cables having temperature-dependent characteristics. These wires are installed along air ducts so that they can react to temperature changes induced by leaks, and an alert is then sent to the cockpit. However, with old configurations, this alert does not include leak localization information. Classic methods based on load measurement that allow defect localization are not accurate enough, as they do not take into account cable aging and junction degradation. This may cause false alerts. The proposed method uses multi-carrier reflectometry: MCTDR (Multi-Carrier Time Domain Reflectometry). Advantageously, the MCTDR measurements allow our device to be superimposed on the already-installed systems without interfering with current signals. Moreover, we can detect and locate precisely any abnormality or change on the cable. The reflectometer measures the received signal and compares it to a given reference in terms of peak magnitudes. A hot spot is detected when the peak magnitudes of a given number of successive reflectograms become increasingly smaller than the reference, which is caused by a decrease in the local value of the impedance. The device allows the detection and localization of defects with good accuracy. Moreover, we deduce the temperature at the hot area by computing its impedance. This tool can be used in maintenance mode or embedded mode, allowing preventive maintenance and avoiding aircraft-on-ground (AOG) situations.
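
A sketch of the stated detection rule: compare successive reflectogram peak magnitudes against a reference and flag a monotonic drop, converting the peak position to a distance. The sample rate, propagation velocity, and reflectograms are all invented.

    import numpy as np

    def hot_spot(reference, frames, fs, v=1e8):
        """Compare successive reflectogram frames to the reference at its main
        peak; a monotonic magnitude drop (local impedance falling as the cable
        heats) flags a hot spot and yields its one-way distance in meters."""
        k = int(np.argmax(np.abs(reference)))            # fault/junction bin
        mags = [abs(f[k]) for f in frames]
        shrinking = all(m < abs(reference[k]) for m in mags) and all(
            b < a for a, b in zip(mags, mags[1:]))
        return k / fs * v / 2 if shrinking else None     # round trip -> one way

    fs = 1e9                                             # 1 GS/s (assumed)
    ref = np.zeros(2000); ref[800] = 1.0                 # reflection at ~40 m
    frames = [ref * s for s in (0.9, 0.8, 0.7)]          # successive captures
    print(hot_spot(ref, frames, fs))                     # -> 40.0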

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this article, a nonintrusive and electromagnetically self-powered embedded system with vibration sensor for condition monitoring of electromechanical machinery is presented, which can be installed inside the terminal block of a motor or generator and support wireless communication for transferring data to a mobile device or computer for subsequent performance analysis.
Abstract: This paper presents a nonintrusive and electromagnetically self-powered embedded system with a vibration sensor for condition monitoring of electromechanical machinery. This system can be installed inside the terminal block of a motor or generator and supports wireless communication for transferring data to a mobile device or computer for subsequent performance analysis. As an initial application, the sensor package is configured for automated condition monitoring of resiliently mounted machines. Upon detecting a spin-down event, e.g. a motor turn-off, the system collects and transmits vibration and residual back-EMF data as the rotor decreases in rotational speed. This data is then processed to generate an empirical vibrational transfer function (eVTF) rich in condition information for detecting and differentiating machinery and vibration-mount pathologies. The utility of this system is demonstrated via lab-based tests of a resiliently mounted 1.1 kW three-phase induction motor, with results showcasing the usefulness of the embedded system for condition monitoring.
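
An eVTF can be illustrated with the standard H1 cross-spectral estimate, treating the decaying rotor forcing during spin-down as the excitation; the chirp excitation and one-pole mount model below are stand-ins, not the paper's data.

    import numpy as np
    from scipy.signal import csd, welch, chirp, lfilter

    fs = 2000
    t = np.arange(0, 10, 1 / fs)
    x = chirp(t, f0=60, f1=5, t1=10)          # rotor forcing during spin-down
    b, a = [0.2], [1.0, -0.8]                 # toy mount dynamics (one-pole IIR)
    y = lfilter(b, a, x) + 0.01 * np.random.default_rng(3).normal(size=t.size)

    def evtf(x, y, fs, nperseg=1024):
        """H1 estimate Sxy/Sxx of the empirical vibrational transfer function."""
        f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
        _, sxx = welch(x, fs=fs, nperseg=nperseg)
        return f, sxy / sxx

    f, H = evtf(x, y, fs)
    print(np.abs(H[:5]))    # changes in |H(f)| over time flag mount pathologies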

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The CTBN framework allows for the representation of complex performance objectives, which can be evaluated quickly using a mathematically sound approach and can also be used to predict likely system behavior, making this approach extremely useful for PHM as well.
Abstract: When awarding contracts in the private sector, there are a number of logistical concerns that agencies such as the Department of Defense (DoD) must address. In an effort to maximize the operational effectiveness of the resources provided by these contracts, the DoD and other government agencies have altered their approach to contracting through the adoption of a performance based logistics (PBL) strategy. PBL contracts allow the client to purchase specific levels of performance, rather than providing the contractor with the details of the desired solution in advance. For both parties, the difficulty in developing and adhering to a PBL contract lies in the quantification of performance, which is typically done using one or more easily evaluated objectives. In this work, we address the problem of evaluating PBL performance objectives through the use of continuous time Bayesian networks (CTBNs). The CTBN framework allows for the representation of complex performance objectives, which can be evaluated quickly using a mathematically sound approach. Additionally, the method introduced here can be used in conjunction with an optimization algorithm to aid in the process of selecting a design alternative that will best meet the needs of the contract, and the goals of the contracting agency. Finally, the CTBN models used to evaluate PBL objectives can also be used to predict likely system behavior, making this approach extremely useful for PHM as well.
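
CTBNs generalize this, but the flavor of evaluating a PBL objective can be shown with a two-state continuous-time Markov model of one asset: simulate exponential up/down dwell times and compare achieved availability against the contracted level. The rates and the 0.95 objective are hypothetical; a real CTBN would condition these rates on the states of other components.

    import random

    def simulate_availability(mtbf, mttr, horizon, seed=0):
        """Monte Carlo availability of a two-state (up/down) continuous-time
        Markov chain: exponential times to failure and to repair."""
        rng = random.Random(seed)
        t, up_time, up = 0.0, 0.0, True
        while t < horizon:
            dwell = rng.expovariate(1 / mtbf if up else 1 / mttr)
            dwell = min(dwell, horizon - t)
            if up:
                up_time += dwell
            t += dwell
            up = not up
        return up_time / horizon

    # Hypothetical design alternative: 400 h MTBF, 20 h MTTR.
    print(simulate_availability(mtbf=400, mttr=20, horizon=1e6))
    # ~0.952; compare against the contracted availability objective, e.g. 0.95.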

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, the authors present a tool for evaluating the cost of replacing or maintaining test equipment, which evaluates the consequences, risks, and costs associated with each choice and can be used as part of the decision process.
Abstract: Test platforms age; the components within test systems degrade, become obsolete, and wear out over time. Manufacturing companies must continuously evaluate the expected lifespan of their test equipment and determine the risks and tradeoffs associated with replacing the equipment vs. maintaining it. Both industry and government entities continually struggle with how to best evaluate and address the issues of aging test equipment and systems. This paper reviews the various options available to test engineers when faced with replacing or maintaining a test system. Specifically, the manufacturing/test community must evaluate the consequences, risks, and costs associated with each choice: 1. Do nothing and continue to maintain equipment until equipment failure. 2. Rejuvenate equipment by replacing components/instruments. 3. Replace existing equipment with modern automated test equipment. 4. Outsource manufacturing and test of the product to a supplier. To help quantify the decision-making process, an evaluation tool can be used to analyze the factors that influence the “replace or maintain” question. Each of the options listed above carries with it its own list of questions that must be addressed. These questions are encoded into the tool, with responses then interpreted and results collated with the user's historical data, providing the test engineer with quantifiable and meaningful data for evaluating the cost of replacing or maintaining factory test equipment. The following sections detail how this tool can be developed and utilized as part of the “replace or maintain” decision process.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper will concentrate on a specific example use case utilizing select standards and tools to aid in producing compliant IEEE SCC20/ATML standard products that will result in the reuse and interoperability of these products.
Abstract: The IEEE SCC20/Automatic Test Markup Language (ATML) standards are currently being used to describe a host of Automatic Test Equipment (ATE) related documents. These standards cover test descriptions, requirements, and specifications of ATE instruments and UUTs in an all-encompassing test environment. They provide the necessary elements needed to achieve the goals of reducing the logistic footprint associated with complex system testing through data portability and reuse. The IEEE SCC20/ATML standards provide the ability to capture the electronic product design/specification test data required for life cycle support. However, in order to achieve the full benefits of these standards, one must recognize the tasks of implementing them to provide the information necessary to achieve the goal of reduced support equipment proliferation and cost reduction. While these standards go a long way toward achieving these objectives, a number of issues must be addressed. In order to support this environment, the IEEE SCC20/ATML standards provide a number of ways to develop IEEE-compliant documents. However, without a set of comprehensive procedures and supporting tools, the optimum reuse and data integrity of these products may not be achieved. This situation is caused by the scope of the testing environment, which involves the integration of many elements and events that occur over a product's life cycle [1]. This leads to a data provenance issue, resulting in data that may be inconsistent with IEEE SCC20/ATML documents. This paper discusses how to handle these data issues by describing an approach and methodology addressing data reuse and portability. The recommended methods focus on ensuring that IEEE SCC20/ATML-developed products result in the highest degree of reuse, interchangeability, and data integrity throughout the different use cases of both government and industry. Applying these methods starts with the source of the data; in this case the source is a semantic taxonomy that describes how IEEE SCC20/ATML documents should be structured to support the data required by the use cases. Due to the large scope of this effort, this paper concentrates on a specific example use case utilizing select standards and tools to aid in producing compliant IEEE SCC20/ATML standard products, resulting in the reuse and interoperability of these products. It focuses on the data needed to test a UUT and how that data is defined and utilized in the resulting documentation. The activities requiring this data and the events and resources acting on this data are covered. The intent is to maintain the integrity and validity of the data throughout the product's (UUT's) testing life cycle. It is intended that this paper will lead to improved use and enhancements of these standards. This information is intended to be used in developing a recommended-practice approach that will support the use of these standards in the acquisition of test products required during a product's life cycle.

Proceedings ArticleDOI
Troy Troshynski
01 Sep 2016
TL;DR: In this article, the authors provide a brief technical overview of the common principles of high-speed avionics Ethernet and Fibre Channel networks and switch fabrics, and address several key items that must be considered when designing a test and simulation system targeted to support high-speed switch UUTs.
Abstract: Modern avionics systems increasingly employ high-speed serial data networks. High-capacity Ethernet and Fibre Channel switch fabrics are commonly found at the core of these avionics networks, and it is typical to find a mix of both copper and optical media interfaces as well as multiple data link bit rates within a single aircraft system. As these switching fabrics become integral pieces of avionics suites, functional test systems must be developed with the capacity to replicate the combined data streams of multiple avionics end points when the fabric is off the aircraft and becomes the Unit Under Test (UUT). This paper provides a brief technical overview of the common principles of high-speed avionics Ethernet and Fibre Channel networks and switch fabrics, and also addresses several key items that must be considered when designing a test and simulation system targeted to support high-speed switch UUTs.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: Discusses the fundamental shift in business practice required to address these critical issues and the specific benefits that can result from the integration of Big Data and advanced analytics in ATE, including enabling Prognostics and Health Management (PHM).
Abstract: Big Data and advanced analytics capabilities are delivering value in many commercial sectors. The motivation for implementing this new technology is the ability to analyze big data to achieve cost reductions, business process improvements, faster and better decisions, and new offerings for customers. These key business objectives also apply to the domain of Automatic Test Equipment (ATE). It is clear that big data and advanced analytics technologies have the potential to bring dramatic improvements to the DoD ATE Community of Interest (COI). However, in order to unlock the potential of Big Data and advanced analytics in ATE, we have to deal with some fundamental issues that impede their implementation. For example, there is currently no connectivity or integration between the Unit Under Test (UUT) test results or health monitoring data produced by the system itself, the troubleshooting, test, and repair data produced throughout the maintenance process, and the test data produced by the ATE. Also, there is no standard format or interface employed for capturing, storing, managing, and accessing the health state data produced by the ATE. Data collected across operational maintenance activities is in numerous non-standard formats, making it difficult to correlate and aggregate to support advanced analytics. This paper discusses the fundamental shift in business practice required to address these critical issues and the specific benefits that can result from the integration of Big Data and advanced analytics in ATE, including enabling Prognostics and Health Management (PHM). The paper also provides an overview of a specific case study, the application of ATML standards in the approach, and some critical design and implementation issues based on current (actual) development efforts.