
Showing papers presented at "IEEE Aerospace Conference in 2006"


Proceedings Article•DOI•
04 Mar 2006
TL;DR: The United States has successfully landed five robotic systems on the surface of Mars, as discussed by the authors. All had landed mass below 0.6 metric tons (t), landed footprints on the order of hundreds of km, and landed at sites below -1 km MOLA elevation due to the need to perform entry, descent, and landing operations in an environment with sufficient atmospheric density.
Abstract: The United States has successfully landed five robotic systems on the surface of Mars. These systems all had landed mass below 0.6 metric tons (t), had landed footprints on the order of hundreds of km and landed at sites below -1 km MOLA elevation due to the need to perform entry, descent and landing operations in an environment with sufficient atmospheric density. Current plans for human exploration of Mars call for the landing of 40-80 t surface elements at scientifically interesting locations within close proximity (10's of m) of pre-positioned robotic assets. This paper summarizes past successful entry, descent and landing systems and approaches being developed by the robotic Mars exploration program to increase landed performance (mass, accuracy and surface elevation). In addition, the entry, descent and landing sequence for a human exploration system will be reviewed, highlighting the technology and systems advances required.

282 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this paper, a model-based pose refinement algorithm is proposed to estimate the relative pose between the host platform and a resident space object; testing in high-fidelity simulation and a stereo-vision hardware test bed characterized the attitude, range, and transverse errors the algorithm can tolerate.
Abstract: Autonomous rendezvous and docking is necessary for planned space programs such as DARPA ASTRO, NASA MSR, ISS assembly and servicing, and other rendezvous and proximity operations. Estimation of the relative pose between the host platform and a resident space object is a critical ability. We present a model-based pose refinement algorithm, part of a suite of algorithms for vision-based relative pose estimation and tracking. Algorithms were tested in high-fidelity simulation and stereo-vision hardware test bed environments. Testing indicated that in most cases, the model-based pose refinement algorithm can handle initial attitude errors up to about 20 degrees, range errors exceeding 10% of range, and transverse errors up to about 2% of range. Preliminary point tests with real camera sequences of a 1/24 scale Magellan satellite model using a simple fixed-gain tracking filter showed potential tracking performance with mean errors of < 3 degrees and < 2% of range.

121 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: The software that has driven these rovers more than a combined 11,000 meters over the Martian surface is described, including its design and implementation, and current mobility performance results from Mars are summarized.
Abstract: NASA's Mars exploration rovers' (MER) onboard mobility flight software was designed to provide robust and flexible operation. The MER vehicles can be commanded directly, or given autonomous control over multiple aspects of mobility: which motions to drive, measurement of actual motion, terrain interpretation, even the selection of targets of interest (although this mode remains largely underused). Vehicle motion can be commanded using multiple layers of control: motor control, direct drive operations (arc, turn in place), and goal-based driving (goto waypoint). Multiple layers of safety checks ensure vehicle performance: command limits (command timeout, time of day limit, software enable, activity constraints), reactive checks (e.g., motor current limit, vehicle tilt limit), and predictive checks (e.g., step, tilt, roughness hazards). From January 2004 through October 2005, Spirit accumulated over 5000 meters and Opportunity 6000 meters of odometry, often covering more than 100 meters in a single day. In this paper we describe the software that has driven these rovers more than a combined 11,000 meters over the Martian surface, including its design and implementation, and summarize current mobility performance results from Mars.
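The layered safety checks described in the abstract can be pictured as a guard evaluated every control cycle. The sketch below is purely illustrative: the telemetry field names and limit values are invented for this example and are not the actual MER flight software interface.

```python
def safe_to_continue(state, limits):
    # Reactive checks evaluated each cycle: motor current, vehicle tilt,
    # and command timeout (hypothetical names and thresholds).
    return (state["motor_current_a"] <= limits["max_current_a"]
            and state["tilt_deg"] <= limits["max_tilt_deg"]
            and state["elapsed_s"] <= limits["timeout_s"])

limits = {"max_current_a": 2.5, "max_tilt_deg": 30.0, "timeout_s": 600.0}
nominal = safe_to_continue(
    {"motor_current_a": 1.2, "tilt_deg": 12.0, "elapsed_s": 45.0}, limits)
tilted = safe_to_continue(
    {"motor_current_a": 1.2, "tilt_deg": 35.0, "elapsed_s": 45.0}, limits)
```

In the real system these reactive checks sit between command limits and the predictive hazard checks; the point here is only that each layer can veto motion independently.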

121 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: The Mars Science Laboratory (MSL) mission, as discussed by the authors, will pioneer the next generation of robotic EDL systems, providing a system capable of landing at altitudes as high as 2 km above the reference areoid defined by the Mars Orbiter Laser Altimeter (MOLA).
Abstract: In 2010, the Mars Science Laboratory (MSL) mission will pioneer the next generation of robotic entry, descent, and landing (EDL) systems by delivering the largest and most capable rover to date to the surface of Mars. In addition to landing more mass than prior missions to Mars, MSL will offer access to regions of Mars that have been previously unreachable. By providing an EDL system capable of landing at altitudes as high as 2 km above the reference areoid, as defined by the Mars Orbiter Laser Altimeter (MOLA), MSL will demonstrate sufficient performance to land on a large fraction of the Martian surface. By contrast, the highest-altitude landing to date on Mars has been the Mars Exploration Rover (MER) MER-B at 1.44 km below the areoid. The coupling of this improved altitude performance with latitude limits as large as 60 degrees off the equator and a precise delivery to within 10 km of a surface target will allow the science community to select the MSL landing site from thousands of scientifically interesting possibilities. In meeting these requirements, MSL is extending the limits of the EDL technologies qualified by the Mars Viking, Mars Pathfinder, and MER missions. This paper discusses the MSL EDL architecture, system, and subsystem design, and some of the challenges faced in delivering such an unprecedented rover payload to the surface of Mars.

108 citations


Proceedings Article•DOI•
J.T. Adams1•
24 Jul 2006
TL;DR: This paper will look closely at the IEEE 802.15.4 standard and the features that are natively part of it, and will discuss some of the networking protocols proposed for or being used on top of the standard, including ZigBee and IPv6.
Abstract: The concept of simple sensor nets, devices the size of ping-pong balls, sprinkled liberally on the ground, has been around for a long time. Some of the big challenges have always been cost and complexity, as well as power consumption. While there have been a plurality of proprietary wireless systems developed over the past decade or so for application to this problem, these systems have suffered from an inability to scale well in cost and network complexity. In 2003, the IEEE 802.15.4 standard was ratified, and almost immediately silicon manufacturers began producing compliant single-chip radios. Now, the next generation of transceiver is on the horizon, complete with microcontroller and FLASH memory, as well as the potential for various environmental sensors to be built right into the silicon itself. IEEE STD 802.15.4 specifies the RF, PHY and MAC layers, and there are a variety of custom and industry-standards based networking protocols that can sit atop this IEEE stack. These networking protocols allow the rapid creation of mesh networks that are also self-healing. With energy-saving features designed into the basic IEEE standard, and other possibilities applied by the applications developer, IEEE 802.15.4 radios have the potential to be the cost-effective communications backbone for simple sensory mesh networks that can effectively harvest data with relatively low latency, high accuracy, and the ability to survive for a very long time on small primary batteries or energy-scavenging mechanisms like solar, vibrational, or thermal power. This paper will look closely at the IEEE standard and the features that are natively part of the standard. Some of the various networking protocols that are proposed for or being used on top of this standard will be discussed, including ZigBee networking and IPV6. Practical sensor devices employing the technology will be analyzed and power consumption investigated. 
In addition, the ongoing updates to the standard taking place now within the IEEE will be discussed in light of their potential to make products developed to this standard even more useful to the sensor community.

99 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this article, a physics-based model for bearing spall propagation is presented. Because the model is physics-based, it is readily adaptable to most bearing applications, and a reduced-order version has been developed that is efficient enough to run on board with little or no loss in accuracy.
Abstract: Diagnostic technologies for rolling element bearings are relatively well developed, but accurate prediction of remaining life once an incipient fault has been detected is considerably more difficult. This paper describes a comprehensive experimental study of bearing spall progression and a physics-based model being developed for bearing prognostics. The model computes the spall growth trajectory and time to failure based on operating conditions, and uses diagnostic feedback to self-adjust and reduce prediction uncertainty. The predictions compare very well to fault progression tests on both subscale bearings and full-scale turbine engine bearings. The experimental data has demonstrated that spall propagation is better behaved than once thought and can be predicted with high confidence. For turbine engine core thrust bearings with a typical mission mix, the prognostic window (first detection to failure) is on the order of 100 flight hours, which provides ample opportunity to plan future missions and maintenance activities with considerable safety margin. Since the model is physics-based, it is readily adaptable to most bearing applications, and a reduced order version has also been developed which is efficient enough to run on-board with little or no loss in accuracy.
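The self-adjusting idea in the abstract (a growth model whose prediction is corrected by diagnostic feedback) can be sketched generically. This is a stand-in power-law growth model with an invented feedback gain, not the authors' actual spall-propagation physics:

```python
def grow_spall(a0, coeff, exponent, hours, dt=0.5):
    # Forward-integrate a generic power-law growth model da/dt = C * a**m.
    # Illustrative only; the paper's physics-based model is far richer.
    a, t = a0, 0.0
    while t < hours - 1e-9:
        a += dt * coeff * a ** exponent
        t += dt
    return a

def adjust_coeff(coeff, predicted, measured, gain=0.5):
    # Diagnostic feedback: nudge the growth coefficient toward the value
    # implied by the latest measured spall size, shrinking future error.
    return coeff * (1.0 + gain * (measured - predicted) / predicted)

coeff = 1e-3                          # hypothetical growth coefficient
predicted = grow_spall(1.0, coeff, 1.5, 50.0)
measured = 1.10                       # hypothetical inspection result
coeff = adjust_coeff(coeff, predicted, measured)
```

When the measured spall exceeds the prediction, the coefficient is increased, so the next time-to-failure estimate shortens; this is the spirit of the uncertainty-reducing feedback loop the paper describes.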

89 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this article, a passive imaging based, multi-cue hazard detection and avoidance (HDA) system was proposed for Mars and other lander missions that seamlessly integrates multiple algorithms -crater detection, slope estimation, rock detection and texture analysis, and multi-cues $crater morphology, rock distribution, to detect these hazards in real time.
Abstract: Accurate assessment of potentially damaging ground hazards during the spacecraft EDL (entry, descent, and landing) phase is crucial to ensure a high probability of safe landing. A lander that encounters a large rock, falls off a cliff, or tips over on a steep slope can sustain mission-ending damage. Guided entry is expected to shrink landing ellipses from 100-300 km to ~10 km radius for the second-generation landers as early as 2009. Regardless of size and location, however, landing ellipses will almost always contain hazards such as craters, discontinuities, steep slopes, and large rocks. It is estimated that an MSL (Mars Science Laboratory)-sized lander should detect and avoid 16-150 m diameter craters, and vertical drops similar to the edges of 16 m or 375 m diameter craters, for high- and low-altitude HDA (hazard detection and avoidance) respectively. It should also be able to detect slopes 20° or steeper, and rocks 0.75 m or taller. In this paper we present a passive-imaging-based, multi-cue hazard detection and avoidance (HDA) system suitable for Mars and other lander missions. This is the first passive-imaging HDA system that seamlessly integrates multiple algorithms (crater detection, slope estimation, rock detection, and texture analysis) and multiple cues (crater morphology, rock distribution) to detect these hazards in real time.

70 citations


Book Chapter•DOI•
24 Jul 2006
TL;DR: This paper outlines five interrelated ontologies that support a complete semantic geospatial system and encourages the development of these ontologies into useful standards for further exploiting geospatial data and services.
Abstract: An effective ontology architecture using the semantic Web enables the development of a semantic geospatial system that forges multiple geospatial data sources and services into a powerful cross-discipline knowledge tool. This paper outlines five interrelated ontologies that support a complete semantic geospatial system. The ontologies contribute to a working example that illustrates the advantages of semantic technologies in addressing geospatial challenges. The outlined advantages include complex query decomposition, seamless integration of non-semantic services, and dynamic customization to a specific domain of interest. We encourage the development of these ontologies into useful standards for further exploiting geospatial data and services.

69 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this paper, an approach to fuse competing prediction algorithms for prognostics is presented, where multiple bearings are first seeded with small defects, then exposed to a variety of speed and load conditions similar to those encountered in aircraft engines, and run until the ensuing material liberation accumulated to a predetermined damage threshold or cage failure, whichever occurred first.
Abstract: Two fundamentally different approaches can be employed to estimate remaining life in faulted components. One is to model from first principles the physics of fault initiation and propagation. Such a model must include detailed knowledge of material properties, thermodynamic and mechanical response to loading, and the mechanisms for damage creation and growth. Alternatively, an empirical model of condition-based fault propagation rate can be developed using data from experiments in which the conditions are controlled or otherwise known and the component damage level is carefully measured. These two approaches have competing advantages and disadvantages. However, fusing the results of the two approaches produces a result that is more robust than either approach alone. In this paper, we introduce an approach to fuse competing prediction algorithms for prognostics. Results presented are derived from rig test data wherein multiple bearings were first seeded with small defects, then exposed to a variety of speed and load conditions similar to those encountered in aircraft engines, and run until the ensuing material liberation accumulated to a predetermined damage threshold or cage failure, whichever occurred first.
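The paper does not spell out its fusion rule in the abstract, but a common way to combine two competing remaining-life estimates (a generic sketch, not the authors' method) is inverse-variance weighting, where the fused estimate leans toward whichever predictor currently reports lower uncertainty:

```python
def fuse_estimates(mu_physics, var_physics, mu_empirical, var_empirical):
    # Inverse-variance weighted fusion of two remaining-life estimates
    # (in hours). The fused variance is never larger than either input,
    # which is the sense in which fusion is "more robust than either alone".
    w1, w2 = 1.0 / var_physics, 1.0 / var_empirical
    mu = (w1 * mu_physics + w2 * mu_empirical) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Physics model says 100 h (var 25); empirical model says 120 h (var 100).
mu, var = fuse_estimates(100.0, 25.0, 120.0, 100.0)
```

Here the fused estimate lands at 104 h with variance 20, closer to the more confident physics prediction but tempered by the empirical one.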

67 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: A real-time communication framework is designed to support event detection, reporting, and actuator coordination in a wireless sensor-actuator network, and two self-organized, distributed algorithms for event reporting and actuator coordination are proposed.
Abstract: A wireless sensor-actuator network (WSAN) comprises a group of distributed sensors and actuators that communicate through wireless links. Sensors are small, static devices with limited power, computation, and communication capabilities, responsible for observing the physical world. Actuators, on the other hand, are equipped with richer resources and are able to move and perform appropriate actions. Sensors and actuators cooperate with each other: while sensors perform sensing, actuators make decisions and react to the environment with the right actions. WSANs can be applied in a wide range of applications, such as environmental monitoring, battlefield surveillance, chemical attack detection, intrusion detection, and space missions. Since actuators perform actions in response to the sensed events, real-time communication and quick reaction are necessary. To provide effective applications with WSANs, two major problems remain: how to minimize the transmission delay from sensors to actuators, and how to improve the coordination among actuators for fast reaction. To tackle these problems, we designed a real-time communication framework to support event detection, reporting, and actuator coordination. This paper explores the timely communication and coordination problems among the sensors and actuators. Moreover, we propose two self-organized, distributed algorithms for event reporting and actuator coordination. Some preliminary results are presented to demonstrate the advantages of our approach.

65 citations


Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this paper, the authors define a reference system design for guidance, navigation, and control in future pinpoint landing missions, and assess the uncertainties and performance penalties associated with pinpoint landing using this reference system.
Abstract: Previous Mars landers have been able to land only within tens to hundreds of km of a target site. Principal sources of uncertainty are approach navigation, atmospheric modeling, and vehicle aerodynamics; additional (lesser) uncertainty sources are map-tie error and wind drift. The Mars Science Laboratory mission scheduled for 2009 launch will use guidance during hypersonic entry to improve this to ~10 km. To achieve "pinpoint landing" (within 100 m) for future missions, ways of addressing the remaining error sources (approach navigation, wind drift, and map-tie error) must be found. This work defines a "reference system design" for guidance, navigation, and control in future pinpoint landing missions, and assesses uncertainties and performance penalties associated with pinpoint landing using this reference system design.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The proposed particle filter (PF) embeds a data association technique based on the joint probabilistic data association (JPDA) which handles the uncertainty of the measurement origin and is able to cope with partial occlusions and to recover the tracks after temporary loss.
Abstract: The particle filtering technique with multiple cues such as colour, texture and edges as observation features is a powerful technique for tracking deformable objects in image sequences with complex backgrounds. In this paper, our recent work (Brasnett et al., 2005) on single object tracking using particle filters is extended to multiple objects. In the proposed scheme, track initialisation is embedded in the particle filter without relying on an external object detection scheme. The proposed scheme avoids the use of hybrid state estimation for the estimation of number of active objects and its associated state vectors as proposed in (Czyz et al., 2005). The number of active objects and track management are handled by means of probabilities of the number of active objects in a given frame. These probabilities are shown to be easily estimated by the Monte Carlo data association algorithm used in our algorithm. The proposed particle filter (PF) embeds a data association technique based on the joint probabilistic data association (JPDA) which handles the uncertainty of the measurement origin. The algorithm is able to cope with partial occlusions and to recover the tracks after temporary loss. The probabilities calculated for data associations take part in the calculation of probabilities of the number of objects. We evaluate the performance of the proposed filter on various real-world video sequences with appearing and disappearing targets.
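The bootstrap particle-filter machinery these trackers build on can be sketched in a few lines. This is a generic one-dimensional example with invented noise parameters, not the authors' multi-cue, multi-object JPDA scheme:

```python
import math, random

random.seed(1)

def resample(particles, weights):
    # Stratified resampling: one draw per equal-probability stratum,
    # duplicating high-weight particles and discarding low-weight ones.
    n = len(particles)
    cum, cs = 0.0, []
    for w in weights:
        cum += w
        cs.append(cum)
    out, j = [], 0
    for i in range(n):
        u = (i + random.random()) / n
        while j < n - 1 and cs[j] < u:
            j += 1
        out.append(particles[j])
    return out, [1.0 / n] * n

def pf_step(particles, weights, z, motion_std, meas_std):
    # Bootstrap step: propagate with a random-walk motion model,
    # reweight by the Gaussian measurement likelihood, normalise, resample.
    particles = [x + random.gauss(0.0, motion_std) for x in particles]
    weights = [w * math.exp(-0.5 * ((z - x) / meas_std) ** 2)
               for w, x in zip(weights, particles)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    return resample(particles, weights)

# Track a target drifting at +0.1 per step from noisy measurements.
n = 500
particles = [random.uniform(0.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
truth = 2.0
for _ in range(50):
    truth += 0.1
    z = truth + random.gauss(0.0, 0.3)
    particles, weights = pf_step(particles, weights, z, 0.2, 0.3)
estimate = sum(w * x for w, x in zip(weights, particles))
```

The multi-cue, multi-object filter in the paper layers colour/texture/edge likelihoods and JPDA-based data association on top of exactly this predict-reweight-resample loop.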

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The primary underlying cognitive processes and issues that are common to most, if not all, decision making models are described, with a focus on attention, working memory, and reasoning.
Abstract: From a general cognitive perspective, decision making is the process of selecting a choice or course of action from a set of alternatives. A large number of time critical decision making models have been developed over the course of several decades. This paper reviews both the underlying cognitive processes and several decision making models. In the first section, we briefly describe the primary underlying cognitive processes and issues that are common to most, if not all, decision making models, with a focus on attention, working memory, and reasoning. The second section reviews several of the most prominent high-level models of decision making, especially those developed for military contexts.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The F-35 Joint Strike Fighter (JSF) prognostics and health management (PHM) program is redefining the baseline for aircraft PHM. This paper provides a top-level description of the gas path debris monitoring technology, its implementation and integration for the F-35, and the development route planned to achieve the maturity level required for initial service release.
Abstract: The F-35 Joint Strike Fighter (JSF) prognostics and health management (PHM) program is redefining the baseline for aircraft PHM. The objective is a management system that enables the F-35 aircraft to identify and report its own maintenance requirements, maximising aircraft use and minimising logistical overhead. JSF PHM offers aircraft operational cost and safety benefits above and beyond any air-vehicle strategy currently employed. For the propulsion system, the required input to F-35 PHM is achieved through monitoring a range of engine subsystems, using both mature and new technologies, and combining the information to form the engine health status and prognosis. The propulsion system PHM sensor suite incorporates several emerging technologies, including gas path debris monitoring. This paper provides a top-level description of the debris monitoring technology, its implementation and integration for F-35 and the development route planned to achieve the maturity level required for initial service release.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: Simulations of the lightweight framework applied to realistic non-holonomic tricycle vehicles highlight the swarm's ability to form arbitrary formations from random initial vehicle distributions and formation morphing capabilities, as well as navigate complex obstacle fields while maintaining formation.
Abstract: Multi-vehicle swarms offer the potential for increased performance and robustness in several key robotic and autonomous applications. Emergent swarm behavior demonstrated in biological systems shows performance that far outstrips the abilities of the individual members. This paper discusses a lightweight formation control methodology using conservative potential functions to ensure group cohesion, yet requiring very modest communication and control requirements for each individual node. Previous efforts have demonstrated distributed methods to navigate a vehicle swarm through a complex obstacle environment while remaining computationally simple and having low bandwidth requirements. It is shown that arbitrary formations can be held and morphed within the lightweight framework. Simulations of the lightweight framework applied to realistic non-holonomic tricycle vehicles highlight the swarm's ability to form arbitrary formations from random initial vehicle distributions and formation morphing capabilities, as well as navigate complex obstacle fields while maintaining formation. The non-holonomic constraints are used to implement realistic controls.
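The potential-function idea can be illustrated with holonomic point agents (simpler than the paper's non-holonomic tricycles): each agent descends the gradient of an attractive potential toward its formation slot plus a short-range pairwise repulsive potential for collision avoidance. Gains, safety radius, and slot assignments here are invented for the example:

```python
import math

def potential_step(positions, slots, dt=0.1, k_att=1.0, k_rep=0.5, r_safe=1.0):
    # One step of a simple potential-field formation law: linear
    # attraction to the assigned slot, inverse-square repulsion from
    # neighbours closer than r_safe (conservative pairwise potential).
    new = []
    for i, (x, y) in enumerate(positions):
        sx, sy = slots[i]
        fx, fy = k_att * (sx - x), k_att * (sy - y)
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 1e-9 < d < r_safe:
                mag = k_rep * (1.0 / d - 1.0 / r_safe) / d ** 2
                fx += mag * dx
                fy += mag * dy
        new.append((x + dt * fx, y + dt * fy))
    return new

# Four agents morph from a tight cluster into a line formation.
positions = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.0), (0.0, -0.1)]
slots = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0)]
for _ in range(300):
    positions = potential_step(positions, slots)
```

Morphing the formation amounts to swapping in a new `slots` list mid-run; the same law then drives the swarm to the new shape, which is the "formation morphing" capability the abstract describes.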

Proceedings Article•DOI•
04 Mar 2006
TL;DR: The marginalized particle filter as discussed by the authors is a powerful combination of the particle filter and the Kalman filter, which can be used when the underlying model contains a linear sub-structure, subject to Gaussian noise.
Abstract: The marginalized particle filter is a powerful combination of the particle filter and the Kalman filter, which can be used when the underlying model contains a linear sub-structure subject to Gaussian noise. This paper illustrates several positioning and target tracking applications solved using the marginalized particle filter. Furthermore, we analyze several properties of practical importance, such as its computational complexity and how to cope with quantization effects.
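The linear-Gaussian sub-structure is what makes marginalization pay off: conditioned on a particle's nonlinear state, the linear states admit an exact Kalman update instead of sampling. A scalar sketch of that conditional update (one such update runs per particle in the full filter; the numbers are illustrative):

```python
def kalman_update(mean, var, z, meas_var):
    # Exact conditional mean/variance update for a scalar linear-Gaussian
    # state given measurement z with variance meas_var.
    gain = var / (var + meas_var)          # Kalman gain
    return mean + gain * (z - mean), (1.0 - gain) * var

# Four noisy measurements of a linear state near 1.0, starting from a
# diffuse prior; the variance contracts analytically with each update.
mean, var = 0.0, 10.0
for z in [1.2, 0.8, 1.1, 0.9]:
    mean, var = kalman_update(mean, var, z, 0.5)
```

Because the linear states are handled in closed form, the particles only have to cover the nonlinear dimensions, which is the source of the computational savings the paper analyzes.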

Proceedings Article•DOI•
04 Mar 2006
TL;DR: JPL's approach to advancing the practice of systems engineering at the Lab is described, including the general approach used and how they addressed the three key aspects of change: people, process and technology.
Abstract: In FY 2004, JPL launched an initiative to improve the way it practices systems engineering. The Lab's senior management formed the systems engineering advancement (SEA) project in order to "significantly advance the practice and organizational capabilities of systems engineering at JPL on flight projects and ground support tasks". The scope of the SEA project includes the systems engineering work performed in all three dimensions of a program, project, or task: 1) the full life-cycle, i.e., concept through end of operations; 2) the full depth, i.e., program, project, system, subsystem, element (SE Levels 1 to 5); 3) the full technical scope, e.g., the flight, ground and launch systems, avionics, power, propulsion, telecommunications, thermal, etc. The initial focus of their efforts defined the following basic systems engineering functions at JPL: systems architecture, requirements management, interface definition, technical resource management, system design and analysis, system verification and validation, risk management, technical peer reviews, design process management and systems engineering task management. They also developed a list of highly valued personal behaviors of systems engineers, and are working to inculcate those behaviors into members of their systems engineering community. The SEA project is developing products, services, and training to support managers and practitioners throughout the entire system life-cycle. As these are developed, each one needs to be systematically deployed. Hence, the SEA project developed a deployment process that includes four aspects: infrastructure and operations, communication and outreach, education and training, and consulting support. In addition, the SEA project has taken a proactive approach to organizational change management and customer relationship management - both concepts and approaches not usually invoked in an engineering environment. 
This paper describes JPL's approach to advancing the practice of systems engineering at the Lab. It describes the general approach used and how they addressed the three key aspects of change: people, process and technology. It highlights a list of highly valued personal behaviors of systems engineers, discusses the various products, services and training that were developed, describes the deployment approach used, and concludes with several lessons learned.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The aircraft electrical power systems prognostics and health management (AEPHM) program, presently being worked by Air Force Research Laboratories (AFRL), Boeing, and Smiths Aerospace, has developed and demonstrated health management algorithms as discussed by the authors.
Abstract: Military and commercial aircraft, spacecraft and ground vehicles are increasingly dependent on electrical power. It has become common place for vehicles to rely on electrical power, in whole or in part, for all systems, including critical systems such as flight control and fuel delivery. Microprocessors embedded in digitally controlled power distribution systems, as well as in the digital controllers within these systems, provide an unprecedented, affordable and inherent opportunity to monitor an electrically powered vehicle's systems health. Data transmitted to and from these controllers can be used to characterize the system and component operating signatures, thereby enabling advanced diagnostic and prognostic capabilities. These capabilities will ensure a high mission reliability rate as well as reduce life cycle ownership costs. The aircraft electrical power systems prognostics and health management (AEPHM) program, presently being worked by Air Force Research Laboratories (AFRL), Boeing, and Smiths Aerospace, has developed and demonstrated health management (diagnostics, prognostics and decision aids) algorithms. The first phase of the program, which ended in July of 2005, addressed electric actuation, fuel pumps/valves and arc fault protection. The second phase is addressing power generation. Algorithm development is based on data collected from seeded and accelerated run-to-failure laboratory testing. The AEPHM architecture supports system level fusion of evidence and state information from multiple sources to improve estimates of degradation. The robustness of health management as a function of possible data sources and data rates is being determined. The product of the research will be adaptable to a range of platforms, including military, space and commercial vehicles. 
Phase I of the program was completed with an end to end, hardware-in-the-loop (electric actuator, fuel pump, fuel valve, arc fault, and power distribution unit) demonstration with on-line data generation to show the integration of the technology into a realistic setting.

Proceedings Article•DOI•
04 Mar 2006
TL;DR: Research is currently being performed by Sensis Corporation in cooperation with NASA Glenn research center to provide enhancements to the ADS-B UAT (universal access transceiver) data link to encourage user acceptance by improving upon existing capability and usability.
Abstract: Automatic dependent surveillance-broadcast (ADS-B) is emerging as an advanced aviation technology that provides situational awareness within the aircraft that was previously available only on the ground. Pilots and ground personnel have begun to benefit from this technology but further benefits from technological improvements can still be realized. These improvements include security, increased data capacity, and advanced applications (4D trajectory and data exchange). To this end research is currently being performed by Sensis Corporation in cooperation with NASA Glenn Research Center to provide enhancements to the ADS-B UAT (universal access transceiver) data link. The research goal is to encourage user acceptance by improving upon existing capability and usability along with providing a roadmap and demonstrations of future data link capability.

Proceedings Article•DOI•
04 Mar 2006
TL;DR: This second paper in a series continues to explore background, benefit impacts, and architectures; highlights some additional design challenges and issues; discusses prognostic capabilities for electronic systems; reviews strategies for prognostic capability verification and validation; and draws heavily on lessons learned from previous and current prognostic development efforts.
Abstract: The desire and need for real predictive prognostic capabilities have been around for as long as man has operated complex and expensive machinery. This has been true for both mechanical and electronic systems. There has been a long history of trying to develop and implement various degrees of prognostic and useful life remaining capabilities. Recently, stringent Diagnostic, Prognostic, and Health Management (PHM) capability requirements are being placed on new applications, like the Joint Strike Fighter (JSF), in order to enable and reap the benefits of new and revolutionary Logistic Support concepts. While fault detection and fault isolation effectiveness with very low false alarm rates continue to improve on these new applications; prognostics requirements are even more ambitious and present very significant challenges to the system design teams. These prognostic challenges have been aggressively addressed for mechanical systems for some time; but are only recently being fully explored for electronics systems. This second paper in a series will continue to explore background, benefit impacts, and architectures; highlight some additional design challenges and issues; discuss prognostic capabilities for electronic systems; review strategies for prognostic capability verification and validation; and draw heavily on other related lessons learned from previous and current prognostic development efforts.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The Hilbert-Huang transform (HHT) as discussed by the authors is a tool for spectral analysis of nonlinear and nonstationary data, based on the empirical mode decomposition (EMD) algorithm.
Abstract: One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier integral transform and its high-performance digital equivalent, the fast Fourier transform (FFT). Both carry strong a priori assumptions about the source data, such as being linear and stationary, and of satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a posteriori data processing based on the empirical mode decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source numerical data vector into a finite set of intrinsic mode functions (IMF). These functions form a nearly orthogonal basis derived from the data itself (an adaptive basis). The IMFs can be further analyzed for spectrum content by using the classical Hilbert transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT data processing system (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT EMD algorithm. Why is the fastest changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT.
This will contribute to the development of new HHT processing options, such as real-time and 2D processing using field programmable gate array (FPGA) computational resources and enhanced HHT synthesis, and will broaden the scope of HHT applications for signal processing.
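The sifting behavior the abstract asks about (the fastest component comes out first) can be illustrated with a toy sketch. This is not the HHT-DPS implementation: real EMD uses cubic-spline envelopes and a formal stopping criterion, whereas this simplified version uses linear envelope interpolation and a fixed iteration count; all names are hypothetical.

```python
import numpy as np

def local_extrema(x):
    """Indices of strict interior local maxima and minima."""
    maxima = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] < x[i + 1]]
    return np.array(maxima), np.array(minima)

def sift_imf(x, n_sifts=10):
    """Extract a first IMF by repeatedly subtracting the mean envelope.
    Linear interpolation stands in for the cubic splines used in practice."""
    h = x.astype(float)
    t = np.arange(len(x))
    for _ in range(n_sifts):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break  # too few extrema to form envelopes
        upper = np.interp(t, mx, h[mx])   # upper envelope through maxima
        lower = np.interp(t, mn, h[mn])   # lower envelope through minima
        h = h - (upper + lower) / 2.0     # remove the local mean
    return h

# Composite signal: a fast 12 Hz tone riding on a slow 1 Hz tone.
t = np.linspace(0.0, 1.0, 500, endpoint=False)
fast = np.sin(2 * np.pi * 12 * t)
slow = np.sin(2 * np.pi * 1 * t)
imf1 = sift_imf(fast + slow)

# The first IMF tracks the fastest oscillation, as the abstract's question
# about the sifting order suggests it should.
corr_fast = np.corrcoef(imf1, fast)[0, 1]
corr_slow = np.corrcoef(imf1, slow)[0, 1]
```

Even with crude linear envelopes, the mean envelope approximates the slow trend, so subtracting it isolates the fast oscillation after a few sift passes.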

Proceedings Article•DOI•
24 Jul 2006
TL;DR: An on-line path planning scheme that intelligently plans the vehicle's trajectory while exploring unknown terrain in order to maximise the quality of both the resulting SLAM map and localisation estimates necessary for the autonomous control of the UAV.
Abstract: Future unmanned aerial vehicle (UAV) applications will require high-accuracy localisation in environments in which navigation infrastructure such as the Global Positioning System (GPS) and prior terrain maps may be unavailable or unreliable. In these applications, long-term operation requires the vehicle to build up a spatial map of the environment while simultaneously localising itself within the map, a task known as simultaneous localisation and mapping (SLAM). In the first part of this paper we present an architecture for performing inertial-sensor based SLAM on an aerial vehicle. We demonstrate an on-line path planning scheme that intelligently plans the vehicle's trajectory while exploring unknown terrain in order to maximise the quality of both the resulting SLAM map and localisation estimates necessary for the autonomous control of the UAV. Two important performance properties and their relationship to the dynamic motion and path planning systems on-board the UAV are analysed. Firstly we analyse information-based measures such as entropy. Secondly we perform an observability analysis of inertial SLAM by recasting the algorithms into an indirect error model form. Qualitative knowledge gained from the observability analysis is used to assist in the design of an information-based trajectory planner for the UAV. Results of the online path planning algorithm are presented using a high-fidelity 6-DoF simulation of a UAV during a simulated navigation and mapping task.
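The entropy measure mentioned above has a closed form for Gaussian beliefs, H = (1/2) ln((2*pi*e)^n det(P)), so an information-based planner can score candidate observations by how much they shrink the entropy of the localisation estimate. A minimal sketch with notional numbers (not the paper's inertial SLAM filter):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of an n-D Gaussian: 0.5*ln((2*pi*e)^n * det(cov))."""
    n = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** n) * np.linalg.det(cov))

# Notional 2-D position uncertainty before an observation.
P_prior = np.array([[4.0, 0.5],
                    [0.5, 3.0]])

# Observe the first coordinate only, with measurement noise R.
H_obs = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

# Kalman covariance update: P_post = P - P H^T (H P H^T + R)^-1 H P.
S = H_obs @ P_prior @ H_obs.T + R
K = P_prior @ H_obs.T @ np.linalg.inv(S)
P_post = P_prior - K @ H_obs @ P_prior

h_prior = gaussian_entropy(P_prior)
h_post = gaussian_entropy(P_post)
```

A planner comparing candidate trajectories would evaluate this predicted entropy reduction for the observations each trajectory makes available and steer toward the most informative one.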

Proceedings Article•DOI•
J. Greco1, Grzegorz Cieslewski1, Adam Jacobs1, Ian A. Troxel1, Alan D. George1 •
04 Mar 2006
TL;DR: A framework that allows Earth and space scientists to use FPGA resources through an abstraction layer is explored, and a synthetic aperture radar application is used to demonstrate the power of the system architecture.
Abstract: Complex real-time signal and image processing applications require low-latency and high-performance hardware to achieve optimal performance. Building such a high-performance platform for space deployment is hampered by hostile environmental conditions and power constraints. Custom space-based FPGA coprocessors help alleviate these constraints, but their use is typically restricted by the need for TMR or radiation-hardened components. This paper explores a framework that allows Earth and space scientists to use FPGA resources through an abstraction layer. A synthetic aperture radar application is used to demonstrate the power of the system architecture. The performance of the application is shown to achieve a speedup of 19 when compared to a software solution and is able to maintain comparable data reliability. Projected speedups, for the same case study executing on the proposed flight system architecture, are several times better and also discussed. This work supports the Dependable Multiprocessor project at Honeywell and the University of Florida, a mission for the Space Technology 8 (ST-8) satellite of NASA's New Millennium Program.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The objective of this NMP ST8 effort is to combine high-performance, fault-tolerant, COTS-based cluster processing and fault-tolerant middleware in an architecture and software framework capable of supporting a wide variety of mission applications.
Abstract: With the ever-increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor (Ramos et al., 2005), and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida via its High-Performance Computing and Simulation (HCS) Research Laboratory, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system. This paper provides a detailed description of the basic Dependable Multiprocessor technology, and the TRL5 technology prototype currently under development.

Proceedings Article•DOI•
24 Jul 2006
TL;DR: In this paper, a discrete event simulation can be applied to assess the first order requirements for integrated vehicle health management (IVHM) implementation on systems and an example of a performance improvement illustrated from a simulation run using notional system and scenario data.
Abstract: Support requirements and health management are significant operational drivers on large military weapons systems and large commercial aircraft. The integration of health management into the up-front design of these systems should include a detailed benefit analysis that includes all of the benefactors of operational performance that a truly integrated health management system can bring. These benefactors are the Original Equipment Manufacturers (OEMs), the mission operators, command/control elements, fleet management, and maintenance operations. Each of these functional areas has unique processes that can be identified and measured. The performance improvement on a system can be evaluated before design dollars are ever committed or contracts signed. By identifying the processes, measures of effectiveness (MOE), and input drivers, a discrete event simulation can be applied to assess the first order requirements for Integrated Vehicle Health Management (IVHM) implementation on systems. Some of the basic input approaches are discussed, as well as an example of a performance improvement illustrated from a simulation run using notional system and scenario data. This type of analysis enables a larger business case to be developed to aid designers and planners in their decisions of how to implement IVHM. This paper describes some of the initial approaches to modeling the above problems as part of the ongoing effort to develop a simulation to assess IVHM.
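As a rough illustration of the discrete event simulation idea (not the paper's model), the sketch below runs a single notional asset through deterministic failure/repair cycles on an event queue and compares availability with and without an IVHM-style reduction in downtime; every parameter here is invented.

```python
import heapq

def simulate_availability(horizon_h, mtbf_h, repair_h):
    """Deterministic single-asset up/down cycle simulated as discrete events.
    Returns the fraction of the horizon the asset was mission-capable."""
    events = [(mtbf_h, "fail")]          # (time, kind) priority queue
    uptime, last_t, up = 0.0, 0.0, True
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon_h:
            t = horizon_h
        if up:
            uptime += t - last_t
        last_t = t
        if t >= horizon_h:
            break
        if kind == "fail":
            up = False
            heapq.heappush(events, (t + repair_h, "repaired"))
        else:
            up = True
            heapq.heappush(events, (t + mtbf_h, "fail"))
    return uptime / horizon_h

# Notional comparison: IVHM-enabled prognosis is modeled simply as shorter
# downtime per failure event (parts and crew pre-positioned).
baseline = simulate_availability(10_000.0, mtbf_h=100.0, repair_h=10.0)
with_ivhm = simulate_availability(10_000.0, mtbf_h=100.0, repair_h=4.0)
```

A real benefit analysis would replace the fixed times with the measured process distributions and MOEs of each functional area, but the structure of the simulation is the same.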

Proceedings Article•DOI•
G. Brat1, Ewen Denney1, Dimitra Giannakopoulou1, Jeremy Frank1, Ari K. Jonsson1 •
04 Mar 2006
TL;DR: This work explores how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems.
Abstract: Autonomous software, especially if it is model-based, can play an important role in future space applications. For example, it can help streamline ground operations, assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without (or with minimal) human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. Traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.
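To sketch what model checking buys over testing, explicit-state exploration visits every reachable state of a model rather than a sampled subset. The transition system below is a notional, hand-built recovery model, not one of the paper's systems:

```python
from collections import deque

def check_safety(transitions, init, bad):
    """Explicit-state check of a safety property: breadth-first search over
    all reachable states, returning a counterexample path to a bad state if
    one exists, else None (the property holds exhaustively)."""
    frontier = deque([[init]])
    visited = {init}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state in bad:
            return path  # counterexample trace
        for nxt in transitions.get(state, ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Notional power-subsystem recovery model: states and allowed transitions.
model = {
    "nominal": ["fault_detected"],
    "fault_detected": ["safe_mode", "recovering"],
    "recovering": ["nominal"],
    "safe_mode": ["recovering"],
}
# Safety property: the unmodeled state "battery_depleted" is unreachable.
trace = check_safety(model, "nominal", bad={"battery_depleted"})
```

When the property fails, the returned path doubles as a diagnostic trace, which is exactly the artifact that makes model checking results auditable by engineers.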

Proceedings Article•DOI•
24 Jul 2006
TL;DR: A decision support system for operational decision making with PHM-specific data is described; it enables the user to make optimal decisions based on rigorous trade-offs between different prognostic and external information sources.
Abstract: This paper describes a decision support system (DSS) for use in operational decision making with PHM-specific data. Challenges arise from the large number of different pieces of information upon which a decision maker has to act. Conflicting information from on-board and off-board PHM modules, together with seemingly contradictory and changing requirements from operations as well as maintenance for a multitude of different systems within strict time constraints, makes operational decision-making a difficult undertaking. The DSS enables the user to make optimal decisions based on the user's expression of rigorous trade-offs between different prognostic and external information sources. This is accomplished through guided evaluation of different optimal decision alternatives under operational boundary conditions using user-specific and interactive collaboration. We present some preliminary results of the use of such a DSS for post-prognostics decision-making.
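One common way to encode such trade-offs, shown here purely as an illustration with invented criteria and weights (not the paper's method), is a weighted sum over normalized decision criteria:

```python
def rank_alternatives(alternatives, weights):
    """Score each decision alternative by a weighted sum of its criteria.
    Criteria are normalized to [0, 1] across alternatives so no single
    information source dominates by scale alone; higher scores are better."""
    names = list(weights)
    lo = {c: min(a[c] for a in alternatives.values()) for c in names}
    hi = {c: max(a[c] for a in alternatives.values()) for c in names}

    def norm(c, v):
        return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

    scores = {
        alt: sum(weights[c] * norm(c, crit[c]) for c in names)
        for alt, crit in alternatives.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Notional post-prognostics choices for a degrading actuator. Criteria:
# remaining-useful-life margin, mission capability, and cost (negated so
# "higher is better" holds for every criterion).
options = {
    "fly_as_is":      {"rul_margin": 0.2, "capability": 1.0, "neg_cost": -1.0},
    "repair_now":     {"rul_margin": 1.0, "capability": 0.0, "neg_cost": -6.0},
    "defer_to_check": {"rul_margin": 0.6, "capability": 0.8, "neg_cost": -2.0},
}
weights = {"rul_margin": 0.5, "capability": 0.3, "neg_cost": 0.2}
ranking = rank_alternatives(options, weights)
```

Interactive use amounts to letting the decision maker adjust the weights and watch the ranking respond, which makes the trade-off between prognostic and operational inputs explicit rather than implicit.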

Proceedings Article•DOI•
24 Jul 2006
TL;DR: The results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining algorithms to discover anomalies in free-text reports regarding system health and safety of two aerospace systems are described.
Abstract: This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining algorithms to discover anomalies in free-text reports regarding system health and safety of two aerospace systems. We discuss two problems of significant import in the aviation industry. The first problem is that of automatic anomaly discovery concerning an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weakness or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem where an algorithm takes free-text reports as input and automatically groups them into different bins, where each bin corresponds to a different unknown anomaly category; and (2) as a supervised learning problem where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact, because recurring anomalies tend to have very small cluster sizes, we explore new methods and measures to enhance the original approach for anomaly detection. We present our results on the identification of recurring anomalies in problem reports concerning two aerospace systems as well as benchmark data sets that are widely used in the field of text mining. 
The first system is the Aviation Safety Reporting System (ASRS) database, which contains several hundred thousand free-text reports filed by commercial pilots concerning safety issues on commercial airlines. The second aerospace system we analyze is the NASA Space Shuttle problem reports as represented in the CARS data set, which consists of 7440 NASA Shuttle problem reports. We show significant classification accuracies on both of these systems as well as compare our results with reports classified into anomaly categories by field experts.
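Both the unsupervised and supervised strategies rest on a vector representation under which differently worded reports of the same anomaly land close together. A stdlib-only TF-IDF and cosine-similarity sketch (toy reports, not ASRS or CARS data, and not the paper's algorithms) shows the idea:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return 0.0 if nu == 0.0 or nv == 0.0 else dot / (nu * nv)

# Toy problem reports (hypothetical, not drawn from ASRS or CARS).
reports = [
    "hydraulic pump seal leak noted during preflight".split(),
    "slow leak found at hydraulic pump seal".split(),
    "gps receiver lost satellite lock during climb".split(),
]
vecs = tfidf_vectors(reports)
same_anomaly = cosine(vecs[0], vecs[1])   # two phrasings of one anomaly
diff_anomaly = cosine(vecs[0], vecs[2])   # unrelated anomaly
```

Clustering (the unsupervised strategy) then groups high-similarity reports into bins, while the supervised strategy trains a classifier over the same vectors; the recurring-anomaly problem is hard precisely because true recurrences form very small, tight clusters in this space.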

Proceedings Article•DOI•
04 Mar 2006
TL;DR: In this paper, spacecraft inertia estimation from flight data is formulated as a constrained least squares minimization problem, and the authors show that this approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares methods.
Abstract: This paper presents a new formulation for spacecraft inertia estimation from flight data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to test data collected from a robotic testbed consisting of a free rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
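The unconstrained baseline the paper compares against follows directly from Euler's equation, J*wdot + w x (J*w) = tau, which is linear in the six unique inertia entries. The sketch below (synthetic data, not the paper's testbed; the constrained version would additionally impose the LMI bounds via a semidefinite-programming solver, omitted here) recovers a known inertia by ordinary least squares:

```python
import numpy as np

def regressor(w, wdot):
    """Rows of the linear system A @ theta = tau implied by Euler's equation,
    with theta = [Jxx, Jyy, Jzz, Jxy, Jxz, Jyz]."""
    def L(v):
        x, y, z = v
        # L(v) @ theta == J @ v for symmetric J parameterized by theta
        return np.array([[x, 0, 0, y, z, 0],
                         [0, y, 0, x, 0, z],
                         [0, 0, z, 0, x, y]])
    skew = np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])
    return L(wdot) + skew @ L(w)

# Synthetic "flight data" from a known, notional inertia matrix (kg*m^2).
theta_true = np.array([4.0, 5.0, 6.0, 0.2, 0.1, 0.3])
rng = np.random.default_rng(0)
rows, rhs = [], []
for _ in range(20):
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    A = regressor(w, wdot)
    rows.append(A)
    rhs.append(A @ theta_true)  # torque consistent with Euler's equation
A_all = np.vstack(rows)
tau_all = np.concatenate(rhs)

theta_hat, *_ = np.linalg.lstsq(A_all, tau_all, rcond=None)
```

With noisy flight data this plain least squares fit can return physically invalid inertias (e.g., violating positive definiteness or the triangle inequalities on the principal moments), which is exactly the failure mode the paper's LMI constraints are designed to rule out.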

Proceedings Article•DOI•
24 Jul 2006
TL;DR: This paper discusses the approach, development, and validation of prognostics for two types of electronic equipment, a switch-mode power supply and a GPS receiver, selected based on their relatively high failure rates and relevance to many commonly used avionics systems.
Abstract: Maintenance of aircraft electronic systems has traditionally been performed in reaction to reported failures or through periodic system replacements. Recent changes in weapons platform acquisition and support requirements have spurred interest in applying prognostic health management (PHM) concepts developed for mechanical systems to electronic systems. The approach, development, and validation of prognostics for two types of electronic equipment are discussed in this paper. The two applications, a switch-mode power supply and a GPS receiver, were selected based on their relatively high failure rates and relevance to many commonly used avionics systems. The method identifies prognostic features by performing device-, circuit-, and system-level modeling. Device modeling with equivalent-circuit and mathematical physics-of-failure models describes parameter degradation resulting from damage accumulation for each device. Prognostic features extracted from a small array of sensors on the power supply and from the GPS operational communication data stream are used to update life usage and failure progression models to provide an indication of the health state. The results of accelerated failure tests on both systems are used to illustrate the approach and demonstrate its effectiveness in predicting the useful life remaining. The solutions have applicability to power supplies in many avionic systems, and to a broad class of mixed digital/analog circuitry including radar and software-defined radio.
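In its simplest form, the life-usage and failure-progression idea reduces to fitting a trend to a degradation feature and extrapolating to a failure threshold. The sketch below uses a notional feature (output-capacitor ESR drift in a power supply, a commonly cited degradation signature) with invented numbers, not the paper's models; a real prognostic would also carry uncertainty on the prediction.

```python
import numpy as np

def remaining_useful_life(times, feature, threshold):
    """Fit a linear trend to a monotonically degrading feature and
    extrapolate to the failure threshold; returns hours past the last
    observation."""
    slope, intercept = np.polyfit(times, feature, 1)
    if slope <= 0:
        return float("inf")  # no degradation trend observed yet
    t_fail = (threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Notional power-supply health feature: output capacitor ESR (ohms)
# drifting upward with operating hours; end of useful life at 0.50 ohm.
hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
esr = np.array([0.10, 0.14, 0.18, 0.22, 0.26])   # 0.0004 ohm per hour
rul = remaining_useful_life(hours, esr, threshold=0.50)
```

Each new sensor sample updates the fit, so the remaining-life estimate is refined continuously as damage accumulates, which is the behavior the accelerated failure tests in the paper are used to validate.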