
Showing papers presented at "IEEE Aerospace Conference in 2005"


Proceedings Article • DOI
05 Mar 2005
TL;DR: This paper tests four automatic methods of anomaly detection in text that are popular in the current literature on text mining, and concludes with recommendations regarding the development of an operational text mining system for analysis of problem reports that arise from complex space systems.
Abstract: Many existing complex space systems have a significant amount of historical maintenance and problem data stored in unstructured text form. The problem that we address in this paper is the discovery of recurring anomalies and relationships between problem reports that may indicate larger systemic problems. We illustrate our techniques on data from discrepancy reports regarding software anomalies in the Space Shuttle. These free-text reports are written by a number of different people, so the emphasis and wording vary considerably. We test four automatic methods of anomaly detection in text that are popular in the current literature on text mining. The first method is k-means/Gaussian mixture model clustering applied to the term-document matrix. The second is the Sammon nonlinear map, which projects high-dimensional document vectors into two dimensions for visualization and clustering purposes. The third applies expectation maximization to a mixture of von Mises-Fisher distributions, which represents each document as a point on a high-dimensional sphere; clustering in this space yields sets of similar documents. The fourth is a new method known as spectral clustering, in which vectors from the term-document matrix are embedded in a high-dimensional space for clustering. The paper concludes with recommendations regarding the development of an operational text mining system for analysis of problem reports that arise from complex space systems. We also contrast such systems with general-purpose text mining systems, illustrating the areas in which such a system needs to be specialized for the space domain.
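
A minimal sketch of the first method, assuming a scikit-learn environment: k-means applied to a TF-IDF term-document matrix. The toy report strings, cluster count, and preprocessing are illustrative placeholders, not the paper's setup.

```python
# Sketch: k-means clustering on a TF-IDF term-document matrix (method 1).
# The reports and the choice of k are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "unexpected software reset during ascent timeline",
    "flight software watchdog triggered spurious reset",
    "telemetry dropout traced to antenna pointing error",
    "loss of telemetry after antenna gimbal fault",
]

X = TfidfVectorizer(stop_words="english").fit_transform(reports)  # docs x terms
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, report in sorted(zip(labels, reports)):
    print(label, report)
```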

139 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: The Chebyshev outlier detection method does not ascertain the reason for an outlier; it identifies potential outlier data, allowing domain experts to investigate the cause.
Abstract: During data collection and analysis, it is often necessary to identify, and possibly remove, outliers. An objective method for identifying outliers to be removed is critical. Many automated outlier detection methods are available; however, many are limited by assumptions of a distribution or require predefined upper and lower boundaries in which the data should exist. If there is a known distribution for the data, then using that distribution can aid in finding outliers. Often, a distribution is not known, or the experimenter does not want to make an assumption about a certain distribution. Also, enough information may not exist about a set of data to determine reliable upper and lower boundaries. For these cases, an outlier detection method using the empirical data and based upon Chebyshev's inequality was developed. This method allows for detection of multiple outliers, not just one at a time. It assumes that the data are independent measurements and that a relatively small percentage of outliers is contained in the data. Chebyshev's inequality bounds the percentage of the data that falls outside of k standard deviations from the mean, making no assumptions about the distribution of the data. If the data are known to be unimodal but without a known distribution, the method can be improved by using the unimodal Chebyshev inequality. The Chebyshev outlier detection method uses the Chebyshev inequality to calculate upper and lower outlier detection limits; data values that are not within the range of those limits are considered outliers. Outliers could be due to erroneous data or could indicate that the data are correct but highly unusual. This algorithm does not ascertain the reason for the outlier; it identifies potential outlier data, allowing domain experts to investigate the cause.
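
The calculation at the heart of the method can be sketched in a few lines. Chebyshev's inequality gives P(|X - mu| >= k*sigma) <= 1/k^2, so choosing an outlier fraction p fixes k = 1/sqrt(p). The two-pass structure below (a loose first pass to trim extremes that would inflate the mean and standard deviation, then a tighter second pass) and the p values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def chebyshev_limits(data, p):
    """Limits mu +/- k*sigma with k = 1/sqrt(p), from Chebyshev's
    bound P(|X - mu| >= k*sigma) <= 1/k^2 (distribution-free)."""
    k = 1.0 / np.sqrt(p)
    mu, sigma = data.mean(), data.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def detect_outliers(data, p1=0.10, p2=0.01):
    data = np.asarray(data, dtype=float)
    # Pass 1: loose limits trim extreme values that would otherwise
    # inflate the mean and standard deviation.
    lo, hi = chebyshev_limits(data, p1)
    trimmed = data[(data >= lo) & (data <= hi)]
    # Pass 2: tighter limits recomputed from the trimmed data.
    lo, hi = chebyshev_limits(trimmed, p2)
    return data[(data < lo) | (data > hi)], (lo, hi)

values = np.append(np.random.default_rng(0).normal(10.0, 0.2, 50), 42.0)
outliers, limits = detect_outliers(values)
print(outliers, limits)   # flags 42.0, a value the data cannot explain
```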

126 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression; the algorithm consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data.
Abstract: The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future Web site. An application-specific integrated circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
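
To make the two building blocks concrete, here is an illustrative sketch assuming NumPy and PyWavelets: a 2-D wavelet decomposition followed by a most-significant-first walk over coefficient bit-planes. This is not the CCSDS coder itself, which specifies a particular 9/7 transform, segment structure, and coding order.

```python
# Sketch of the two building blocks named above: a 2-D DWT followed by
# bit-plane traversal of the coefficients. Illustrative only.
import numpy as np
import pywt

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)

# Three-level 2-D DWT ('bior4.4' is the floating-point relative of the
# 9/7 filter the standard uses).
coeffs = pywt.wavedec2(image, "bior4.4", level=3)
flat, _ = pywt.coeffs_to_array(coeffs)

# Emit bit-planes from most to least significant: truncating early gives
# a lossy reconstruction; keeping all planes of integer coefficients is
# lossless.
mags = np.abs(np.rint(flat)).astype(np.int64)
for plane in range(int(mags.max()).bit_length() - 1, -1, -1):
    bits = (mags >> plane) & 1
    print(f"bit-plane {plane}: {bits.sum()} significant coefficients")
```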

87 citations


Proceedings Article • DOI
R.F. Orsagh, Douglas W. Brown, Michael J. Roemer, T. Dabney, A.J. Hess
05 Mar 2005
TL;DR: In this paper, the authors present an integrated approach to switching mode power supply health management that implements techniques from engineering disciplines including statistical reliability modeling, damage accumulation models, physics of failure modeling, and sensor-based condition monitoring using automated reasoning algorithms.
Abstract: This paper presents an integrated approach to switching-mode power supply health management that implements techniques from engineering disciplines including statistical reliability modeling, damage accumulation models, physics-of-failure modeling, and sensor-based condition monitoring using automated reasoning algorithms. Novel features extracted from sensed parameters such as temperature, power quality, and efficiency were analyzed using advanced fault detection and damage accumulation algorithms. Using model-based assessments in the absence of fault indications, and updating those assessments with sensed information when it becomes available, provides health state awareness at any point in time. Intelligent fusion of this diagnostic information with historical component reliability statistics provides a robust health state awareness as the basis for accurate prognostic predictions. Complementary prognostic techniques, including analysis of projected operating conditions by physics-based component aging models, empirical (trending) models, and system-level failure progression models, will be used to develop verifiable prognostic models. The diagnostic techniques and prognostic models have been demonstrated through accelerated failure testing of switching-mode power supplies.

75 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the authors describe the MER instrument positioning system that allows the in situ instruments to operate and collect their important science data using a robust, dexterous robotic arm combined with visual target selection and autonomous software functions.
Abstract: During Mars Exploration Rover (MER) surface operations, the scientific data gathered by the in situ instrument suite has been invaluable with respect to the discovery of a significant water history at Meridiani Planum and the hint of water processes at work in Gusev Crater. Specifically, the ability to perform precision manipulation from a mobile platform (i.e., mobile manipulation) has been a critical part of the successful operation of the Spirit and Opportunity rovers. As such, this paper describes the MER instrument positioning system that allows the in situ instruments to operate and collect their important science data using a robust, dexterous robotic arm combined with visual target selection and autonomous software functions.

65 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the design considerations and results for an overlapped subarray radar antenna, including a custom subarray weighting function and the corresponding circuit design and fabrication, are presented.
Abstract: Overlapped subarray networks produce flat-topped sector patterns with low sidelobes that suppress grating lobes outside of the main beam of the subarray pattern. They are typically used in limited scan applications, where it is desired to minimize the number of controls required to steer the beam. However, the architecture of an overlapped subarray antenna includes many signal crossovers and a wide variation in splitting/combining ratios, which make it difficult to maintain required error tolerances. This paper presents the design considerations and results for an overlapped subarray radar antenna, including a custom subarray weighting function and the corresponding circuit design and fabrication. Measured pattern results will be shown for a prototype design compared with desired patterns.

62 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: A cooperative control algorithm is proposed, according to which each UAV decides its path independently based on an information-theoretic criterion function that incorporates target detection probability and the sensors' survival probability, accounting for hostile fire by targets as well as collision with other UAVs.
Abstract: With the recent advent of moderate-cost unmanned (or uninhabited) aerial vehicles (UAVs) and their success in surveillance, it is natural to consider the cooperative management of groups of UAVs. The problem considered in this paper is the optimization of the information obtained by a group of UAVs carrying out surveillance of several ground targets distributed over a large area. The UAVs are assumed to be equipped with ground moving target indicator (GMTI) radars, which measure the locations of moving ground targets as well as their radial velocities (Doppler). In this paper, a cooperative control algorithm is proposed, according to which each UAV decides its path independently based on an information-theoretic criterion function. The criterion function also incorporates target detection probability and survival probability for the sensors, accounting for hostile fire by targets as well as collision with other UAVs. The control algorithm requires limited communication and modest computation.
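
A toy version of this decision rule, with invented detection and survival models: each UAV scores candidate waypoints by expected detection gain discounted by survival probability and moves toward the best one. Every functional form and constant below is a placeholder, not the paper's criterion function.

```python
# Sketch: independent, score-based waypoint selection for one UAV.
# Detection and survival models here are toy stand-ins.
import numpy as np

targets = np.array([[100.0, 40.0], [60.0, 90.0]])   # known target locations
threats = np.array([[80.0, 60.0]])                  # known surface threats

def score(candidate):
    d_t = np.linalg.norm(targets - candidate, axis=1)
    info = np.sum(np.exp(-d_t / 50.0))               # toy detection-gain term
    d_h = np.linalg.norm(threats - candidate, axis=1)
    survive = np.prod(1.0 - np.exp(-d_h / 20.0))     # toy survival probability
    return info * survive

position = np.array([50.0, 50.0])
headings = np.deg2rad(np.arange(0, 360, 45))         # 8 candidate headings
candidates = position + 10.0 * np.column_stack([np.cos(headings),
                                                np.sin(headings)])
best = candidates[np.argmax([score(c) for c in candidates])]
print("next waypoint:", best)
```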

62 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: Computer simulations demonstrate that the proposed hybrid approach is robust in performing joint detection and tracking for multiple targets even though the environment is hostile in terms of high clutter density and low target detection probability.
Abstract: In this paper, we present a new approach for online joint detection and tracking of multiple targets. We combine a deterministic clustering algorithm for target detection with a sequential Monte Carlo method for multiple target tracking. The proposed approach continuously monitors the appearance and disappearance of a set of regions of interest for target detection within the surveillance region. No computational effort for target tracking is expended unless these regions of interest are persistently detected. In addition, we integrate a very efficient 2D data assignment algorithm into the sampling method for the data association problem. The proposed approach is applicable to nonlinear and non-Gaussian models for the target dynamics and measurement likelihood. Computer simulations demonstrate that the proposed hybrid approach is robust in performing joint detection and tracking for multiple targets even though the environment is hostile in terms of high clutter density and low target detection probability.
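
The 2D assignment step can be illustrated with SciPy's Hungarian-algorithm solver: measurements are matched to predicted track positions by minimizing total cost. This shows the data-association ingredient only; the paper embeds it in a sequential Monte Carlo tracker, and gating, clutter handling, and particle weighting are omitted here.

```python
# Sketch: measurement-to-track assignment by minimum total squared
# distance. Positions are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])  # predicted positions
meas = np.array([[10.5, 9.4], [0.3, -0.2], [19.6, 0.8]])    # received measurements

cost = np.linalg.norm(tracks[:, None, :] - meas[None, :, :], axis=2) ** 2
rows, cols = linear_sum_assignment(cost)                    # Hungarian algorithm
for r, c in zip(rows, cols):
    print(f"track {r} <- measurement {c} (cost {cost[r, c]:.2f})")
```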

60 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: This paper discusses the implementation of a guidance system based on mixed integer linear programming (MILP) on a modified, autonomous T-33 aircraft equipped with Boeing's UCAV avionics package, and a receding horizon MILP formulation is presented.
Abstract: This paper discusses the implementation of a guidance system based on mixed integer linear programming (MILP) on a modified, autonomous T-33 aircraft equipped with Boeing's UCAV avionics package. A receding horizon MILP formulation is presented for safe, real-time trajectory generation in a partially-known, cluttered environment. Safety at all times is guaranteed by constraining the intermediate trajectories to terminate in a loiter pattern that does not intersect with any no-fly zones and can always be used as a safe backup plan. Details about the real-time software implementation using CPLEX and Boeing's OCP platform are given. A test scenario developed for the DARPA-sponsored software enabled control capstone demonstration is outlined, and simulation and flight test results are presented.
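
The core MILP ingredient, keeping a trajectory outside a rectangular no-fly zone via big-M disjunctive constraints, can be sketched with the open-source PuLP/CBC toolchain (the flight system used CPLEX). The dynamics, horizon, and bounds below are toy assumptions, and only the waypoints, not the segments between them, are constrained.

```python
# Sketch: receding-horizon-style trajectory MILP with one no-fly zone.
import pulp

N, M = 8, 1000.0                     # horizon length, big-M constant
box = (3.0, 6.0, 3.0, 6.0)           # xmin, xmax, ymin, ymax of no-fly zone
start, goal = (0.0, 0.0), (9.0, 9.0)

prob = pulp.LpProblem("traj", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{k}", -10, 20) for k in range(N + 1)]
y = [pulp.LpVariable(f"y{k}", -10, 20) for k in range(N + 1)]
ex = pulp.LpVariable("ex", 0)        # |x_N - goal_x|
ey = pulp.LpVariable("ey", 0)        # |y_N - goal_y|
prob += ex + ey                      # minimize L1 distance to goal
prob += x[0] == start[0]; prob += y[0] == start[1]
prob += ex >= x[N] - goal[0]; prob += ex >= goal[0] - x[N]
prob += ey >= y[N] - goal[1]; prob += ey >= goal[1] - y[N]
for k in range(N):
    # toy dynamics: speed limit of 2 per step in each axis
    prob += x[k + 1] - x[k] <= 2; prob += x[k] - x[k + 1] <= 2
    prob += y[k + 1] - y[k] <= 2; prob += y[k] - y[k + 1] <= 2
xmin, xmax, ymin, ymax = box
for k in range(1, N + 1):
    b = [pulp.LpVariable(f"b{k}_{i}", cat="Binary") for i in range(4)]
    prob += x[k] <= xmin + M * b[0]  # left of the box, unless relaxed
    prob += x[k] >= xmax - M * b[1]  # right of the box, unless relaxed
    prob += y[k] <= ymin + M * b[2]  # below the box, unless relaxed
    prob += y[k] >= ymax - M * b[3]  # above the box, unless relaxed
    prob += pulp.lpSum(b) <= 3       # at least one side constraint holds
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(round(xi.value(), 1), round(yi.value(), 1)) for xi, yi in zip(x, y)])
```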

56 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: The desire and need for real predictive prognostics capabilities have existed for as long as man has operated complex and expensive machinery, and there is a long history of efforts to develop and implement various degrees of prognostic and useful-life-remaining capability.
Abstract: The desire and need for real predictive prognostics capabilities have been around for as long as man has operated complex and expensive machinery. There has been a long history of trying to develop and implement various degrees of prognostic and useful-life-remaining capabilities. Stringent diagnostic, prognostic, and health management capability requirements are being placed on new applications, like the Joint Strike Fighter (JSF), in order to enable and reap the benefits of revolutionary autonomic logistic support concepts. While fault detection and fault isolation effectiveness with very low false alarm rates continue to improve on these new applications, the prognostics requirements are even more ambitious and present very significant challenges to the system design teams. This paper explores some of these design challenges and issues, discusses the various degrees of prognostic capabilities, and draws heavily on lessons learned from previous prognostic development efforts.

54 citations


Proceedings Article • DOI
05 Mar 2005
TL;DR: In this paper, the authors report the temperature dependence of the second-generation post resonator gyroscopes and determine the effect of hysteresis over the range 35°C to 65°C.
Abstract: We report the temperature dependence of the JPL/Boeing MEMS second-generation post resonator gyroscopes and determine the effect of hysteresis over the range 35°C to 65°C. The results indicate a strong linear dependence on temperature of the drive frequency and sense frequency (0.093 Hz/°C) and of the AGC bias voltage (13 mV/°C). The results also indicate a significant time lag in these quantities when the gyroscope responds to external temperature variations, but no hysteresis in the drive frequency, sense frequency, or AGC bias. Both the time-frequency and time-bias-voltage relationships are of the form y = A + B*exp(-t/T), where A is an offset parameter in hertz or volts, respectively, and B depends on the magnitude of the temperature variation.
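
The quoted lag model y = A + B*exp(-t/T) is straightforward to fit with SciPy; the synthetic drive-frequency numbers below are invented for illustration and are not the paper's data.

```python
# Sketch: fitting the thermal-lag model y = A + B*exp(-t/T).
import numpy as np
from scipy.optimize import curve_fit

def lag_model(t, A, B, T):
    return A + B * np.exp(-t / T)

t = np.linspace(0, 600, 61)                       # seconds
rng = np.random.default_rng(1)
y = lag_model(t, 5572.0, 0.8, 120.0) + rng.normal(0, 0.02, t.size)  # fake data

(A, B, T), _ = curve_fit(lag_model, t, y, p0=(y[-1], y[0] - y[-1], 100.0))
print(f"offset A = {A:.3f} Hz, step B = {B:.3f} Hz, time constant T = {T:.1f} s")
```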

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the results of a project to transition Nantero's laboratory carbon nanotube (CNT) process technology into the BAE Systems radiation-hard CMOS production foundry to enable the development of novel nanotechnology-based solutions for government space applications are described.
Abstract: This paper details the results of a project to transition Nantero's laboratory carbon nanotube (CNT) process technology into the BAE Systems radiation-hard CMOS production foundry to enable the development of novel nanotechnology-based solutions for government space applications. Working jointly, BAE Systems and Nantero have successfully developed the necessary processes, recipes, and protocols to enable BAE Systems to develop rad-hard CMOS-CNT hybrid devices and circuits. The success of this project has established the BAE Systems Manassas, VA facility as the first U.S. government sponsored foundry to qualify carbon nanotubes for use within a production fab line. The project addressed all aspects needed to qualify nanotubes and comprised three main steps: 1) development of recipes for coating a 150 mm wafer with a monolayer fabric of single-walled nanotubes (SWNTs), 2) edge-bead removal (EBR) of the CNTs from around the edge, bevel, and backside of the wafer to prevent contamination of further processing equipment, 3) demonstration of repeatable coating and EBR of the CNTs between various wafers over multiple lots. The fabrication process for creating a 1-2 nm thick monolayer fabric of SWNTs is described and characterized with respect to the fabric thickness, resistivity, elemental composition, particle count and uniformity.

Proceedings Article • DOI
05 Mar 2005
TL;DR: A hybrid clustering and routing architecture for wireless sensor networks is proposed, comprising a modified subtractive clustering technique, an energy-aware cluster head selection method and a cost-based routing algorithm.
Abstract: Wireless sensor networks have been widely studied and usefully employed in many applications such as medical monitoring, automotive safety and space applications. Typically, sensor nodes have several limitations such as limited battery life, low computational capability, short radio transmission range and small memory space. However, the most severe constraint of the nodes is their limited energy resource, because they cease to function when their battery has been depleted. To reduce energy usage in wireless sensor networks, many cluster-based routing schemes have been proposed. Among those proposed, LEACH (low-energy adaptive clustering hierarchy) is a well-known cluster-based sensor network architecture which aims to distribute energy consumption evenly to every node in a given network. This clustering technique requires a predefined number of clusters and was developed under the assumption that the sensor nodes are uniformly distributed throughout the network. In this paper, we propose a hybrid clustering and routing architecture for wireless sensor networks. There are three main parts in our proposed architecture: a modified subtractive clustering technique, an energy-aware cluster head selection method and a cost-based routing algorithm. These are all centralized techniques and are expected to be executed at the base station.
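
A sketch of the centralized, energy-aware flavor of this approach, with an invented selection rule: the base station picks cluster heads that are both energy-rich and spread out, then assigns each node to its nearest head. It is a stand-in for, not a reproduction of, the paper's modified subtractive clustering.

```python
# Sketch: energy-aware cluster head selection at the base station.
# Node positions, energies, and the scoring rule are illustrative.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0, 100, (30, 2))        # node positions (m)
energy = rng.uniform(0.2, 1.0, 30)        # residual energy (J)
n_clusters = 4

# Score = residual energy, penalized by proximity to already-chosen
# heads, so heads end up both energetic and well separated.
heads = [int(np.argmax(energy))]
for _ in range(n_clusters - 1):
    d = np.min(np.linalg.norm(pos[:, None] - pos[heads][None], axis=2), axis=1)
    heads.append(int(np.argmax(energy * d / d.max())))

membership = np.argmin(np.linalg.norm(pos[:, None] - pos[heads][None], axis=2),
                       axis=1)
print("cluster heads:", heads)
print("cluster sizes:", np.bincount(membership, minlength=n_clusters))
```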

Proceedings Article • DOI
05 Mar 2005
TL;DR: The purpose of this paper is to provide a general methodology for conducting a preliminary cost benefit analysis that calculates an ROI for PHM implementation.
Abstract: Individuals who work in the field of prognostic and health management (PHM) technology have come to understand that PHM can provide the ability to effectively manage the operation, maintenance and logistic support of individual assets or groups of assets through the availability of regularly updated and detailed health information. Naturally, prospective customers of PHM technology ask, 'How will the implementation of PHM benefit my organization?' Typically, the response by individuals in the field is, 'Anecdotal evidence indicates that PHM decreases maintenance costs, increases operational availability and improves safety'. This information helps the prospective customer understand the practical benefits of the technology, but that customer still needs more information to justify their investment in the technology. The customer needs a calculated return on investment (ROI) figure for their particular asset that provides a financial assessment of the benefit of the investment. The data, time and expertise required to conduct a rigorous cost benefit analysis make the effort seem daunting to the average engineer with little to no financial analysis training. The reality is that, with a cursory understanding of the asset operation, maintenance and logistic issues, a useful cost benefit analysis can be conducted by engineers without business school training. The purpose of this paper is to provide a general methodology for conducting a preliminary cost benefit analysis that calculates an ROI for PHM implementation. The paper will discuss the general types of information needed for the analysis, the quantifying of expected benefits and the types of supporting data required to validate the benefit assumptions, as well as an outline for the costing of the PHM technology.
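
The ROI arithmetic itself is simple; the analysis effort lies in justifying the inputs. A toy calculation with invented figures:

```python
# Sketch: ROI = (total benefit - total cost) / total cost.
# Every number below is invented for illustration.
avoided_maintenance = 450_000.0     # $/yr from condition-based scheduling
added_availability = 300_000.0      # $/yr from fewer mission aborts
annual_benefit = avoided_maintenance + added_availability

development = 1_200_000.0           # one-time PHM development cost
recurring = 150_000.0               # $/yr sensors, data handling, support
years = 5

total_benefit = annual_benefit * years
total_cost = development + recurring * years
roi = (total_benefit - total_cost) / total_cost
print(f"ROI over {years} years: {roi:.0%}")   # ~92% with these inputs
```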

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, a preliminary characterization of the categories of the realm of the mental, able to fit and integrate the foundational ontology DOLCE (a Descriptive Ontology for Linguistic and Cognitive Engineering); they call this module COM (Computational Ontology of Mind).
Abstract: The main goal of this paper is a preliminary characterization of the categories of the realm of the mental, able to fit and integrate the foundational ontology DOLCE (a Descriptive Ontology for Linguistic and Cognitive Engineering); we call this module COM (Computational Ontology of Mind). The idea of COM emerges from the need for a conceptual clarification, from the standpoint of formal ontology, of the entities that play a role in agent technologies for information systems. Based on philosophical tradition, we have singled out a central relation in the realm of the mental: aboutness. In our proposal, aboutness connects a mental state with a mental object, at a certain time, and with respect to a given intentional agent. Furthermore, we envisage a generalization of this framework to mental processes and events. Thus, in the paper we give a first analysis of these entities, mainly focused on mental objects and their characteristics. We also specify the basic features of mental states and intentional agents, exploiting ontological categories and relations implemented in DOLCE.

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the authors developed a fault tolerant control architecture that couples techniques for fault detection and identification with reconfigurable flight control to augment the reliability and autonomy of an unmanned aerial vehicle.
Abstract: The past decade has seen the development of several reconfigurable flight control strategies for unmanned aerial vehicles. Although the majority of the research is dedicated to fixed wing vehicles, simulation results do support the application of reconfigurable flight control to unmanned rotorcraft. This paper develops a fault tolerant control architecture that couples techniques for fault detection and identification with reconfigurable flight control to augment the reliability and autonomy of an unmanned aerial vehicle. The architecture is applicable to fixed and rotary wing aircraft. An adaptive neural network feedback linearization technique is employed to stabilize the vehicle after the detection of a fault. Actual flight test results support the validity of the approach on an unmanned helicopter. The fault tolerant control architecture recovers aircraft performance after the occurrence of four different faults in the flight control system: three swash-plate actuator faults and a collective actuator fault. All of these faults are catastrophic under nominal conditions.

Proceedings Article • DOI
05 Mar 2005
TL;DR: An unscented Kalman filter is applied to the navigation problem, which leads to a consistent estimate of vehicle and feature states, and preliminary hardware test results showing navigation and mapping using an off-the-shelf inertial measurement unit and camera in a laboratory environment are presented.
Abstract: A method for passive GPS-free navigation of a small unmanned aerial vehicle with a minimal sensor suite (limited to an inertial measurement unit and a monocular camera) is presented. The navigation task is cast as a simultaneous localization and mapping (SLAM) problem. While SLAM has been the subject of a great deal of research, the highly non-linear system dynamics and limited sensor suite available in this application present a unique set of challenges which have not previously been addressed. In this particular application, solutions based on extended Kalman filters have been shown to diverge, and alternate techniques are required. In this paper an unscented Kalman filter is applied to the navigation problem, which leads to a consistent estimate of vehicle and feature states. This paper presents: (a) simulation results showing mapping and navigation in three dimensions; and (b) preliminary hardware test results showing navigation and mapping using an off-the-shelf inertial measurement unit and camera in a laboratory environment.
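
The step that distinguishes the UKF from the diverging EKF solutions is the unscented transform: sigma points of a Gaussian state are pushed through the nonlinearity and re-summarized by a mean and covariance, where an EKF would instead linearize. A minimal sketch with a basic sigma-point scheme follows; the kappa parameterization and the range/bearing measurement function are illustrative assumptions.

```python
# Sketch: the unscented transform on a nonlinear measurement function.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])                 # propagate points
    y_mean = w @ y
    d = y - y_mean
    return y_mean, (w[:, None] * d).T @ d               # mean, covariance

# Range/bearing observation of a 2-D position: the kind of nonlinearity
# a monocular-camera measurement model introduces.
f = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
m, P = unscented_transform(np.array([10.0, 5.0]), np.diag([0.5, 0.5]), f)
print(m, P, sep="\n")
```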

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the authors extend the application of current-mode, shared-bus converters to power system architectures configured as Parallel-Input, Series-Output (PISO) interconnects.
Abstract: This paper extends the application of current-mode, shared-bus converters to power system architectures configured as Parallel-Input, Series-Output (PISO). By employing a PISO interconnect method, current-mode commercial-off-the-shelf (COTS) dc-dc converters can deliver higher output voltages, provide flexible options for power system expansion, and preserve system efficiencies equal to that obtained from standalone converters. However, without proper control, non-uniformly distributed voltages occur due to converter component mismatch. System reliability suffers as a result of thermal overstress to the converters that contribute a greater portion of the output power. Conversely, robust system stability and uniform output voltage distribution among series-connected converters are realized through output voltage distribution control. Through both computer simulation and experimental prototype, the uniform voltage distribution power converter architecture is validated and successfully applied during power converter burn-in testing, whereby converter load energy is recycled to the power system input, resulting in 49% to 80% conservation of energy.

Proceedings Article • DOI
05 Mar 2005
TL;DR: The Spirit FLASH anomaly is discussed, including the drama of the investigation, the root cause and the lessons learned from the experience.
Abstract: The Mars Exploration Rover "Spirit" suffered a debilitating anomaly that prevented communication with Earth for several anxious days. With the eyes of the world upon us, the anomaly team used each scrap of information, our knowledge of the system, and sheer determination to analyze and fix the problem, then return the vehicle to normal operation. This paper will discuss the Spirit FLASH anomaly, including the drama of the investigation, the root cause and the lessons learned from the experience.

Proceedings Article • DOI
05 Mar 2005
TL;DR: This paper presents the development of the prototype software product to illustrate the feasibility of the techniques, methodologies, and approaches needed to verify and validate PHM capabilities.
Abstract: Impact Technologies and the Georgia Institute of Technology are developing a Web-based software application that will provide JSF (F-35) system suppliers with a comprehensive set of PHM verification and validation (V&V) resources, which will include: standards and definitions; V&V metrics for detection, diagnosis, and prognosis; access to costly seeded fault data sets and example implementations; a collaborative user forum for the exchange of information; and an automated tool for impartially evaluating the performance and effectiveness of PHM technologies. This paper presents the development of the prototype software product to illustrate the feasibility of the techniques, methodologies, and approaches needed to verify and validate PHM capabilities. A team of JSF system suppliers has been assembled to contribute, provide feedback and make recommendations to the product under development. The approach being pursued for assessing the overall PHM system accuracy is to quantify the associated uncertainties at each of the individual levels of a PHM system, and build up the accumulated inaccuracies as information is processed through the PHM architecture.

Proceedings Article • DOI
05 Mar 2005
TL;DR: This paper describes the early 2004 ground test validation of specific OASIS components on selected Mars Exploration Rover (MER) images, including the rock-finding algorithm, RockIT, and the rock size feature extraction code, and presents the RockIT GUI, an interface that allows users to easily visualize and modify the RockIT results.
Abstract: The Onboard Autonomous Science Investigation System (OASIS) evaluates geologic data gathered by a planetary rover. This analysis is used to prioritize the data for transmission, so that the data with the highest science value is transmitted to Earth. In addition, the onboard analysis results are used to identify science opportunities. A planning and scheduling component of the system enables the rover to take advantage of the identified science opportunity. OASIS is a NASA-funded research project that is currently being tested on the FIDO rover at JPL for use on future missions. In this paper, we provide a brief overview of the OASIS system, and then describe our recent successes in integrating with and using rover hardware. OASIS currently works in a closed loop fashion with onboard control software (e.g., navigation and vision) and has the ability to autonomously perform the following sequence of steps: analyze gray scale images to find rocks, extract the properties of the rocks, identify rocks of interest, retask the rover to take additional imagery of the identified target and then allow the rover to continue on its original mission. We also describe the early 2004 ground test validation of specific OASIS components on selected Mars exploration rover (MER) images. These components include the rock-finding algorithm, RockIT, and the rock size feature extraction code. Our team also developed the RockIT GUI, an interface that allows users to easily visualize and modify the rock-finder results. This interface has allowed us to conduct preliminary testing and validation of the rock-finder's performance.
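
A toy grayscale rock finder in the spirit of the pipeline described above (threshold bright blobs, label them, extract size and position features), using SciPy; it is a generic stand-in, not the RockIT algorithm.

```python
# Sketch: find bright "rocks" in a synthetic grayscale image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
terrain = rng.normal(100, 5, (80, 80))   # bland soil background
terrain[20:30, 15:28] += 60              # two brighter "rocks"
terrain[55:61, 50:70] += 55

mask = terrain > terrain.mean() + 3 * terrain.std()   # simple threshold
labels, n_rocks = ndimage.label(mask)                 # connected blobs
sizes = ndimage.sum(mask, labels, index=range(1, n_rocks + 1))
centers = ndimage.center_of_mass(mask, labels, range(1, n_rocks + 1))
for i, (size, c) in enumerate(zip(sizes, centers), start=1):
    print(f"rock {i}: {int(size)} px at ({c[0]:.0f}, {c[1]:.0f})")
```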

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this paper, a neural network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of real-time fault detection and isolation.
Abstract: In this paper we consider the problem of test design for real-time fault detection and isolation (FDI) in the flight control system of fixed-wing aircraft. We focus on the faults that are manifested in the control surface elements (e.g., aileron, elevator, rudder and stabilizer) of an aircraft. For demonstration purposes, we restrict our focus to faults belonging to nine basic fault classes. The diagnostic tests are performed on the features extracted from fifty monitored system parameters. The proposed tests are able to uniquely isolate each of the faults at almost all severity levels. A neural network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of FDI.

Proceedings Article • DOI
17 Feb 2005
TL;DR: In this article, the authors presented a scenario where this approach can be applied to a rotorcraft performing nap-of-the-earth flight in challenging terrain with multiple known surface threats, in both urban and nonurban type settings.
Abstract: This paper extends a recently developed approach to the optimal path planning of an autonomous vehicle in an obstacle field to three dimensions, with specific applications to terrestrial navigation with obstacle, terrain, and threat avoidance. We present a scenario where this approach can be applied to a rotorcraft performing nap-of-the-Earth flight in challenging terrain with multiple known surface threats, in both urban and non-urban type settings. Mixed-integer linear programming (MILP) is the underlying problem formulation, from which an optimal solution can be obtained through the use of a commercially available MILP solver such as CPLEX. The solution obtained is optimal with respect to a cost function specified in terms of fuel, time, altitude, threat exposure, and other predefined criteria. A receding horizon implementation of this MILP algorithm makes it suitable for real-time or near real-time applications.

Proceedings Article • DOI
05 Mar 2005
TL;DR: This work presents a method that uses a probabilistic fusion of data from multiple sensor sources for onboard segmentation, detection and classification of geological properties in the Atacama desert in Chile.
Abstract: The volume of data that planetary rovers and their instrument payloads can produce will continue to outpace available deep space communication bandwidth. Future exploration rovers will require science autonomy systems that interpret collected data in order to selectively compress observations, summarize results, and respond to new discoveries. We present a method that uses a probabilistic fusion of data from multiple sensor sources for onboard segmentation, detection and classification of geological properties. Field experiments performed in the Atacama desert in Chile show the system's performance versus ground truth on the specific problem of automatic rock identification.
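
The fusion idea can be sketched as a naive Bayes combination: if the sensors are treated as conditionally independent given the class, their likelihoods multiply into a posterior over classes. The classes, prior, and likelihood values below are invented for illustration.

```python
# Sketch: probabilistic fusion of per-sensor likelihoods into a
# posterior over surface classes (naive Bayes combination).
import numpy as np

classes = ["rock", "soil", "shadow"]
prior = np.array([0.2, 0.7, 0.1])

# P(observation | class) reported independently by each sensor (toy values)
likelihoods = {
    "color_camera":    np.array([0.60, 0.25, 0.15]),
    "texture_filter":  np.array([0.50, 0.40, 0.10]),
    "ir_spectrometer": np.array([0.70, 0.20, 0.10]),
}

posterior = prior.copy()
for lik in likelihoods.values():
    posterior *= lik                  # independence assumption
posterior /= posterior.sum()          # normalize
print(dict(zip(classes, posterior.round(3))))
```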

Proceedings Article • DOI
05 Mar 2005
TL;DR: An overview of the diagnostic process for a rolling element bearing with a pit-type fault is presented, from the mechanical defect, to the defect signature, to the detection process, which applies the enveloping technique.
Abstract: The rolling element bearing is an important element for power transmission within the helicopter drive train system. Monitoring the condition of the rolling element bearing provides advantages in the operation, safety, and maintenance areas. An overview of the diagnostic process for a rolling element bearing with a pit-type fault, from the mechanical defect, to the defect signature, to the detection process, is presented. The detection process applies the enveloping technique. The work presented is demonstrated with an analytical description of the enveloping process, analytical examples, a simple numerical simulation and the results from an operational helicopter. The benefits of understanding the fundamental bearing analysis technology include improved diagnostic and prognostic capability.
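
A compact sketch of the enveloping technique on a synthetic signal: each pit impact rings a structural resonance, so band-passing around the resonance, taking the Hilbert-transform envelope, and examining the envelope spectrum recovers the fault repetition rate. All frequencies below are placeholders.

```python
# Sketch: envelope analysis of a bearing fault signature.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, f_res, f_fault = 20_000, 4_000.0, 123.0        # Hz (synthetic values)
t = np.arange(0, 1.0, 1 / fs)

# Stand-in for pit impacts ringing the resonance: a carrier at the
# resonance, amplitude-modulated at the fault repetition rate.
signal = (1 + np.cos(2 * np.pi * f_fault * t)) * np.cos(2 * np.pi * f_res * t)
signal += np.random.default_rng(4).normal(0, 0.5, t.size)

b, a = butter(4, [3_000, 5_000], btype="bandpass", fs=fs)  # isolate resonance
envelope = np.abs(hilbert(filtfilt(b, a, signal)))         # demodulate

spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(f"envelope peak at {freqs[spectrum.argmax()]:.1f} Hz")  # ~ f_fault
```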

Proceedings Article • DOI
05 Mar 2005
TL;DR: Results show that the performance of PLS is comparable to SVM in text categorization, and that PLS could be a better candidate for multi-class text categorization.
Abstract: Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
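
The SVM side of such a comparison is a short scikit-learn pipeline; a sketch on two classes of the 20-Newsgroups corpus (one of the three data sets named above) follows. The category choice and model settings are illustrative, and a PLS-based classifier would slot into the same pipeline in place of the SVM.

```python
# Sketch: linear SVM text categorization on TF-IDF features.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

cats = ["sci.space", "rec.autos"]                 # illustrative binary task
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = TfidfVectorizer(stop_words="english")
clf = LinearSVC().fit(vec.fit_transform(train.data), train.target)
pred = clf.predict(vec.transform(test.data))
print(f"accuracy: {accuracy_score(test.target, pred):.3f}")
```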

Proceedings Article • DOI
05 Mar 2005
TL;DR: The CGS static program analysis tool is developed, which can exhaustively analyze large C programs and identify statements in which arrays are accessed out of bounds or pointers are used outside the memory region they should address.
Abstract: Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars Pathfinder to Mars Exploration Rover) and on the International Space Station.

Proceedings Article • DOI
05 Mar 2005
TL;DR: The basic foundations of risk management, the elements necessary for effective risk management, and the capabilities of this new risk database and how it is implemented to support NASA's risk management needs are presented.
Abstract: Most project managers know that risk management (RM) is essential to good project management. At NASA, standards and procedures to manage risk through a tiered approach have been developed - from the global agency requirements down to a program or project implementation. The basic methodology for NASA's risk management strategy includes processes to identify, analyze, plan, track, control, communicate and document risks. The identification, characterization, mitigation plan, and mitigation responsibilities associated with specific risks are documented to help communicate, manage, and effectuate appropriate closure. This approach helps to ensure more consistent documentation and assessment and provides a means of archiving lessons learned for future identification or mitigation activities. A new risk database and management tool was developed by NASA in 2002 and has since been used successfully to communicate, document and manage a number of diverse risks for the International Space Station, Space Shuttle, and several other NASA projects and programs. Program organizations use this database application to effectively manage and track each risk and gain insight into impacts from other organizations' viewpoints. Schedule, cost, technical and safety issues are tracked in detail through this system. Risks are tagged within the system to ensure proper review, coordination and management at the necessary management level. The database is intended as a day-to-day tool for organizations to manage their risks and elevate those issues that need coordination from above. Each risk is assigned to a managing organization and a specific risk owner who generates mitigation plans as appropriate. In essence, the risk owner is responsible for shepherding the risk through closure. The individual who identifies a new risk does not necessarily get assigned as the risk owner; whoever is in the best position to effectuate comprehensive closure is assigned as the risk owner. Each mitigation plan includes the specific tasks that will be conducted to either decrease the likelihood of the risk occurring and/or lessen the severity of the consequences. As each mitigation task is completed, the responsible managing organization records the completion of the task in the risk database and then re-scores the risk considering the task's results. By keeping scores updated, a managing organization's current top risks and risk posture can be readily identified, including the status of any risk in the system. A number of metrics measure risk process trends from data contained in the database. This allows for trend analysis to further identify improvements to the process and assist in the management of all risks. The metrics also scrutinize both the effectiveness and compliance of risk management requirements. The risk database is an evolving tool and is continuously improved with capabilities requested by the NASA project community. This paper presents the basic foundations of risk management, the elements necessary for effective risk management, and the capabilities of this new risk database and how it is implemented to support NASA's risk management needs.

Proceedings Article • DOI
05 Mar 2005
TL;DR: In this article, the authors introduce the concept of incorporating the actuator lifetime as a controlled parameter and describe preliminary methods for speed/position tracking control of an electromechanical actuator (EMA) while maintaining a desired minimum lifetime of an actuator motor.
Abstract: Existing actuator controls are typically designed based on optimizing performance and robustness to system uncertainties, without considering the operational lifetime of the actuator. It is often desirable, and sometimes necessary, to trade off performance for extended actuator operational lifetime. This paper introduces the concept of incorporating the actuator lifetime as a controlled parameter. We describe preliminary methods for speed/position tracking control of an electromechanical actuator (EMA) while maintaining a desired minimum lifetime of the actuator motor.

Proceedings Article • DOI
05 Mar 2005
TL;DR: The results indicate that Cougaar agent applications would enable reliable, cost-effective aerospace applications in the face of unreliable, slow networks, dynamic conditions, and expensive launches.
Abstract: Aerospace operations face unreliable, slow networks, dynamic conditions, and expensive launches. Therefore, researchers are investigating autonomous agent architectures for space system control. However, agent systems to date lack proven reliability, scalability, and cost-effectiveness. Under the UltraLog program, the United States Defense Advanced Research Projects Agency (DARPA) has sponsored the development of the Cougaar agent architecture (open-source at http://cougaar.org), a robust reusable agent framework. UltraLog used Cougaar to build a large-scale distributed prototype planning application, which was assessed under machine kills, network cuts and degradations, and increased workload. The results indicate that Cougaar agent applications would enable reliable, cost-effective aerospace applications.