
Showing papers presented at "IEEE Aerospace Conference in 2011"


Proceedings ArticleDOI
05 Mar 2011
TL;DR: A novel battery health management system for electric UAVs (unmanned aerial vehicles), based on a Bayesian inference driven prognostic framework, predicts the end-of-discharge (EOD) event at which the battery pack runs out of charge for any given flight of an electric UAV platform.
Abstract: This paper presents a novel battery health management system for electric UAVs (unmanned aerial vehicles) based on a Bayesian inference driven prognostic framework. The aim is to be able to predict the end-of-discharge (EOD) event that indicates that the battery pack has run out of charge for any given flight of an electric UAV platform. The amount of usable charge of a battery for a given discharge profile is not only dependent on the starting state-of-charge (SOC), but also other factors like battery health and the discharge or load profile imposed. This problem is more pronounced in battery powered electric UAVs since different flight regimes like takeoff/landing and cruise have different power requirements and a dead stick condition (battery shut off in flight) can have catastrophic consequences. Since UAV deployments are relatively new, there is a lack of statistically significant flight data to motivate data-driven approaches. Consequently, we have developed a detailed discharge model of the batteries used and applied it in a Bayesian inference based filtering (particle filtering) technique to generate remaining useful life (RUL) distributions for a given discharge. The results section presents the validation of this approach in hardware-in-the-loop tests.
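As an illustrative sketch only (not the authors' implementation), the particle-filtering prognostic loop described above can be rendered with a deliberately simplified linear discharge model; the model form, noise levels, load, and EOD threshold below are all invented for the example:

```python
import math
import random

def gauss_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def step(soc, load, capacity, dt):
    # One Euler step of a toy discharge model: SOC drops with load/capacity.
    return max(soc - load * dt / capacity, 0.0)

def pf_eod(observations, load, capacity=1.0, dt=1.0, n=500,
           eod_soc=0.1, horizon=200, seed=0):
    rng = random.Random(seed)
    particles = [1.0] * n  # assume a fully charged pack at takeoff
    for z in observations:
        # Predict with process noise, then weight by observation likelihood.
        particles = [step(p + rng.gauss(0, 0.01), load, capacity, dt)
                     for p in particles]
        weights = [gauss_pdf(z - p, 0.05) for p in particles]
        total = sum(weights)
        particles = rng.choices(particles,
                                weights=[w / total for w in weights], k=n)
    # Prognosis: propagate each particle under the assumed future load until
    # SOC crosses the EOD threshold; the step counts form an RUL distribution.
    rul = []
    for p in particles:
        k = 0
        while p > eod_soc and k < horizon:
            p = step(p, load, capacity, dt)
            k += 1
        rul.append(k * dt)
    return rul

# Simulate a constant-load discharge and feed noisy SOC observations in.
obs_rng = random.Random(42)
true_soc, obs = 1.0, []
for _ in range(20):
    true_soc = step(true_soc, 0.02, 1.0, 1.0)
    obs.append(true_soc + obs_rng.gauss(0, 0.05))
rul_samples = pf_eod(obs, load=0.02)  # true RUL here is ~25 steps
```

The spread of `rul_samples` is the point of the approach: EOD is predicted as a distribution rather than a single time, supporting uncertainty management.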

122 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, the authors used the calibrated, high signal-to-noise ratio measurements of the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to investigate terrestrial ecology topics related to: (1) Pattern and Spatial Distribution of Ecosystems and their Components, (2) Ecosystem Function, Physiology and Seasonal Activity, (3) Biogeochemical Cycles, (4) Changes in Disturbance Activity, and (5) Ecosystems and Human Health.
Abstract: Contiguous spectral measurements in the image domain made by the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) have been used to advance a range of Terrestrial Ecology science investigations over the past two decades. Currently there are hundreds of relevant refereed journal articles. The calibrated, high signal-to-noise ratio measurements of AVIRIS are used to investigate terrestrial ecology topics related to: (1) Pattern and Spatial Distribution of Ecosystems and their Components, (2) Ecosystem Function, Physiology and Seasonal Activity, (3) Biogeochemical Cycles, (4) Changes in Disturbance Activity, and (5) Ecosystems and Human Health.

105 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: The proposed MSR architecture has evolved into a campaign of three missions in series, along with a sample receiving facility to contain and handle the samples once back on Earth; this paper discusses that architecture, how it evolved, the challenges, and potential implementation.
Abstract: Over the last few years, Mars Sample Return (MSR) has become a top priority amongst the Mars science community as the next big step in the Mars Exploration Program (MEP). In addition, a joint MEP has been established between the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) to work together on all future missions leading to a shared implementation of an MSR. The proposed MSR architecture has evolved into a campaign of three missions in series, along with a sample receiving facility to contain and handle the samples once back on Earth. The distinction from earlier architectures is the addition of a proposed rover-based mission to be sent in advance to carefully select and cache samples for possible eventual return. This rover mission would be baselined for launch in 2018. The next two proposed missions would entail a lander, with both a rover to fetch the previously collected cache and a rocket (Mars Ascent Vehicle, or MAV) to launch it into Mars orbit, and an orbiter that would capture the sample container and return it to Earth, landing in a specialized Earth entry vehicle (EEV). This paper discusses the current architecture, how it evolved, the challenges, and potential implementation. Concepts presented are NASA's view of the elements involved, with recognition of potential contributions of ESA. Both agencies are conducting studies to establish roles moving forward.

76 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: A design environment consisting of a framework, a central model and a newly developed conceptual design module is introduced; concepts for both parameter and method replacement are presented, and first results for multi-fidelity calculations are shown.
Abstract: In the present study we introduce a design environment consisting of a framework, a central model and a newly developed conceptual design module. The Common Parametric Aircraft Configuration Scheme (CPACS) is the standard syntax definition for the exchange of information within preliminary airplane design at DLR. Several higher fidelity analysis modules are already connected to CPACS, including aerodynamics, primary structures, mission analysis and climate impact. The analysis modules can be interfaced via a distributed framework. To initialize the design processes, capabilities are needed to close the gap between top-level requirements and preliminary design. Additionally, results of a design loop need to be merged to generate inputs for further iterations and convergence control. For this purpose we developed a conceptual design module based on handbook methods where the focus is set on multi-fidelity. For the upward change in level of detail a knowledge-based approach is used for the generation of CPACS models. This includes the geometry generation, as well as additional data such as the mass breakdown and the tool-specific inputs for further analyses in higher fidelity modules. The feedback loop is closed downwards by reducing the granularity from the CPACS data set back to the level of conceptual design methods. The conceptual design module is object-oriented and concepts, both for parameter and method replacement, are introduced. First results for multi-fidelity calculations are shown.

68 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, a model-based prognostics methodology using particle filters is developed, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem.
Abstract: Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
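The joint state-parameter idea can be sketched (very loosely, with an invented scalar wear model rather than the paper's pump physics): each particle carries both the damage state and the unknown wear-rate parameter, the parameter evolves by a small random walk, and the walk's variance is tightened when the parameter spread exceeds a bound, echoing the variance-control mechanism:

```python
import math
import random

def pf_joint(observations, u=1.0, dt=1.0, n=400, sigma_obs=0.05,
             w_spread_bound=0.02, seed=1):
    rng = random.Random(seed)
    # Each particle is an augmented (damage, wear-rate) pair.
    parts = [(0.0, rng.uniform(0.0, 0.1)) for _ in range(n)]
    for z in observations:
        mean_w = sum(w for _, w in parts) / n
        spread = math.sqrt(sum((w - mean_w) ** 2 for _, w in parts) / n)
        # Crude variance control: shrink the parameter random walk when the
        # spread of the hidden parameter exceeds its bound.
        jitter = 0.002 if spread > w_spread_bound else 0.005
        parts = [(d + w * u * dt, abs(w + rng.gauss(0, jitter)))
                 for d, w in parts]
        weights = [math.exp(-0.5 * ((z - d) / sigma_obs) ** 2)
                   for d, _ in parts]
        total = sum(weights)
        parts = rng.choices(parts, weights=[x / total for x in weights], k=n)
    return parts

# Noisy observations of a damage state growing at a true wear rate of 0.03.
obs_rng = random.Random(2)
obs = [0.03 * k + obs_rng.gauss(0, 0.05) for k in range(1, 31)]
parts = pf_joint(obs)
w_hat = sum(w for _, w in parts) / len(parts)  # posterior mean wear rate
```

Because the estimate is a particle set rather than a point, end-of-life predictions made by propagating each `(damage, wear-rate)` pair forward come out as a distribution, which is what supports the uncertainty management described above.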

56 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, the authors present the technical requirements for OLFAR and first order estimates of data rates for space-based radio astronomy based on the proposed scalable distributed correlator model.
Abstract: Recently new and interesting science drivers have emerged for very low frequency radio astronomy from 0.3 MHz to 30 MHz. However, Earth-bound radio observations at these wavelengths are severely hampered by ionospheric distortions, man-made interference, solar flares and even complete reflection below 10 MHz. OLFAR (Orbiting Low Frequency ARray) is a project whose aim is to develop a detailed system concept for a space-based, very low frequency, large-aperture radio interferometric array observing at these very long wavelengths. The OLFAR cluster could orbit the moon, sampling during the Earth-radio eclipse phase; orbit the Earth-moon L2 point, sampling almost continuously; or be placed in an Earth-trailing or Earth-leading orbit. The aim of this paper is to present the technical requirements for OLFAR and first-order estimates of data rates for space-based radio astronomy based on the proposed scalable distributed correlator model. The OLFAR cluster will comprise autonomous flight units, each of which is individually capable of inter-satellite communication and down-link. The down-link data rate is heavily dependent on the distance of the cluster from Earth and thus on the deployment location of OLFAR, both of which are discussed.

54 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: The approach described in this paper borrows concepts and principles from the field of “Systems Health Management” for complex systems and implements a two level health management strategy that can be applied through a model-based software development process.
Abstract: Complexity of software systems has reached the point where we need run-time mechanisms that can be used to provide fault management services. Testing and verification may not cover all possible scenarios that a system will encounter, hence a simpler, yet formally specified run-time monitoring, diagnosis, and fault mitigation architecture is needed to increase the software system's dependability. The approach described in this paper borrows concepts and principles from the field of “Systems Health Management” for complex systems and implements a two level health management strategy that can be applied through a model-based software development process. The Component-level Health Manager (CLHM) for software components provides a localized and limited functionality for managing the health of a component locally. It also reports to the higher-level System Health Manager (SHM) which manages the health of the overall system. SHM consists of a diagnosis engine that uses the timed failure propagation graph (TFPG) model based on the component assembly. It reasons about the anomalies reported by CLHM and hypothesizes about the possible fault sources. Thereafter, necessary system level mitigation action can be taken. System-level mitigation approaches are the subject of ongoing investigation and have not been included in this paper. We conclude the paper with a case study and discussion.
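The two-level division of labor can be sketched as follows; the class and method names are invented for illustration, and the "earliest anomaly" rule is a crude stand-in for the paper's TFPG-based diagnosis engine:

```python
class ComponentHealthManager:
    """Component-level monitor (CLHM role): local, limited checks only."""
    def __init__(self, name, limit):
        self.name, self.limit = name, limit

    def check(self, value, t):
        # Local monitoring: report an anomaly when a reading breaks its limit.
        if value > self.limit:
            return {"component": self.name, "time": t}
        return None

class SystemHealthManager:
    """System-level manager (SHM role): collects reports, hypothesizes a source."""
    def __init__(self):
        self.reports = []

    def receive(self, report):
        if report is not None:
            self.reports.append(report)

    def hypothesize(self):
        # Crude stand-in for TFPG reasoning: blame the earliest anomaly,
        # since failures propagate forward in time.
        if not self.reports:
            return None
        return min(self.reports, key=lambda r: r["time"])["component"]

shm = SystemHealthManager()
pump = ComponentHealthManager("pump", limit=10.0)
valve = ComponentHealthManager("valve", limit=5.0)
shm.receive(pump.check(12.0, t=3))   # pump exceeds its limit first
shm.receive(valve.check(7.0, t=5))   # valve anomaly follows
suspect = shm.hypothesize()          # earliest reporter is the suspect
```

The split mirrors the text: local managers act with limited scope, while global reasoning over all reports happens only at the system level.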

46 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: This work develops robust algorithms for estimating the relative pose and motion of a non-cooperative target satellite using on-board sensors; the approach employs a stereoscopic vision system, and the relative motion filtering algorithm is made robust to uncertainties in the inertia tensor.
Abstract: Estimating the relative pose and motion of cooperative satellites using on-board sensors is a challenging problem. When the satellites are non-cooperative, the problem becomes far more complicated, as there might be poor or no a priori information about the motion or structure of the target satellite. In this work we develop robust algorithms for solving the said problem by assuming that only visual sensory information is available. Using two cameras mounted on a chaser satellite, the relative state of a target satellite, including the position, attitude, and rotational and translational velocities is estimated. Our approach employs a stereoscopic vision system for tracking a set of feature points on the target spacecraft. The perspective projection of these points on the two cameras constitutes the observation model of an EKF-based filtering scheme. In the final part of this work, the relative motion filtering algorithm is made robust to uncertainties in the inertia tensor. This is accomplished by endowing the plain EKF with a maximum a posteriori identification scheme for determining the most probable inertia tensor from several available hypotheses.
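The geometric core of the stereoscopic observation model can be sketched in a few lines: two pinhole cameras separated by a baseline observe a feature point, and triangulating the two projections recovers its 3-D position. The focal length and baseline values are illustrative, not from the paper:

```python
def project(point, cam_x, f=800.0):
    # Pinhole projection for a camera displaced cam_x along the baseline (x) axis.
    x, y, z = point
    return (f * (x - cam_x) / z, f * y / z)

def triangulate(uv_left, uv_right, baseline=0.5, f=800.0):
    # Depth from horizontal disparity, then back-projection in the left frame.
    (ul, vl), (ur, _) = uv_left, uv_right
    disparity = ul - ur
    z = f * baseline / disparity
    return (ul * z / f, vl * z / f, z)

p = (1.0, 0.5, 10.0)              # a feature point on the target
uv_l = project(p, cam_x=0.0)      # left camera at the origin
uv_r = project(p, cam_x=0.5)      # right camera offset by the baseline
p_hat = triangulate(uv_l, uv_r)   # recovers (1.0, 0.5, 10.0)
```

In the paper's scheme these projections serve as the measurement model of an EKF, which additionally estimates attitude and velocities; the triangulation above is only the noiseless geometry behind that model.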

43 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, the authors provide an independent comparison of two proposed air-to-ground wireless datalinks, L-DACS1 and L-DACS2, in terms of their scalability, spectral efficiency, and interference resistance.
Abstract: New air-to-ground wireless datalinks are needed to supplement existing civil aviation technologies. The 960–1164 MHz part of the IEEE L band has been identified as a candidate spectrum. EUROCONTROL, the European Organisation for the Safety of Air Navigation, has funded two parallel projects and developed two proposals called L-DACS1 and L-DACS2. Although there is a significant amount of literature available on each of the two technologies from the two teams that designed the respective proposals, there is very little independent comparison of the two proposals. The goal of this paper is to provide this comparison. We compare the two proposals in terms of their scalability, spectral efficiency, and interference resistance. Both technologies have to co-exist with several other aeronautical technologies that use the same L band.

43 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: A novel graphical approach to data representation is described, called the Track Graph, which provides a compact and efficient structure for the storage of tracklet data and subsequent formulation of the tracklet stitching problem, and describes polynomial-time algorithms to solve the full tracklet-to-track assignment problem.
Abstract: Complex track stitching problems have risen in prominence with the development of wide area persistent sensors for urban surveillance. Common approaches such as multiple hypothesis algorithms in tracking and tracklet stitching solve these problems by creating a representation of all possible data associations, then solving for the optimal data association via integer programming or suboptimal techniques. The number of data association hypotheses grows exponentially with number of objects and time, which leads to scaling issues when tracking targets in high-density environments over long periods of time. State-of-the-art multiple hypothesis approaches make tradeoffs to limit complexity, discarding potential data associations to avoid scaling problems. These approaches run the risk of eliminating the best data association hypotheses. This paper describes a novel graphical approach to data representation, called the Track Graph, which provides a compact and efficient structure for the storage of tracklet data and subsequent formulation of the tracklet stitching problem. The Track Graph implicitly represents the set of feasible tracklet stitching hypotheses, with linear scaling in time in both edges (feasible associations) and nodes (tracklets). Using the Track Graph, we pose the track-stitching problem as a min-cost flow problem, and describe polynomial-time algorithms to solve the full tracklet-to-track assignment problem, obtaining the same optimal solution as multiple hypothesis tracking algorithms. We demonstrate the efficacy of our algorithms on high density simulated scenarios.
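Because tracklets are time-ordered, the Track Graph is a DAG, and for a single track the min-cost flow formulation reduces to a shortest source-to-sink path over feasible stitches. A minimal sketch of that reduction (the graph and costs are invented; the paper's full algorithm handles many tracks simultaneously via min-cost flow):

```python
def best_stitching(n, edges):
    """edges: list of (i, j, cost) with i < j (time-ordered tracklets).
    Returns (min cost, tracklet sequence) from tracklet 0 to tracklet n-1
    by dynamic programming over the DAG's natural (time) order."""
    INF = float("inf")
    cost = [INF] * n
    prev = [-1] * n
    cost[0] = 0.0
    for i, j, c in sorted(edges):      # sorting by source i gives a valid DP order
        if cost[i] + c < cost[j]:
            cost[j] = cost[i] + c
            prev[j] = i
    path, node = [], n - 1
    while node != -1:
        path.append(node)
        node = prev[node]
    return cost[n - 1], path[::-1]

# Two candidate stitchings of four tracklets: 0-1-3 (cost 1.0 + 2.5) versus
# 0-2-3 (cost 1.5 + 1.0); the cheaper hypothesis 0-2-3 should win.
edges = [(0, 1, 1.0), (0, 2, 1.5), (1, 3, 2.5), (2, 3, 1.0)]
total, path = best_stitching(4, edges)   # -> 2.5, [0, 2, 3]
```

The point of the graph representation survives even in this toy: all feasible hypotheses are held implicitly in edges and nodes (linear in time), instead of being enumerated explicitly as a multiple-hypothesis tree.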

42 citations


Proceedings ArticleDOI
05 Mar 2011
TL;DR: The main contribution of this article is the presentation of a feasible approach to obstacle avoidance based on the segmentation of camera images into sky and non-sky regions, named the Sky Segmentation Approach (SSA).
Abstract: The capability to visually discern possible obstacles from the sky would be a valuable asset to a UAV for avoiding both other flying vehicles and static obstacles in its environment. The main contribution of this article is the presentation of a feasible approach to obstacle avoidance based on the segmentation of camera images into sky and non-sky regions. The approach is named the Sky Segmentation Approach (SSA). The central concept is that potentially threatening static obstacles protrude from the horizon line. The main challenge for SSA is automatically interpreting the images robustly enough for use in various environments and fast enough for real-time performance. In order to achieve robust image segmentation, machine learning is applied to a large database of images with many different types of skies. From these images, different types of visual features are extracted, including most of the features investigated in the literature. In the interest of execution speed and comprehensibility, decision trees are learned to map the feature values at an image location to a classification as sky or non-sky. The learned decision trees are fast enough to allow real-time execution on a Digital Signal Processor: it is run onboard a small UAV at ∼30 Hz. Experiments in simulation and preliminary experiments on a small UAV show the potential of SSA for achieving robust obstacle avoidance in urban areas.
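As a toy stand-in for the learned trees, the per-pixel decision can be sketched with a hand-written two-level tree over two invented features (brightness, blueness); the thresholds are made up for the example, whereas the paper learns such trees from a large labelled image database:

```python
def classify_pixel(brightness, blueness):
    # Hand-written two-level decision tree; thresholds are illustrative only.
    if blueness > 0.5:
        return "sky" if brightness > 0.3 else "non-sky"   # dark blue: unlikely sky
    return "sky" if brightness > 0.9 else "non-sky"       # only bright haze passes

def horizon_row(column_labels):
    """First non-sky row in a top-to-bottom pixel column: a crude per-column
    horizon estimate, since threatening obstacles protrude above the horizon."""
    for row, label in enumerate(column_labels):
        if label == "non-sky":
            return row
    return len(column_labels)

# One image column, top to bottom: bright blue sky fading into dark ground.
column = [classify_pixel(b, bl) for b, bl in
          [(0.8, 0.9), (0.7, 0.8), (0.6, 0.7), (0.2, 0.2), (0.1, 0.1)]]
```

Shallow trees of this shape are why the approach is fast enough for a DSP: each pixel costs only a couple of comparisons.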

Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, the problem of collision avoidance with moving obstacles for unmanned aerial vehicles is solved using a direct method, meaning the problem is transcribed to a nonlinear programming problem and solved with an optimization method.
Abstract: This paper addresses the problem of collision avoidance with moving obstacles for unmanned aerial vehicles. It is assumed that obstacle detection and tracking can be achieved 60 seconds prior to collision. Such a time horizon allows on-board trajectory re-planning with updated constraints due to intruder and ownship dynamics. This trajectory generation problem is solved using a direct method, meaning the problem is transcribed to a nonlinear programming problem and solved with an optimization method. The main challenge in the trajectory generation framework is to reliably provide a feasible (safe and flyable) trajectory within a deterministic time. In order to improve the method's reliability, a Monte Carlo analysis is used to investigate the convergence properties of the optimization process, the properties of the generated trajectories and their effectiveness in obstacle avoidance. The results show that the method is able to converge to feasible and near-optimal trajectories within two seconds, except in very restrictive cases. Moreover, the dynamic feasibility of the generated trajectories is verified with nonlinear simulations, where the trajectory generation is integrated with the six degree-of-freedom nonlinear model of a fixed-wing research vehicle developed at Cranfield University. The results show that the generated trajectories can be tracked with a proposed two-degree-of-freedom control scheme. The improved convergence, fast computation and assured dynamic feasibility pave the way for on-board implementation and flight testing.
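What "transcribed to a nonlinear programming problem" means can be sketched for a 1-D double integrator (a much simpler stand-in for the paper's aircraft dynamics): the trajectory becomes a stack of discrete states and controls, with "defect" equality constraints tying consecutive states to the dynamics and inequality constraints keeping clear of a constant-velocity intruder. An NLP solver would then minimize control effort subject to these; here we only build and check the constraints:

```python
DT = 1.0  # node spacing of the transcription (illustrative)

def defects(xs, vs, us):
    """Euler-collocation dynamics residuals; a feasible NLP point drives
    every entry to zero."""
    d = []
    for k in range(len(us)):
        d.append(xs[k + 1] - (xs[k] + DT * vs[k]))
        d.append(vs[k + 1] - (vs[k] + DT * us[k]))
    return d

def separation(xs, obstacle_x0, obstacle_v):
    """Clearance from a constant-velocity intruder at each node; the NLP
    would require each entry to exceed a safety radius."""
    return [x - (obstacle_x0 + obstacle_v * k * DT) for k, x in enumerate(xs)]

# A trajectory rolled out from the dynamics satisfies the defect constraints
# exactly, which is precisely what the solver enforces at a feasible point.
us = [0.5, -0.5, 0.0]
xs, vs = [0.0], [0.0]
for u in us:
    xs.append(xs[-1] + DT * vs[-1])
    vs.append(vs[-1] + DT * u)
residuals = defects(xs, vs, us)                       # all zeros
clear = separation(xs, obstacle_x0=-5.0, obstacle_v=0.0)
```

In the paper the same structure is handed to a nonlinear optimizer with the 60-second horizon discretized into nodes; the deterministic-runtime concern comes from bounding that solver's iterations.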

Proceedings ArticleDOI
05 Mar 2011
TL;DR: This work reports on recent progress in developing an Earth-based (outdoors) robotic test bed for Tier-scalable Reconnaissance at the University of Arizona and Caltech for distributed, science-driven, and significantly less constrained reconnaissance of prime locations on a variety of planetary bodies, with particular focus on Saturn's moon Titan with its methane/hydrocarbon lakes and Mars.
Abstract: Tier-scalable robotic reconnaissance missions are called for in extreme space environments, including planetary atmospheres, surfaces (both solid and liquid), and subsurfaces (e.g., oceans), as well as in potentially hazardous or inaccessible operational areas on Earth. Such future missions will require increasing degrees of operational autonomy: (1) Automatic mapping of an operational area from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and sensor data gathering; (3) automatic feature extraction and target/region-of-interest/anomaly identification within the mapped operational area; (4) automatic target prioritization for follow-up or close-up (in-situ) examination; and (5) subsequent automatic, targeted deployment and navigation/relocation of agents/sensors (e.g., to follow up on transient events). We report on recent progress in developing an Earth-based (outdoors) robotic test bed for Tier-scalable Reconnaissance at the University of Arizona and Caltech for distributed, science-driven, and significantly less constrained (compared to state-of-the-art) reconnaissance of prime locations on a variety of planetary bodies, with particular focus on Saturn's moon Titan with its methane/hydrocarbon lakes and Mars. The test bed currently comprises several computer-controlled robotic surface vehicles, i.e., rovers and lake landers/boats equipped with a variety of sensors. To achieve a fully operational Tier-scalable Reconnaissance test bed, aerial platforms will be integrated as a next step. The robotic surface vehicles can be interactively or automatically controlled from anywhere in the world in near real-time via the Internet. The test bed enables the implementation, field-testing, and validation of algorithms and strategies for navigation, exploration, sensor deployment, sensor data gathering, feature extraction, anomaly detection, and science goal prioritization for autonomous planetary exploration. 
Furthermore, it permits field-testing of novel instruments and sensor technologies, as well as testing of cooperative multi-agent scenarios and distributed scientific exploration of operational areas. As such the robotic test bed enables the development, implementation, field-testing, and validation of software packages for inter-agent communication and coordination to navigate and explore operational areas with greatly reduced reliance on (ultimately without assistance from) ground operators, thus affording the degree of mission autonomy/flexibility necessary to support future missions to Titan, Mars, and other planetary bodies, including asteroids.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The Hyperspectral Thermal Emission Spectrometer (HyTES) as mentioned in this paper is an airborne pushbroom imaging spectrometer based on the Dyson optical configuration.
Abstract: The Jet Propulsion Laboratory has developed the Hyperspectral Thermal Emission Spectrometer (HyTES). It is an airborne pushbroom imaging spectrometer based on the Dyson optical configuration. First low-altitude test flights are scheduled for later this year. HyTES uses a compact 7.5–12 µm hyperspectral grating spectrometer in combination with a Quantum Well Infrared Photodetector (QWIP). The Dyson design allows for a very compact and optically fast system (F/1.6). Cooling requirements are minimized due to the single monolithic prism-like grating design. The configuration has the potential to be the optimal science-grade imaging spectroscopy solution for high-altitude, lighter-than-air (HAA, LTA) vehicles and unmanned aerial vehicles (UAVs) due to its small form factor and relatively low power requirements. The QWIP sensor allows for optimum spatial and spectral uniformity and provides adequate responsivity, allowing near 100 mK noise equivalent temperature difference (NEDT) operation across the LWIR passband. The QWIP's repeatability and uniformity will be helpful for data integrity since an onboard calibrator is not currently planned. A calibration will be done before and after eight-hour flights to gauge any inconsistencies. This has been demonstrated with lab testing. Further test results show adequate NEDT and linearity, as well as applicable earth science emissivity target results (silicates, water) measured in direct sunlight.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The cathodeless electron cyclotron resonance ion engines, µ10, propelled the Hayabusa asteroid explorer, launched in May 2003, which is focused on demonstrating the technology necessary for a sample return from an asteroid, using electric propulsion, optical navigation, material sampling in a zero gravity field, and direct re-entry from a heliocentric orbit as discussed by the authors.
Abstract: The cathode-less electron cyclotron resonance ion engines, µ10, propelled the Hayabusa asteroid explorer, launched in May 2003, which focused on demonstrating the technology necessary for a sample return from an asteroid, using electric propulsion, optical navigation, material sampling in a zero gravity field, and direct re-entry from a heliocentric orbit. It rendezvoused with the asteroid Itokawa after a two-year deep space flight using the ion engines. Though it succeeded in landing on the asteroid in November 2005, the spacecraft was seriously damaged. This delayed the Earth return to 2010 from the originally planned 2007. Reconstruction of the operational scheme using thrust vector control of the ion engines, Xe cold gas jets and solar pressure torque enabled Hayabusa to leave for Earth in April 2007. Although most of the neutralizers were degraded and unusable by the fall of 2009, a combination of an ion source and its neighboring neutralizer sustained the orbit maneuvers toward Earth, including a series of final trajectory correction maneuvers. Finally, the spacecraft decayed in the atmosphere and only the re-entry capsule was retrieved from the Australian outback on June 14th, 2010. For the round-trip space odyssey between Earth and the asteroid, the ion engines accumulated a total operational time of 39,637 hour·units, 25,590 hours of powered spaceflight, a delta-V of 2.2 km/s, a total impulse of 1 MN·s, and 47 kg of xenon propellant consumption.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, Emergent Space Technologies, Inc. developed specialized algorithms for designing efficient tour missions for Near-Earth Asteroids that may be applied to the design of efficient spacecraft missions capable of visiting large numbers of orbital debris pieces.
Abstract: The amount of hazardous debris in Earth orbit has been increasing, posing an ever-greater danger to space assets and crewed missions. In January of 2007, a Chinese ASAT test produced approximately 2,600 pieces of orbital debris. In February of 2009, Iridium 33 collided with an inactive Russian satellite, yielding approximately 1,300 pieces of debris. These recent disastrous events and the sheer size of the Earth orbiting population make clear the necessity of removing orbital debris. In fact, experts from both NASA and ESA have stated that 10 to 20 pieces of orbital debris need to be removed per year to stabilize the orbital debris environment. However, no spacecraft trajectories have yet been designed for removing multiple debris objects and the size of the debris population makes the design of such trajectories a daunting task. Designing an efficient spacecraft trajectory to rendezvous with each of a large number of orbital debris pieces is akin to the famous Traveling Salesman problem, an NP-complete combinatorial optimization problem in which N cities are to be visited in turn. The goal is to choose the order in which the cities are visited so as to minimize the total path distance traveled. In the case of orbital debris, the pieces of debris to be visited must be selected and ordered such that spacecraft fuel consumption is minimized or at least kept low enough to be feasible. Emergent Space Technologies, Inc. has developed specialized algorithms for designing efficient tour missions for Near-Earth Asteroids that may be applied to the design of efficient spacecraft missions capable of visiting large numbers of orbital debris pieces. The first step is to identify a list of high priority debris targets using the Analytical Graphics, Inc. SOCRATES website and then obtain their state information from Celestrak. The tour trajectory design algorithms will then be used to determine the itinerary of objects and ΔV requirements. 
These results will shed light on how many debris pieces can be visited for various amounts of propellant, which launch vehicles can accommodate such missions, and how much margin is available for debris removal system payloads.
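To make the Traveling-Salesman analogy concrete, here is a greedy nearest-neighbour tour over an invented ΔV cost matrix. This heuristic and the costs are purely illustrative; the paper's tour-design algorithms would use real orbital-mechanics transfer costs and stronger combinatorial optimization:

```python
def greedy_tour(cost, start=0):
    # At each step, visit the cheapest (lowest delta-V) unvisited object.
    n = len(cost)
    tour, total = [start], 0.0
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda j: cost[here][j])
        total += cost[here][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour, total

# Symmetric toy delta-V costs (km/s) between four debris objects.
cost = [
    [0.0, 1.0, 4.0, 3.0],
    [1.0, 0.0, 2.0, 5.0],
    [4.0, 2.0, 0.0, 1.5],
    [3.0, 5.0, 1.5, 0.0],
]
tour, dv = greedy_tour(cost)   # -> [0, 1, 2, 3], total 4.5 km/s
```

Even in this toy form, the ΔV total is the quantity that ultimately maps to propellant mass and hence to how many debris pieces one launch can reach.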

Proceedings ArticleDOI
05 Mar 2011
TL;DR: A rotary-percussive core drill for the 2018 Mars Sample Return mission and in particular for the Mars Astrobiology Explorer-Cacher, MAX-C mission is presented in this paper.
Abstract: Since the 1990s Honeybee Robotics has been developing and testing surface coring drills for future planetary missions. Recently, we focused on developing a rotary-percussive core drill for the 2018 Mars Sample Return mission and in particular for the Mars Astrobiology Explorer-Cacher (MAX-C) mission. The goal of the 2018 MAX-C mission is to acquire approximately 20 cores from various rocks and outcrops on the surface of Mars. The acquired cores, 1 cm diameter and 5 cm long, would be cached for return back to Earth either in 2022 or 2024, depending on which of the MSR architectures is selected. We built a testbed coring drill that was used to acquire drilling data, such as power, rate of penetration, and weight on bit, in various rock formations. Based on these drilling data we designed a prototype Mars Sample Return coring drill. The proposed MSR drill is an arm-mounted, standalone device, requiring no additional arm actuation once positioned and preloaded. A low-mass, compact transmission internal to the housing provides all of the actuation of the tool mechanisms. The drill uses a rotary-percussive drilling approach and can acquire a 1 cm diameter, 5 cm long core in Saddleback basalt in less than 30 minutes with only ∼20 N weight on bit and less than 100 W of power. The prototype MSR drill weighs approximately 5 kg.

Proceedings ArticleDOI
Kapil Bakshi1
05 Mar 2011
TL;DR: The paper describes the Cloud framework and architecture, with characteristics of virtualization and multi-tenancy to build an end-to-end IaaS cloud-computing infrastructure, and describes phases of adoption of cloud data center by an enterprise.
Abstract: Cloud computing is one of the fastest growing opportunities for enterprises and service providers. Enterprises use the Infrastructure-as-a-Service (IaaS) model to build private clouds and virtual private clouds that reduce operating and capital expenses and increase the agility and reliability of their critical information systems. Service providers build public clouds to offer on-demand, secure, multi-tenant, pay-per-use IT infrastructure to businesses and government agencies that use cloud services to offload, or augment, their internal resources using a public cloud infrastructure. This paper starts with the cloud taxonomy and a model overview. It then describes the cloud framework and architecture, with the virtualization and multi-tenancy characteristics needed to build an end-to-end IaaS cloud-computing infrastructure. Logical building blocks for cloud data centers include the virtualized network, compute, and storage resources, which are overlaid with service orchestration, a modular approach, and service differentiation elements. The paper also describes the phases of adoption of a cloud data center by an enterprise.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The requirements for reliable space based multicore computing and approaches being explored to deliver this capability within NASA's extremely tight power, mass, and cost constraints are discussed.
Abstract: The current trend in commercial processors of moving to many cores (30 to 100 and beyond) on a single die poses both an opportunity and a challenge for space-based processing. The opportunity is to leverage this trend for space applications and thus provide an order of magnitude increase in onboard processing capability. The challenge is to provide the requisite reliability in an extremely challenging environment. In this paper, we will discuss the requirements for reliable space-based multicore computing and approaches being explored to deliver this capability within NASA's extremely tight power, mass, and cost constraints.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this article, a Space to Ground bidirectional optical communication link at 5.6 Gbps has been verified between a low Earth orbit satellite (NFIRE) and a TESAT optical ground station hosted at the ESA site in Tenerife (Spain).
Abstract: A space-to-ground bidirectional optical communication link at 5.6 Gbps has been verified between a low Earth orbit satellite (NFIRE) and a TESAT optical ground station hosted at the ESA site in Tenerife (Spain). To our knowledge, it is the first demonstration of coherent laser communication from space to ground.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The use of DoDAF to create four common types of executable models is summarized, including Markov Chains, Petri Nets, System Dynamics models, and Mathematical graphs.
Abstract: This paper summarizes an approach to using DoDAF to create executable architecture environments. The use of DoDAF to create four common types of executable models is summarized. These modeling types include Markov Chains, Petri Nets, System Dynamics models, and Mathematical graphs. Some aspects of the framework are demonstrated using a suppression of enemy air defenses scenario.
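Of the four executable-model types the abstract names, the Markov chain is the simplest to illustrate. The sketch below is hypothetical and not from the paper: three made-up operational states of an architecture node, with a power iteration recovering the long-run occupancy of each state. All state names and transition probabilities are illustrative assumptions.

```python
# Hypothetical sketch: an architecture activity view recast as a
# discrete-time Markov chain, one of the executable-model types
# named in the abstract. States and probabilities are invented.

def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=1000):
    """Approximate the stationary distribution by power iteration."""
    dist = [1.0] + [0.0] * (len(P) - 1)
    for _ in range(iters):
        dist = step(dist, P)
    return dist

# Illustrative 3-state model: Idle, Engaging, Assessing.
P = [
    [0.7, 0.3, 0.0],
    [0.0, 0.6, 0.4],
    [0.5, 0.0, 0.5],
]

pi = stationary(P)  # long-run fraction of time in each state
```

The same transition matrix could equally be read off a DoDAF activity or state-transition view; the point is only that once the view is numeric, it executes.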

Proceedings ArticleDOI
05 Mar 2011
TL;DR: Several new metrics and methodologies stemming from weather forecast verification, nonlinear exact filtering, nonlinear uncertainty propagation, and the Monte Carlo method are proposed to validate a user-defined particle-filtering-based prognostic algorithm.
Abstract: Prognosis is a fundamental enabling technique for condition-based maintenance (CBM) systems and prognostics and health management (PHM) systems and, therefore, plays a critical role in the successful deployment of these systems. The purpose of prognosis is to predict the remaining useful life of a system, subsystem, or component when a fault is detected. Although different prognostic algorithms have been developed and tentatively applied to various mechanical and electrical systems in the past decade, verification and validation (V&V) remains a challenging open problem. The difficulties lie in the facts that, first, there is usually not statistically sufficient data for V&V and, second, there is no rigorous and general V&V framework available. In this paper, several new metrics and methodologies stemming from weather forecast verification, nonlinear exact filtering, nonlinear uncertainty propagation, and the Monte Carlo method are proposed to validate a user-defined particle-filtering-based prognostic algorithm. The presented metrics and methodologies are generic and can be extended to the V&V of other prognostic algorithms on different platforms. The methodologies are demonstrated on the prognosis of a real-world application.
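To make the object of this V&V concrete, here is a minimal sketch of the kind of particle-based remaining-useful-life (RUL) prediction the abstract refers to. It is not the paper's algorithm: the linear degradation model, the threshold, and all numbers are assumptions chosen only to show how a particle cloud yields an RUL distribution that validation metrics could then score.

```python
import random

# Illustrative sketch (not the paper's algorithm): each particle carries
# a health state and a degradation rate; RUL is the time until the
# health state crosses a failure threshold. Model and numbers assumed.

random.seed(0)
THRESHOLD = 0.0

def rul_distribution(particles, dt=1.0, horizon=500):
    """Propagate every particle forward until threshold crossing."""
    ruls = []
    for h, r in particles:
        t = 0
        while h > THRESHOLD and t < horizon:
            h -= r * dt   # deterministic degradation during prediction
            t += 1
        ruls.append(t * dt)
    return ruls

# Particle cloud around health = 1.0, degradation rate ~ 0.01 per step.
particles = [(1.0 + random.gauss(0, 0.02),
              max(1e-4, random.gauss(0.01, 0.002)))
             for _ in range(1000)]

ruls = rul_distribution(particles)
mean_rul = sum(ruls) / len(ruls)
```

A V&V framework of the kind proposed would compare this predicted RUL distribution against ground-truth failure times, e.g. with calibration and sharpness metrics borrowed from weather forecast verification.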

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The pursuer motion control law proposed in this paper is based on the definition of an oscillatory motion created by a center of oscillation and allows the pursuer UAV to fulfill the target-tracking requirements under the stated constraints.
Abstract: A pursuer UAV tracking and loitering around a target is the problem analyzed in this work. The UAV is assumed to be a fixed-wing vehicle, and constant airspeed together with bounded lateral accelerations are the main constraints of the problem. The pursuer motion control law proposed in this paper is based on the definition of an oscillatory motion created by a center of oscillation: it allows the pursuer UAV to fulfill the target-tracking requirements under the stated constraints. In particular, the center of oscillation tracks the real motion of the target, and the UAV tracks the center of oscillation by means of a suitable guidance law. This work provides a description of the mathematical model of the problem, the oscillatory motion, and the guidance law. Proofs of stable closed-loop behavior are given. Simulation results are finally shown and commented on.
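The constraints the abstract states (constant airspeed, bounded lateral acceleration) can be sketched with a generic kinematic model. The following is NOT the paper's oscillatory guidance law; it is a plain saturated proportional heading law toward a fixed point, with assumed airspeed, acceleration bound, and gain, shown only to illustrate why a fixed-wing pursuer cannot hover and instead ends up loitering near the target.

```python
import math

# Generic sketch of the problem's kinematic constraints (not the
# paper's guidance law): constant-airspeed pursuer, saturated turn.
V = 20.0      # constant airspeed (m/s), assumed
A_MAX = 5.0   # lateral acceleration bound (m/s^2), assumed
DT = 0.05     # integration step (s)

def step(x, y, psi, tx, ty, k=2.0):
    """Advance the pursuer one step toward target (tx, ty)."""
    desired = math.atan2(ty - y, tx - x)
    # Wrapped heading error in (-pi, pi].
    err = math.atan2(math.sin(desired - psi), math.cos(desired - psi))
    a_lat = max(-A_MAX, min(A_MAX, k * V * err))  # saturated command
    psi += (a_lat / V) * DT                       # turn rate = a_lat / V
    return x + V * math.cos(psi) * DT, y + V * math.sin(psi) * DT, psi

x, y, psi = 0.0, 0.0, 0.0
for _ in range(2000):                             # 100 s of flight
    x, y, psi = step(x, y, psi, 500.0, 300.0)
```

Because airspeed is fixed and the minimum turn radius is V²/A_MAX = 80 m, the pursuer overshoots and circles the target rather than stopping on it, which is the behavior the paper's center-of-oscillation construction is designed to organize.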

Proceedings ArticleDOI
05 Mar 2011
TL;DR: This paper deals with an estimation of the Remaining Useful Life of bearings based on the utilization of the Wavelet Packet Decomposition (WPD) and the Mixture of Gaussians Hidden Markov Models (MoG-HMM).
Abstract: This paper deals with the estimation of the Remaining Useful Life of bearings based on the Wavelet Packet Decomposition (WPD) and the Mixture of Gaussians Hidden Markov Model (MoG-HMM). The raw data provided by the sensors are first processed to extract features using the wavelet packet decomposition. The latter provides a more flexible time-frequency representation and filtering of a signal by allowing the use of variable-sized windows and different detail levels. The extracted features are then fed as inputs to dedicated learning algorithms in order to estimate the parameters of a Mixture of Gaussians Hidden Markov Model. Once this learning phase is achieved, the generated model is exploited during a second phase to continuously assess the current health state of the physical component and to estimate its remaining useful life with an associated confidence value. The proposed method is tested on benchmark data taken from the “NASA prognostic data repository” related to several bearings. Bearings are chosen because they are the most used, and also the most fault-prone, mechanical elements in some industrial systems and processes. Furthermore, the method is compared to a traditional time-feature prognostic approach, and some simulation results are given at the end of the paper.
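The feature-extraction front end can be sketched compactly. The code below is a toy two-level Haar wavelet packet decomposition with node energies as features; it is an assumption-laden miniature of the WPD stage (real bearing work, and presumably the paper, would use longer filters, more levels, and vibration data rather than this toy sine).

```python
import math

# Toy sketch of the WPD feature-extraction stage: 2-level Haar wavelet
# packet tree, with the energy of each leaf node used as a feature.

def haar_split(x):
    """Split a signal into Haar approximation and detail halves."""
    s2 = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s2 for i in range(len(x) // 2)]
    return approx, detail

def wpd(x, level):
    """Full wavelet packet tree: both approx and detail nodes are split."""
    nodes = [x]
    for _ in range(level):
        nxt = []
        for node in nodes:
            a, d = haar_split(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes  # leaf nodes, low to high frequency band

def energies(nodes):
    return [sum(c * c for c in node) for node in nodes]

# Toy input: 16 samples of a low-frequency tone.
signal = [math.sin(2 * math.pi * i / 16) for i in range(16)]
feats = energies(wpd(signal, 2))  # 4 band energies -> MoG-HMM inputs
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, and a low-frequency tone concentrates its energy in the first leaf; on real data, shifts of energy between bands are what the MoG-HMM stage would learn to associate with health states.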

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.
Abstract: Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 PRO Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. This technology is supported by System-on-a-Chip solutions, resulting in extremely small and low-power nodes. The Wireless Connections in Space Project addresses the aerospace flight domain for both flight-critical and non-critical avionics. WSNs provide the inherent fault tolerance required for aerospace applications utilizing such technology. The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: This work states that no currently available modeling language can represent all aspects of a system (including system-of-systems) at all levels of abstraction across the lifecycle.
Abstract: Development of complex systems is a collaborative effort spanning disciplines, teams, processes, software tools, and modeling formalisms. It is the vision of model-based systems engineering (MBSE) to enable a consistent, coherent, interoperable, and evolving model of a system throughout its lifecycle. However, no currently available modeling language can represent all aspects of a system (including system-of-systems) at all levels of abstraction across the lifecycle.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this paper, a statistical look at past rideshares to understand the opportunities and obstacles for prospective future piggyback launches is presented, with the purpose of identifying the fundamental issues.
Abstract: Rideshares (“piggyback” launches) go almost back to the first satellite launches, with the first one in 1960. Given the extraordinary cost of launch, it is natural to seek out ways to share costs, or to make use of the unused capacity of a larger launch vehicle. One tool that would be of use to mission planners is a statistical look at past rideshares to help understand the opportunities and obstacles for prospective future rideshares. The purpose of this paper is to begin to collect the data necessary for such analyses, and to start identifying the fundamental issues.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The ability to refuel cryogenic propulsion stages on-orbit provides an innovative paradigm shift for space transportation supporting National Aeronautics and Space Administration's (NASA) Exploration program as well as deep space robotic, national security and commercial missions.
Abstract: The ability to refuel cryogenic propulsion stages on-orbit provides an innovative paradigm shift for space transportation supporting National Aeronautics and Space Administration's (NASA) Exploration program as well as deep space robotic, national security and commercial missions. Refueling enables large beyond low Earth orbit (LEO) missions without requiring super heavy lift vehicles that must continuously grow to support increasing mission demands as America's exploration transitions from early Lagrange point missions to near Earth objects (NEO), the lunar surface and eventually Mars. Earth-to-orbit launch can be optimized to provide competitive, cost-effective solutions that allow sustained exploration.

Proceedings ArticleDOI
05 Mar 2011
TL;DR: In this article, the posterior Cramer-Rao lower bound (PCRLB) is used to determine the optimal set of transmitters in a MIMO radar event based on tracking accuracy and energy consumption.
Abstract: A technique is presented that will determine an efficient set of transmitters in a MIMO radar event based on tracking accuracy and energy consumption. The posterior Cramer-Rao lower bound (PCRLB) will provide the means of determining these optimal transmitters by placing a bound on the variance of the track state estimate. This is a predictive PCRLB since it is calculated before any measurements are taken. Optimal transmitters are chosen by minimizing a proposed cost function that incorporates the PCRLB along with the number of transmitters in the MIMO event. To account for measurement origin uncertainty, an information reduction factor (IRF) is incorporated in the calculation of the PCRLB for each predicted measurement. Since the complexity of the cost calculation increases exponentially with the number of sensors, several approximations are made for the calculation of the PCRLB and IRF. The Jacobian matrix for the sine-space measurement equations is derived for use in the calculation of the PCRLB. This resource allocation scheme is evaluated using the GTRI/ONR MIMO Radar Benchmark with metrics including track completeness ratio and total cumulative energy consumption.
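The accuracy-versus-energy trade at the heart of this abstract can be sketched in a few lines. The toy below is not the paper's full PCRLB with IRF: each transmitter contributes an assumed 2x2 Fisher information matrix, the bound is the inverse of their sum, and the cost is the trace of the bound plus a per-transmitter penalty, minimized by exhaustive subset search (the exponential search the paper approximates).

```python
from itertools import combinations

# Toy transmitter-selection sketch: cost = trace(PCRLB-like bound)
# + LAMBDA * (number of active transmitters). Matrices are assumed.

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def inv2(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def cost(subset, infos, lam):
    total = [[1e-9, 0.0], [0.0, 1e-9]]  # tiny prior keeps the sum invertible
    for k in subset:
        total = add(total, infos[k])
    bound = inv2(total)                  # bound on track state covariance
    return bound[0][0] + bound[1][1] + lam * len(subset)

infos = {                                # per-transmitter information (assumed)
    0: [[4.0, 0.0], [0.0, 1.0]],         # informative in the first coordinate
    1: [[1.0, 0.0], [0.0, 4.0]],         # informative in the second
    2: [[0.5, 0.0], [0.0, 0.5]],         # weak in both
}

LAMBDA = 0.2
subsets = [s for r in range(1, 4) for s in combinations(infos, r)]
best = min(subsets, key=lambda s: cost(s, infos, LAMBDA))
```

With these numbers the search keeps the two complementary transmitters and drops the weak one: adding transmitter 2 shrinks the bound less than its energy penalty costs.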

Proceedings ArticleDOI
05 Mar 2011
TL;DR: The most promising approach today is the movement toward a more integrated and model-centric approach to mission conception, design, implementation and operations, which elevates engineering models to a principal role in systems engineering, gradually replacing traditional document-centric engineering practices.
Abstract: The increasingly ambitious requirements levied on JPL's space science missions, and the development pace of such missions, challenge our current engineering practices. All the engineering disciplines face this growth in complexity to some degree, but the challenges are greatest in systems engineering where numerous competing interests must be reconciled and where complex system-level interactions must be identified and managed. Undesired system-level interactions are increasingly a major risk factor that cannot be reliably exposed by testing, and natural-language single-viewpoint specifications are inadequate to capture and expose system level interactions and characteristics. Systems engineering practices must improve to meet these challenges, and the most promising approach today is the movement toward a more integrated and model-centric approach to mission conception, design, implementation and operations. This approach elevates engineering models to a principal role in systems engineering, gradually replacing traditional document-centric engineering practices.