
Showing papers presented at the "IEEE Aerospace Conference" in 2012


Proceedings ArticleDOI
Kapil Bakshi1
03 Mar 2012
TL;DR: MapReduce, in conjunction with the Hadoop Distributed File System (HDFS) and HBase database, as part of the Apache Hadoop project, is a modern approach to analyzing unstructured data.
Abstract: The amount of data in our industry and the world is exploding. Data is being collected and stored at unprecedented rates. The challenge is not only to store and manage the vast volume of data (“big data”), but also to analyze and extract meaningful value from it. There are several approaches to collecting, storing, processing, and analyzing big data. The main focus of the paper is on unstructured data analysis. Unstructured data refers to information that either does not have a pre-defined data model or does not fit well into relational tables. Unstructured data is the fastest growing type of data; examples include imagery, sensor and telemetry data, video, documents, log files, and email files. There are several techniques to address this problem space of unstructured analytics. The techniques share common characteristics of scale-out, elasticity, and high availability. MapReduce, in conjunction with the Hadoop Distributed File System (HDFS) and HBase database, as part of the Apache Hadoop project, is a modern approach to analyze unstructured data. Hadoop clusters are an effective means of processing massive volumes of data, and can be improved with the right architectural approach.

229 citations
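
As a concrete (and heavily simplified) illustration of the MapReduce pattern the abstract describes, the sketch below counts log-level tokens in unstructured log lines; the log format is hypothetical, and on a real Hadoop cluster the same map and reduce functions would run distributed over HDFS blocks (for example via Hadoop Streaming) rather than in-process.

```python
# Minimal, self-contained illustration of the MapReduce pattern: counting
# occurrences of log levels in unstructured log lines. Local simulation only;
# on Hadoop the map/reduce functions would be distributed over HDFS blocks.
from collections import defaultdict

log_lines = [  # hypothetical unstructured log data
    "2012-03-03 12:00:01 INFO  telemetry packet received",
    "2012-03-03 12:00:02 ERROR checksum mismatch on sensor 7",
    "2012-03-03 12:00:03 INFO  telemetry packet received",
]

def map_phase(line):
    """Emit (key, 1) pairs; here the key is the log-level token."""
    parts = line.split()
    if len(parts) >= 3:
        yield parts[2], 1

def reduce_phase(key, values):
    """Sum all counts for a key."""
    return key, sum(values)

# Shuffle/sort: group intermediate pairs by key (done by the framework in Hadoop).
groups = defaultdict(list)
for line in log_lines:
    for key, value in map_phase(line):
        groups[key].append(value)

results = dict(reduce_phase(k, v) for k, v in groups.items())
print(results)  # e.g. {'INFO': 2, 'ERROR': 1}
```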


Proceedings ArticleDOI
David W. Matolak1
03 Mar 2012
TL;DR: In this article, the authors provide a comprehensive review of past work on the air-ground (AG) channel, and follow this with a brief description of plans for an AG channel measurement and modeling campaign.
Abstract: Use of unmanned aircraft systems (UASs) for multiple applications is expected to grow dramatically in the coming decades; this fact has motivated this paper's focus on fundamental physical layer characteristics relevant to UAS communications. In the past, for aeronautical communications with high transmitted power levels, narrow signal bandwidths, elevated ground site antennas in open areas, and low duty cycle transmissions, simple models for channel attenuation sufficed. In the future, when UAS ground stations may not all be in cleared areas with elevated antennas, higher data rates (wider bandwidths) are required, and small UASs with stringent power limitations still require high reliability, more comprehensive air-ground (AG) channel characteristics will be required in order to ensure robust signal designs for high-reliability AG links. We have found that no accurate, validated wideband models exist for the AG channel, particularly not in the L- and C-bands that are being proposed for UASs. Airframe shadowing models also do not yet exist. We thus provide a comprehensive review of past work on the AG channel, and follow this with a brief description of plans for an AG channel measurement and modeling campaign. Resulting AG channel models will subsequently be used in the evaluation of candidate air interfaces for UAS control and non-payload communications (CNPC). The air interface must operate in the presence of both delay and Doppler spreads, and shadowing. It should also be spectrally efficient, low-latency, and reasonably robust to interference. We discuss these AG air interface considerations, and also show some initial modeling results based on both analysis and measurements.

134 citations
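
For a rough sense of the physical-layer quantities discussed above, the sketch below computes free-space path loss and maximum Doppler shift for an assumed C-band carrier, slant range, and aircraft speed; real air-ground links add multipath delay spread and airframe shadowing, which this ignores.

```python
# Back-of-the-envelope AG-channel numbers: free-space path loss and maximum
# Doppler shift. Carrier frequency, range, and speed are illustrative assumptions.
import math

C = 299_792_458.0          # speed of light, m/s

def free_space_path_loss_db(distance_m, carrier_hz):
    return 20.0 * math.log10(4.0 * math.pi * distance_m * carrier_hz / C)

def max_doppler_hz(speed_mps, carrier_hz):
    return speed_mps * carrier_hz / C

fc = 5.06e9                # assumed C-band carrier (Hz)
d = 20_000.0               # assumed slant range (m)
v = 60.0                   # assumed UAS speed (m/s)

print(f"FSPL at {d/1e3:.0f} km, {fc/1e9:.2f} GHz: {free_space_path_loss_db(d, fc):.1f} dB")
print(f"Max Doppler at {v:.0f} m/s: {max_doppler_hz(v, fc):.0f} Hz")
```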


Proceedings ArticleDOI
03 Mar 2012
TL;DR: This paper reports on using MBSE and SysML to model a standard CubeSat and on applying that model to an actual CubeSat mission, the Radio Aurora Explorer (RAX) mission, developed by the Michigan Exploration Lab (MXL) and SRI International.
Abstract: Model Based Systems Engineering (MBSE) is an emerging technology that is providing the next advance in modeling and systems engineering. MBSE uses Systems Modeling Language (SysML) as its modeling language. SysML is a domain-specific modeling language for systems engineering used to specify, analyze, design, optimize, and verify systems. An MBSE Challenge project was established to model a hypothetical FireSat satellite system to evaluate the suitability of SysML for describing space systems. Although much was learned regarding modeling of this system, the fictional nature of the FireSat system precluded anyone from actually building the satellite. Thus, the practical use of the model could not be demonstrated or verified. This paper reports on using MBSE and SysML to model a standard CubeSat and applying that model to an actual CubeSat mission, the Radio Aurora Explorer (RAX) mission, developed by the Michigan Exploration Lab (MXL) and SRI International.

97 citations


Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this paper, three filters are reviewed for the joint state-parameter estimation problem in model-based prognostics: the Daum filter (an exact nonlinear filter), the unscented Kalman filter, and the particle filter; their performance is compared in simulation-based experiments using a centrifugal pump case study.
Abstract: Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.

89 citations
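
A minimal bootstrap particle filter for joint state-parameter estimation, in the spirit of the comparison above; the scalar random-walk model, noise levels, and measurements are illustrative assumptions, not the centrifugal pump model used in the paper.

```python
# Minimal bootstrap particle filter: each particle carries a state x and an
# unknown model parameter theta, jointly estimated from noisy measurements.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles

x = rng.normal(0.0, 1.0, N)               # state particles
theta = rng.uniform(0.5, 1.5, N)          # parameter particles
w = np.full(N, 1.0 / N)                   # importance weights

def step(x, theta):
    """Process model: parameter-scaled random walk (assumed form)."""
    return theta * x + rng.normal(0.0, 0.1, x.shape)

def likelihood(z, x, sigma=0.2):
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

for z in [0.1, 0.3, 0.2, 0.5]:            # synthetic measurements
    x = step(x, theta)
    theta += rng.normal(0.0, 0.01, N)     # artificial parameter jitter
    w *= likelihood(z, x)
    w /= w.sum()
    # Multinomial resampling when the effective sample size drops too low.
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = rng.choice(N, N, p=w)
        x, theta, w = x[idx], theta[idx], np.full(N, 1.0 / N)

print("state estimate:", np.sum(w * x), "parameter estimate:", np.sum(w * theta))
```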


Proceedings ArticleDOI
03 Mar 2012
TL;DR: This work presents RCE, whose workflow engine allows the integration of different domain-specific tools from local and remote locations into one overall calculation, and shows how its software components are reused in two aerospace applications.
Abstract: The DLR developed the open source software framework RCE to support the collaborative and distributed work in the shipyard industry. From a technology point of view, software from the shipbuilding field has many requirements in common with aerospace software projects. Accordingly, RCE has become the basis for further projects within the DLR. Over the last years of use, a subset of frequently used software components has been derived and is provided by the RCE framework. In particular, the workflow engine, allowing the integration of different domain-specific tools from local and remote locations into one overall calculation, has become important for various projects. We present RCE and show how its software components are reused in two aerospace applications.

88 citations


Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this paper, the authors investigate the feasibility of identifying, robotically capturing, and returning an entire NEA to the vicinity of the Earth by the middle of the next decade.
Abstract: This paper describes the interim results of a study sponsored by the Keck Institute for Space Studies to investigate the feasibility of identifying, robotically capturing, and returning an entire Near-Earth Asteroid (NEA) to the vicinity of the Earth by the middle of the next decade. The feasibility hinges on finding an overlap between the smallest NEAs that can be reasonably discovered and characterized and the largest NEAs that can be captured and transported in a reasonable flight time. This overlap appears to be centered on NEAs with a nominal diameter of roughly 7 m corresponding to masses in the range of 250,000 kg to 1,000,000 kg. Trajectory analysis based on asteroid 2008HU4 suggests that such an asteroid could be returned to a high-Earth orbit using a single Atlas V-class launch vehicle and a 40-kW solar electric propulsion system by 2026. The return of such an object could serve as a testbed for human operations in the vicinity of an asteroid. It would provide a wealth of scientific and engineering information and would enable detailed evaluation of its resource potential, determination of its internal structure and other aspects important for planetary defense activities.

84 citations
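
A quick consistency check on the quoted mass range, assuming a 7 m spherical body; the implied bulk densities roughly span porous rubble-pile material to dense rock, which is why the mass is so uncertain for a given diameter.

```python
# Implied bulk density for a ~7 m NEA at the mass bounds quoted in the abstract.
import math

diameter_m = 7.0
volume_m3 = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3   # ~180 m^3

for mass_kg in (250_000.0, 1_000_000.0):
    print(f"{mass_kg:>9.0f} kg -> bulk density {mass_kg / volume_m3:7.0f} kg/m^3")
```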


Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this article, the authors used functional near-infrared spectroscopy (fNIR) to investigate the relationship of the hemodynamic response in the anterior prefrontal cortex to changes in mental workload, level of expertise, and task performance during learning of simulated unmanned aerial vehicle (UAV) piloting tasks.
Abstract: An accurate assessment of mental workload and expertise level would help improve operational safety and efficacy of human computer interaction for aerospace applications. The current study utilized functional near-infrared spectroscopy (fNIR) to investigate the relationship of the hemodynamic response in the anterior prefrontal cortex to changes in mental workload, level of expertise, and task performance during learning of simulated unmanned aerial vehicle (UAV) piloting tasks. Results indicated that fNIR measures are correlated with task performance and subjective self-reported measures, and contained additional information that allowed learning phases to be categorized. Level of expertise does appear to influence the hemodynamic response in the dorsolateral/ventrolateral prefrontal cortices. Since fNIR allows development of portable and wearable instruments, it has the potential to be deployed in future learning environments to personalize the training regimen and/or assess the effort of human operators in critical multitasking settings.

67 citations


Proceedings ArticleDOI
03 Mar 2012
TL;DR: The Deep Space Climate Observatory (DSCOVR) as mentioned in this paper is a mission to the Earth-Sun first Lagrange point (L1) to observe the Earth as a planet.
Abstract: In 1998, then-Vice President Al Gore proposed a mission to the Earth-Sun first Lagrange point (L1) to observe the Earth as a planet. This mission was named Triana, after the lookout on Christopher Columbus's fleet who is reputedly the first of the European explorers to see the New World. Triana mission development proceeded for 21 months and cost an estimated $249M (in FY07$) before it was de-manifested from the Space Shuttle. The spacecraft has been in a state of “Stable Suspension” since November 2001. After the mission was placed into suspension, it was renamed the Deep Space Climate Observatory (DSCOVR). This paper provides an overview of the original mission, highlights of refurbishing it to launch 16 years after it started, and an update on its currently planned mission architecture.

59 citations


Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this paper, a model-parameter-augmented particle filtering prognostic framework is presented to explore battery behavior under future load uncertainties, in order to infer the optimal flight profile that would maximize the battery charge utilized while constraining the probability of a dead stick condition (i.e. battery shut off in flight).
Abstract: The amount of usable charge of a battery for a given discharge profile is not only dependent on the starting state-of-charge (SOC), but also on other factors like battery health and the discharge or load profile imposed. For electric UAVs (unmanned aerial vehicles) the variation in the load profile can be very unpredictable. This paper presents a model-parameter-augmented particle filtering prognostic framework to explore battery behavior under these future load uncertainties. Stochastic programming schemes are explored to utilize the battery life predictions generated as a function of load, in order to infer the optimal flight profile that would maximize the battery charge utilized while constraining the probability of a dead stick condition (i.e. battery shut off in flight).

53 citations
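
The idea of propagating battery state of charge under uncertain future loads can be sketched with a simple Monte Carlo simulation; the capacity, bus voltage, load statistics, and flight time below are assumed values, not the paper's battery model, which embeds the prediction in a particle-filter framework.

```python
# Estimate the probability of battery depletion ("dead stick") before the end
# of a flight, given an uncertain per-minute load profile. All values assumed.
import numpy as np

rng = np.random.default_rng(1)
capacity_Ah = 3.5            # assumed usable pack capacity
voltage_V = 14.8             # assumed nominal bus voltage
flight_min = 40              # assumed flight duration (minutes)
n_runs = 10_000

depleted = 0
for _ in range(n_runs):
    soc_Ah = capacity_Ah
    for _ in range(flight_min):
        load_W = max(rng.normal(80.0, 20.0), 20.0)      # uncertain load this minute
        soc_Ah -= (load_W / voltage_V) * (1.0 / 60.0)   # Ah drawn this minute
        if soc_Ah <= 0.0:
            depleted += 1
            break

print(f"P(depletion before {flight_min} min) ~ {depleted / n_runs:.3f}")
```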


Proceedings ArticleDOI
03 Mar 2012
TL;DR: The nuclear thermal rocket (NTR) represents the next evolutionary step in high performance rocket propulsion as mentioned in this paper; it can achieve specific impulse (Isp) values of ∼900 seconds (s) or more.
Abstract: The nuclear thermal rocket (NTR) represents the next “evolutionary step” in high performance rocket propulsion. Unlike conventional chemical rockets that produce their energy through combustion, the NTR derives its energy from fission of Uranium-235 atoms contained within fuel elements that comprise the engine's reactor core. Using an “expander” cycle for turbopump drive power, hydrogen propellant is raised to a high pressure and pumped through coolant channels in the fuel elements where it is superheated then expanded out a supersonic nozzle to generate high thrust. By using hydrogen for both the reactor coolant and propellant, the NTR can achieve specific impulse (Isp) values of ∼900 seconds (s) or more — twice that of today's best chemical rockets. From 1955–1972, twenty rocket reactors were designed, built and ground tested in the Rover and NERVA (Nuclear Engine for Rocket Vehicle Applications) programs. These programs demonstrated: (1) high temperature carbide-based nuclear fuels; (2) a wide range of thrust levels; (3) sustained engine operation; (4) accumulated lifetime at full power; and (5) restart capability — all the requirements needed for a human Mars mission. Ceramic metal “cermet” fuel was pursued as well, as a backup option. The NTR also has significant “evolution and growth” capability. Configured as a “bimodal” system, it can generate its own electrical power to support spacecraft operational needs. Adding an oxygen “afterburner” nozzle introduces a variable thrust and Isp capability and allows bipropellant operation. In NASA's recent Mars Design Reference Architecture (DRA) 5.0 study, the NTR was selected as the preferred propulsion option because of its proven technology, higher performance, lower launch mass, versatile vehicle design, simple assembly, and growth potential. In contrast to other advanced propulsion options, no large technology scale-ups are required for NTP either. In fact, the smallest engine tested during the Rover program — the 25,000 lbf (25 klbf) “Pewee” engine — is sufficient when used in a clustered engine arrangement. The “Copernicus” crewed spacecraft design developed in DRA 5.0 has significant capability and a human exploration strategy is outlined here that uses Copernicus and its key components for precursor near Earth object (NEO) and Mars orbital missions prior to a Mars landing mission. The paper also discusses NASA's current activities and future plans for NTP development that include system-level Technology Demonstrations — specifically ground testing a small, scalable NTR by 2020, with a flight test shortly thereafter.

51 citations
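
The roughly twofold Isp advantage quoted above compounds through the rocket equation; the worked comparison below assumes a single 4 km/s burn (an illustrative value, not a DRA 5.0 mission number) and contrasts the propellant fractions for chemical and nuclear thermal stages.

```python
# Tsiolkovsky rocket-equation comparison for an assumed 4 km/s burn.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_mps, isp_s):
    """m_prop / m_initial = 1 - exp(-dv / (g0 * Isp))."""
    return 1.0 - math.exp(-delta_v_mps / (G0 * isp_s))

dv = 4000.0   # assumed burn, m/s
for label, isp in (("LOX/LH2 chemical (~450 s)", 450.0), ("NTR (~900 s)", 900.0)):
    print(f"{label:26s} propellant fraction = {propellant_fraction(dv, isp):.2f}")
# Roughly 0.60 vs 0.36: the NTR stage delivers the same dv with far less propellant.
```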


Proceedings ArticleDOI
03 Mar 2012
TL;DR: The All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) as discussed by the authors is a vehicle based on six wheels at the ends of six multi-degree-of-freedom limbs.
Abstract: As part of the Human-Robot Systems project funded by NASA, the Jet Propulsion Laboratory has developed a vehicle called ATHLETE: the All-Terrain Hex-Limbed Extra-Terrestrial Explorer. Each vehicle is based on six wheels at the ends of six multi-degree-of-freedom limbs. Because each limb has enough degrees of freedom for use as a general-purpose leg, the wheels can be locked and used as feet to walk out of excessively soft or other extreme terrain. Since the vehicle has this alternative mode of traversing through or at least out of extreme terrain, the wheels and wheel actuators can be sized for nominal terrain. There are substantial mass savings in the wheel and wheel actuators associated with designing for nominal instead of extreme terrain. These mass savings are comparable to or larger than the extra mass associated with the articulated limbs. As a result, the entire mobility system, including wheels and limbs, can be about 25% lighter than a conventional mobility chassis. A side benefit of this approach is that each limb has sufficient degrees-of-freedom to use as a general-purpose manipulator (hence the name "limb" instead of "leg"). Our prototype ATHLETE vehicles have quick-disconnect tool adapters on the limbs that allow tools to be drawn out of a "tool belt" and maneuvered by the limb. A power-take-off from the wheel actuates the tools, so that they can take advantage of the 1+ horsepower motor in each wheel to enable drilling, gripping or other power-tool functions. This paper describes the applicability of the ATHLETE concept to exploration of the Moon, Mars and Near-Earth Asteroids (NEAs). Recently, the focus of human exploration beyond LEO has been on NEAs. One scenario for exploration of a NEA has been likened to a submarine exploring a wrecked ship — humans would sit in a "bubble" and approach the asteroid for up-close examination and robotic manipulation. What is important is to ensure that the bubble neither collides with the asteroid surface nor floats away. Multiple limbs, such as available on ATHLETE, allow for precise positioning and anchoring so as to enable the human bubble to maximize its exploration potential. A microgravity testbed has been constructed in the ATHLETE lab, with six computer-controlled winches able to lift ATHLETE and payloads so as to simulate the motion of the system in the vicinity of a NEA. Accurate 6-axis force-torque sensors will measure the applied forces and moments wherever the vehicle touches a simulated asteroid surface. These measured forces can be used to compute the resultant motion of the vehicle in the microgravity environment, and the winches then move the vehicle along the computed trajectory. Preliminary test results from this system are described.
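
A sketch of the microgravity-emulation loop described above: measured contact forces are integrated (translation only here) to obtain the motion the vehicle would have near a NEA, and the winches are then commanded along that trajectory. The mass, time step, force trace, and placeholder functions are assumptions; the real testbed also handles torques and coordinates six winches.

```python
# Translation-only microgravity emulation: integrate measured contact force
# with F = m*a (no gravity term) and track the resulting trajectory.
import numpy as np

mass_kg = 2500.0              # assumed vehicle + payload mass
dt = 0.01                     # control-loop period (s)
pos = np.zeros(3)             # simulated position in the lab frame (m)
vel = np.zeros(3)

def read_force_torque_sensor(t):
    """Placeholder for the 6-axis sensor: a brief contact push in -z."""
    return np.array([0.0, 0.0, -40.0]) if 0.5 < t < 0.7 else np.zeros(3)

def command_winches(target_pos):
    """Placeholder: would convert a Cartesian setpoint to six winch lengths."""
    pass

t = 0.0
while t < 2.0:
    f = read_force_torque_sensor(t)       # measured contact force (N)
    acc = f / mass_kg                     # microgravity: no weight added back
    vel += acc * dt
    pos += vel * dt
    command_winches(pos)
    t += dt

print("simulated free-floating displacement after contact:", pos)
```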

Proceedings ArticleDOI
03 Mar 2012
TL;DR: After the completion of a series of system level checks to ensure that the robot traveled well on-board the Space Shuttle Atlantis, ground control personnel will remotely control the robot to perform free space tasks that will help characterize the differences between Earth and zero-g control.
Abstract: Robonaut 2, or R2, arrived on the International Space Station in February 2011 and is currently undergoing testing in preparation for it to become, initially, an Intra-Vehicular Activity (IVA) tool and then evolve into a system that can perform Extra-Vehicular Activities (EVA). After the completion of a series of system level checks to ensure that the robot traveled well on-board the Space Shuttle Atlantis, ground control personnel will remotely control the robot to perform free space tasks that will help characterize the differences between Earth and zero-g control. For approximately one year, the fixed base R2 will perform a variety of experiments using a reconfigurable task board that was launched with the robot. While working side-by-side with human astronauts, Robonaut 2 will actuate switches, use standard tools, and manipulate Space Station interfaces, soft goods and cables. The results of these experiments will demonstrate the wide range of tasks a dexterous humanoid can perform in space and they will help refine the methodologies used to control dexterous robots both in space and here on Earth. After the trial period that will evaluate R2 while on a fixed stanchion in the US Laboratory module, NASA plans to launch climbing legs that, when attached to the current on-orbit R2 upper body, will give the robot the ability to traverse through the Space Station and start assisting crew with general IVA maintenance activities. Multiple control modes will be evaluated in this extraordinary ISS test environment to prepare the robot for use during EVAs. Ground Controllers will remotely supervise the robot as it executes semi-autonomous scripts for climbing through the Space Station and interacting with IVA interfaces. IVA crew will locally supervise the robot using the same scripts and also teleoperate the robot to simulate scenarios with the robot working alone or as an assistant during space walks.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: This paper will discuss the method used to develop formalisms (the ontology), the formalisms themselves, the mapping to SysML, and the approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.
Abstract: State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To effectively apply the State Analysis methodology in this context it became necessary to provide ontological definitions of the concepts and relations in State Analysis and greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper will discuss the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML and approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: The proposed multi-sensor health diagnosis methodology using DBN-based state classification can be structured in three consecutive stages: first, defining health states and preprocessing the sensory data for DBN training and testing; second, developing DBN-based classification models for the diagnosis of predefined health states; third, validating DBN classification models with the testing sensory dataset.
Abstract: Effective health diagnosis provides multifarious benefits such as improved safety, improved reliability and reduced costs for the operation and maintenance of complex engineered systems. This paper presents a novel multi-sensor health diagnosis method using Deep Belief Networks (DBN). The DBN has recently become a popular approach in machine learning for its promised advantages such as fast inference and the ability to encode richer and higher order network structures. The DBN employs a hierarchical structure with multiple stacked Restricted Boltzmann Machines and works through a layer by layer successive learning process. The proposed multi-sensor health diagnosis methodology using DBN-based state classification can be structured in three consecutive stages: first, defining health states and preprocessing the sensory data for DBN training and testing; second, developing DBN-based classification models for the diagnosis of predefined health states; third, validating DBN classification models with the testing sensory dataset. The performance of health diagnosis using DBN-based health state classification is compared with the support vector machine technique and demonstrated with aircraft wing structure health diagnostics and aircraft engine health diagnosis using the 2008 PHM challenge data.
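
A hedged sketch of the DBN-versus-SVM comparison described above, using scikit-learn: two stacked Bernoulli RBMs (greedy, layer-by-layer unsupervised pretraining) feed a logistic-regression output layer and are compared against an SVM on the same data. Synthetic data stands in for the multi-sensor health measurements, and unlike a full DBN there is no supervised fine-tuning of the whole stack.

```python
# Stacked-RBM classifier vs. SVM on synthetic "health state" data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=24, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),                       # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=30, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=30, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
svm = Pipeline([("scale", MinMaxScaler()), ("svc", SVC(kernel="rbf"))])

for name, model in (("stacked-RBM classifier", dbn_like), ("SVM", svm)):
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```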

Proceedings ArticleDOI
03 Mar 2012
TL;DR: This paper demonstrates how agile systems engineering techniques can be adapted to a high technology development program and shows how project momentum was critical to separate the constant non-recurring technology challenges to be worked rapidly from the engineering risk liens requiring longer time frames to retire.
Abstract: Agile system engineering practices have matured for software projects while hardware system engineering continues to embrace classical development techniques. High technology projects require innovative solutions to meet the restrictions of cost and schedule and still deliver high performance critical systems. This paper addresses the application of the flexible style of agile systems engineering for dynamic, complex hardware and software projects. These projects can benefit from applying the principles of agile systems engineering as has been demonstrated in the software realm. Fundamental to the rapid development is understanding the role of innovation and momentum in agile project management and systems engineering. For post-industrial-age projects that require unproven concepts, large degrees of uncertainty and ambiguity, and extensive non-recurring engineering, agile systems engineering allows for project development with continuous change while addressing risk. Agile systems engineering exploits the role of momentum to allow innovation in the development process while allowing risk interactions to be managed in a disciplined manner. Examples are given of how these concepts were used in the design and development of two small satellites at The Johns Hopkins University Applied Physics Laboratory (JHU/APL) in the Multi-Mission Bus Demonstrator (MMBD) project. This challenging satellite build did not use existing key technology (heritage hardware) and created a large paradigm shift from traditional satellite development. Rapid design and development, a “momentum play”, was used to continuously allow change and assessment in a hardware adaptation of the SCRUM technique seen in Extreme Programming. The MMBD project demonstrates the adaptation of these agile concepts. By freezing the design late in the design cycle, the MMBD project was able to insert innovations throughout the program cycle. The ability to be innovative related to the speed with which the development progressed, including working quickly through all technology choices. This paper discusses agile systems engineering as applied to both software and hardware. Aside from papers on embedded systems using agile systems engineering, there are too few projects demonstrating these adaptations of techniques to complex, innovative hardware projects. The Multi-Mission Bus Demonstrator is an excellent benchmark example of program management of rapid technology maturity in a high technology application. This paper demonstrates how agile systems engineering techniques can be adapted to a high technology development program and shows how project momentum was critical to separate the constant non-recurring technology challenges to be worked rapidly from the engineering risk liens requiring longer time frames to retire.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: The conclusion is that spatially-extended in-situ information about the chemical and physical heterogeneity of small bodies has the potential to lead to a much improved understanding about their origin, evolution, and astrobiological relevance.
Abstract: The recent decadal survey report for planetary science (compiled by the National Research Council) has prioritized three main areas for planetary exploration: (1) the characterization of the early Solar system history, (2) the search for planetary habitats, and (3) an improved understanding about the nature of planetary processes. A growing number of ground and space observations suggest that small bodies are ideally suited for addressing all three of these priorities. In parallel, several technological advances have been recently made for microgravity rovers, penetrators, and MEMS-based instruments. Motivated by these findings and new technologies, the objective of this paper is to study the expected science return of spatially-extended in-situ exploration at small bodies, as a function of surface covered and in the context of the key science priorities identified by the decadal survey report. Specifically, targets within the scope of our analysis belong to three main classes: main belt asteroids and irregular satellites, Near Earth Objects, and comets. For each class of targets, we identify the corresponding science objectives for potential future exploration, we discuss the types of measurements and instruments that would be required, and we discuss mission architectures (with an emphasis on spatially-extended in-situ exploration) to achieve such objectives. Then, we characterize (notionally) how the science return for two reference targets would scale with the amount (and type) of surface that is expected to be covered by a robotic mobile platform. The conclusion is that spatially-extended in-situ information about the chemical and physical heterogeneity of small bodies has the potential to lead to a much improved understanding about their origin, evolution, and astrobiological relevance.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: While the simpler, less expensive ZigBee Pro protocol performs well under moderate levels of interference, the more complex and costly ISA100.11a protocol is needed to ensure reliable data delivery under heavier interference.
Abstract: Standards-based wireless sensor network (WSN) protocols are promising candidates for spacecraft avionic systems, offering unprecedented instrumentation flexibility and expandability. However, when migrating from wired to wireless data gathering systems, ensuring reliable data transport is a key consideration. In this paper, we conduct a rigorous laboratory analysis of the relative performance of the ZigBee Pro and ISA100.11a protocols in a representative crewed aerospace environment. Since both operate in the 2.4 GHz radio frequency (RF) band shared by systems such as Wi-Fi, they are subject at times to potentially debilitating RF interference. We compare message delivery rates achievable by both under varying levels of 802.11g Wi-Fi traffic. We conclude that while the simpler, less expensive ZigBee Pro protocol performs well under moderate levels of interference, the more complex and costly ISA100.11a protocol is needed to ensure reliable data delivery under heavier interference. This paper represents the first published, rigorous analysis of WSN protocols in an aerospace analog environment of which we are aware and the first published head-to-head comparison of ZigBee Pro and ISA100.11a.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: This paper outlines a low-cost, low-power, arc-minute accurate star tracker that is designed for use on a CubeSat and concludes that this system is a viable option for CubeSats looking to improve their attitude determination.
Abstract: This paper outlines a low-cost, low-power, arc-minute accurate star tracker that is designed for use on a CubeSat. The device is being developed at the University of Texas at Austin for use on two different 3-unit CubeSat missions. The hardware consists of commercial off-the-shelf parts designed for use in industrial machine vision systems and employs a 1024×768 grey-scale charge coupled device (CCD) sensor. The software includes the three standard steps in star tracking: centroiding, star identification, and attitude determination. Centroiding algorithms were developed in-house. The star identification code was adapted from the Pyramid Star Identification technique developed by Mortari. Attitude determination was performed using Markley's singular value decomposition method. The star tracker was then tested with both internal and external simulated star-fields and night-sky tests. The resulting accuracy was on the order of arc-minutes. It was concluded that this system is a viable option for CubeSats looking to improve their attitude determination. Further proof of the system will be obtained when the star tracker flies on the planned CubeSat missions in 2013 or later.
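
The attitude-determination step named above, Markley's SVD solution to Wahba's problem, is compact enough to show directly; it finds the rotation that best maps reference-frame star vectors into the measured body-frame vectors. The star vectors and weights below are synthetic test data.

```python
# SVD solution to Wahba's problem: best-fit rotation from vector observations.
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights):
    """Return rotation matrix A such that body ~ A @ ref."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)          # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Synthetic check: rotate known reference vectors by a known rotation.
rng = np.random.default_rng(0)
true_A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_A *= np.sign(np.linalg.det(true_A))              # det = +1
refs = [v / np.linalg.norm(v) for v in rng.normal(size=(5, 3))]
bodies = [true_A @ r for r in refs]

A_est = svd_attitude(bodies, refs, weights=np.ones(5))
cos_err = np.clip((np.trace(A_est @ true_A.T) - 1.0) / 2.0, -1.0, 1.0)
print("attitude error (deg):", np.degrees(np.arccos(cos_err)))
```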

Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this paper, a new ellipsoid measure is introduced to analyze and select the optimal shape and motion of tunable continuum hooks for given terrains and climbing scenarios, and the authors illustrate and support the analysis using results from laboratory experiments using a robot rover with continuum appendages developed by their research group.
Abstract: This paper presents an analysis of the ability of continuum “hook” appendages to transform robot climbing. The key innovation is via exploitation of contact and impact dynamics when “grasping” the terrain. We introduce a new ellipsoid measure and use it to analyze and select the optimal shape and motion of tunable continuum hooks for given terrains and climbing scenarios. This new ellipsoid is a generalization of impact ellipsoids used previously for traditional rigid-link robots. We illustrate and support the analysis using results from laboratory experiments using a novel robot rover with continuum appendages developed by our research group.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: Two detailed energy reserve budgeting case studies for FPGA-based CubeSats with respect to stored energy reserves for image compression and processing using a Canny edge detector are presented.
Abstract: CubeSats are a simple, low-cost option for developing quickly-deployable satellites; however, the tradeoff for these benefits is a small physical size, which restricts the size of the CubeSat's solar panels and thus the available power budget and stored energy reserves. These power/energy limitations restrict the CubeSat's functionality and data processing capabilities, which makes leveraging CubeSats for compute-intensive missions challenging. Additionally, increasing sensor capabilities due to technological advances further compounds this functionality limitation, enabling sensors to gather significantly more data than a satellite's limited downlink bandwidth can accommodate. The influx in sensed data, which is particularly high for image-processing applications, introduces a pressing need for high-performance on-board data processing, which preprocesses and/or compresses the data before transmission. FPGAs have been incorporated into state-of-the-art satellites to provide high-performance on-board data processing, while simultaneously reducing the satellites' data processing energy consumption. However, even though FPGAs can provide these capabilities in full-scale satellites, a CubeSat's limited power budget makes integration of FPGAs into CubeSats a challenging task. For example, the commonly used Virtex4QV Radiation Tolerant FPGA family's average power consumption ranges from 1.25 to 12.5 Watts, whereas the CubeSat's power budget ranges from 2 to 8 Watts, with the smallest, cheapest CubeSat systems at the lower end of this range. Therefore, in order to successfully integrate FPGAs into CubeSats, the components' power consumptions must be clearly budgeted with respect to the CubeSat's specific functionalities and orbital pattern, which dictates the available power and stored energy reserves. In this paper, we present two detailed energy reserve budgeting case studies for FPGA-based CubeSats with respect to stored energy reserves for image compression and processing using a Canny edge detector. CubeSat designers can leverage this energy reserve budget with the application-specific components' power consumptions for applications such as hyper-spectral imaging (HSI), ground motion target indication (GMTI), and star tracking to quickly determine maximum payload operational time with respect to specific orbital patterns and mission requirements.
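
A minimal version of the energy-reserve budgeting exercise described above: given an orbit's sunlit fraction, average solar input, always-on bus load, an allowed battery contribution, and the FPGA payload draw, estimate how long the payload can run per orbit. All numbers are illustrative assumptions, not the paper's case-study values, and battery recharge constraints are ignored.

```python
# Per-orbit energy budget for an assumed FPGA payload on a CubeSat.
orbit_period_min = 95.0
sunlit_fraction = 0.63
solar_in_W = 6.0            # average generation while in sunlight (assumed)
bus_load_W = 2.0            # always-on avionics/comm load (assumed)
payload_W = 4.0             # FPGA image-processing payload when active (assumed)
battery_reserve_Wh = 2.0    # energy the battery may contribute per orbit (assumed)

energy_in_Wh = solar_in_W * sunlit_fraction * orbit_period_min / 60.0
bus_energy_Wh = bus_load_W * orbit_period_min / 60.0
available_Wh = energy_in_Wh - bus_energy_Wh + battery_reserve_Wh

payload_minutes = max(available_Wh, 0.0) / payload_W * 60.0
print(f"available payload energy: {available_Wh:.2f} Wh -> "
      f"~{min(payload_minutes, orbit_period_min):.0f} min of payload time per orbit")
```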

Proceedings ArticleDOI
03 Mar 2012
TL;DR: A fractionated spacecraft is a cluster of independent modules that interact wirelessly to maintain cluster flight and realize the functions usually performed by a monolithic satellite, based on a layered architecture consisting of a novel operating system, a middleware layer, and component-structured applications.
Abstract: A fractionated spacecraft is a cluster of independent modules that interact wirelessly to maintain cluster flight and realize the functions usually performed by a monolithic satellite. This spacecraft architecture poses novel software challenges because the hardware platform is inherently distributed, with highly fluctuating connectivity among the modules. It is critical for mission success to support autonomous fault management and to satisfy real-time performance requirements. It is also both critical and challenging to support multiple organizations and users whose diverse software applications have changing demands for computational and communication resources, while operating on different levels and in separate domains of security. The solution proposed in this paper is based on a layered architecture consisting of a novel operating system, a middleware layer, and component-structured applications. The operating system provides primitives for concurrency, synchronization, and secure information flows; it also enforces application separation and resource management policies. The middleware provides higher-level services supporting request/response and publish/subscribe interactions for distributed software. The component model facilitates the creation of software applications from modular and reusable components that are deployed in the distributed system and interact only through well-defined mechanisms. Two cross-cutting aspects — multi-level security and multi-layered fault management — are addressed at all levels of the architecture. The complexity of creating applications and performing system integration is mitigated through the use of a domain-specific model-driven development process that relies on a dedicated modeling language and its accompanying graphical modeling tools, software generators for synthesizing infrastructure code, and the extensive use of model-based analysis for verification and validation.
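
A minimal illustration of the publish/subscribe interaction style the middleware layer provides, as described above; the topic name, payload, and in-process broker are made-up stand-ins, and the real system adds distribution across modules, security labels, and fault management.

```python
# In-process topic broker: components interact only through published messages.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
broker.subscribe("cluster/relative_position",
                 lambda msg: print("flight-control component got:", msg))
broker.publish("cluster/relative_position", {"module": "B", "range_m": 12.4})
```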

Proceedings ArticleDOI
03 Mar 2012
TL;DR: The design and implementation of the Fast Lossless (FL) algorithm on the GPU will provide a fast and practical real-time solution for future airborne and space applications.
Abstract: On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA®. The GPU implementation on a NVIDIA® GeForce® GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and a speedup of at least six times over a software implementation running on a 3.47 GHz single-core Intel® Xeon™ processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide a fast and practical real-time solution for future airborne and space applications.
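
A much-simplified illustration of predictive lossless compression for hyperspectral cubes: predict each band from the previous band and measure how much smaller the residuals are to encode. The real FL/CCSDS predictor is adaptive and three-dimensional; this sketch, on synthetic correlated bands, only shows why prediction helps.

```python
# Inter-band prediction shrinks the symbol entropy of a correlated cube.
import numpy as np

rng = np.random.default_rng(0)
bands, rows, cols = 8, 32, 32
base = rng.integers(200, 800, size=(rows, cols))
# Synthetic cube with strongly correlated bands, as real hyperspectral data is.
cube = np.stack([base + 5 * b + rng.integers(-3, 4, size=(rows, cols))
                 for b in range(bands)]).astype(np.int32)

residuals = cube.copy()
residuals[1:] = cube[1:] - cube[:-1]      # predict each band from the prior band

def entropy_bits(x):
    """Empirical zeroth-order entropy in bits/sample (proxy for coded size)."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"raw samples:       {entropy_bits(cube):.2f} bits/sample")
print(f"prediction errors: {entropy_bits(residuals):.2f} bits/sample")
```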

Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this article, the authors describe the design and performance of the Reflector/Boom Assembly (RBA) under multiple constraints and requirements that are inherent to a spinning large flexible reflector/structure.
Abstract: The Soil Moisture Active Passive (SMAP) instrument includes a conically scanning 6-m diameter deployable Astromesh reflector and feedhorn that rotates relative to a de-spun spacecraft at 14.6 RPM. This is the first application of a spinning Astromesh reflector. This paper describes the design and performance of the Reflector/Boom Assembly (RBA) under multiple constraints and requirements that are inherent to a spinning large flexible reflector/structure. The deployed RBA has stringent mass property control and knowledge requirements, structural natural frequency separation requirements, and all other typical requirements, including antenna performance. Finally, the validation of the design on the ground by analysis and test, and its difficulties due to gravity, are discussed.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: The applicability of crowdsourcing and collaborative competition in the design of the Zero Robotics software infrastructure, metrics of success and achievement of objectives are discussed.
Abstract: Crowdsourcing is the art of constructively organizing crowds of people to work toward a common objective. Collaborative competition is a specific kind of crowdsourcing that can be used for problems that require a collaborative or cooperative effort to be successful, but also use competition as a motivator for participation or performance. The DARPA InSPIRE program is using crowdsourcing to develop spaceflight software for small satellites under a sub-program called SPHERES Zero Robotics — a space robotics programming competition. The robots are miniature satellites, called SPHERES, that operate inside the International Space Station (ISS). The idea is to allow thousands of amateur participants to program using the SPHERES simulator and eventually test their algorithms in microgravity. The entire software framework for the program, to provide the ability for thousands to collaboratively use the SPHERES simulator and create algorithms, is also built by crowdsourcing. This paper describes the process of building the software framework for crowdsourcing SPHERES development in collaboration with a commercial crowdsourcing company called TopCoder. It discusses the applicability of crowdsourcing and collaborative competition in the design of the Zero Robotics software infrastructure, metrics of success and achievement of objectives.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: The Quadshot as discussed by the authors is a novel aerial robotic platform with vertical take-off and landing (VTOL) capability, achieving highly dynamic maneuverability via a combination of differential thrust and aerodynamic surfaces.
Abstract: This paper presents the Quadshot, a novel aerial robotic platform with Vertical Take-Off and Landing (VTOL) capability. Highly dynamic maneuverability is achieved via a combination of differential thrust and aerodynamic surfaces (elevons). The relaxed-stability, flying-wing, tail-sitter, radio-controlled (RC) airframe is actively stabilized by onboard controllers in three complementary modes of operation: hover, horizontal flight, and aerobatic flight. In hover mode the vehicle flies laterally, similar to a quadrotor helicopter; it can maintain accurate position for aiming a payload and can land with pinpoint accuracy when equipped with a GPS unit. In horizontal and aerobatic modes it flies like an airplane to cover larger distances more rapidly and efficiently. Dynamic modeling and control algorithms have been discussed before for quadrotors [1]–[4] and classical aircraft configurations, as have other VTOL concepts such as tilt-rotors (e.g. the V-22 Osprey) and tail-sitters (e.g. the Sydney Univ. T-wing and the Convair XFY-1 Pogo) [5]–[6]. The important contributions of this paper are the combined use of differential thrust in multiple axes and aerodynamic surfaces for flight control, the assisted transition between hover and forward flight control modes with pitch rotation of the entire airframe and the elimination of failure-prone mechanisms for thruster tilting. The development and use of highly extensible Open Source Software and Hardware from the Paparazzi project in a transitioning vehicle is also novel. The vehicle is made highly affordable for both researchers and hobbyists by the use of the Paparazzi Open Source Software [16] and its Lisa embedded avionics suite. Careful attention to the mechanical design promotes large scale manufacturing and easy assembly, further bringing down the cost. The materials selected create a highly durable airframe, which is still inexpensive. Modular airframe design enables quick modification of actuators and electronics, allowing a greater variety of missions. The electronics are also designed to be extensible, supporting the addition of extra sensors and actuators. Custom designed airfoils provide good payload capacity while maintaining 3D aerobatic flight capability; the wing design ensures adequate stability for manual glide control in non-normal situations. This paper covers the software, mechanical and electronic hardware design, control algorithms and aerodynamics associated with this airframe. Experimental flight control results and the design lessons learned are discussed.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this article, a Mixed Ornstein-Uhlenbeck (MOU) process is proposed as a target motion model, which exhibits drift terms in both position and velocity, and thus has a well-behaved limit for both.
Abstract: This paper analyzes a Mixed Ornstein-Uhlenbeck (MOU) process that has a number of appealing properties as a target motion model. Relevant earlier models that have been proposed include the standard nearly constant velocity motion model (unbounded long-term position and velocity), the Ornstein-Uhlenbeck process (bounded position, but no defined velocity), and the Integrated Ornstein-Uhlenbeck process (bounded velocity, but unbounded position). The Mixed Ornstein-Uhlenbeck (MOU) process exhibits drift terms in both position and velocity, and thus has a well-behaved limit for both. The initial target state can be defined in a natural way based on the steady-state characteristics of the MOU process, leading to a stationary stochastic process. Similarly, multi-target stationarity is achieved by choosing the initial target birth distribution according to the steady-state distribution on the number of targets. The MOU process can be used both in simulations and, correspondingly, in Kalman-based recursive filtering as part of multi-target tracking solutions.
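
For reference, one common scalar way to write the motion models compared above is given below; the mixed form with drift in both position and velocity is an assumed parameterization and may differ from the paper's.

```latex
% Assumed scalar forms of the compared motion models.
\begin{align*}
\text{Ornstein-Uhlenbeck (position only):}\quad
  & dx_t = -\alpha\, x_t\, dt + \sigma\, dW_t \\
\text{Integrated Ornstein-Uhlenbeck:}\quad
  & dx_t = v_t\, dt, \qquad dv_t = -\beta\, v_t\, dt + \sigma\, dW_t \\
\text{Mixed form (drift in both position and velocity):}\quad
  & dx_t = v_t\, dt, \qquad dv_t = -(\alpha\, x_t + \beta\, v_t)\, dt + \sigma\, dW_t
\end{align*}
```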

Proceedings ArticleDOI
03 Mar 2012
TL;DR: This paper describes improvements in the energy efficiency and speed of planetary rover autonomous traverse accomplished by converting processes typically performed by the CPU onto a Field Programmable Gate Array (FPGA) coprocessor.
Abstract: Safe navigation under resource constraints is a key concern for autonomous planetary rovers operating on extraterrestrial bodies. Computational power in such applications is typically constrained by the radiation hardness and energy consumption requirements. For example, even though the microprocessors used for the Mars Science Laboratory (MSL) mission rover are an order of magnitude more powerful than those used for the rovers on the Mars Exploration Rovers (MER) mission, the computational power is still significantly less than that of contemporary desktop microprocessors. It is therefore important to move safely and efficiently through the environment while consuming a minimum amount of computational resources, energy and time. Perception, pose estimation, and motion planning are generally three of the most computationally expensive processes in modern autonomy navigation architectures. An example of this is on the MER where each rover must stop, acquire and process imagery to evaluate its surroundings, estimate the relative change in pose, and generate the next mobility system maneuver [1]. This paper describes improvements in the energy efficiency and speed of planetary rover autonomous traverse accomplished by converting processes typically performed by the CPU onto a Field Programmable Gate Array (FPGA) coprocessor. Perception algorithms in general are well suited to FPGA implementations because much of the processing is naturally parallelizable. In this paper we present novel implementations of stereo and visual odometry algorithms on an FPGA. The FPGA stereo implementation is an extension of [2] that uses "random in linear out" rectification and a higher-performance interface between the rectification, filter, and disparity stages of the stereo pipeline. The improved visual odometry component utilizes an FPGA implementation of a Harris feature detector and sum of absolute differences (SAD) operator. The FPGA implementation of the stereo and visual odometry functionality has demonstrated a performance improvement of approximately three orders of magnitude compared to the MER-class avionics. These more efficient perception and pose estimation modules have been merged with motion planning techniques that allow for continuous steering and driving to navigate cluttered obstacle fields without stopping to perceive. The resulting faster visual odometry rates also allow for wheel slip to be detected earlier and more reliably. Predictions of resulting improvements in planetary rover energy efficiency and average traverse speeds are reported. In addition, field results are presented that compare the performance of autonomous navigation on the Athena planetary rover prototype using continuous steering or driving and continuous steering and driving with GESTALT traversability analysis using the FPGA perception and pose estimation improvements.
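
A plain CPU reference for the core stereo operation the FPGA accelerates above: block matching with a sum-of-absolute-differences (SAD) cost. The naive nested loops are exactly the kind of regular, data-parallel work that maps well onto an FPGA; the synthetic image pair is an assumption for testing.

```python
# Winner-take-all SAD block-matching stereo on a rectified image pair.
import numpy as np

def sad_disparity(left, right, window=5, max_disp=16):
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch_l = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            costs = [np.abs(patch_l - right[y - r:y + r + 1,
                                            x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # disparity with the lowest cost
    return disp

# Tiny synthetic test: the right image is the left image shifted by 4 pixels.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(40, 60)).astype(np.int32)
right = np.roll(left, -4, axis=1)
print("median recovered disparity:",
      int(np.median(sad_disparity(left, right)[10:30, 25:50])))
```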

Proceedings ArticleDOI
03 Mar 2012
TL;DR: In this paper, the results of soil shear interface analysis for wheels with grousers are presented, and the processes of thrust and resistances are investigated and behavior characterized for grousered wheels.
Abstract: The performance of wheels operating in loose granular material for the application of planetary vehicles is well researched but little effort has been made to study the soil shearing which governs traction. Net traction measurements and application of energy metrics have been solely relied upon to investigate performance but lack the ability to evaluate or describe soil-wheel interaction leading to thrust and resistances. The complexity of rim and grouser interaction with the ground has also prevented adequate models from being formulated. This work relies on empirical data gathered in an attempt to study the effects of rim surface on soil shearing and ultimately how this governs traction. A novel experimentation and analysis technique was developed to enable investigation of terramechanics fundamentals in great detail. This technique, the Shear Interface Imaging Analysis Tool, is utilized to provide visualization and analysis capability of soil motion at and below the wheel-soil interface. Analysis of the resulting displacement field identifies clusters of soil motion and shear interfaces. Complexities in soil flow patterns greatly affect soil structure below the wheel and the resulting tractive capability. Grouser parameter variations, spacing and height, are studied for a rigid wheel. The results of soil shear interface analysis for wheels with grousers are presented. The processes of thrust and resistances are investigated and behavior characterized for grousered wheels.

Proceedings ArticleDOI
03 Mar 2012
TL;DR: A Doppler lidar sensor, a versatile instrument capable of providing precision velocity vectors, vehicle ground-relative altitude, and attitude, is being developed by NASA under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.
Abstract: Landing mission concepts that are being developed for exploration of planetary bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe soft landing at the pre-designated sites. To address this need, a Doppler lidar is being developed by NASA under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. This lidar sensor is a versatile instrument capable of providing precision velocity vectors, vehicle ground relative altitude, and attitude. The capabilities of this advanced technology have been demonstrated through two helicopter flight test campaigns conducted over a vegetation-free terrain in 2008 and 2010. Presently, a prototype version of this sensor is being assembled for integration into a rocket-powered terrestrial free-flyer vehicle. Operating in a closed loop with the vehicle's guidance and navigation system, the viability of this advanced sensor for future landing missions will be demonstrated through a series of flight tests in 2012.
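
The line-of-sight velocity measurement described above follows from the standard monostatic Doppler relation, with a factor of two for the round trip; combining several non-coplanar beams then yields the full velocity vector. The relation is shown below for reference; the specific beam geometry is not taken from the paper.

```latex
% Doppler shift of a monostatic coherent lidar and the recovered
% line-of-sight velocity.
f_D = \frac{2\, v_{\mathrm{los}}}{\lambda}
\quad\Longrightarrow\quad
v_{\mathrm{los}} = \frac{\lambda\, f_D}{2}
```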