Conference

IEEE Aerospace Conference 

About: The IEEE Aerospace Conference is an academic conference. It publishes mainly in the areas of Spacecraft and the Mars Exploration Program. Over its lifetime, the conference has published 10,625 papers, which have received 93,982 citations.


Papers
Proceedings ArticleDOI
09 Mar 2002
TL;DR: PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH, is proposed, where each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round.
Abstract: Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time. In W. Heinzelman et al. (Proc. Hawaii Conf. on System Sci., 2000), a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. If each node transmits its sensed data directly to the base station then it will deplete its power quickly. The LEACH protocol presented by W. Heinzelman et al. is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. In this paper, we propose PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 300% when 1%, 20%, 50%, and 100% of nodes die for different network sizes and topologies.

3,731 citations
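
As a rough illustration of the chain-based gathering scheme described in the PEGASIS abstract above, here is a minimal sketch that builds a greedy nearest-neighbor chain and rotates the node that transmits to the base station each round. The node coordinates, the squared-distance energy proxy, and all function names are assumptions made for the sketch, not details taken from the paper.

import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_chain(nodes, base_station):
    """Greedy chain: start from the node farthest from the base station,
    then repeatedly append the closest not-yet-chained neighbor."""
    remaining = list(nodes)
    start = max(remaining, key=lambda n: dist(n, base_station))
    chain = [start]
    remaining.remove(start)
    while remaining:
        nxt = min(remaining, key=lambda n: dist(chain[-1], n))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

def run_round(chain, leader_index, base_station):
    """One gathering round: data is fused along the chain toward the leader,
    and only the leader transmits to the distant base station."""
    hop_cost = sum(dist(chain[i], chain[i + 1]) ** 2        # short hops only
                   for i in range(len(chain) - 1))
    bs_cost = dist(chain[leader_index], base_station) ** 2  # single long hop
    return hop_cost + bs_cost

if __name__ == "__main__":
    random.seed(0)
    nodes = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(20)]
    base_station = (25, 150)
    chain = build_chain(nodes, base_station)
    # Leadership rotates each round so no single node drains its battery.
    for rnd in range(3):
        leader = rnd % len(chain)
        print(f"round {rnd}: relative energy cost {run_round(chain, leader, base_station):.1f}")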

Proceedings ArticleDOI
06 Mar 2010
TL;DR: The 2007 Mars Design Reference Architecture 5.0 as discussed by the authors provides a common framework for future planning of systems concepts, technology development, and operational testing as well as potential Mars robotic missions, research conducted on the International Space Station, and future potential lunar exploration missions.
Abstract: This paper provides a summary of the 2007 Mars Design Reference Architecture 5.0 (DRA 5.0) [1], which is the latest in a series of NASA Mars reference missions. It provides a vision of one potential approach to human Mars exploration, including how Constellation systems could be used. The strategy and example implementation concepts that are described here should not be viewed as constituting a formal plan for the human exploration of Mars, but rather provide a common framework for future planning of systems concepts, technology development, and operational testing as well as potential Mars robotic missions, research that is conducted on the International Space Station, and future potential lunar exploration missions. This summary of the Mars DRA 5.0 provides an overview of the overall mission approach, surface strategy and exploration goals, as well as the key systems and challenges for the first three concepts for human missions to Mars.

571 citations

Proceedings ArticleDOI
09 Mar 2002
TL;DR: The paper summarizes a radiation effects analysis suggesting that commercial-grade processors are likely to be adequate for Mars surface missions, and discusses the level of speedup that may accrue from using these instead of radiation-hardened parts.
Abstract: NASA's Mars Exploration Rover (MER) missions will land twin rovers on the surface of Mars in 2004. These rovers will have the ability to navigate safely through unknown and potentially hazardous terrain, using autonomous passive stereo vision to detect potential terrain hazards before driving into them. Unfortunately, the computational power of currently available radiation hardened processors limits the amount of distance (and therefore science) that can be safely achieved by any rover in a given time frame. We present overviews of our current rover vision and navigation systems, to provide context for the types of computation that are required to navigate safely. We also present baseline timing results that represent a lower bound in achievable performance (useful for systems engineering studies of future missions), and describe ways to improve that performance using commercial grade (as opposed to radiation hardened) processors. In particular, we document speedups to our stereo vision system that were achieved using the vectorized operations provided by Pentium MMX technology. Timing data were derived from implementations on several platforms: a prototype Mars rover with flight-like electronics (the Athena Software Development Model (SDM) rover), a RAD6000 computing platform (as will be used in the 2003 MER missions), and research platforms with commercial Pentium III and Sparc processors. Finally, we summarize the radiation effects analysis that suggests that commercial grade processors are likely to be adequate for Mars surface missions, and discuss the level of speedup that may accrue from using these instead of radiation hardened parts.

428 citations
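
The MMX speedups mentioned above come from vectorizing inner loops such as the sum-of-absolute-differences (SAD) correlation used in area-based stereo matching. Below is a minimal NumPy-vectorized stand-in for that kind of loop; the window size, disparity search range, and synthetic images are assumptions made for the sketch and are unrelated to the MER flight software.

import numpy as np

def box_sum(img, win):
    """Windowed sum over a win x win neighborhood, via an integral image."""
    pad = win // 2
    padded = np.pad(img, pad + 1, mode="edge")
    ii = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    return (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
            - ii[win:win + h, :w] + ii[:h, :w])

def sad_disparity(left, right, max_disp=16, win=5):
    """Per-pixel disparity chosen by minimizing the windowed SAD cost."""
    h, w = left.shape
    best_cost = np.full((h, w), np.inf)
    best_disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # candidate disparity d
        cost = box_sum(np.abs(left - shifted), win)  # vectorized SAD over windows
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((64, 64))
    right = np.roll(left, -4, axis=1)                # synthetic 4-pixel shift
    disp = sad_disparity(left, right)
    print("median recovered disparity:", int(np.median(disp)))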

Journal ArticleDOI
01 Feb 1997
TL;DR: In this paper, a genetic algorithm adjusts some of the least significant bits of the beam steering phase shifters to minimize the total output power, which results in minor deviations in the steering direction and small perturbations in the sidelobe level in addition to constraining the search space of the genetic algorithm.
Abstract: This paper describes a new approach to adaptive phase-only nulling with phased arrays. A genetic algorithm adjusts some of the least significant bits of the beam steering phase shifters to minimize the total output power. Using small adaptive phase values results in minor deviations in the beam steering direction and small perturbations in the sidelobe level, in addition to constraining the search space of the genetic algorithm. Various results are presented to show the advantages and limitations of this approach. In general, the genetic algorithm proves to be better than previous phase-only adaptive algorithms.

361 citations
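
To make the phase-only nulling approach concrete, below is a small sketch of a genetic algorithm that searches over the least significant bits of each element's phase shifter, with fitness taken as the power received from an assumed interference direction. The array size, phase-shifter resolution, interference geometry, and GA settings are all illustrative assumptions, not the authors' configuration.

import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: 32-element half-wavelength-spaced array, 6-bit phase
# shifters, and the GA only toggles the 2 least significant bits.
N_ELEM, N_BITS, LSB_BITS = 32, 6, 2
STEER_DEG, JAMMER_DEG = 0.0, 22.0
PHASE_STEP = 2 * np.pi / (2 ** N_BITS)

def array_power(lsb_codes, angle_deg):
    """Array output power toward angle_deg given per-element LSB phase codes."""
    n = np.arange(N_ELEM)
    u = np.pi * np.sin(np.radians(angle_deg))
    steer = np.pi * np.sin(np.radians(STEER_DEG))
    phases = -steer * n + PHASE_STEP * lsb_codes      # steering plus small LSB perturbation
    field = np.sum(np.exp(1j * (u * n + phases)))
    return np.abs(field) ** 2

def fitness(lsb_codes):
    # Power from the interference direction serves as the quantity to minimize
    # in this toy model.
    return array_power(lsb_codes, JAMMER_DEG)

def genetic_search(pop_size=40, generations=60, mutate_p=0.05):
    pop = rng.integers(0, 2 ** LSB_BITS, size=(pop_size, N_ELEM))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # keep the better half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_ELEM)
            child = np.concatenate([a[:cut], b[cut:]])        # single-point crossover
            flips = rng.random(N_ELEM) < mutate_p
            child[flips] = rng.integers(0, 2 ** LSB_BITS, size=flips.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmin([fitness(ind) for ind in pop])]

if __name__ == "__main__":
    quiescent = np.zeros(N_ELEM, dtype=int)
    adapted = genetic_search()
    print("jammer power, quiescent :", array_power(quiescent, JAMMER_DEG))
    print("jammer power, adapted   :", array_power(adapted, JAMMER_DEG))
    print("mainbeam power, adapted :", array_power(adapted, STEER_DEG))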

Proceedings ArticleDOI
Fred Daum, J. Huang
08 Mar 2003
TL;DR: A simple back-of-the-envelope formula is derived that explains why a carefully designed PF should mitigate the curse of dimensionality for certain filtering problems, but the PF does not avoid the curse of dimensionality in general.
Abstract: Particle filtering (PF) is a new class of algorithms to solve the nonlinear filtering problem. These PFs are very general and easy to code. The main issue with PF is the large computational complexity. In particular, for typical low dimensional tracking problems, the PF requires 2 to 6 orders of magnitude more computer throughput than the extended Kalman filter, to achieve the same accuracy. It has been asserted that the PF avoids the curse of dimensionality, but there is no formula or theorem that bounds or approximates the computational complexity of the PF as a function of dimension (d). In this paper, we derive a simple back-of-the-envelope formula that explains why a carefully designed PF should mitigate the curse of dimensionality for certain filtering problems, but the PF does not avoid the curse of dimensionality in general. We also show experimental results which confirm our simple formula. We consider this a triumph of theory. This new theory hinges on the fact that the volume of the d dimensional unit sphere is an amazingly small fraction of the volume of the d dimensional unit cube, for large d.

331 citations
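
The geometric fact the abstract leans on is that the d-dimensional unit ball occupies a vanishingly small fraction of its bounding cube as d grows; this is easy to check numerically. The snippet below evaluates the closed-form ratio pi^(d/2) / (2^d * Gamma(d/2 + 1)) and cross-checks it with Monte Carlo sampling. The sample count is arbitrary, and the sketch illustrates only the geometric fact, not the paper's complexity formula.

import math
import random

def ball_to_cube_ratio(d):
    """Volume of the unit ball divided by the volume of its bounding cube [-1, 1]^d."""
    return math.pi ** (d / 2) / (2 ** d * math.gamma(d / 2 + 1))

def monte_carlo_ratio(d, samples=100_000, seed=0):
    """Fraction of uniform samples from the cube that land inside the ball."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        if sum(rng.uniform(-1, 1) ** 2 for _ in range(d)) <= 1.0:
            hits += 1
    return hits / samples

if __name__ == "__main__":
    for d in (2, 5, 10, 20):
        print(f"d={d:2d}  closed form {ball_to_cube_ratio(d):.3e}"
              f"  monte carlo {monte_carlo_ratio(d):.3e}")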

Performance Metrics

No. of papers from the Conference in previous years:

Year    Papers
2023    470
2022    508
2021    369
2020    457
2019    453
2018    409