Author
David M. Racek
Bio: David M. Racek is an academic researcher from Montana State University. The author has contributed to research in the topics of control reconfiguration and raw data. The author has an h-index of 2 and has co-authored 2 publications receiving 28 citations.
Papers
06 Mar 2010
TL;DR: This paper presents the design of a many-core computer architecture with fault detection and recovery using partial reconfiguration of an FPGA. The approach can recover from faults in both the circuit fabric and the configuration RAM of an FPGA, in addition to spatially avoiding permanently damaged regions of the chip.
Abstract: This paper presents the design of a many-core computer architecture with fault detection and recovery using partial reconfiguration of an FPGA. The FPGA fabric is partitioned into tiles which contain homogeneous soft processors. At any given time, three processors are configured in triple modular redundancy to detect faults. Spare processors are brought online to replace faulted tiles in real time. A recovery procedure involving partial reconfiguration is used to repair faulted tiles. This type of approach has the advantage of recovering from faults in both the circuit fabric and the configuration RAM of an FPGA, in addition to spatially avoiding permanently damaged regions of the chip.
17 citations
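To make the fault-handling scheme concrete, here is a minimal sketch of the voting-plus-spares idea the abstract describes. It is not the paper's implementation: the tile names, spare pool, and swap policy are illustrative assumptions, and a real design would do this in hardware rather than software.

```python
# Sketch: bitwise 2-of-3 majority voting over three redundant processor
# outputs, with a hypothetical spare pool used to replace a dissenting tile
# while it is repaired (in the paper, via partial reconfiguration).

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority over three processor outputs."""
    return (a & b) | (a & c) | (b & c)

def detect_faulty(outputs: list[int]) -> int | None:
    """Return the index of the single dissenting output, or None."""
    voted = majority_vote(*outputs)
    dissent = [i for i, o in enumerate(outputs) if o != voted]
    return dissent[0] if len(dissent) == 1 else None

# Hypothetical tile management: an active triple plus spares.
active = ["tile0", "tile1", "tile2"]
spares = ["tile3", "tile4"]

def on_cycle(outputs: list[int]) -> int:
    result = majority_vote(*outputs)
    faulty = detect_faulty(outputs)
    if faulty is not None and spares:
        repaired = active[faulty]
        active[faulty] = spares.pop(0)  # spare brought online in real time
        spares.append(repaired)         # faulted tile queued for repair, then reused
    return result

print(on_cycle([0b1010, 0b1010, 0b1110]))  # -> 10 (0b1010); tile2 swapped out
```

Because the vote is bitwise, a single upset in one processor's output is masked in the same cycle it is detected, which is what lets the spare swap happen without interrupting the computation.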
07 Mar 2009
TL;DR: In this paper, the authors present an end-to-end data handling system designed to accommodate the large downlink data volumes that are becoming increasingly prevalent as the complexity of Earth science missions increases.
Abstract: Global ecosystem observations are important for Earth-system studies. The National Research Council's report entitled Earth Science and Applications from Space is currently guiding NASA's Earth science missions. It calls for a global land and coastal area mapping mission. The mission, scheduled to launch in the 2013-2016 timeframe, includes a hyperspectral imager and a multi-spectral thermal-infrared sensor. These instruments will enable scientists to characterize global species composition and monitor the response of ecosystems to disturbance events such as drought, flooding, and volcanic eruptions. Due to the nature and resolution of the sensors, these two instruments produce approximately 645 GB of raw data each day, thus pushing the limits of conventional data handling and telecommunications capabilities. The implications of and solutions to the challenge of high downlink data volume were examined. Low risk and high science return were key design values. The advantages of onboard processing and advanced telecommunications methods were evaluated. This paper presents an end-to-end data handling system design that accommodates the large downlink data volumes that are becoming increasingly prevalent as the complexity of Earth science missions increases. The designs presented here are the work of the authors and may differ from the current mission baseline.
11 citations
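A quick back-of-the-envelope calculation shows why the 645 GB/day figure quoted above strains conventional downlink. Only the daily volume comes from the abstract; the number of ground passes and the compression ratio below are illustrative assumptions, not mission parameters.

```python
# Rough downlink budget for 645 GB/day of raw instrument data.

RAW_BYTES_PER_DAY = 645e9          # 645 GB/day, from the abstract
CONTACT_SECONDS_PER_DAY = 4 * 600  # assume four 10-minute ground passes
COMPRESSION_RATIO = 2.0            # assume 2:1 onboard lossless compression

downlinked_bytes = RAW_BYTES_PER_DAY / COMPRESSION_RATIO
rate_bps = downlinked_bytes * 8 / CONTACT_SECONDS_PER_DAY
print(f"required sustained downlink: {rate_bps / 1e6:.0f} Mbit/s")
# ~1075 Mbit/s under these assumptions -- beyond typical X-band links,
# which is why onboard processing and advanced telecom methods were evaluated.
```

Even with 2:1 lossless compression, the required sustained rate during contacts exceeds a gigabit per second under these assumptions, motivating the combination of onboard processing and advanced telecommunications the paper examines.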
Cited by
25 Jun 2012
TL;DR: This paper presents a novel high-performance, fault-tolerant ICAP controller that can operate at high speed and recover from emerging faults, and demonstrates the use of Triple Modular Redundancy (TMR) in the ICAP controller components that are able to reconfigure the rest of the ICAP controller when faults are detected.
Abstract: Dynamic Partial Reconfiguration is an important feature of modern FPGAs as it allows for better exploitation of FPGA resources over time and space. The Internal Configuration Access Port (ICAP) enables DPR from within an FPGA chip, leading to the possibility of fully autonomous FPGA-based systems. This paper presents a novel high-performance, fault-tolerant ICAP controller which can operate at high speed and recover from emerging faults. Test results showed that our ICAP controller is 25 times faster than the Xilinx XPS_HWICAP IP core. We demonstrate the use of Triple Modular Redundancy (TMR) in some of the ICAP controller components, which have the ability to reconfigure the rest of the ICAP controller when faults are detected. This method is shown to have a 49% smaller area footprint compared to traditional full TMR.
27 citations
14 Jul 2014
TL;DR: A hardware implementation of the 'Modified Fast Lossless' compression algorithm for pushbroom instruments on a Field Programmable Gate Array (FPGA), which targets the current state-of-the-art FPGAs and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.
Abstract: Efficient on-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware.
27 citations
29 Jul 2009
TL;DR: A hardware implementation of the ‘Modified Fast Lossless’ compression algorithm for pushbroom instruments on a Field Programmable Gate Array (FPGA), which targets the current state-of-the-art FPGAs and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.
Abstract: Efficient on-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed ‘Fast Lossless’ algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. It was modified for pushbroom instruments and makes it practical for flight implementations. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the ‘Modified Fast Lossless’ compression algorithm for pushbroom instruments on a Field Programmable Gate Array (FPGA). The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.
20 citations
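The abstract above describes a predict-then-encode approach to lossless hyperspectral compression. The sketch below illustrates that general idea with the simplest possible spectral predictor; it is NOT the JPL 'Fast Lossless' algorithm (which uses adaptive filtering), and the cube shape and zig-zag mapping are illustrative choices.

```python
import numpy as np

# Sketch of predictive lossless compression: predict each band from the
# previous band, keep only the residuals, and map them to non-negative
# integers suitable for a Rice/Golomb-style entropy coder.

def spectral_residuals(cube: np.ndarray) -> np.ndarray:
    """Predict each band from the previous band; band 0 is stored as-is.
    cube has shape (bands, lines, samples), integer-valued."""
    residuals = cube.copy()
    residuals[1:] = cube[1:] - cube[:-1]  # inter-band difference predictor
    return residuals

def zigzag(residuals: np.ndarray) -> np.ndarray:
    """Map signed residuals to non-negative ints for the entropy coder."""
    return np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)

# Decompression is exact: a cumulative sum inverts the predictor, so the
# reconstructed cube is bit-identical to the input (lossless).
cube = np.random.randint(0, 4096, size=(8, 4, 4))  # 12-bit toy cube
codes = zigzag(spectral_residuals(cube))           # what a coder would see
rec = np.cumsum(spectral_residuals(cube), axis=0)
assert np.array_equal(rec, cube)
```

Because adjacent spectral bands are highly correlated, the residuals have far lower entropy than the raw samples, which is where the compression comes from; the hardware implementation described above pipelines this kind of per-sample work to sustain one sample per clock cycle.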
21 Feb 2016
TL;DR: This paper investigates the improvements in reliability of a LEON3 soft processor operating on an SRAM-based FPGA when using triple-modular redundancy and other processor-specific mitigation techniques, and demonstrates an average improvement of 10×.
Abstract: Processors are an essential component in most satellite payload electronics and handle a variety of functions including command handling and data processing. There is growing interest in implementing soft processors on commercial FPGAs within satellites. Commercial FPGAs offer reconfigurability, large logic density, and I/O bandwidth; however, they are sensitive to ionizing radiation, and systems developed for space must implement single-event upset mitigation to operate reliably. This paper investigates the improvements in reliability of a LEON3 soft processor operating on an SRAM-based FPGA when using triple-modular redundancy and other processor-specific mitigation techniques. The improvements in reliability provided by these techniques are validated with both fault injection and heavy-ion radiation tests. The fault injection experiments indicate an improvement of 51×, and the radiation testing results demonstrate an average improvement of 10×. Orbital failure rate estimates were computed and suggest that the TMR LEON3 processor has a mean time to failure of over 76 years in a geosynchronous orbit.
20 citations
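Fault-injection campaigns like the one described above follow a simple recipe: corrupt one bit of state, run, and compare against a golden result. The toy loop below shows that mechanic on a stand-in design; it is not the LEON3 setup, and the 32-bit "design" and run count are illustrative assumptions.

```python
import random

# Toy fault-injection campaign: flip one state bit per run in a simplex
# vs. a TMR'd model and count output mismatches against a golden value.

WIDTH = 32
GOLDEN = 0xDEADBEEF

def run_simplex(state: int) -> int:
    return state                        # output is the (possibly corrupted) state

def run_tmr(states: list[int]) -> int:
    a, b, c = states
    return (a & b) | (a & c) | (b & c)  # majority masks a single upset

def campaign(runs: int = 10_000) -> tuple[float, float]:
    simplex_fails = tmr_fails = 0
    for _ in range(runs):
        bit = 1 << random.randrange(WIDTH)
        if run_simplex(GOLDEN ^ bit) != GOLDEN:
            simplex_fails += 1
        copies = [GOLDEN, GOLDEN, GOLDEN]
        copies[random.randrange(3)] ^= bit  # single-bit upset in one copy
        if run_tmr(copies) != GOLDEN:
            tmr_fails += 1
    return simplex_fails / runs, tmr_fails / runs

print(campaign())  # (1.0, 0.0): single upsets always break simplex, never TMR
```

Real campaigns target FPGA configuration memory rather than a register, and improvements like the 51× reported above come from counting how often injected upsets propagate to observable failures.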
26 Oct 2015
TL;DR: This paper proposes the use of Triple Modular Redundancy at the controller level, and calculates system reliability using Markov models to quantitatively show the advantage of the proposed technique in terms of extended lifetime.
Abstract: Fault-tolerance is becoming an essential feature in the design of Networked Control Systems (NCSs). Furthermore, Sensor-to-Actuator (S2A) architectures have shown some advantages over conventional In-Loop architectures. This paper focuses on fault-tolerant controllers in the context of S2A systems. It proposes the use of Triple Modular Redundancy at the controller level. The fault-tolerant controller will be hosted in an FPGA that has a spare location. The voter in this TMR scheme is fault-secure to guarantee that the controllers never produce an undetected incorrect control action. Finally, system reliability is calculated using Markov models to quantitatively show, via case studies, the advantage of the proposed technique in terms of extended lifetime.
13 citations
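As context for the Markov-model analysis described above, the classic non-repairable baseline is worth stating: under the standard assumptions of independent module failures at rate λ and an ideal voter, a three-state Markov chain gives

```latex
% Baseline (non-repairable) TMR reliability from a three-state Markov chain,
% assuming independent module failures at rate \lambda and an ideal voter:
R_{\mathrm{TMR}}(t) = 3e^{-2\lambda t} - 2e^{-3\lambda t},
\qquad
\mathrm{MTTF}_{\mathrm{TMR}} = \int_0^{\infty} R_{\mathrm{TMR}}(t)\,dt
  = \frac{3}{2\lambda} - \frac{2}{3\lambda} = \frac{5}{6\lambda}.
```

Since 5/(6λ) is below the simplex MTTF of 1/λ, voting alone actually shortens expected lifetime for long missions; it is the repair path, here the spare FPGA location that lets a faulted controller be replaced, that produces the extended lifetime the paper's Markov models quantify.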