
Showing papers by "Gregory D. Peterson published in 2001"


Proceedings ArticleDOI
24 Jul 2001
TL;DR: The challenge of effectively exploiting HPRC resources is addressed by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.
Abstract: High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problems with high hardware costs, one may create more inexpensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming the HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) into HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.
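The payoff of pairing reconfigurable nodes with processing nodes can be illustrated with a standard Amdahl's-law estimate. This sketch is not from the paper; the function and the numbers are hypothetical, showing only why the fraction of an application that can be offloaded matters as much as the raw hardware acceleration factor.

```python
# Hypothetical illustration (not from the paper): Amdahl's-law estimate of
# the overall speedup when part of an application is offloaded to
# reconfigurable hardware on an HPRC node.

def hprc_speedup(offload_fraction: float, hw_accel: float) -> float:
    """Overall speedup when `offload_fraction` of the runtime executes on
    reconfigurable hardware that is `hw_accel` times faster than the CPU."""
    return 1.0 / ((1.0 - offload_fraction) + offload_fraction / hw_accel)

# Even a 50x accelerator helps little if only half the runtime is offloaded.
print(hprc_speedup(0.50, 50.0))  # ~1.96
print(hprc_speedup(0.95, 50.0))  # ~14.5
```

The non-offloaded fraction quickly dominates, which is one reason the paper emphasizes performance evaluation before porting.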

13 citations


Proceedings ArticleDOI
14 Aug 2001
TL;DR: This novel approach to computer architecture is introduced by first discussing the current state of HPRC systems and promising HPRC system architectures, particularly with respect to their ability to exploit different types of potential parallelism at multiple levels, and then exploring the most appropriate software architecture for HPRC systems.
Abstract: High performance reconfigurable computer systems exploit the parallel processing benefits of high performance computing in conjunction with the fast, adaptive hardware acceleration associated with reconfigurable computing. We introduce this novel approach to computer architecture by first discussing the current state of HPRC systems and promising HPRC system architectures, particularly with respect to their ability to exploit different types of potential parallelism at multiple levels. Next, we explore the most appropriate software architecture for HPRC systems, along with hardware architectural issues such as hardware virtualization. We then consider runtime issues concerning communications, context switching, and operating system services. Finally, we consider challenges with developing efficient applications on HPRC platforms.

10 citations


Book ChapterDOI
01 Jan 2001
TL;DR: This paper focuses on how a designer can develop and use a performance/cost model to perform tradeoffs between the different types of emulation, hardware acceleration, and simulation tools available.
Abstract: Performance demands for design verification continue to grow (exponentially) with the size of designs. Consequently, the time spent verifying that a particular design meets its specification is taking an ever-increasing proportion of the design cycle time. Given the substantial costs and increasing importance of system verification technologies, determining the best verification strategy is critical to design and business success. This paper focuses on how a designer can develop and use a performance/cost model to perform tradeoffs between the different types of emulation, hardware acceleration, and simulation tools available. Different forms of model and parameters are explored, with some typical engineering examples illustrated.
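The kind of performance/cost tradeoff the paper describes can be sketched with a toy fixed-plus-variable cost model. Everything here is a hypothetical illustration: the tool categories come from the abstract, but the function, the setup costs, and the per-cycle costs are invented for the example.

```python
# Toy performance/cost model (hypothetical numbers, not from the paper):
# each verification technology has a fixed setup cost and a per-cycle cost.

def verification_cost(setup_cost: float, cost_per_cycle: float,
                      cycles: int) -> float:
    """Total cost = fixed setup (tool acquisition, bring-up) plus
    runtime cost for the planned number of verification cycles."""
    return setup_cost + cost_per_cycle * cycles

# Illustrative parameters: cheap-but-slow at one end, costly-but-fast
# at the other.
tools = {
    "software simulation":   (1_000,   1e-3),
    "hardware acceleration": (50_000,  1e-5),
    "emulation":             (500_000, 1e-7),
}

cycles = 10**9
best = min(tools, key=lambda name: verification_cost(*tools[name], cycles))
print(best)  # hardware acceleration
```

For small cycle counts the cheap-setup simulator wins; for very large counts emulation does. The crossover points are exactly the tradeoffs such a model makes explicit.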

5 citations


Proceedings ArticleDOI
TL;DR: This paper addresses the application of three simulation acceleration approaches, used independently or in conjunction with one another: parallel simulation, mixed-abstraction/multiresolution simulation, and hardware acceleration via reconfigurable computing elements.
Abstract: The development of large, complex systems, the training of personnel, and the refinement of concepts of operations all depend on high-performance simulation technologies. For each of the above application areas, among others, there is a chronic need for ever-higher performance. This paper addresses the application of three simulation acceleration approaches, used independently or in conjunction with one another: parallel simulation, mixed-abstraction/multiresolution simulation, and hardware acceleration via reconfigurable computing elements. After discussing the merits of each approach, the paper presents analytic techniques for determining the most effective approach to use for a given simulation problem.
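One form the paper's analytic comparison could take is a simple speedup model with overhead. This sketch is hypothetical, not taken from the paper: it models parallel simulation on p processors whose synchronization cost grows with p, which is why adding processors eventually yields diminishing returns.

```python
# Hypothetical analytic sketch (not from the paper): parallel simulation
# speedup on p processors, degraded by a synchronization overhead that
# grows with the number of processors.

def parallel_sim_speedup(p: int, sync_overhead: float) -> float:
    """Ideal speedup p, divided by the accumulated per-processor
    synchronization cost."""
    return p / (1.0 + sync_overhead * (p - 1))

# Diminishing returns as synchronization cost accumulates.
for p in (1, 4, 16, 64):
    print(p, round(parallel_sim_speedup(p, 0.05), 2))
```

Analogous closed-form estimates for multiresolution modeling and hardware acceleration let the three approaches be compared on a common footing for a given simulation problem.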

4 citations


Proceedings ArticleDOI
TL;DR: The challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.
Abstract: Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of meeting the Air Force's Joint Vision 2010 core competencies such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.

3 citations


Proceedings ArticleDOI
30 Mar 2001
TL;DR: A model is applied to perform tradeoffs between the different types of emulation, hardware acceleration, and simulation verification tools available to determine the most appropriate mix of verification technologies to employ in development efforts.
Abstract: Verification times for electronic system designs typically require more than half the overall design effort. With the substantial costs and increasing importance of system verification technologies, determining the best verification strategy is critical to complex systems design. As a tool to improve the design process, we apply a model to perform tradeoffs between the different types of emulation, hardware acceleration, and simulation verification tools available. The predictive power of the modeling approach is applied to usage scenarios to determine the most appropriate mix of verification technologies to employ in development efforts.

1 citation


Book ChapterDOI
01 Jan 2001
TL;DR: Practical test suite requirements are developed, FTL Systems' VIVA™ technology for automatically generating and applying test suites is outlined, and experimental results using VIVA (VHDL Interactive Validation Alchemy) to test FTL Systems' Auriga® family of HDL compilers are described.
Abstract: Testing the adherence of a hardware description language (HDL) tool to an associated HDL specification is of critical importance to design teams. This paper will develop practical test suite requirements, outline FTL Systems' VIVA™ technology for automatically generating and applying test suites, and describe the experimental results using VIVA (VHDL Interactive Validation Alchemy) to test FTL Systems' Auriga® family of HDL compilers.

Proceedings ArticleDOI
10 Oct 2001
TL;DR: The requirements and rationale for the STEAMS language development effort are presented, including modeling deficiencies facing the VHDL-AMS user community.
Abstract: Designers face the challenge of specifying and implementing complicated mixed-technology systems. In order to better address mixed-signal designs, the VHDL-AMS and Verilog-AMS languages have been developed. These languages provide powerful capabilities to model and simulate behaviors in both the continuous and discrete time domains. Contemporaneously, the control systems community developed the object-oriented Modelica language to support the specification and continuous time modeling of complex control systems. The STEAMS (SUAVE and Tennessee Extensions for Analog and Mixed-Signal Systems) effort strives to provide an object-oriented systems specification and modeling language that supports both discrete and continuous time behaviors. STEAMS enables the modeling of interacting continuous and discrete time components coupled with the modeling productivity benefits associated with object-oriented techniques. This paper presents the requirements and rationale for the STEAMS language development effort, including modeling deficiencies facing the VHDL-AMS user community.

Proceedings ArticleDOI
10 Oct 2001
TL;DR: This work discusses the use of VHDL-AMS for developing highly predictive, accurate models of cellular processes, with predictive models particularly promising for drug development, epidemiology, bio-engineering, and genetic applications.
Abstract: The understanding of biological systems remains one of our primary scientific activities. A primary goal of biological research is the development of accurate models that can be used to explain biological processes, with predictive models particularly promising for drug development, epidemiology, bio-engineering, and genetic applications. We discuss the use of VHDL-AMS for developing highly predictive, accurate models of cellular processes.