
Showing papers on "Electronic design automation published in 2005"


Journal ArticleDOI
09 May 2005
TL;DR: The thesis of this paper is that the changing environment requires a new look at the operation of the power grid and a complete redesign of the control, communication and computation infrastructure.
Abstract: The power grid is not only a network interconnecting generators and loads through a transmission and distribution system, but is overlaid with a communication and control system that enables economic and secure operation. This multilayered infrastructure has evolved over many decades utilizing new technologies as they have appeared. This evolution has been slow and incremental, as the operation of the power system consisting of vertically integrated utilities has, until recently, changed very little. The monitoring of the grid is still done by a hierarchical design with polling for data at scanning rates in seconds that reflects the conceptual design of the 1960s. This design was adequate for vertically integrated utilities with limited feedback and wide-area controls; however, the thesis of this paper is that the changing environment, in both policy and technology, requires a new look at the operation of the power grid and a complete redesign of the control, communication and computation infrastructure. We provide several example novel control and communication regimes for such a new infrastructure.

337 citations


Journal ArticleDOI
TL;DR: An automated tool capable of converting synchronous netlists into dual-rail circuits has been developed and interfaced to industry CAD tools; dual-rail and single-rail benchmarks were simulated and compared in order to evaluate the method and the tool.
Abstract: Dual-rail encoding, return-to-spacer protocol, and hazard-free logic can be used to resist power analysis attacks by making energy consumed per clock cycle independent of processed data. Standard dual-rail logic uses a protocol with a single spacer, e.g., all-zeros, which gives rise to energy balancing problems. We address these problems by incorporating two spacers; the spacers alternate between adjacent clock cycles. This guarantees that all gates switch in every clock cycle regardless of the transmitted data values. To generate these dual-rail circuits, an automated tool has been developed. It is capable of converting synchronous netlists into dual-rail circuits and it is interfaced to industry CAD tools. Dual-rail and single-rail benchmarks based upon the advanced encryption standard (AES) have been simulated and compared in order to evaluate the method and the tool.

166 citations
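The alternating-spacer scheme described in the abstract above can be illustrated in a few lines (a behavioral sketch with hypothetical helper names, not the authors' tool): with spacers alternating between all-zeros and all-ones, each wire of a dual-rail pair toggles exactly once per clock cycle, so total switching activity is independent of the transmitted data.

```python
def dual_rail(bit):
    """Encode one logic value on a (wire_t, wire_f) pair: 1 -> (1,0), 0 -> (0,1)."""
    return (bit, 1 - bit)

def waveform(bits):
    """Interleave codewords with spacers that alternate between
    all-zeros and all-ones on successive clock cycles."""
    spacers = [(0, 0), (1, 1)]
    wave = []
    for i, b in enumerate(bits):
        wave.append(spacers[i % 2])  # spacer phase
        wave.append(dual_rail(b))    # data phase
    return wave

def transitions(wave):
    """Count wire transitions between successive states."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(wave, wave[1:]))
```

Any three-bit input sequence produces the same transition count, which is the data-independence property that resists power analysis.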


Journal ArticleDOI
TL;DR: A non-manifold data structure, a constructive design method, four freeform modification tools, and a detail template encoding/decoding method are developed for the design automation of customized apparel products.
Abstract: This paper presents solution techniques for a three-dimensional Automatic Made-to-Measure scheme for apparel products. Freeform surfaces are adopted to represent the complex geometry models of apparel products. When designing the complex surface of an apparel product, abstractions are stored in conjunction with the models using a non-manifold data structure. Apparel products are essentially designed with reference to human body features, and thus share a common set of features with the human model. Therefore, parametric feature-based modeling enables the automatic generation of fitted garments on differing body shapes. In our approach, different apparel products are each represented by a specific feature template preserving its individual characteristics and styling. When the specific feature template is encoded as the equivalent human body feature template, it automates the generation of made-to-measure apparel products. The encoding process is performed in 3D, which fundamentally solves the fitting problems of the 2D tailoring and pattern-making process. This paper gives an integrated solution scheme for all of the above problems. In detail, a non-manifold data structure, a constructive design method, four freeform modification tools, and a detail template encoding/decoding method are developed for the design automation of customized apparel products.

131 citations


Proceedings ArticleDOI
20 Feb 2005
TL;DR: This paper presents an architecture that combines VLIW (Very Long Instruction Word) processing with the capability to introduce application-specific customized instructions and complex hardware functions, allowing an overall speedup of 30X and 12X on average for signal processing benchmarks from MediaBench.
Abstract: The capability and heterogeneity of new FPGA (Field Programmable Gate Array) devices continue to increase with each new line of devices. Efficiently programming these devices is increasing in difficulty. However, FPGAs continue to be utilized for algorithms traditionally targeted to embedded DSP microprocessors, such as signal and image processing applications. This paper presents an architecture that combines VLIW (Very Long Instruction Word) processing with the capability to introduce application-specific customized instructions and complex hardware functions. To support this architecture, a compilation and design automation flow is described for programs written in C. Several design tradeoffs for the architecture were examined, including the number of VLIW functional units and the register file size. The architecture was implemented on an Altera Stratix II FPGA. The Stratix II device was selected because it offers a large number of high-speed DSP (digital signal processing) blocks that execute multiply-accumulate operations. We show that our combined VLIW with hardware functions exhibits as much as 230X speedup and 63X on average for computational kernels for a set of benchmarks. This allows for an overall speedup of 30X and 12X on average for signal processing benchmarks from MediaBench.

112 citations


Proceedings ArticleDOI
13 Jun 2005
TL;DR: Effective logic soft error protection requires solutions to the following three problems: accurate soft error rate estimation for combinational logic networks; automated estimation of system effects of logic soft errors, and identification of regions in a design that must be protected.
Abstract: Logic soft errors are radiation induced transient errors in sequential elements (flip-flops and latches) and combinational logic. Robust enterprise platforms in sub-65nm technologies require designs with built-in logic soft error protection. Effective logic soft error protection requires solutions to the following three problems: (1) accurate soft error rate estimation for combinational logic networks; (2) automated estimation of system effects of logic soft errors, and identification of regions in a design that must be protected; and, (3) new cost-effective techniques for logic soft error protection, because classical fault-tolerance techniques are very expensive.

99 citations


Journal ArticleDOI
TL;DR: The novelty of this work lies in the introduction of the first comprehensive synthesis methodology and tool for general multilevel threshold logic design, built on top of an existing Boolean logic synthesis tool.
Abstract: We propose an algorithm for efficient threshold network synthesis of arbitrary multioutput Boolean functions. Many nanotechnologies, such as resonant tunneling diodes, quantum cellular automata, and single electron tunneling, are capable of implementing threshold logic efficiently. The main purpose of this work is to bridge the current wide gap between research on nanoscale devices and research on synthesis methodologies for generating optimized networks utilizing these devices. While functionally-correct threshold gates and circuits based on nanotechnologies have been successfully demonstrated, there exists no methodology or design automation tool for general multilevel threshold network synthesis. We have built the first such tool, threshold logic synthesizer (TELS), on top of an existing Boolean logic synthesis tool. Experiments with 56 multioutput benchmarks indicate that, compared to traditional logic synthesis, up to 80.0% and 70.6% reduction in gate count and interconnect count, respectively, is possible with the average being 22.7% and 12.6%, respectively. Furthermore, the synthesized networks are well-balanced structurally. The novelty of this work lies in the introduction of the first comprehensive synthesis methodology and tool for general multilevel threshold logic design.

91 citations
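A threshold gate, the primitive that these nanotechnologies implement efficiently, outputs 1 exactly when the weighted sum of its inputs reaches a threshold. A minimal sketch (illustrative only, not the TELS tool):

```python
def threshold_gate(weights, threshold, inputs):
    """Output 1 iff the weighted sum of the inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Classic Boolean gates are special cases:
def and2(a, b):
    return threshold_gate([1, 1], 2, [a, b])

def or2(a, b):
    return threshold_gate([1, 1], 1, [a, b])

def majority3(a, b, c):
    """The carry function of a full adder: one threshold gate where
    conventional logic needs several AND/OR gates."""
    return threshold_gate([1, 1, 1], 2, [a, b, c])
```

Collapsing multi-gate Boolean functions such as majority into single threshold gates is one source of the gate-count reductions reported in the abstract.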


Journal ArticleDOI
TL;DR: A method for automated synthesis of analog circuits using evolutionary search and a set of circuit design rules based on topological reuse and the design of the evaluation function-which evaluates each generated circuit using SPICE simulations-has been automated to a great extent.
Abstract: We present a method for automated synthesis of analog circuits using evolutionary search and a set of circuit design rules based on topological reuse. The system requires only moderate expert knowledge on part of the user. It allows circuit size, circuit topology, and device values to evolve. The circuit representation scheme employs a topological reuse-based approach-it uses commonly used subcircuits for analog design as inputs and utilizes these to create the final circuit. The connectivity between these blocks is governed by a well-defined set of rules and the scheme is capable of representing most standard analog circuit topologies. The system operation consists of two phases-in the first phase, the circuit size and topology are evolved. A limited amount of device sizing also occurs in this phase. The second phase consists entirely of device value optimization. The design of the evaluation function-which evaluates each generated circuit using SPICE simulations-has also been automated to a great extent. The evaluation function is generated automatically depending on a behavioral description of the circuit. We present several experimental results obtained using this scheme, including two types of comparators, two types of oscillators, and an XOR logic gate. The generated circuits closely resemble hand designed circuits. The computational needs of the system are modest.

86 citations
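The second phase described above (device-value optimization against a simulation-based fitness) can be sketched as a simple evolutionary loop. Here a closed-form RC cutoff-frequency formula stands in for the SPICE evaluation, and all names, ranges, and parameters are hypothetical:

```python
import math
import random

def fitness(params, target_hz=1000.0):
    """Stand-in for the SPICE-based evaluation: score an RC low-pass
    filter by how close its cutoff frequency is to the target."""
    R, C = params
    fc = 1.0 / (2.0 * math.pi * R * C)
    return -abs(fc - target_hz)

def evolve(pop_size=40, generations=60, seed=0):
    """Keep the best half each generation; refill the population with
    mutated copies of the survivors (multiplicative Gaussian mutation)."""
    rng = random.Random(seed)
    # Log-uniform sampling over plausible component ranges
    pop = [(10 ** rng.uniform(2, 5), 10 ** rng.uniform(-9, -5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = [(abs(R * rng.gauss(1.0, 0.1)), abs(C * rng.gauss(1.0, 0.1)))
                    for R, C in (rng.choice(survivors)
                                 for _ in range(pop_size - len(survivors)))]
        pop = survivors + children
    return max(pop, key=fitness)
```

In the paper, each candidate circuit is instead netlisted and scored via SPICE simulations against a behavioral description; the loop structure is the same.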


Proceedings ArticleDOI
31 May 2005
TL;DR: This tutorial highlights key issues and architectural alternatives for this promising technology and outlines the challenges that hybrid circuits pose for design automation.
Abstract: Physics offers several active devices with nanometer-scale footprints that can be best used in combination with a CMOS subsystem. Such hybrid circuits offer the potential for high defect tolerance combined with unparalleled performance. In this tutorial, we highlight key issues and architectural alternatives for this promising technology and outline the challenges that hybrid circuits pose for design automation.

81 citations


Proceedings ArticleDOI
07 Mar 2005
TL;DR: This work presents an approach to design and verify SystemC models at the transaction level by first modeling both the design and the properties (written in PSL) in UML and then translating them into an intermediate format modeled by abstract state machines (ASM).
Abstract: Transaction-level modeling allows exploring several SoC design architectures, leading to better performance and easier verification of the final product. In this paper, we present an approach to design and verify SystemC models at the transaction level. We integrate the verification as part of the design flow, where we first model both the design and the properties (written in the Property Specification Language) in the Unified Modeling Language (UML); then, we translate them into an intermediate format modeled with AsmL, a language based on Abstract State Machines (ASM). The AsmL model is used to generate a finite state machine of the design, including the properties. Checking the correctness of the properties is performed on the fly while generating the state machine. Finally, we translate the verified design to SystemC and map the properties to a set of assertions (as monitors in C#) that can be reused to validate the design at lower levels by simulation. For existing SystemC designs, we propose to translate the code back to AsmL in order to apply the same verification approach. At the SystemC level, we also present a genetic algorithm to enhance the assertion coverage. We ensure the soundness of our approach by proving the correctness of the SystemC-to-AsmL and AsmL-to-SystemC transformations. We illustrate our approach on two case studies, including the PCI bus standard and a master/slave generic architecture from the SystemC library.

78 citations
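The on-the-fly checking step — evaluating properties while the state machine is being generated — can be sketched as a breadth-first reachability search that tests an invariant on each state as it is discovered (a generic illustration, not the paper's AsmL tooling):

```python
from collections import deque

def check_on_the_fly(init, step, prop):
    """Generate the reachable state space breadth-first, testing the
    property on each state as it is discovered; return a counterexample
    state as soon as the property fails."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not prop(s):
            return (False, s)  # property violated at state s
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return (True, None)
```

For example, a mod-4 counter satisfies the invariant "value < 4", while an unbounded counter violates "value < 3" at the first state that reaches 3.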


Journal ArticleDOI
TL;DR: A novel CAD tool based on modal analysis methods, which improves the efficiency and robustness of the classical aggressive space-mapping technique, is presented for those purposes and the use of a new segmentation strategy and the hybridization of several well-known optimization algorithms is proposed.
Abstract: Waveguide filters are key elements present in many microwave and millimeter-wave communication systems. In recent times, ever-increasing efforts are being devoted to the development of automated computer-aided design (CAD) tools of such devices. In this paper, a novel CAD tool based on modal analysis methods, which improves the efficiency and robustness of the classical aggressive space-mapping technique, is presented for those purposes. The use of a new segmentation strategy and the hybridization of a specific combination of several well-known optimization algorithms is proposed. The CAD tool has been successfully validated with the practical design of several H-plane coupled cavity filters in a rectangular waveguide for space and communication applications. A filter prototype for local multipoint distribution systems operating at Ka-band, and two tunable H-plane filters with tuning posts operating at 11 and 13 GHz, have been successfully designed, manufactured, and measured.

72 citations


Proceedings ArticleDOI
13 Jun 2005
TL;DR: A scalable multi-level optimization methodology for spiral inductors that integrates the flexibility of constrained global optimization using mesh-adaptive direct search (MADS) algorithms with the rapid convergence of local nonlinear convex optimization techniques is developed.
Abstract: The efficient optimization of integrated spiral inductors remains a fundamental barrier to the realization of effective analog and mixed-signal design automation. In this paper, we develop a scalable multi-level optimization methodology for spiral inductors that integrates the flexibility of constrained global optimization using mesh-adaptive direct search (MADS) algorithms with the rapid convergence of local nonlinear convex optimization techniques. Experimental results indicate that our methodology locates optimal spiral inductor geometries with significantly fewer function evaluations than current techniques.

Proceedings ArticleDOI
27 Apr 2005
TL;DR: The paradigm shift from viewing the SoC design problem as a matter of organizing complex hierarchies of buses with multiple coupled timing domains to viewing the SoC as a problem in network design, where those timing issues are automatically isolated, promises significant improvements in designer productivity, component reuse, and SoC functionality.
Abstract: Self-timed packet-switched networks are poised to take a major role in addressing the complex system design and timing closure problems of future complex systems-on-chip. The robust, correct-by-construction characteristics of self-timed communications enables each IP block on the SoC to operate in its own isolated timing domain, greatly simplifying the problems of timing verification. Design automation software can remove the need for expertise in self-timed design, enabling the on-chip interconnect to be treated as an additional IP block within a conventional (synchronous) design flow. The paradigm shift from viewing the SoC design problem as a matter of organizing complex hierarchies of buses with multiple coupled timing domains, where every interface between timing domains must be verified carefully, to viewing the SoC as a problem in network design where those timing issues are automatically isolated, promises significant improvements in designer productivity, component reuse and SoC functionality.

01 Jan 2005
TL;DR: The proposed top-down design automation approach is expected to relieve biochip users from the burden of manual optimization of bioassays, time-consuming hardware design, and costly testing and maintenance procedures.
Abstract: Microfluidics-based biochips are soon expected to revolutionize clinical diagnosis, DNA sequencing, and other laboratory procedures involving molecular biology. As more bioassays are executed concurrently on a biochip, system integration and design complexity are expected to increase dramatically. Current techniques for full-custom design of digital microfluidic biochips, however, do not scale well for concurrent assays and for next-generation system-on-chip (SoC) designs that are expected to include fluidic components. We present here an overview of an integrated system-level design methodology that attempts to address key issues in the synthesis, testing, and reconfiguration of digital microfluidics-based biochips. The proposed top-down design automation approach is expected to relieve biochip users from the burden of manual optimization of bioassays, time-consuming hardware design, and costly testing and maintenance procedures.

Journal ArticleDOI
TL;DR: In this paper, the authors show how extreme low-power and high-reliability requirements elevate power management and budgeting within the integrated circuits (ICs) to key system-design parameters, driving power-efficient memory design, weak-inversion analog design, and test visibility.
Abstract: Implantable medical electronics are differentiated from most other electronic-system implementations by their unique combination of extreme low-power and high-reliability requirements. Many implanted medical devices rely on a fixed nonrechargeable battery over their entire lifetime. These constraints elevate power management and budgeting within the integrated circuits (ICs) to key system-design parameters. So much so, in fact, that the requirements drive not only differences in circuit design, but also place a number of constraints on the design environment and design tools, manufacturing processes and targets used to implement those designs, test methodologies, and the fundamental understanding of the physics of failure. These constraints often make standard offerings from wafer foundries, electronic-design-automation (EDA) suppliers, and commercial third-party IP providers unviable without some level of modification. A successful design often requires process changes and monitoring to ensure low static-current drain, digital-cell-library optimization with synthesis tools for low power, power-efficient memory design, weak-inversion analog design, accurate low-power models, and test visibility. As system complexity and activity rise without a proportional increase in available energy, these challenges grow more persistent.

Journal ArticleDOI
TL;DR: An approach to automate the conversion of floating-point MATLAB programs into fixed-pointMATLAB programs, for mapping to FPGAs by profiling the expected inputs to estimate errors and attempts to minimize the hardware resources while constraining the quantization error within a specified limit.
Abstract: Most practical FPGA designs of digital signal processing (DSP) applications are limited to fixed-point arithmetic owing to the cost and complexity of floating-point hardware. While mapping DSP applications onto FPGAs, a DSP algorithm designer must determine the dynamic range and desired precision of input, intermediate, and output signals in a design implementation. The first step in a MATLAB-based hardware design flow is the conversion of the floating-point MATLAB code into a fixed-point version using "quantizers" from the filter design and analysis (FDA) toolbox for MATLAB. This paper describes an approach to automate the conversion of floating-point MATLAB programs into fixed-point MATLAB programs, for mapping to FPGAs by profiling the expected inputs to estimate errors. Our algorithm attempts to minimize the hardware resources while constraining the quantization error within a specified limit. Experimental results on five MATLAB benchmarks are reported for Xilinx Virtex II FPGAs.
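The core trade-off in the abstract above — fewer fractional bits means less hardware but more quantization error — can be sketched as a profiling-driven search for the smallest width that keeps worst-case error within a limit (hypothetical helper names, not the paper's algorithm):

```python
def to_fixed(x, frac_bits):
    """Round to the nearest multiple of 2**-frac_bits, i.e. the nearest
    value representable with that many fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def min_frac_bits(samples, max_error):
    """Smallest fractional width whose quantization error stays within
    max_error on every profiled sample (fewer bits -> less hardware)."""
    for bits in range(1, 32):
        if all(abs(to_fixed(x, bits) - x) <= max_error for x in samples):
            return bits
    return 32
```

The paper's flow additionally profiles dynamic range to size the integer part and propagates errors through intermediate signals; this sketch shows only the precision side of the search.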

Proceedings ArticleDOI
12 Jun 2005
TL;DR: A system on chip (SoC) library for MOSIS scalable CMOS rules has been developed and all steps in the design flow are fully automated with scripts and have been tested successfully in a large VLSI design class at the Illinois Institute of Technology.
Abstract: A system on chip (SoC) library for MOSIS scalable CMOS rules has been developed. It is intended for use with Synopsys and Cadence Design Systems electronic design automation tools. Students can also use layout tools for semi-custom designs and insert them with the proposed design flow. Scalable submicron rules are used for the cell library, allowing it to be used for several AMI and TSMC technologies. Consequently, it is possible to fabricate student projects as well as do research in system-on-chip design through the MOSIS educational program. All steps in the design flow are fully automated with scripts and have been tested successfully in a large VLSI design class at the Illinois Institute of Technology.

Proceedings ArticleDOI
18 Sep 2005
TL;DR: A new frequency-dependent model that utilizes closed-form expressions to quickly characterize square spiral inductors is introduced that provides several orders of magnitude performance improvement over field solver-based approaches.
Abstract: During the spiral inductor design process, designers and design automation tools require efficient modeling techniques for initial design space exploration in order to quickly pinpoint appropriate inductor geometries. In this paper, we introduce a new frequency-dependent model that utilizes closed-form expressions to quickly characterize square spiral inductors. Our modeling approach is centered on new analytical expressions for the inductor's series resistance and series inductance. The model provides several orders of magnitude performance improvement over field solver-based approaches with typical errors of less than 3% when compared with numerical field solver simulations, and demonstrates excellent agreement with measured data from inductors fabricated in TSMC's 0.18 μm mixed-mode/RF process.


Patent
23 Dec 2005
TL;DR: In this article, a thermal analysis engine performs fine-grained thermal simulations of the semiconductor chip based on thermal models and boundary conditions for all thermally significant structures in the chip and the adjacent system.
Abstract: A thermally aware design automation suite integrates system-level thermal awareness into the design of semiconductor chips. A thermal analysis engine performs fine-grain thermal simulations of the semiconductor chip based on thermal models and boundary conditions for all thermally significant structures in the chip and the adjacent system that impact the temperature of the semiconductor chip. The thermally aware design automation suite uses the simulations of the thermal analysis engine to repair or otherwise modify the thermally significant structures to equalize temperature variations across the chip, impose specified design assertions on selected portions of the chip, and verify overall chip performance and reliability over designated operating ranges and manufacturing variations. The thermally significant structures are introduced or modified via one or more of: change in number, change in location, and change in material properties.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: This system integrates high-level and physical design algorithms to concurrently improve a system's schedule, resource binding, and floor-plan, thereby allowing the incremental exploration of the combined behavioral- level and physical-level design space.
Abstract: Achieving design closure is one of the biggest headaches for modern VLSI designers. This problem is exacerbated by high-level design automation tools that ignore increasingly important factors such as the impact of interconnect on the area and power consumption of integrated circuits. Bringing physical information up into the logic-level or even behavioral-level stages of system design is essential to solve this problem. In this paper, we present an incremental floorplanning high-level synthesis system. This system integrates high-level and physical design algorithms to concurrently improve a system's schedule, resource binding, and floorplan, thereby allowing the incremental exploration of the combined behavioral-level and physical-level design space. Compared with previous approaches that repeatedly call loosely coupled floorplanners for physical estimation, this approach has the benefit of efficiency, stability, and better quality of results. For designs containing functional units with non-unity aspect ratios, the average CPU time improved by 369%, the area improved by 14.24%, and power improved by 4%.

01 Jan 2005
TL;DR: This paper presents a study on design automation at eleven small and medium enterprises (SMEs) and the companies’ answers are presented together with an interpreted potential for design automation.
Abstract: This paper presents a study on design automation at eleven small and medium enterprises (SMEs). These were interviewed about their need for, perceived potential of, current state of, and requirements and wishes on design automation, as well as their view on the realization and implementation of design automation applications. The companies' answers are presented together with an interpreted potential for design automation.

Proceedings ArticleDOI
05 Mar 2005
TL;DR: The Evolvable Computation Group at NASA's Jet Propulsion Laboratory has demonstrated that the same tools used for computer-aided design and design evaluation can be used for automated innovation and design.
Abstract: The Evolvable Computation Group, at NASA's Jet Propulsion Laboratory, is tasked with demonstrating the utility of computational engineering and computer-optimized design for complex space systems. The group comprises researchers over a broad range of disciplines including biology, genetics, robotics, physics, computer science, and system design, and employs biologically inspired evolutionary computational techniques to design and optimize complex systems. Over the past two years we have developed tools using genetic algorithms, simulated annealing, and other optimizers to improve on human design of space systems. We have further demonstrated that the same tools used for computer-aided design and design evaluation can be used for automated innovation and design. These powerful techniques also serve to reduce redesign costs and schedules.

01 Jan 2005
TL;DR: A new approach for building efficient, automated decision procedures for first-order logics involving arithmetic is presented, where decision problems involving arithmetic are transformed to problems in the Boolean domain, such as Boolean satisfiability solving, thereby leveraging recent advances in that area.
Abstract: Decision procedures for first-order logics are widely applicable in design verification and static program analysis. However, existing procedures rarely scale to large systems, especially for verifying properties that depend on data or timing, in addition to control. This thesis presents a new approach for building efficient, automated decision procedures for first-order logics involving arithmetic. In this approach, decision problems involving arithmetic are transformed to problems in the Boolean domain, such as Boolean satisfiability solving, thereby leveraging recent advances in that area. The transformation automatically detects and exploits problem structure based on new theoretical results and machine learning. The results of experimental evaluations show that our decision procedures can outperform other state-of-the-art procedures by several orders of magnitude. The decision procedures form the computational engines for two verification systems, UCLID and TMV. These systems have been applied to problems in computer security, electronic design automation, and software engineering that require efficient and precise analysis of system functionality and timing. This thesis describes two such applications: finding format-string exploits in software, and verifying circuits that operate under timing assumptions.

Proceedings ArticleDOI
18 Jan 2005
TL;DR: A novel bi-directional UML-SystemC translation tool, UMLSC, is proposed, and a set of principles for modeling SystemC designs in UML and an algorithm for UML-SystemC bi-directional translation are addressed.
Abstract: The combination of Unified Modeling Language (UML) and SystemC has led to an object-oriented high-level design automation methodology. In this paper, a novel bi-directional UML-SystemC translation tool, UMLSC, is proposed. Specifically, a set of principles for modeling SystemC designs in UML and an algorithm for UML-SystemC bi-directional translation are addressed. The principles and the algorithm are integrated into UMLSC, which provides a smooth link between visual specification, implementation, and verification. An implementation example is given to verify the effectiveness of the proposed principles and algorithm.

Journal ArticleDOI
TL;DR: This article illustrates the effectiveness of the DEFACTO approach in automatically mapping several kernel codes to an FPGA quickly and correctly, and presents a detailed comparison of the performance of an automatically generated design against a manually generated implementation of the same computation.

Journal ArticleDOI
TL;DR: This study develops a method for determining optimal design embodiments under the following assumptions: (1): the design approach involves the axiomatic design theory and (2) the design-relevant information refers to a designer’s intuition, expressible as f-granular information.
Abstract: This study develops a method for determining optimal design embodiments under the following assumptions: (1) the design approach involves the axiomatic design theory and (2) the design-relevant information refers to a designer’s intuition, expressible as f-granular information. The optimal design embodiments are then obtained by determining such optimal set of crisp points having definiteness positions close to each other. Here, a crisp point means a value of one of the parameters needed to specify the design. The definiteness position of a crisp point depends on two factors: (1) how much a designer knows about this point, and (2) how desirable it is to design the actual product. The usefulness of the proposed method is demonstrated by an example that involves both weight and dimension measures. As axiomatic-design-theoretic practices move further into the age of design automation, various machine intelligence techniques capable of computing qualitative information (f-granular information) rather than numbers (numerical or semi-numerical information) will be needed. This study is intended to further such investigations.

Proceedings ArticleDOI
07 Mar 2005
TL;DR: This paper describes how the hardware version of the Iterator can be used to enhance model reuse, and proposes a new approach to structural design patterns concepts to hardware design.
Abstract: Increasing reuse opportunities is a well-known problem for software designers as well as for hardware designers. Nonetheless, current software and hardware engineering practices have embraced different approaches to this problem. Software designs are usually modelled after a set of proven solutions to recurrent problems called design patterns. This approach differs from the component-based reuse usually found in hardware designs: design patterns do not specify unnecessary implementation details. Several authors have already proposed translating structural design patterns concepts to hardware design. In this paper we extend the discussion to behavioural design patterns. Specifically, we describe how the hardware version of the Iterator can be used to enhance model reuse.

Proceedings ArticleDOI
16 May 2005
TL;DR: A hierarchical MEMS synthesis and optimization architecture has been developed for MEMS design automation that integrates an object-oriented component library with a MEMS simulation tool and two levels of optimization: global genetic algorithms and local gradient-based refinement.
Abstract: A hierarchical MEMS synthesis and optimization architecture has been developed for MEMS design automation. The architecture integrates an object-oriented component library with a MEMS simulation tool and two levels of optimization: global genetic algorithms and local gradient-based refinement. An object-oriented data structure is used to represent hierarchical levels of elements in the design library and their connectivity. Additionally, all elements encapsulate instructions and restrictions for the genetic operations of mutation and crossover. The parameterized component library includes distinct low-level primitive elements and high-level clusters of primitive elements. Surface micro-machined suspended resonators are used as an example to introduce the hierarchical MEMS synthesis and optimization process.
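The two-level optimization the abstract describes (global genetic search followed by local gradient-based refinement) can be sketched on a toy one-dimensional objective. Everything here is an assumption for illustration: the objective is a stand-in for a MEMS performance metric, and the GA is deliberately minimal (mutation only, no crossover).

```python
import random

def objective(x):
    # Toy stand-in for a simulated performance metric, e.g. squared
    # error against a target resonant frequency; minimum at x = 1.5.
    return (x - 1.5) ** 2 + 2.0

def genetic_search(pop_size=20, generations=30, lo=-10.0, hi=10.0):
    # Global stage: evolve a population, keeping the better half and
    # producing mutated children each generation.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        survivors = pop[:pop_size // 2]
        children = [x + random.gauss(0, 0.5) for x in survivors]
        pop = survivors + children
    return min(pop, key=objective)

def gradient_refine(x, lr=0.1, steps=100):
    # Local stage: gradient descent using central finite differences,
    # as one would against a black-box simulator.
    h = 1e-6
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= lr * grad
    return x

random.seed(0)                      # reproducible run
coarse = genetic_search()           # global exploration
refined = gradient_refine(coarse)   # local polish
```

The division of labor mirrors the paper's architecture: the GA escapes poor local basins across a large design space, and the gradient stage converges precisely once a good basin is found.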

Patent
Adam P. Donlin1, Douglas Densmore1
14 Apr 2005
TL;DR: In this article, the prototype designs are characterized using one or more electronic design automation tools to generate precharacterization data, which are used either directly or indirectly in the system level design process.
Abstract: Systems, methods, software, and techniques can be used to precharacterize a variety of prototype system designs. The prototype system designs can be defined at one or more levels of abstraction. The prototype designs are characterized using one or more electronic design automation tools to generate precharacterization data. Precharacterization data and associated prototype designs are used either directly or indirectly in the system level design process.

ReportDOI
01 Jan 2005
TL;DR: A new hardware design methodology rooted in an abstraction of communication timing, which provides flexibly timed module interfaces and automatic generation of pipelined communication is proposed, allowing an entire system to be reused with automatic performance improvement on larger, next generation devices.
Abstract: RTL design methodologies are struggling to meet the challenges of modern, large system design. Their reliance on manually timed design with fully exposed device resources is laborious, restricts reuse, and is increasingly ineffective in an era of Moore's Law expansion and growing interconnect delay. We propose a new hardware design methodology rooted in an abstraction of communication timing, which provides flexibly timed module interfaces and automatic generation of pipelined communication. Our core approach is to replace inter-module wiring with streams, which are FIFO buffered channels. We develop a process network model for streaming systems (TDFPN) and a hardware description language with built in streams (TDF). We describe a complete synthesis methodology for mapping streaming applications to a commercial FPGA, with automatic generation of efficient hardware streams and module-side flow control. We use this methodology to compile seven multimedia applications to a Xilinx Virtex-II Pro FPGA, finding that stream support can be relatively inexpensive. We further propose a comprehensive, system-level optimization flow that uses information about streaming behavior to guide automatic communication buffering, pipelining, and placement. We discuss specialized stream support on reconfigurable, programmable platforms, with intent to provide better results and compile times than streaming on generic FPGAs. We also show how streaming can support an efficient abstraction of area, allowing an entire system to be reused with automatic performance improvement on larger, next generation devices.
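The abstract's central move, replacing inter-module wiring with FIFO-buffered streams plus module-side flow control, can be modeled in software. The sketch below is a loose behavioural analogue, not TDF or TDFPN themselves: the `Stream` class, its depth, and the doubling consumer are illustrative assumptions.

```python
from collections import deque

class Stream:
    """Bounded FIFO channel standing in for an inter-module stream.

    A module may push only when the FIFO has room (backpressure) and
    pop only when a token is available (stall), which is the flow
    control that makes module timing flexible.
    """

    def __init__(self, depth):
        self._fifo = deque()
        self._depth = depth

    def can_push(self):
        return len(self._fifo) < self._depth

    def can_pop(self):
        return bool(self._fifo)

    def push(self, token):
        assert self.can_push(), "backpressure: FIFO full"
        self._fifo.append(token)

    def pop(self):
        assert self.can_pop(), "stall: FIFO empty"
        return self._fifo.popleft()

def run(producer_tokens, depth=2):
    # Cooperative schedule: each "module" fires only when its stream
    # permits, so neither side needs to know the other's timing.
    chan = Stream(depth)
    pending = list(producer_tokens)
    out = []
    while pending or chan.can_pop():
        if pending and chan.can_push():
            chan.push(pending.pop(0))      # producer module
        if chan.can_pop():
            out.append(chan.pop() * 2)     # consumer doubles each token
    return out

result = run([1, 2, 3, 4])
```

Because both modules synchronize only through `can_push`/`can_pop`, either side could be re-pipelined or slowed without changing the other, which is the decoupling that lets the methodology retime and re-place modules automatically.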