
Showing papers on "Electronic design automation published in 2007"


01 Jan 2007
TL;DR: In this paper, the authors present the challenges faced by industry in system level design and propose a design methodology, platform-based design (PBD), that has the potential of addres- sing these challenges in a unified way.
Abstract: System-level design (SLD) is considered by many as the next frontier in electronic design automation (EDA). SLD means many things to different people since there is no wide agreement on a definition of the term. Academia, designers, and EDA experts have taken different avenues to attack the problem, for the most part springing from the basis of traditional EDA and trying to raise the level of abstraction at which integrated circuit designs are captured, analyzed, and synthesized. However, my opinion is that this is just the tip of the iceberg of a much bigger problem that is common to all system industries. In particular, I believe that notwithstanding the obvious differences in the vertical industrial segments (for example, consumer, automotive, computing, and communication), there is a common underlying basis that can be explored. This basis may yield a novel EDA industry and even a novel engineering field that could bring substantial productivity gains not only to the semiconductor industry but to all system industries including industrial and automotive, communication and computing, avionics and building automation, space and agriculture, and health and security, in short, a real technical renaissance. In this paper, I present the challenges faced by industry in system-level design. Then, I propose a design methodology, platform-based design (PBD), that has the potential of addressing these challenges in a unified way. Further, I place methodology and tools available today in the PBD framework and present a tool environment, Metropolis, that supports PBD and that can be used to integrate available tools and methods together with two examples of its application to separate industrial domains.

331 citations


Journal ArticleDOI
30 Apr 2007
TL;DR: A design methodology, platform-based design (PBD), is proposed that has the potential of addressing system level design challenges in a unified way and a tool environment is presented, Metropolis, that supports PBD and can be used to integrate available tools and methods.
Abstract: System-level design (SLD) is considered by many as the next frontier in electronic design automation (EDA). SLD means many things to different people since there is no wide agreement on a definition of the term. Academia, designers, and EDA experts have taken different avenues to attack the problem, for the most part springing from the basis of traditional EDA and trying to raise the level of abstraction at which integrated circuit designs are captured, analyzed, and synthesized. However, my opinion is that this is just the tip of the iceberg of a much bigger problem that is common to all system industries. In particular, I believe that notwithstanding the obvious differences in the vertical industrial segments (for example, consumer, automotive, computing, and communication), there is a common underlying basis that can be explored. This basis may yield a novel EDA industry and even a novel engineering field that could bring substantial productivity gains not only to the semiconductor industry but to all system industries including industrial and automotive, communication and computing, avionics and building automation, space and agriculture, and health and security, in short, a real technical renaissance. In this paper, I present the challenges faced by industry in system-level design. Then, I propose a design methodology, platform-based design (PBD), that has the potential of addressing these challenges in a unified way. Further, I place methodology and tools available today in the PBD framework and present a tool environment, Metropolis, that supports PBD and that can be used to integrate available tools and methods together with two examples of its application to separate industrial domains.

300 citations


Journal ArticleDOI
30 Apr 2007
TL;DR: The paper describes the recent state of the art in hierarchical analog synthesis, with a strong emphasis on associated techniques for computer-aided model generation and optimization, and surveys recent advances in analog design tools that specifically deal with the hierarchical nature of practical analog and RF systems.
Abstract: The paper describes the recent state of the art in hierarchical analog synthesis, with a strong emphasis on associated techniques for computer-aided model generation and optimization. Over the past decade, analog design automation has progressed to the point where there are industrially useful and commercially available tools at the cell level-tools for analog components with 10-100 devices. Automated techniques for device sizing, for layout, and for basic statistical centering have been successfully deployed. However, successful component-level tools do not scale trivially to system-level applications. While a typical analog circuit may require only 100 devices, a typical system such as a phase-locked loop, data converter, or RF front-end might assemble a few hundred such circuits, and comprise 10 000 devices or more. And unlike purely digital systems, mixed-signal designs typically need to optimize dozens of competing continuous-valued performance specifications, which depend on the circuit designer's abilities to successfully exploit a range of nonlinear behaviors across levels of abstraction from devices to circuits to systems. For purposes of synthesis or verification, these designs are not tractable when considered "flat." These designs must be approached with hierarchical tools that deal with the system's intrinsic design hierarchy. This paper surveys recent advances in analog design tools that specifically deal with the hierarchical nature of practical analog and RF systems. We begin with a detailed survey of algorithmic techniques for automatically extracting a suitable nonlinear macromodel from a device-level circuit. Such techniques are critical to both verification and synthesis activities for complex systems. 
We then survey recent ideas in hierarchical synthesis for analog systems and focus in particular on numerical techniques for handling the large number of degrees of freedom in these designs and for exploring the space of performance tradeoffs early in the design process. Finally, we briefly touch on recent ideas for accommodating models of statistical manufacturing variations in these tools and flows.

227 citations


Proceedings ArticleDOI
09 Jun 2007
TL;DR: This paper analyzes the SPEC CPU2006 benchmarks using performance counter based experimentation from several state of the art systems, and uses statistical techniques such as principal component analysis and clustering to draw inferences on the similarity of the benchmarks and the redundancy in the suite and arrive at meaningful subsets.
Abstract: The recently released SPEC CPU2006 benchmark suite is expected to be used by computer designers and computer architecture researchers for pre-silicon early design analysis. Partial use of benchmark suites by researchers, due to simulation time constraints, compiler difficulties, or library or system call issues is likely to happen; but a random subset can lead to misleading results. This paper analyzes the SPEC CPU2006 benchmarks using performance counter based experimentation from several state of the art systems, and uses statistical techniques such as principal component analysis and clustering to draw inferences on the similarity of the benchmarks and the redundancy in the suite and arrive at meaningful subsets. The SPEC CPU2006 benchmark suite contains several programs from areas such as artificial intelligence and includes none from the electronic design automation (EDA) application area. Hence there is a concern about the application balance in the suite. An analysis from the perspective of fundamental program characteristics shows that the included programs offer characteristics broader than the EDA programs' space. A subset of 6 integer programs and 8 floating point programs can yield most of the information from the entire suite.
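The subsetting pipeline described above (per-benchmark characteristic vectors, principal component analysis for dimensionality reduction, clustering to pick representatives) can be sketched as follows. The data matrix here is synthetic, standing in for real performance-counter measurements, and the cluster count is arbitrary:

```python
import numpy as np

# Hypothetical benchmark-by-characteristic matrix: rows are benchmarks,
# columns are measured program characteristics (e.g. IPC, miss rates).
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 6))

# Standardize, then PCA via SVD: project benchmarks onto the leading
# principal components, which retain most of the variance.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
Z = Xs @ Vt[:k].T            # benchmarks in k-dimensional PC space

def kmeans(Z, n_clusters, iters=50, seed=1):
    # Plain k-means in PC space (Lloyd's algorithm).
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = Z[labels == c].mean(axis=0)
    return labels, centers

labels, centers = kmeans(Z, n_clusters=3)
# The benchmark nearest each centroid serves as the subset representative.
reps = [int(np.argmin(((Z - centers[c]) ** 2).sum(-1))) for c in range(3)]
print("representative benchmarks:", sorted(set(reps)))
```

The paper's actual characteristics, component counts, and cluster counts differ; the point is only the shape of the analysis.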

199 citations


Journal ArticleDOI
01 Feb 2007
TL;DR: A memetic algorithm (MA) for a nonslicing and hard-module VLSI floorplanning problem is presented that uses an effective genetic search method to explore the search space and an efficient local search methods to exploit information in the search region.
Abstract: Floorplanning is an important problem in very large scale integrated-circuit (VLSI) design automation as it determines the performance, size, yield, and reliability of VLSI chips. From the computational point of view, VLSI floorplanning is an NP-hard problem. In this paper, a memetic algorithm (MA) for a nonslicing and hard-module VLSI floorplanning problem is presented. This MA is a hybrid genetic algorithm that uses an effective genetic search method to explore the search space and an efficient local search method to exploit information in the search region. The exploration and exploitation are balanced by a novel bias search strategy. The MA has been implemented and tested on popular benchmark problems. Experimental results show that the MA can quickly produce optimal or nearly optimal solutions for all the tested benchmark problems.
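The memetic structure, genetic exploration combined with local-search exploitation of each offspring, can be illustrated on a toy bit-string objective. A real floorplanner would instead decode a representation such as a sequence pair into module positions and minimize area/wirelength; everything below is a simplified stand-in:

```python
import random

def fitness(ind):
    # Toy objective: count of 1-bits (stand-in for a floorplan cost).
    return sum(ind)

def local_search(ind):
    # Exploitation step: greedy single-bit hill climbing.
    best = ind[:]
    for i in range(len(best)):
        trial = best[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(best):
            best = trial
    return best

def memetic(n_bits=20, pop_size=10, gens=30, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        # Exploration: tournament selection, one-point crossover, mutation.
        def pick():
            a, b = rng.sample(pop, 2)
            return max(a, b, key=fitness)
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:
                j = rng.randrange(n_bits)
                child[j] ^= 1
            children.append(local_search(child))  # the memetic refinement
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic()
print(fitness(best))
```

The paper's bias strategy for balancing the two phases is not modeled here; this only shows where local search slots into the genetic loop.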

159 citations


Journal ArticleDOI
TL;DR: A novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically and obtains a 4x footprint reduction compared to traditional CMOS-based 2D FPGAs.
Abstract: In this paper, we introduce a novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into the CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4x footprint reduction compared to traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6x with a small power overhead compared to the traditional 2D FPGA architecture.

133 citations


Journal ArticleDOI
TL;DR: This survey addresses the concept of network in three different contexts representing the deterministic, probabilistic, and statistical physics-inspired design paradigms by considering the natural representation of networks as graphs.
Abstract: The Chip Is the Network: Towards a Science of Network-on-Chip Design reviews the major design methodologies that have had a profound effect on designing future Network-on-Chip (NoC) architectures. More precisely, it addresses the problem of NoC design in the deterministic context, where the application and the architecture are modeled as graphs with worst-case type of information about the parameters of the components influencing the network traffic. Rather than simply enumerating the proposed approaches, it takes a formal approach and also discusses the main features of each proposed solution. It then goes one step further by considering the design of NoCs with partial information available (primarily under the Markovian assumption) about the application and the architecture. Similarly to the deterministic context, it discusses various probabilistic approaches to NoC design and points out their advantages and limitations. Last, but not least, it looks at emerging approaches inspired from statistical physics and information theory. The formal approach adopted means the network concept is addressed in the most general context, pointing out the main limitations of the proposed solutions, and suggesting a few open-ended problems. The Chip Is the Network: Towards a Science of Network-on-Chip Design is an invaluable reference for the NoC research community and, indeed, anyone from CAD/VLSI academe or industry with an interest in this emerging paradigm.

95 citations


Journal ArticleDOI
30 Apr 2007
TL;DR: This paper discusses some newer techniques deployed within IBM's physical synthesis tool, PDS, that significantly improve throughput, focusing on some of the biggest contributors to runtime: placement, legalization, buffering, and electrical correction.
Abstract: The traditional purpose of physical synthesis is to perform timing closure, i.e., to create a placed design that meets its timing specifications while also satisfying electrical, routability, and signal integrity constraints. In modern design flows, physical synthesis tools hardly ever achieve this goal in their first iteration. The design team must iterate by studying the output of the physical synthesis run, then potentially massage the input, e.g., by changing the floorplan, timing assertions, pin locations, logic structures, etc., in order to hopefully achieve a better solution for the next iteration. The complexity of physical synthesis means that systems can take days to run on designs with multimillions of placeable objects, which severely hurts design productivity. This paper discusses some newer techniques that have been deployed within IBM's physical synthesis tool, PDS, that significantly improve throughput. In particular, we focus on some of the biggest contributors to runtime: placement, legalization, buffering, and electrical correction, and present techniques that generate significant turnaround time improvements.

80 citations


Journal ArticleDOI
TL;DR: The approach in this paper develops efficient techniques for constraining and reconstructing a product represented by free-form surfaces around reference objects with different shapes, so that this design automation problem can be fundamentally solved.
Abstract: This paper addresses the problem of volume parameterization that serves as the geometric kernel for design automation of customized free-form products. The purpose of volume parameterization is to establish a mapping between the spaces that are near to two reference free-form models, so that the shape of a product presented in free-form surfaces can be transferred from the space around one reference model to another reference model. The mapping is expected to keep the spatial relationship between the product model and the reference models as much as possible. We separate the mapping into rigid body transformation and elastic warping. The rigid body transformation is determined by anchor points defined on the reference models using a least-squares fitting approach. The elastic warping function is more difficult to obtain, especially when the meshes of the reference objects are inconsistent. A three-stage approach is conducted. First, a coarse-level warping function is computed based on the anchor points. In the second phase, the topology consistency is maintained through a surface fitting process. Finally, the mapping of volume parameterization is established on the surface fitting result. Compared to previous methods, the approach presented here is more efficient. Also, benefiting from the separation of rigid body transformation and elastic warping, the transient shape of a transferred product does not exhibit unexpected distortion. At the end of this paper, various industry applications of our approach in design automation are demonstrated.
Note to Practitioners: The motivation of this research is to develop a geometric solution for the design automation of customized free-form objects, which can greatly improve the efficiency of design processes in various industries involving customized products (e.g., garment design, toy design, jewel design, shoe design, and glasses design). The products in these industries usually have a very complex geometric shape (represented by free-form surfaces) and are driven not by a parameter table but by a reference object with free-form shapes (e.g., mannequin, toy, wrist, foot, and head models). After carefully designing a product around one particular reference model, it is desirable to have an automated tool for "grading" this product to other shape-changed reference objects while retaining the original spatial relationship between the product and reference models. This is called the design automation of a customized free-form object. Current commercial 3-D/2-D computer-aided design (CAD) systems, developed for the design automation of models with regular shapes, cannot support design automation in this manner. The approach in this paper develops efficient techniques for constraining and reconstructing a product represented by free-form surfaces around reference objects with different shapes, so that this design automation problem can be fundamentally solved. Although the approach has not been integrated into commercial CAD systems, the results based on our preliminary implementation are encouraging: the spatial relationship between the reference models and the customized products is well preserved.
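The rigid-body stage described above, fitting anchor points in the least-squares sense, corresponds to the classical Kabsch/Procrustes solution. A minimal sketch with synthetic anchor points (the paper's elastic warping stage is not modeled here):

```python
import numpy as np

def rigid_fit(P, Q):
    # Least-squares rigid registration (Kabsch): find rotation R and
    # translation t such that R @ P_i + t best matches Q_i.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic anchors: rotate and translate a point set, then recover the map.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(P, Q)
print(np.allclose(P @ R.T + t, Q))
```

With noisy or non-corresponding anchors the fit is least-squares optimal rather than exact; that is the setting the paper's pipeline operates in.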

70 citations


Proceedings ArticleDOI
05 Nov 2007
TL;DR: A novel reconfigurable architecture, named 3D nFPGA, which utilizes 3D integration techniques and new nanoscale materials synergistically and obtains a 4.5X footprint reduction compared to traditional CMOS-based 2D FPGAs.
Abstract: In this paper, we introduce a novel reconfigurable architecture, named 3D nFPGA, which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS-nano hybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into the CMOS fabrication process. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4.5X footprint reduction compared to traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6X with a small power overhead compared to the CMOS 2D FPGA architecture.

63 citations


Journal ArticleDOI
TL;DR: An intelligent routing system for automating design of electrical wiring harnesses and pipes in aircraft provides structure to the routing design process and has potential to deliver significant savings in time and cost.
Abstract: This paper discusses the development of an intelligent routing system for automating design of electrical wiring harnesses and pipes in aircraft. The system employs knowledge based engineering (KBE) methods and technologies for capturing and implementing rules and engineering knowledge relating to the routing process. The system reads a mesh of three dimensional structure and obstacles falling within a given search space and connects source and target terminals satisfying a knowledge base of design rules and best practices. Routed paths are output as computer aided design (CAD) readable geometry, and a finite element (FE) mesh consisting of geometry, routed paths and a knowledge layer providing detail of the rules and knowledge implemented in the process. Use of this intelligent routing system provides structure to the routing design process and has potential to deliver significant savings in time and cost.

Journal ArticleDOI
30 Apr 2007
TL;DR: The basic periodic steady-state problem is examined and examples and linear algebra abstractions are provided to demonstrate connections between seemingly dissimilar methods and to try to provide a more general framework for fast methods than the standard time-versus-frequency domain characterization of finite-difference, basis-collocation, and shooting methods.
Abstract: Designers of RF circuits such as power amplifiers, mixers, and filters make extensive use of simulation tools which perform periodic steady-state analysis and its extensions, but until the mid-1990s, the computational costs of these simulation tools restricted designers from simulating the behavior of complete RF subsystems. The introduction of fast matrix-implicit iterative algorithms completely changed this situation, and extensions of these fast methods are providing tools which can perform periodic, quasi-periodic, and periodic noise analysis of circuits with thousands of devices. Even though there are a number of research groups continuing to develop extensions of matrix-implicit methods, there is still no compact characterization which introduces the novice researcher to the fundamental issues. In this paper, we examine the basic periodic steady-state problem and provide both examples and linear algebra abstractions to demonstrate connections between seemingly dissimilar methods and to try to provide a more general framework for fast methods than the standard time-versus-frequency domain characterization of finite-difference, basis-collocation, and shooting methods.
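A shooting method, one of the families the paper connects, searches for an initial state that the transient solver maps back onto itself after one period. A minimal one-dimensional sketch on a sinusoidally driven RC node, using Newton with a finite-difference sensitivity (illustrative only; the paper's methods are matrix-implicit and far more general):

```python
import math

# Periodic steady state of  v' = -v/tau + cos(2*pi*t/T):
# find v(0) with v(T) = v(0), i.e. a fixed point of the period map.
tau, T, steps = 0.3, 1.0, 2000

def transit(v0):
    # One period of forward Euler integration (the inner transient solve).
    v, dt = v0, T / steps
    for n in range(steps):
        t = n * dt
        v += dt * (-v / tau + math.cos(2 * math.pi * t / T))
    return v

v0, eps = 0.0, 1e-6
for _ in range(10):
    # Newton on g(v0) = transit(v0) - v0, with dg/dv0 estimated by
    # finite differences (matrix-free in this 1-D case).
    g = transit(v0) - v0
    dg = (transit(v0 + eps) - (v0 + eps) - g) / eps
    v0 -= g / dg

print(abs(transit(v0) - v0) < 1e-9)   # state repeats after one period
```

For a linear circuit like this the period map is affine, so Newton converges essentially in one step; nonlinear circuits need the fast sensitivity machinery the paper surveys.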

Journal ArticleDOI
TL;DR: A novel method for constructing arbitrarily large circuits that have known optimal solutions after technology mapping is presented and it is shown that although leading FPGA technology-mapping algorithms can produce close to optimal solutions, the results from the entire logic-synthesis flow are far from optimal.
Abstract: Field-programmable gate-array (FPGA) logic synthesis and technology mapping have been studied extensively over the past 15 years. However, progress within the last few years has slowed considerably (with some notable exceptions). It seems natural to then question whether the current logic-synthesis and technology-mapping algorithms for FPGA designs are producing near-optimal solutions. Although there are many empirical studies that compare different FPGA synthesis/mapping algorithms, little is known about how far these algorithms are from the optimal (recall that both logic-optimization and technology-mapping problems are NP-hard, if we consider area optimization in addition to delay/depth optimization). In this paper, we present a novel method for constructing arbitrarily large circuits that have known optimal solutions after technology mapping. Using these circuits and their derivatives (called Logic synthesis Examples with Known Optimal (LEKO) and Logic synthesis Examples with Known Upper bounds (LEKU), respectively), we show that although leading FPGA technology-mapping algorithms can produce close to optimal solutions, the results from the entire logic-synthesis flow (logic optimization + mapping) are far from optimal. The LEKU circuits were constructed to show where the logic synthesis flow can be improved, while the LEKO circuits specifically deal with the performance of the technology mapping. The best industrial and academic FPGA synthesis flows are around 70 times larger in terms of area on average and, in some cases, as much as 500 times larger on LEKU examples. These results clearly indicate that there is much room for further research and improvement in FPGA synthesis.

Journal ArticleDOI
TL;DR: This paper presents a simulation-based design approach based on the analysis of the actual traffic trace of the application, considering local variations in traffic rates, temporal overlap among traffic streams, and criticality of traffic streams that leads to large reduction in communication architecture power consumption and total wirelength.
Abstract: Designing a power-efficient interconnection architecture for multiprocessor systems-on-chips (MPSoCs) satisfying the application performance constraints is a nontrivial task. In order to meet the tight time-to-market constraints and to effectively handle the design complexity, it is essential to provide a computer-aided design tool support for automating this task. In this paper, we address the issue of "application-specific design of optimal crossbar architecture" satisfying the performance requirements of the application and optimal binding of the cores onto the crossbar resources. We present a simulation-based design approach that is based on the analysis of the actual traffic trace of the application, considering local variations in traffic rates, temporal overlap among traffic streams, and criticality of traffic streams. Our approach is physical design aware, where the wiring complexity of the crossbar architecture is also considered during the design process. This leads to detecting timing violations on the wires early in the design cycle and to having accurate estimates of the power consumption on the wires. We apply our methodology onto several MPSoC designs, and the synthesized crossbar platforms are validated for performance by cycle-accurate SystemC simulation of the designs. The crossbar matrix power consumption values are based on the synthesis of the register transfer level models of the designs, obtained using industry standard tools. The experimental case studies show large reduction in communication architecture power consumption (45.3% on average) and total wirelength (38% on average) for the MPSoC designs when compared with traditional design approaches. The synthesized crossbar designs also lead to large reduction in transaction latencies (up to 7 ) when compared with the existing design approaches.

Journal ArticleDOI
TL;DR: In this article, an analytical frequency-dependent resistance model for integrated spiral inductors is proposed, which provides a fast alternative to field solver-based approaches with typical errors of less than 2.6 percent while surpassing the accuracy of several other analytical modeling techniques by an order of magnitude.
Abstract: For integrated spiral inductor synthesis, designers and design automation tools require efficient modeling techniques during the initial design space exploration process. In this paper, we introduce an analytical frequency-dependent resistance model for integrated spiral inductors. Based on our resistance formulation, we have developed a systematic technique for creating wide-band circuit models for accurate time domain simulation. The analytical resistance model provides a fast alternative to field solver-based approaches with typical errors of less than 2.6 percent while surpassing the accuracy of several other analytical modeling techniques by an order of magnitude. Furthermore, the wide-band circuit generation technique captures the frequency-dependent resistance of the inductor with typical errors of less than 3.2 percent.
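The paper's analytical model is not reproduced here, but the physics it targets, winding resistance rising as current crowds into a shrinking skin depth, can be illustrated with the textbook skin-depth estimate. Geometry and the simple conducting-shell approximation below are illustrative assumptions, not the paper's formulation:

```python
import math

rho = 1.68e-8          # copper resistivity, ohm*m
mu = 4e-7 * math.pi    # permeability of free space, H/m
w, t, length = 10e-6, 2e-6, 1e-3   # hypothetical trace: width, thickness, length (m)

def resistance(f):
    # Rough frequency-dependent resistance of a rectangular trace:
    # current is confined to a shell of skin depth delta around the perimeter.
    if f == 0:
        return rho * length / (w * t)
    delta = math.sqrt(rho / (math.pi * f * mu))   # skin depth
    if delta >= t / 2:
        # Skin depth exceeds half the thickness: effectively uniform current.
        return rho * length / (w * t)
    shell = w * t - (w - 2 * delta) * (t - 2 * delta)  # conducting cross-section
    return rho * length / shell

for f in (0.0, 1e9, 10e9):
    print(f"{f:.0e} Hz: {resistance(f) * 1e3:.2f} mOhm")
```

A field solver or the paper's model would also capture proximity and current-crowding effects between turns, which this single-conductor estimate ignores.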

Proceedings ArticleDOI
04 Jun 2007
TL;DR: A silicon methodology to isolate silicon speedpath environments and feed these into a simulation framework to temporally and spatially isolate specific speedpaths in order to model and understand the real effects is described.
Abstract: Timing, test, reliability, and noise are modeled and abstracted in our design and verification flows. Specific EDA algorithms are then designed to work with these abstracted models, often in isolation from other effects. However, tighter design margins and growing reliability issues have increased the need for accurate models and algorithms. We propose utilizing silicon data to tune and improve the EDA tools and flows. In this paper we describe a silicon methodology to isolate silicon speedpath environments and feed these into a simulation framework to temporally and spatially isolate specific speedpaths in order to model and understand the real effects. This is done using accurate electrical speedpath modeling techniques which may be used to tune the accuracy and correlation of the design models. The effort required to distinguish the many different electrical effects is also outlined.

Proceedings ArticleDOI
04 Jun 2007
TL;DR: This paper describes the first fully-automated desynchronization design flow, based only on contemporary synchronous EDA tools and a new point tool for performing the des synchronization transformation, and indicates that desynchronized circuits exhibit increased variability tolerance and better average case performance, for a small area and power overhead.
Abstract: Variability is one of the fundamental problems faced by nano-scale electronic circuits and is expected to become even worse as process technology scales. Desynchronization is a design methodology which converts a synchronous gate-level circuit into a more robust asynchronous one. In this paper, we describe the first fully-automated desynchronization design flow, based only on contemporary synchronous EDA tools and a new point tool for performing the desynchronization transformation. The flow was used to implement, down to mask layout level, a simple pipelined processor in a 90 nm industrial library. We show that the desynchronization methodology can be easily integrated into contemporary industrial EDA flows. Results on the design implemented indicate that desynchronized circuits exhibit increased variability tolerance and better average-case performance, for a small area and power overhead.

Journal ArticleDOI
TL;DR: The papers which appear in this second part of this special section cover everything from protocol design and evaluation to the design and assessment of system-level solutions for wireless sensor networks in industrial automation.
Abstract: The three papers in this special section focus on wireless technologies in factory and industrial automation. The papers which appear in this second part cover everything from protocol design and evaluation to the design and assessment of system-level solutions for wireless sensor networks in industrial automation.

Patent
27 Apr 2007
TL;DR: In this paper, the authors present an approach to convert a circuit design from a synchronous representation to an asynchronous representation without interaction or redesign without knowledge of the underlying asynchronous architecture and hardware.
Abstract: Methods (700, 800, 900) and systems (Fig. 1) automate an approach to convert a circuit design from a synchronous representation (Fig. 4A) to an asynchronous representation (Fig. 4B) without interaction or redesign. Conversion of representations of synchronous circuit designs (101) to and from representations of asynchronous circuit designs (104) enable traditional electronic design automation tools to process asynchronous designs while allowing synchronous designs to be implemented using asynchronous hardware solutions. Feedback to synchronous design tools (105) in synchronous representation enables optimization while minimizing the need for knowledge of the underlying asynchronous architecture and hardware.

Journal ArticleDOI
30 Apr 2007
TL;DR: In this article, the verification issues faced by the A/RF designer are described and a verification methodology is presented to address these challenges, and the concept of an analog verification engineer is established.
Abstract: Meeting performance specifications in the design of analog and RF (A/RF) blocks and integrated circuits (IC) continues to require a high degree of skill, creativity, and expertise. However, today's A/RF designers are increasingly faced with a new challenge. Functional complexity in terms of modes of operation, extensive digital calibration, and architectural algorithms is now overwhelming traditional A/RF design methodologies. Functionally verifying A/RF designs is a daunting task requiring a rigorous methodology. As occurred in digital design, analog verification is becoming a separate and critical task. This paper describes the verification issues faced by the A/RF designer and presents a verification methodology to address these challenges. It presents a systematic approach to A/RF verification, the concept of an analog verification engineer, how to establish the methodology, and concludes with an example.

Book
14 Sep 2007
TL;DR: By treating digital logic as part of embedded systems design, this book provides an understanding of the hardware needed in the analysis and design of systems comprising both hardware and software components.
Abstract: Digital Design: An Embedded Systems Approach Using Verilog provides a foundation in digital design for students in computer engineering, electrical engineering, and computer science courses. It takes an up-to-date and modern approach of presenting digital logic design as an activity in a larger systems design context. Rather than focus on aspects of digital design that have little relevance in a realistic design context, this book concentrates on modern and evolving knowledge and design skills. Hardware description language (HDL)-based design and verification is emphasized; Verilog examples are used extensively throughout. By treating digital logic as part of embedded systems design, this book provides an understanding of the hardware needed in the analysis and design of systems comprising both hardware and software components. The book presents digital logic design as an activity in a larger systems design context; features extensive use of Verilog examples to demonstrate HDL usage at the abstract behavioural level and register transfer level, as well as for low-level verification and verification environments; and includes worked examples throughout to enhance the reader's understanding and retention of the material. A companion Web site includes links to vendor tools, labs, and tutorials; links to CAD tools for FPGA design from Synplicity, Mentor Graphics, and Xilinx; Verilog source code for all the examples in the book; lecture slides; laboratory projects; and solutions to exercises.

Journal ArticleDOI
TL;DR: This paper presents three evolutionary releases of an FPGA-based remote laboratory and discusses the didactical and technical motivations behind each release, aiming to reduce the overhead of setting up and operating a laboratory environment where designers and students can use FPGA prototyping to validate their designs.
Abstract: The design of digital electronic systems for industrial applications can benefit in many ways from the prototyping capabilities of field-programmable gate array (FPGA) platforms. This paper presents three evolutionary releases of an FPGA-based remote laboratory and discusses the didactical and technical motivations behind each release, aiming to reduce the overhead of setting up and operating a laboratory environment where designers and students can use FPGA prototyping to validate their designs. To achieve that, a number of abstraction layers were introduced, allowing configuration and data processing in remote FPGA platforms, as well as integrating such platforms within a simulation environment. The proposed approach supported a number of projects in which groups of designers and students could specify, refine, and prototype electronic systems using a pool of remotely available FPGA platforms.

Journal ArticleDOI
01 Sep 2007
TL;DR: No survey of such a broad field can be complete; however, the material presented in the paper is a summary of state-of-the-art CI concepts and approaches in product design engineering.
Abstract: Product design engineering is undergoing a transformation from an informal and largely experience-based discipline to a science-based domain. Computational intelligence (CI) offers models and algorithms that can contribute greatly to design formalization and automation. This paper surveys CI concepts and approaches applicable to product design engineering. A taxonomy of the surveyed literature is presented according to the generally recognized areas in both product design engineering and CI. Some research issues that arise from the broad perspective presented in the paper have been signaled but not fully pursued. No survey of such a broad field can be complete; however, the material presented in the paper is a summary of state-of-the-art CI concepts and approaches in product design engineering.

Journal ArticleDOI
TL;DR: A general, scalable technique is developed for the model checking-based methodology and implemented in a tool called Scalable, Extensible Tool for Reliability Analysis (SETRA), which integrates the scalable model checking-based algorithm into the conventional computer-aided design circuit design flow.
Abstract: The rapid development of CMOS and non-CMOS nanotechnologies has opened up new possibilities and introduced new challenges for circuit design. One of the main challenges is in designing reliable circuits from defective nanoscale devices. Hence, there is a need to develop methodologies to accurately evaluate circuit reliability. In recent years, a number of reliability evaluation methodologies based on probabilistic model checking, probabilistic transfer matrices, probabilistic gate models, etc., have been proposed. Scalability has been a concern in the applicability of these methodologies to the reliability analysis of large circuits. In this paper, we develop a general, scalable technique for these reliability evaluation methodologies. Specifically, an algorithm is developed for the model checking-based methodology and implemented in a tool called Scalable, Extensible Tool for Reliability Analysis (SETRA). SETRA integrates the scalable model checking-based algorithm into the conventional computer-aided design circuit design flow. The paper also discusses ways to modify the scalable algorithm for the other reliability estimation methodologies and plug them into SETRA's extensible framework. Our preliminary experiments show how SETRA can be used effectively to evaluate and compare the robustness of different circuit designs.
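One of the evaluation methodologies mentioned above, probabilistic transfer matrices (PTMs), can be sketched compactly (a minimal illustration of the PTM idea, not SETRA's algorithm; the gate model and error probability are our own toy assumptions):

```python
from itertools import product

def noisy_ptm(truth, eps, n_in):
    """PTM of a single-output gate: row = input pattern, column = output value.
    With probability eps the gate emits the flipped (incorrect) output."""
    rows = []
    for bits in product((0, 1), repeat=n_in):
        ideal = truth(*bits)
        row = [0.0, 0.0]
        row[ideal] = 1 - eps
        row[1 - ideal] = eps
        rows.append(row)
    return rows

def matmul(a, b):
    """Serial gate composition = ordinary matrix product of the PTMs.
    (Parallel composition would use the Kronecker product.)"""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Two noisy inverters in series: two errors cancel, so the reliability of the
# chain is eps^2 + (1 - eps)^2 for either input value.
eps = 0.1
inv = noisy_ptm(lambda x: 1 - x, eps, 1)
chain = matmul(inv, inv)
reliability = chain[0][0]
```

Scalability concerns arise because the PTM of an n-input circuit has 2^n rows, which is exactly what techniques like the one in this paper aim to tame.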

Journal ArticleDOI
TL;DR: The toolbox is a suite of design tools to support users from conceptual design to actual application of tripod-based parallel kinematic machines (PKMs), and includes some innovative methodologies, such as a forward kinematics solver, the concept of joint workspace, on-line monitoring based on forward kinematics, and the concept of motion purity.
Abstract: This paper presents the concept and implementation of a toolbox for the design and application of tripod-based parallel kinematic machines (PKMs). The toolbox is a suite of design tools to support users from conceptual design to actual application of tripod-based PKMs. These design tools have been individually developed in different languages and development environments, and are integrated seamlessly using a JAVA-based platform. Users can access all the design tools through a friendly graphical user interface (GUI). It is the first computer-aided design system specifically developed for tripod-based PKMs. The toolbox includes some innovative methodologies, such as a forward kinematics solver, the concept of joint workspace, on-line monitoring based on forward kinematics, and the concept of motion purity. The paper gives an overview of the toolbox architecture and some key technologies.

Book
10 Aug 2007
TL;DR: Four design flows are presented that can tackle large designs without significant changes with respect to the synchronous design flow, and offer a trade-off from very low overhead, almost-synchronous implementations to very high-performance, extremely robust dual-rail pipelines.
Abstract: The number of gates on a chip is quickly growing toward and beyond the one-billion mark. Keeping all the gates running at the beat of a single clock, or of a few rationally related clocks, is becoming impossible. In static timing analysis, process variations and signal integrity issues stretch the timing margins to the point where they become too conservative and result in significant overdesign. The importance and difficulty of such problems push some developers to once again turn to asynchronous alternatives. However, the electronics industry for the most part is still reluctant to adopt asynchronous design (with a few notable exceptions) due to a common belief that we still lack commercial-quality Electronic Design Automation tools (similar to the synchronous RTL-to-GDSII flow) for asynchronous circuits. The purpose of this paper is to counteract this view by presenting design flows that can tackle large designs without significant changes with respect to the synchronous design flow. We limit ourselves to the four design flows that we believe to be closest to this goal. We start with the Tangram flow, because it is the most commercially proven and one of the oldest from a methodological point of view. The other three flows (Null Convention Logic, de-synchronization, and gate-level pipelining) can be considered together as asynchronous re-implementations of synchronous (RTL- or gate-level) specifications. Their main common idea is replacing the global clock with local synchronizations. Their most important aspect is that they open the possibility of implementing large legacy synchronous designs in an almost "push button" manner, where all the asynchronous machinery is hidden, so that synchronous RTL designers do not need to be re-educated. These three flows offer a trade-off from very low overhead, almost-synchronous implementations to very high-performance, extremely robust dual-rail pipelines.

Proceedings ArticleDOI
04 Jun 2007
TL;DR: A mechanism is presented to construct temporal assertions for models at various abstraction levels and to reuse those assertions on models at other abstraction levels.
Abstract: SoC-based system developments commonly employ ESL design methodologies and utilize multiple levels of abstract models to provide feasibility-study models for architects and development platforms for software engineers. Such models evolve into finer-grained models as development moves forward. The correctness of these models, coupled with the ability of having a temporal debug environment to identify and fix model issues, is critical for both the hardware and software development efforts that make use of such models. This paper presents a mechanism to construct temporal assertions for models at various abstraction levels and to reuse the assertions on models at different abstraction levels.
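The reuse idea can be illustrated with a toy temporal assertion and an adapter that lifts a finer-grained trace into the assertion's event alphabet (the property, event names, and signal names below are our own assumptions, not the paper's mechanism):

```python
def assert_req_then_ack(events, max_gap=4):
    """Temporal assertion over an abstract event trace: every 'req' must be
    followed by an 'ack' within max_gap subsequent events."""
    pending = None
    for i, ev in enumerate(events):
        if ev == "req":
            pending = i
        elif ev == "ack":
            pending = None
        if pending is not None and i - pending > max_gap:
            return False          # ack came too late
    return pending is None        # no request may be left outstanding

def lift_cycle_trace(cycles):
    """Adapter for a finer abstraction level: maps a cycle-accurate trace
    (one dict of sampled signals per clock) into the event alphabet the
    assertion was written against, so the same property checks both models."""
    events = []
    for c in cycles:
        if c.get("req") and c.get("valid"):
            events.append("req")
        if c.get("ack"):
            events.append("ack")
    return events
```

The assertion is written once, at the abstract level; only the adapter changes as the model is refined.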

Proceedings ArticleDOI
11 Mar 2007
TL;DR: This work addresses the problem of combinational equivalence checking for threshold circuits and proposes a new algorithm to obtain a compact functional representation of threshold elements; this polynomial-time algorithm is used to develop a new methodology to verify threshold circuits.
Abstract: Threshold logic is gaining prominence as an alternative to Boolean logic. The main reason for this trend is the availability of devices that implement these circuits efficiently (current-mode, differential-mode circuits), as well as the promise they hold for future nanodevices (RTDs, SETs, QCAs, and other nanoscale devices). This has generated renewed interest in the design automation community in building efficient CAD tools for threshold logic. Recently, a lot of work has been done to synthesize threshold logic circuits. So far, there has been no efficient method to verify the synthesized circuits. In this work, we address the problem of combinational equivalence checking for threshold circuits. We propose a new algorithm to obtain a compact functional representation of threshold elements. We give the proof of correctness and analyze its runtime complexity. We use this polynomial-time algorithm to develop a new methodology to verify threshold circuits. We report the results of our experiments comparing the proposed methodology to the naive approach. We achieve up to a 189X improvement in run time (23X on average), and could verify circuits that the naive approach could not.
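For readers unfamiliar with threshold elements, the object being verified is simple to state: a gate fires when a weighted sum of its inputs reaches a threshold. The sketch below shows a threshold element and the naive exhaustive equivalence check (the exponential baseline the paper's compact representation is designed to avoid; the example gates are ours):

```python
from itertools import product

def threshold_gate(weights, T):
    """A threshold element: output 1 iff the weighted input sum reaches T."""
    return lambda bits: int(sum(w * b for w, b in zip(weights, bits)) >= T)

def equivalent(f, g, n):
    """Naive combinational equivalence check over all 2^n input patterns --
    exactly the blow-up a polynomial-time functional representation avoids."""
    return all(f(bits) == g(bits) for bits in product((0, 1), repeat=n))

# 3-input majority is the threshold element with weights [1, 1, 1], T = 2.
majority3 = lambda bits: int(sum(bits) >= 2)
```

Raising the threshold to 3 turns the same element into a 3-input AND, which is not equivalent to majority.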

Proceedings ArticleDOI
25 Apr 2007
TL;DR: A matching-based placement and routing system for custom layout design automation, especially for analog and mixed-signal designs, that can generate high-quality layouts and greatly reduce layout design time.
Abstract: Matching in placement and routing is very important in the layout design of high-performance analog circuits. This paper presents a matching-based placement and routing system for custom layout design automation, especially for analog and mixed-signal designs. The system explores various device-level matching-placement and matching-routing patterns to generate the most compact, high-quality layouts. Given a circuit netlist, the system automatically analyzes the circuit and extracts matching devices to form several matching device groups. Then, it selects the best matching placement and routing pattern for each device or device group to optimize the layout and meet the overall placement objectives and constraints. All patterns are user-configurable, stored in the pattern database, and portable from design to design. After the layout of each device and device group is generated and placed, the constraint-driven shape-based router is invoked to complete the layout. The overall system can easily generate high-quality layouts and greatly reduce layout design time.
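A classic example of the matching-placement patterns such a system generates is the common-centroid arrangement, sketched below for the simplest case (a one-dimensional ABBA row for two matched devices; the function names and the even-split restriction are our own simplifications):

```python
def common_centroid_row(n_units_each):
    """Split two matched devices A and B into n_units_each unit cells each
    (even count) and interleave them as ABBA ABBA ... Both devices then share
    the same centroid, so a linear process gradient across the row affects
    them identically and cancels out of their mismatch."""
    assert n_units_each % 2 == 0, "split each device into an even unit count"
    return "ABBA" * (n_units_each // 2)

def centroid(row, dev):
    """Mean position of a device's unit cells along the row."""
    pos = [i for i, d in enumerate(row) if d == dev]
    return sum(pos) / len(pos)
```

Real systems extend this to two-dimensional arrays, dummy devices, and routing symmetry, all of which the pattern database in the paper parameterizes.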

Journal ArticleDOI
TL;DR: This paper presents a system-level design environment for the generation of bus-based system-on-chip architectures using an automatic layer-based refinement approach, and shows significant productivity gains over traditional communication design.
Abstract: With growing market pressures and rising system complexities, automated system-level communication design with efficient design space exploration capabilities is becoming increasingly important. At the same time, customized network-oriented communication architectures become necessary to enable high-performance communication among the system components. To this end, corresponding communication design flows that are supported by efficient design automation techniques need to be developed. In this paper, we present a system-level design environment for the generation of bus-based system-on-chip architectures. Our approach supports a two-stage design flow using automated model refinement toward custom heterogeneous communication networks. Starting from an abstract specification of the desired communication channels, our environment automatically generates tailored network models at various levels of abstraction. At its core, an automatic layer-based refinement approach is utilized. We have applied our approach to a set of industrial-strength examples with a wide range of target architectures. Our experimental results show significant productivity gains over traditional communication design, allowing early and rapid design space exploration.
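The essence of layer-based communication refinement is that an abstract channel and its bus-level refinement provide the same service. A toy sketch (the message framing, word size, and class names are our own assumptions, not the paper's generated models):

```python
WORD = 4  # bytes per bus word (assumed)

class AbstractChannel:
    """Specification level: messages move atomically through a FIFO."""
    def __init__(self):
        self.fifo = []
    def send(self, payload: bytes):
        self.fifo.append(payload)
    def recv(self) -> bytes:
        return self.fifo.pop(0)

class BusChannel:
    """Refined level: the same send/recv service, implemented as a length
    header word followed by fixed-size, zero-padded bus words -- the kind of
    model a layer-based refinement step would generate."""
    def __init__(self):
        self.words = []
    def send(self, payload: bytes):
        self.words.append(len(payload).to_bytes(WORD, "little"))
        for i in range(0, len(payload), WORD):
            self.words.append(payload[i:i + WORD].ljust(WORD, b"\0"))
    def recv(self) -> bytes:
        n = int.from_bytes(self.words.pop(0), "little")
        data = b""
        while len(data) < n:
            data += self.words.pop(0)
        return data[:n]
```

Because both levels expose identical send/recv semantics, application models run unchanged while designers explore bus protocols underneath.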
Abstract: With growing market pressures and rising system complexities, automated system-level communication design with efficient design space exploration capabilities is becoming increasingly important. At the same time, customized network-oriented communication architectures become necessary in enabling a high-performance communication among the system components. To this end, corresponding communication design flows that are supported by efficient design automation techniques need to be developed. In this paper, we present a system-level design environment for the generation of bus-based system-on-chip architectures. Our approach supports a two-stage design flow using automated model refinement toward custom heterogeneous communication networks. Starting from an abstract specification of the desired communication channels, our environment automatically generates tailored network models at various levels of abstraction. At its core, an automatic layer-based refinement approach is utilized. We have applied our approach to a set of industrial-strength examples with a wide range of target architectures. Our experimental results show significant productivity gains over a traditional communication design, allowing early and rapid design space exploration.