
Showing papers on "Electronic design automation published in 2012"


Proceedings ArticleDOI
03 Jun 2012
TL;DR: This work proposes SALSA, a Systematic methodology for Automatic Logic Synthesis of Approximate circuits, which encodes the quality constraints using logic functions called Q-functions, and captures the flexibility that they engender as Approximation Don't Cares, which are used for circuit simplification using traditional don't care based optimization techniques.
Abstract: Approximate computing has emerged as a new design paradigm that exploits the inherent error resilience of a wide range of application domains by allowing hardware implementations to forsake exact Boolean equivalence with algorithmic specifications. A slew of manual design techniques for approximate computing have been proposed in recent years, but very little effort has been devoted to design automation. We propose SALSA, a Systematic methodology for Automatic Logic Synthesis of Approximate circuits. Given a golden RTL specification of a circuit and a quality constraint that defines the amount of error that may be introduced in the implementation, SALSA synthesizes an approximate version of the circuit that adheres to the pre-specified quality bounds. We make two key contributions: (i) the rigorous formulation of the problem of approximate logic synthesis, enabling the generation of circuits that are correct by construction, and (ii) mapping the problem of approximate synthesis into an equivalent traditional logic synthesis problem, thereby allowing the capabilities of existing synthesis tools to be fully utilized for approximate logic synthesis. In order to achieve these benefits, SALSA encodes the quality constraints using logic functions called Q-functions, and captures the flexibility that they engender as Approximation Don't Cares (ADCs), which are used for circuit simplification using traditional don't care based optimization techniques. We have implemented SALSA using two off-the-shelf logic synthesis tools — SIS and Synopsys Design Compiler. 
We automatically synthesize approximate circuits ranging from arithmetic building blocks (adders, multipliers, MAC) to entire datapaths (DCT, FIR, IIR, SAD, FFT Butterfly, Euclidean distance), demonstrating scalability and significant improvements in area (1.1X to 1.85X for tight error constraints, and 1.2X to 4.75X for relaxed error constraints) and power (1.15X to 1.75X for tight error constraints, and 1.3X to 5.25X for relaxed error constraints).
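The core idea of treating a bounded error budget as extra don't-care flexibility can be illustrated with a toy sketch (this is not SALSA itself): a hypothetical approximate adder whose LSB logic has been simplified away is checked exhaustively against a quality constraint of maximum absolute error 1, playing the role of a Q-function.

```python
# Toy illustration of error-constrained approximation: an "approximate"
# adder with its LSB logic simplified away, verified against a quality
# bound the way a Q-function constrains the approximate circuit.

def exact_add(a, b):
    return a + b

def approx_add(a, b):
    # hypothetical simplification: both LSBs are ignored and replaced by a
    # constant 1, removing the LSB adder logic from the circuit
    return (a & ~1) + (b & ~1) + 1

def meets_quality(max_err, width=3):
    """Exhaustively check |approx - exact| <= max_err for all inputs."""
    return all(abs(approx_add(a, b) - exact_add(a, b)) <= max_err
               for a in range(2 ** width) for b in range(2 ** width))
```

For this circuit the check passes with an error bound of 1 but fails with a bound of 0; in other words, the simplification only becomes legal once the quality constraint opens up the corresponding don't cares.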

316 citations


Journal ArticleDOI
13 May 2012
TL;DR: This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, highlighting its major research directions and achievements until today, and predicting in which direction research in codesign might evolve in the decades to come.
Abstract: Hardware/software codesign investigates the concurrent design of hardware and software components of complex electronic systems. It tries to exploit the synergy of hardware and software with the goal to optimize and/or satisfy design constraints such as cost, performance, and power of the final product. At the same time, it targets to reduce the time-to-market frame considerably. This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, by highlighting its major research directions and achievements until today, and finally, by predicting in which direction research in codesign might evolve in the decades to come.

275 citations


Journal ArticleDOI
TL;DR: This work presents the automated design and manufacture of static and locomotion objects in which functionality is obtained purely by the unconstrained 3-D distribution of materials and suggests that this approach to design automation opens the door to leveraging the full potential of the freeform multimaterial design space to generate novel mechanisms and deformable robots.
Abstract: We present the automated design and manufacture of static and locomotion objects in which functionality is obtained purely by the unconstrained 3-D distribution of materials. Recent advances in multimaterial fabrication techniques enable continuous shapes to be fabricated with unprecedented fidelity unhindered by spatial constraints and homogeneous materials. We address the challenges of exploitation of the freedom of this vast new design space using evolutionary algorithms. We first show a set of cantilever beams automatically designed to deflect in arbitrary static profiles using hard and soft materials. These beams were automatically fabricated, and their physical performance was confirmed within 0.5-7.6% accuracy. We then demonstrate the automatic design of freeform soft robots for forward locomotion using soft volumetrically expanding actuator materials. One robot was fabricated automatically and assembled, and its performance was confirmed with 15% error. We suggest that this approach to design automation opens the door to leveraging the full potential of the freeform multimaterial design space to generate novel mechanisms and deformable robots.

245 citations


Journal ArticleDOI
TL;DR: This paper presents an evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results.
Abstract: High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.

162 citations


Journal ArticleDOI
TL;DR: The development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories, and enables users to create successful prototypes using standardized, functional, and visual abstractions.
Abstract: Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs.

127 citations


Journal ArticleDOI
TL;DR: In this work, particle swarm optimization is utilized for the optimal sizing of analog integrated circuits, accommodating required functionalities and performance specifications with high optimization ability in short computational time.
Abstract: Together with the increase in electronic circuit complexity, the design and optimization processes have to be automated with high accuracy. Predicting and improving the design quality in terms of performance, robustness and cost is the central concern of electronic design automation. Generally, optimization is a very difficult and time consuming task including many conflicting criteria and a wide range of design parameters. Particle swarm optimization (PSO) was introduced as an efficient method for exploring the search space and handling constrained optimization problems. In this work, PSO has been utilized for accommodating required functionalities and performance specifications considering optimal sizing of analog integrated circuits with high optimization ability in short computational time. PSO based design results are verified with SPICE simulations and compared to previous studies.
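As a sketch of the method's core loop, the following minimal inertia-weight PSO minimizes a made-up sizing cost standing in for the SPICE-verified objective; the objective function, its variable names, and all parameter values are illustrative assumptions, not the paper's.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal inertia-weight PSO minimizing f over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def sizing_cost(x):
    # hypothetical stand-in for a SPICE-evaluated sizing objective:
    # hit a target W*L product while keeping W = 2L
    w_, l_ = x
    return (w_ * l_ - 4.0) ** 2 + (w_ - 2.0 * l_) ** 2

best, best_cost = pso(sizing_cost, dim=2, bounds=(0.1, 10.0))
```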

122 citations


Journal ArticleDOI
TL;DR: Novel methodologies for enabling Multidisciplinary Design Optimization (MDO) of complex engineering products are presented, and the concept of High Level CAD templates (HLCt) is proposed and discussed as a building block of flexible and robust CAD models, which in turn enables high-fidelity geometry in the MDO loop.

115 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-storey timber building system based on modularization principles was developed and the customization process used in this system was illustrated using a configurable timber floor slab module.

78 citations


Journal ArticleDOI
TL;DR: It is argued that a formal and model driven design methodology can lead to systems which meet this requirement and a more concrete design example on computer-aided diagnosis and automated decision making is provided.
Abstract: Physiological signals, medical images, and biosystems can be used to assess the health of a subject, and they can support clinicians by improving diagnosis for treatment purposes. Computer-aided diagnosis (CAD) in healthcare applications can help in automated decision making, visualization, and the extraction of hidden complex features to aid clinical diagnosis. These CAD systems focus on improving the quality of patient care with minimal faults due to device failures. In this paper, we argue that a formal and model-driven design methodology can lead to systems which meet this requirement. Modeling is not new to CAD, but modeling for systems design is less explored. Therefore, we discuss selected systems design techniques and provide a more concrete design example on computer-aided diagnosis and automated decision making.

74 citations


Proceedings ArticleDOI
22 Feb 2012
TL;DR: This paper presents a mechanism by which complex designs may be efficiently and automatically partitioned among multiple FPGAs using explicitly programmed latency-insensitive links and describes the automatic synthesis of an area efficient, high performance network for routing these inter-FPGA links.
Abstract: Traditionally, hardware designs partitioned across multiple FPGAs have had low performance due to the inefficiency of maintaining cycle-by-cycle timing among discrete FPGAs. In this paper, we present a mechanism by which complex designs may be efficiently and automatically partitioned among multiple FPGAs using explicitly programmed latency-insensitive links. We describe the automatic synthesis of an area efficient, high performance network for routing these inter-FPGA links. By mapping a diverse set of large research prototypes onto a multiple FPGA platform, we demonstrate that our tool obtains significant gains in design feasibility, compilation time, and even wall-clock performance.

65 citations


Journal ArticleDOI
TL;DR: The authors carry out a numerical study that aims at finding a trade-off between the design cost and reliability of SBO algorithms, and demonstrate that the use of multiple models of different fidelity may be beneficial to reduce the design cost while maintaining the robustness of the optimisation process.
Abstract: Electromagnetic (EM) simulation has become an important tool in the design of contemporary antenna structures. However, accurate simulations of realistic antenna models are expensive and therefore design automation by employing EM solver within an optimisation loop may be prohibitive because of its high computational cost. Efficient EM-driven antenna design can be performed using surrogate-based optimisation (SBO). A generic approach to construct surrogate models of antennas involves the use of coarse-discretisation EM simulations (low-fidelity models). A proper selection of the surrogate model fidelity is a key factor that influences both the performance of the design optimisation process and its computational cost. Despite its importance, this issue has not yet been investigated in the literature. Here, the authors focus on a problem of proper surrogate model management. More specifically, the authors carry out a numerical study that aims at finding a trade-off between the design cost and reliability of the SBO algorithms. Our considerations are illustrated using several antenna design cases. Furthermore, the authors demonstrate that the use of multiple models of different fidelity may be beneficial to reduce the design cost while maintaining the robustness of the optimisation process. Recommendations regarding the selection of the surrogate model coarseness are also given.
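A minimal one-dimensional sketch of the idea under discussion: an expensive "fine" model is optimized through a cheap "coarse" model that is corrected to match the fine model's value and slope at the current iterate. Both model functions here are invented stand-ins, not the paper's antenna models.

```python
def fine(x):
    # stand-in for an expensive high-fidelity EM simulation (hypothetical)
    return (x - 2.0) ** 2

def coarse(x):
    # stand-in for a cheap coarse-discretisation model: biased and shifted
    return (x - 1.8) ** 2 + 0.1

def sbo(x0, iters=5, h=1e-6):
    """Surrogate-based optimisation with zeroth/first-order output correction."""
    x = x0
    for _ in range(iters):
        a = fine(x) - coarse(x)  # value mismatch at the current iterate
        b = ((fine(x + h) - fine(x - h))
             - (coarse(x + h) - coarse(x - h))) / (2 * h)  # slope mismatch
        # corrected surrogate s(t) = coarse(t) + a + b*(t - x):
        # cheap enough to minimise by brute force on a grid
        grid = [x - 1.0 + 0.001 * i for i in range(2001)]
        x = min(grid, key=lambda t: coarse(t) + a + b * (t - x))
    return x
```

Each iteration spends only a handful of fine-model evaluations while all of the search effort runs on the coarse model; that cost structure is exactly what the paper's trade-off study is about.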

Proceedings ArticleDOI
12 Mar 2012
TL;DR: The application of coding strategies is an established methodology for improving the characteristics of on-chip interconnect architectures; design methods are therefore required that realize the corresponding encoders and decoders with as little overhead as possible in terms of power and delay.
Abstract: The application of coding strategies is an established methodology to improve the characteristics of on-chip interconnect architectures. Therefore, design methods are required which realize the corresponding encoders and decoders with as small as possible overhead in terms of power and delay. In the past, conventional design methods have been applied for this purpose. This work proposes an entirely new direction which exploits design methods for reversible circuits, where much progress has been made in recent years. The resulting reversible circuits represent one-to-one mappings which can inherently work as logical descriptions for the desired encoders and decoders. Both an exact and a heuristic synthesis approach are introduced which rely on reversible design principles but also incorporate objectives from on-chip interconnect architectures. Experiments show that significant improvements with respect to power consumption, area, and delay can be achieved using the proposed direction.
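The one-to-one mapping property can be illustrated with a tiny sketch (the particular gate cascade is invented for illustration): a cascade of self-inverse reversible gates defines a bijection on code words, so the decoder is simply the same gates applied in reverse order.

```python
# A reversible circuit as an encoder/decoder pair: every gate is a
# bijection on the bit vector, so the whole cascade is one-to-one.

def cnot(bits, c, t):
    if bits[c]:
        bits[t] ^= 1

def toffoli(bits, c1, c2, t):
    if bits[c1] and bits[c2]:
        bits[t] ^= 1

def encode(word, width=3):
    bits = [(word >> i) & 1 for i in range(width)]
    cnot(bits, 0, 1)          # hypothetical gate cascade
    toffoli(bits, 0, 1, 2)
    cnot(bits, 2, 0)
    return sum(b << i for i, b in enumerate(bits))

def decode(word, width=3):
    bits = [(word >> i) & 1 for i in range(width)]
    cnot(bits, 2, 0)          # inverse: the same (self-inverse) gates
    toffoli(bits, 0, 1, 2)    # in reverse order
    cnot(bits, 0, 1)
    return sum(b << i for i, b in enumerate(bits))
```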

Journal ArticleDOI
TL;DR: Outcomes of a research project aimed at improving engineering automation capability through development of a tool for automatic rule based path-finding for the complex engineering task of aircraft electrical harness and pipe routing are presented.

Book ChapterDOI
01 Jan 2012
TL;DR: The emerging trend of outsourcing design and fabrication services to external facilities, as well as increasing reliance on third-party Intellectual Property cores and electronic design automation tools, makes integrated circuits (ICs) increasingly vulnerable to hardware Trojan attacks at different stages of their life-cycle.
Abstract: The emerging trend of outsourcing design and fabrication services to external facilities, as well as increasing reliance on third-party Intellectual Property (IP) cores and electronic design automation tools, makes integrated circuits (ICs) increasingly vulnerable to hardware Trojan attacks at different stages of their life-cycle. Figure 15.1 shows the modern IC design, fabrication, test, and deployment stages, highlighting the level of trust at each stage. This scenario raises a new set of challenges for trust validation with respect to malicious design modification at various stages of an IC life-cycle, where untrusted components/personnel are involved [1]. In particular, it brings in the requirement for reliable detection of malicious design modification made in an untrusted fabrication facility, during post-manufacturing test. It also imposes a requirement for trust validation in IP cores obtained from untrusted third-party vendors.

Proceedings ArticleDOI
12 Mar 2012
TL;DR: This paper proposes a novel methodology for automated design of an embedded multiprocessor system which can run multiple hard-real-time streaming applications simultaneously, and enables the use of hard-real-time multiprocessor scheduling theory to schedule the applications in a way that temporal isolation and a given throughput of each application are guaranteed.
Abstract: The increasing complexity of modern embedded streaming applications imposes new challenges on system designers nowadays. For instance, the applications evolved to the point that in many cases hard-real-time execution on multiprocessor platforms is needed in order to meet the applications' timing requirements. Moreover, in some cases, there is a need to run a set of such applications simultaneously on the same platform with support for accepting new incoming applications at run-time. Dealing with all these new challenges increases significantly the complexity of system design. However, the design time must remain acceptable. This requires the development of novel systematic and automated design methodologies driven by the aforementioned challenges. In this paper, we propose such a novel methodology for automated design of an embedded multiprocessor system, which can run multiple hard-real-time streaming applications simultaneously. Our methodology does not need the complex and time-consuming design space exploration phase, present in most of the current state-of-the art multiprocessor design frameworks. In contrast, our methodology applies very fast yet accurate schedulability analysis to determine the minimum number of processors, needed to schedule the applications, and the mapping of applications' tasks to processors. Furthermore, our methodology enables the use of hard-real-time multiprocessor scheduling theory to schedule the applications in a way that temporal isolation and a given throughput of each application are guaranteed. We evaluate an implementation of our methodology using a set of real-life streaming applications and demonstrate that it can greatly reduce the design time and effort while generating high quality hard-real-time systems.
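A drastically simplified sketch of what such a schedulability analysis computes: first-fit-decreasing partitioning of periodic tasks under the EDF utilisation bound (total utilisation at most 1 per processor) yields a minimum processor count and a task-to-processor mapping. The task set and the bound used here are illustrative assumptions, much simpler than the paper's actual analysis.

```python
def min_processors(tasks):
    """First-fit-decreasing partitioning under the EDF utilisation bound.

    tasks: list of (wcet, period) pairs; returns (#processors, per-proc load).
    """
    procs = []  # accumulated utilisation per processor
    for wcet, period in sorted(tasks, key=lambda t: -(t[0] / t[1])):
        u = wcet / period
        for i, load in enumerate(procs):
            if load + u <= 1.0 + 1e-9:  # fits: EDF keeps this proc feasible
                procs[i] = load + u
                break
        else:
            procs.append(u)             # open a new processor
    return len(procs), procs
```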

Proceedings ArticleDOI
07 May 2012
TL;DR: A performance-oriented implementation flow for WCHB QDI asynchronous circuits aiming to be fully compatible with conventional EDA tools for synchronous designs, which achieves an end-to-end asynchronous throughput of 850Mflit/s in typical conditions, making it faster than all connected synchronous IPs.
Abstract: In this paper, we present a performance-oriented implementation flow for WCHB QDI asynchronous circuits aiming to be fully compatible with conventional EDA tools for synchronous designs. Starting from a simple standard-cell library for asynchronous logic, this flow builds pseudo-synchronous models of the cells. With these models, a simple set of pseudo-synchronous timing constraints can be given to industrial EDA tools to benefit from their optimization strategies, through all steps from synthesis to place & route. This flow was benchmarked against regular asynchronous implementation relying on maximum delay constraints. Pseudo-synchronous modeling allows achieving significantly better performance and regularity than asynchronous modeling, for faster run times and reduced design effort. The proposed flow was used for the physical implementation of a 20-node network-on-chip in the ST Microelectronics 65nm low-power technology. It achieves an end-to-end asynchronous throughput of 850Mflit/s in typical conditions, making it faster than all connected synchronous IPs.

Proceedings Article
25 Oct 2012
TL;DR: This work presents a modeling library on top of SystemC targeting heterogeneous embedded system design, based on four models of computation; the library has a formal basis where all elements are well defined, leading to the construction of analyzable models.
Abstract: Electronic System Level (ESL) design of embedded systems proposes raising the abstraction level of the design entry to cope with the increasing complexity of such systems. To exploit the benefits of ESL, design languages should allow specification of models which are a) heterogeneous, to describe different aspects of systems; b) formally defined, for application of analysis and synthesis methods; c) executable, to enable early detection of specification errors; and d) parallel, to exploit the multi- and many-core platforms for simulation and implementation. We present a modeling library on top of SystemC, targeting heterogeneous embedded system design, based on four models of computation. The library has a formal basis where all elements are well defined, leading to the construction of analyzable models. The semantics of communication and computation are implemented by the library, which allows the designer to focus on specifying the pure functional aspects. A key advantage is that the formalism is used to export the structure and behavior of the models via introspection as an abstract representation for further analysis and synthesis.

Journal ArticleDOI
TL;DR: A high speed 4x4 bit Vedic Multiplier (VM) based on the Vertically & Crosswise method of Vedic mathematics, a general multiplication formula equally applicable to all cases of multiplication, is presented.
Abstract: The need for high-speed multipliers is increasing as the need for high-speed processors increases. A multiplier is one of the key hardware blocks in most fast processing systems; it is not only a high-delay block but also a major source of power dissipation. A conventional processor requires substantially more hardware resources and processing time for the multiplication operation than for addition and subtraction. This paper presents a high-speed 4x4 bit Vedic Multiplier (VM) based on the Vertically & Crosswise method of Vedic mathematics, a general multiplication formula equally applicable to all cases of multiplication. It is based on generating all partial products and their sum in one step. The coding is done in VHDL (Very High Speed Integrated Circuit Hardware Description Language), while the synthesis and simulation are done using the EDA (Electronic Design Automation) tool Xilinx ISE 12.1i. The combinational path delay of the 4x4 bit Vedic multiplier obtained after synthesis is compared with normal multipliers, and the proposed Vedic multiplier circuit seems to have better performance in ...
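The Vertically & Crosswise (Urdhva Tiryagbhyam) scheme generates, for every result column, all crosswise partial products at once and then performs a single carry-propagating sum. A bit-level software sketch of that column structure (purely illustrative; the paper's design is VHDL hardware):

```python
def vedic_mul(a, b, n=4):
    """4x4-bit Urdhva Tiryagbhyam: per result column, sum all crosswise
    partial products a_i * b_j with i + j = k, then propagate the carry."""
    ab = [(a >> i) & 1 for i in range(n)]
    bb = [(b >> j) & 1 for j in range(n)]
    result, carry = 0, 0
    for k in range(2 * n - 1):
        col = carry + sum(ab[i] * bb[k - i]
                          for i in range(n) if 0 <= k - i < n)
        result |= (col & 1) << k   # this column's result bit
        carry = col >> 1           # everything above it carries on
    result |= carry << (2 * n - 1)
    return result
```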

Patent
27 Jun 2012
TL;DR: In this paper, the authors describe an approach that allows electronic design, verification, and optimization tools to directly address the effects of manufacturing processes, e.g., to identify and prevent problems caused by lithography processing.
Abstract: An approach is described that allows electronic design, verification, and optimization tools to directly and efficiently address the effects of manufacturing processes, e.g., to identify and prevent problems caused by lithography processing. Fast models and pattern checking are employed to integrate lithography- and manufacturing-aware processes within EDA tools such as routers.

Proceedings ArticleDOI
Charles J. Alpert, Zhuo Li, Gi-Joon Nam, Chin Ngai Sze, Natarajan Viswanathan, Samuel I. Ward
05 Nov 2012
TL;DR: This paper makes the case that placement is a “hot topic” in design automation and presents several placement formulations related to routability, clocking, datapath, timing, and constraint management to drive years of research.
Abstract: Placement is considered a fundamental physical design problem in electronic design automation. It has been around so long that it is commonly viewed as a solved problem. However, placement is not just another design automation problem; placement quality is at the heart of design quality in terms of timing closure, routability, area, power and most importantly, time-to-market. Small improvements in placement quality often translate into large improvements further down the design closure stack. This paper makes the case that placement is a "hot topic" in design automation and presents several placement formulations related to routability, clocking, datapath, timing, and constraint management to drive years of research.
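As a concrete anchor for the routability and wirelength formulations mentioned above, the half-perimeter wirelength (HPWL) of a placement, the standard cost metric placers optimize, can be computed as follows; the net list and cell coordinates are invented examples.

```python
def hpwl(nets, placement):
    """Half-perimeter wirelength: for each net, the half-perimeter of the
    bounding box of its pins; the classic placement cost metric."""
    total = 0
    for pins in nets:
        xs = [placement[p][0] for p in pins]
        ys = [placement[p][1] for p in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# invented toy example: three cells, two nets
cells = {"a": (0, 0), "b": (2, 1), "c": (1, 3)}
nets = [["a", "b"], ["a", "b", "c"]]
```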

01 Jan 2012
TL;DR: Product development processes are continuously challenged by demands for increased efficiency; as engineering products become more and more complex, efficient tools and methods for integrated engineering are needed, as discussed by the authors.
Abstract: Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and a...

01 Jan 2012
TL;DR: An emerging technique, which has the potential to considerably improve the quality of cross-couplings and synergies between subsystems, is presented.
Abstract: In the design of complex engineering products it is essential to handle cross-couplings and synergies between subsystems. An emerging technique, which has the potential to considerably improve the ...

Book
24 May 2012
TL;DR: This survey describes characterization techniques that integrate infrared imaging with electric current measurements to generate runtime power maps and describes empirical power characterization techniques for software power analysis and for adaptive power-aware computing.
Abstract: In this survey we describe the main research directions in pre-silicon power modeling and post-silicon power characterization. We review techniques in power modeling and characterization for three computing substrates: general-purpose processors, system-on-chip-based embedded systems, and field programmable gate arrays. We describe the basic principles that govern power consumption in digital circuits, and utilize these principles to describe high-level power modeling techniques for designs of the three computing substrates. Once a computing device is fabricated, direct measurements on the actual device reveal a great wealth of information about the device's power consumption under various operating conditions. We describe characterization techniques that integrate infrared imaging with electric current measurements to generate runtime power maps. The power maps can be used to validate design-time power models and to calibrate computer-aided design tools. We also describe empirical power characterization techniques for software power analysis and for adaptive power-aware computing. Finally, we provide a number of plausible future research directions for power modeling and characterization.
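The "basic principles that govern power consumption in digital circuits" referred to above start from the switching-power relation P_dyn = alpha * C * Vdd^2 * f. A minimal numeric sketch, with invented component values:

```python
def dynamic_power(alpha, c_load, vdd, freq):
    """Dynamic switching power of a CMOS node: P = alpha * C * Vdd^2 * f,
    where alpha is the activity factor (switching probability per cycle)."""
    return alpha * c_load * vdd ** 2 * freq

# invented example: 1 nF total switched capacitance, 1 V supply, 1 GHz clock,
# 10% activity; halving Vdd cuts the power by 4x via the quadratic term
p_full = dynamic_power(0.1, 1e-9, 1.0, 1e9)
p_half_vdd = dynamic_power(0.1, 1e-9, 0.5, 1e9)
```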

Journal ArticleDOI
Kai Huang, Wolfgang Haid, Iuliana Bacivarov, Matthias Keller, Lothar Thiele
TL;DR: An MPSoC software design flow that allows for automatically generating the system implementation, together with an analysis model for system verification, is presented; modular performance analysis (MPA) is integrated into the distributed operation layer (DOL) MPSoC programming environment.
Abstract: Modern real-time streaming applications are increasingly implemented on multiprocessor systems-on-chip (MPSoC). The implementation, as well as the verification of real-time applications executing on MPSoCs, are difficult tasks, however. A major challenge is the performance analysis of MPSoCs, which is required for early design space exploration and final system verification. Simulation-based methods are not well-suited for this purpose, due to long runtimes and non-exhaustive corner-case coverage. To overcome these limitations, formal performance analysis methods that provide guarantees for meeting real-time constraints have been developed. Embedding formal performance analysis into the MPSoC design cycle requires the generation of a faithful analysis model and its calibration with the system-specific parameters. In this article, a design flow that automates these steps is presented. In particular, we integrate modular performance analysis (MPA) into the distributed operation layer (DOL) MPSoC programming environment. The result is an MPSoC software design flow that allows for automatically generating the system implementation, together with an analysis model for system verification.

Journal ArticleDOI
TL;DR: This article outlines bio-design automation using two complementary design approaches, bottom-up modular construction from biological primitives and pathway-based approaches, and highlights future challenges for both.
Abstract: Through principled engineering methods, synthetic biology aims to build specialized biological components that can be modularly composed to create complex systems. This article outlines bio-design automation using two complementary design approaches, bottom-up modular construction from biological primitives and pathway-based approaches. The article also highlights future challenges for both.

Journal ArticleDOI
TL;DR: FabScalar aims to automate superscalar core design, opening up processor design to microarchitectural diversity and its many opportunities.
Abstract: Providing multiple superscalar core types on a chip, each tailored to different classes of instruction-level behavior, is an exciting direction for increasing processor performance and energy efficiency. Unfortunately, processor design and verification effort increases with each additional core type, limiting the microarchitectural diversity that can be practically implemented. FabScalar aims to automate superscalar core design, opening up processor design to microarchitectural diversity and its many opportunities.

Proceedings ArticleDOI
09 Jul 2012
TL;DR: An end-to-end design framework to automatically synthesize an interpolation-based logic-in-memory block named interpolation memory, which combines a seed table with simple arithmetic logic to efficiently evaluate functions; results show that the logic-in-memory computing method achieves orders of magnitude of energy saving compared with traditional in-processor computing.
Abstract: This paper presents a design methodology for hardware synthesis of application-specific logic-in-memory (LiM) blocks. Logic-in-memory designs tightly integrate specialized computation logic with embedded memory, enabling more localized computation and thus saving energy. As a demonstration, we present an end-to-end design framework to automatically synthesize an interpolation-based logic-in-memory block named interpolation memory, which combines a seed table with simple arithmetic logic to efficiently evaluate functions. In order to support the multiple consecutive seed data accesses required by the interpolation operation, we synthesize the physical memory into novel rectangular-access smart memory blocks. We evaluated a large design space of interpolation memories in sub-20 nm commercial CMOS technology using the proposed design framework. Furthermore, we implemented a logic-in-memory based computed tomography (CT) medical image reconstruction system, and our experimental results show that the logic-in-memory computing method achieves orders of magnitude of energy saving compared with traditional in-processor computing.
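The seed-table-plus-arithmetic idea can be sketched in a few lines: a small table of function samples ("seeds"), two consecutive entries fetched per evaluation, and a linear interpolation unit in place of full function logic. The table size and target function below are arbitrary illustrative choices, not the paper's.

```python
import math

def make_seed_table(f, lo, hi, entries):
    """Sample f at uniformly spaced points; these are the stored 'seeds'."""
    step = (hi - lo) / entries
    return [f(lo + i * step) for i in range(entries + 1)], lo, step

def interp_eval(table, x):
    """Evaluate f(x) from two consecutive seeds plus a linear correction,
    mimicking the consecutive-seed access the memory block provides."""
    seeds, lo, step = table
    i = min(int((x - lo) / step), len(seeds) - 2)
    frac = (x - lo) / step - i
    return seeds[i] + frac * (seeds[i + 1] - seeds[i])

sin_table = make_seed_table(math.sin, 0.0, math.pi / 2, 64)
```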

Journal ArticleDOI
TL;DR: In this paper, a drain-extended FinFET is proposed for high-voltage and high-speed applications, and a 2× better R_ON versus V_BD trade-off is shown from technology computer-aided design simulations for the proposed device.
Abstract: A novel drain-extended FinFET device is proposed in this letter for high-voltage and high-speed applications. A 2× better on-resistance (R_ON) versus breakdown-voltage (V_BD) trade-off is shown from technology computer-aided design simulations for the proposed device, when compared with a conventional device option. Moreover, a device design and optimization guideline is provided for the proposed device.

Journal ArticleDOI
TL;DR: This paper introduces a gate reliability EDA tool (GREDA) that is able to estimate more accurately the reliability of CMOS gates by considering: 1) the gate's topology; 2) the variable probability of failure of the individual devices (PF_DEV); 3) the applied input vector; 4) the reliability of the input signals; and 5) the input voltage variations.
Abstract: Generic as well as customized reliability electronic design automation (EDA) tools have been proposed in the literature and used to estimate the reliability of both present and future (nano)circuits. However, the accuracy of many of these EDA tools is questionable, as they either: 1) assume that all gates have the same constant probability of failure (PF_GATE = const.), or 2) use very simple approaches to estimate the reliability of the elementary gates. In this paper, we introduce a gate reliability EDA tool (GREDA) that is able to estimate more accurately the reliability of CMOS gates by considering: 1) the gate's topology; 2) the variable probability of failure of the individual devices (PF_DEV); 3) the applied input vector; 4) the reliability of the input signals; and 5) the input voltage variations (which can be linked to the allowed noise margins). GREDA can be used to calculate PF_GATE due to different types of faults and/or defects, and to estimate the effects of enhancing PF_DEV on PF_GATE. Simulation results show that GREDA improves the accuracy of reliability calculations at the gate level.
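To illustrate two of the factors the abstract lists, the applied input vector and the reliability of the input signals, the toy model below computes the output reliability of a 2-input NAND gate by enumerating input-error patterns. The device-fault contribution is folded in as a single independent factor, which is a deliberate simplification for illustration and not GREDA's actual model.

```python
from itertools import product

def nand(a, b):
    return 1 - (a & b)

def nand_output_reliability(vec, r_in, p_dev=0.0, n_dev=4):
    """P(output correct) for input vector `vec`, where r_in = (r_a, r_b)
    are the per-input signal reliabilities. Enumerates all input-error
    patterns; device faults enter only as (1 - p_dev)**n_dev, a crude
    all-devices-working factor (an assumption, not GREDA's model)."""
    a, b = vec
    golden = nand(a, b)
    p_correct = 0.0
    for ea, eb in product([0, 1], repeat=2):  # error flags per input
        p = (r_in[0] if ea == 0 else 1 - r_in[0]) * \
            (r_in[1] if eb == 0 else 1 - r_in[1])
        if nand(a ^ ea, b ^ eb) == golden:
            p_correct += p  # this error pattern is masked by the gate
    return p_correct * (1 - p_dev) ** n_dev
```

Even with identical input reliabilities, the vector (1,1) comes out less reliable than (0,0), because for (0,0) a single flipped input cannot change the NAND output. That masking effect is exactly the input-vector dependence a constant-PF_GATE model misses.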

Journal ArticleDOI
TL;DR: A computer-aided linkage design system for tracing prescribed open or closed planar curves, built on the existing VRMDS (Virtual Reality Mechanism Design Studio) framework; it provides a set of design interfaces that help designers intervene in and steer the design process, and the system returned desired solutions in seconds.
Abstract: This paper presents a computer-aided linkage design system for tracing prescribed open or closed planar curves. Mechanism design is considered a mixture of science and art: the former is about utilizing computers to rigorously size a mechanism to meet a set of design requirements, and the latter is about taking advantage of designers' experience to narrow down the design domain and speed up the design process. The ultimate goal of the presented design system is to incorporate both science and art into the linkage design process by (1) developing an automatic design framework based on library searching and optimization techniques and (2) developing an interactive design framework based on advanced human-computer interfaces. To enable the design automation framework, we first pre-built a library of open and closed planar curves generated by commonly used planar linkages. We then turned the classical linkage path generation problem into a library searching problem combined with a local optimization problem. To enable the interactive design framework, we developed a set of design interfaces that help designers intervene in and steer the design process. This hybrid design system was developed on top of our existing VRMDS (Virtual Reality Mechanism Design Studio) framework. To demonstrate its functionalities, we provide four representative design cases of 4-bar and crank-slider linkages. The results show that the system returned desired solutions in seconds. We also demonstrate the extensibility of the system by implementing designs of planar 4-bar and crank-slider linkages.
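The library-search step of the automatic framework described above can be sketched as a nearest-curve query. The sketch below assumes curves stored as equally sampled, pre-aligned point lists and a plain mean point-to-point metric; the actual VRMDS library format and matching metric are not specified in the abstract.

```python
import math

def curve_distance(c1, c2):
    """Mean point-to-point distance between two equally sampled curves
    (assumes both are already aligned and sampled at the same count)."""
    return sum(math.dist(p, q) for p, q in zip(c1, c2)) / len(c1)

def search_library(target, library):
    """Return (index, distance) of the library curve closest to `target`."""
    best = min(range(len(library)),
               key=lambda i: curve_distance(target, library[i]))
    return best, curve_distance(target, library[best])
```

A local optimizer would then refine the winning candidate's link dimensions to close the remaining gap between its coupler curve and the prescribed one, mirroring the search-then-optimize split in the abstract.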