
Showing papers on "Electronic design automation published in 2018"


Journal ArticleDOI
TL;DR: Reviews the state of the emerging photonic circuit design flow and its synergies with electronic design automation (EDA), discussing the similarities and differences between photonic and electronic design and the challenges and opportunities of the new photonic design landscape, such as variability analysis, photonic-electronic co-simulation, and compact model definition.
Abstract: Silicon Photonics technology is rapidly maturing as a platform for larger-scale photonic circuits. As a result, the associated design methodologies are also evolving from component-oriented design to a more circuit-oriented design flow that abstracts away the very detailed geometry and enables design on a larger scale. In this paper, we review the state of this emerging photonic circuit design flow and its synergies with electronic design automation (EDA). We cover the design flow from schematic capture through circuit simulation, layout, and verification. We discuss the similarities and differences between photonic and electronic design, and the challenges and opportunities that present themselves in the new photonic design landscape, such as variability analysis, photonic-electronic co-simulation, and compact model definition.

355 citations


Book
28 Mar 2018
TL;DR: This monograph provides a treatment in which contracts are precisely defined and characterized so that they can be used in design methodologies without ambiguity, and establishes a link between interfaces and contracts to show their similarities and correspondences.
Abstract: Recently, contract-based design has been proposed as an “orthogonal” approach that complements system design methodologies proposed so far to cope with the complexity of system design. Contract-based design provides a rigorous scaffolding for verification, analysis, abstraction/refinement, and even synthesis. A number of results have been obtained in this domain but a unified treatment of the topic that can help put contract-based design in perspective was missing. This monograph intends to provide such a treatment where contracts are precisely defined and characterized so that they can be used in design methodologies with no ambiguity. In particular, this monograph identifies the essence of complex system design using contracts through a mathematical “meta-theory”, where all the properties of the methodology are derived from a very abstract and generic notion of contract. We show that the meta-theory provides deep and illuminating links with existing contract and interface theories, as well as guidelines for designing new theories. Our study encompasses contracts for both software and systems, with emphasis on the latter. We illustrate the use of contracts with two examples: requirement engineering for a parking garage management, and the development of contracts for timing and scheduling in the context of the AUTOSAR methodology in use in the automotive sector.

238 citations


Journal ArticleDOI
21 Sep 2018-Science
TL;DR: A quantitative method to design regulatory circuits that encode sequential logic using NOT gates as the core unit of regulation, in which an input promoter drives the expression of a repressor protein that turns off an output promoter.
Abstract: INTRODUCTION Modern computing is based on sequential logic, in which the state of a circuit depends both on the present inputs as well as the input history (memory). Implementing sequential logic inside a living cell would enable it to be programmed to progress through discrete states. For example, cells could be designed to differentiate into a multicellular structure or order the multistep construction of a material. A key challenge is that sequential logic requires the implementation of regulatory feedback, which has proven difficult to design and scale. RATIONALE We present a quantitative method to design regulatory circuits that encode sequential logic. Our approach uses NOT gates as the core unit of regulation, in which an input promoter drives the expression of a repressor protein that turns off an output promoter. Each gate is characterized by measuring its response function, in other words, how changing the input affects the output at steady state. Mathematically, the response functions are treated as nullclines, and tools from nonlinear dynamics (phase plane and bifurcation analyses) are applied to predict how combining gates leads to multiple steady states and dynamics. The circuits can be connected to genetic sensors that respond to environmental information. This is used to implement checkpoint control, in which the cell waits for the right signals before continuing to the next state. Circuits are built that instruct Escherichia coli to proceed through a linear or cyclical sequence of states. RESULTS First, pairs of repressors are combined to build the simplest unit of sequential logic: a set-reset (SR) latch, which records a digital bit of information. The SR latches can be easily connected to each other and to sensors because they are designed such that the inputs and outputs are both promoters. Each latch requires two repressors that inhibit each other’s expression. A total of 11 SR latches were designed by using a phase plane analysis. 
The computation accurately predicts the existence of multiple steady states by using only the empirical NOT gate response functions. A set of 43 circuits was constructed that connects these latches to different combinations of sensors that respond to small molecules in the media. These circuits are shown to reliably hold their state for >48 hours over many cell divisions, only switching states in response to the sensors that connect to the set and reset inputs of the latch. Larger circuits are constructed by combining multiple SR latches and additional feedback loops. A gated data (D) latch, common in electronic integrated circuits, is constructed where one input sets the state of the circuit and the second input locks this state. Up to three SR latches (based on six repressors) are combined in a single cell, thus allowing three bits to be reversibly stored. The performances of these circuits closely match those predicted by the responses of the component gates and a bifurcation analysis. Circuits are designed to implement checkpoint control, in which cells wait indefinitely in a state until the correct signals are received to progress to the next state. The progression can be designed to be cyclical, analogous to cell cycle phases, during which cells progress through a series of states until returning to the starting state. The length of time in each state is indefinite, which is confirmed by demonstrating stability for days when the checkpoint conditions are not met. CONCLUSION This work demonstrates the implementation of sequential logic circuits in cells by combining reliable units of regulation according to simple rules. This approach is conducive to design automation software, which can use these rules to combine gates to build larger circuits. This provides a designable path to building regulatory networks with feedback loops, critical to many cellular functions and ubiquitous in natural networks. 
This represents a critical step toward performing advanced computing inside of cells.
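The phase-plane reasoning above can be sketched numerically: two NOT gates with Hill-type response functions (illustrative parameters, not the paper's measured ones) are wired into a mutual-repression loop, and the fixed points where the two nullclines intersect are located on a grid. Three intersections (two stable states plus one unstable middle point) indicate bistability, the basis of an SR latch.

```python
import numpy as np

def hill_not_gate(x, ymax=10.0, ymin=0.1, K=1.0, n=2.0):
    """Response function of a NOT gate: output is high when input x is low."""
    return ymin + (ymax - ymin) * K**n / (K**n + x**n)

def sr_latch_steady_states(grid=np.linspace(0.0, 12.0, 2401), tol=0.05):
    """Locate fixed points of two mutually repressing NOT gates.

    A steady state satisfies x = f(y) and y = f(x), i.e. it lies on the
    intersection of the two nullclines.
    """
    states = []
    for x in grid:
        y = hill_not_gate(x)        # nullcline 1: y as a function of x
        x_back = hill_not_gate(y)   # nullcline 2: map y back through gate 2
        if abs(x_back - x) < tol:   # near-intersection => candidate fixed point
            states.append((round(x, 1), round(y, 1)))
    # merge near-duplicate candidates from the discrete grid
    merged = []
    for s in states:
        if not merged or abs(s[0] - merged[-1][0]) > 0.5:
            merged.append(s)
    return merged
```

With these parameters the search finds two asymmetric stable states (one repressor high, the other low) and the unstable symmetric state between them, mirroring the bistability argument in the abstract.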

92 citations


Proceedings ArticleDOI
25 Mar 2018
TL;DR: Example applications include removing unnecessary design and modeling margins through correlation mechanisms, achieving faster design convergence through predictors of downstream flow outcomes that comprehend both tools and design instances, and corollaries such as optimizing the usage of design resources, licenses, and available schedule.
Abstract: In the late-CMOS era, semiconductor and electronics companies face severe product schedule and other competitive pressures. In this context, electronic design automation (EDA) must deliver "design-based equivalent scaling" to help continue essential industry trajectories. A powerful lever for this will be the use of machine learning techniques, both inside and "around" design tools and flows. This paper reviews opportunities for machine learning with a focus on IC physical implementation. Example applications include (1) removing unnecessary design and modeling margins through correlation mechanisms, (2) achieving faster design convergence through predictors of downstream flow outcomes that comprehend both tools and design instances, and (3) corollaries such as optimizing the usage of design resources, licenses, and available schedule. The paper concludes with open challenges for machine learning in IC physical design.
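As an illustration of item (2), a downstream-outcome predictor can be sketched as a simple regression over early-flow design features. Everything here is a synthetic stand-in: the features (cell count, utilization, clock period) and the "post-route slack" data are invented for illustration, not taken from any real flow or from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features of past runs: [cell count (k), utilization, clock (ns)]
X = rng.uniform([10, 0.5, 0.8], [500, 0.9, 2.0], size=(200, 3))
# Synthetic "post-route worst slack": degrades with utilization and cell count
y = 0.3 * X[:, 2] - 0.4 * X[:, 1] - 0.0004 * X[:, 0] + rng.normal(0, 0.01, 200)

# Fit a linear predictor on early-flow features (least squares, bias term added)
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_slack(cells_k, util, clk_ns):
    """Predict post-route slack before running the expensive flow stage."""
    return float(np.dot([cells_k, util, clk_ns, 1.0], w))
```

In practice such predictors use far richer features and nonlinear models, but the payoff is the same: an estimate of a downstream outcome cheap enough to query during early design decisions.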

66 citations


Journal ArticleDOI
17 Sep 2018
TL;DR: This paper identifies, abstracts, and formalizes components of smart buildings, and presents a design flow that maps high-level specifications of desired building applications to their physical implementations under the PBD framework.
Abstract: Smart buildings today are aimed at providing safe, healthy, comfortable, affordable, and beautiful spaces in a carbon and energy-efficient way. They are emerging as complex cyber–physical systems with humans in the loop. Cost, the need to cope with increasing functional complexity, flexibility, fragmentation of the supply chain, and time-to-market pressure are rendering the traditional heuristic and ad hoc design paradigms inefficient and insufficient for the future. In this paper, we present a platform-based methodology for smart building design. Platform-based design (PBD) promotes the reuse of hardware and software on shared infrastructures, enables rapid prototyping of applications, and involves extensive exploration of the design space to optimize design performance. In this paper, we identify, abstract, and formalize components of smart buildings, and present a design flow that maps high-level specifications of desired building applications to their physical implementations under the PBD framework. A case study on the design of on-demand heating, ventilation, and air conditioning (HVAC) systems is presented to demonstrate the use of PBD.

63 citations


Journal ArticleDOI
TL;DR: This is the first paper to apply text mining and KE to product development; the approach can also reduce the time and cost of product design by automating repetitive design tasks.

52 citations


Journal ArticleDOI
TL;DR: In this paper, a module model library is proposed to accurately model microfluidic components involving layer interactions; and a co-layout synthesis tool, Columba, which generates AutoCAD-compatible designs that fulfill all design rules and can be directly used for mask fabrication.
Abstract: Continuous-flow microfluidic large-scale integration (mLSI) shows increasing importance in biological/chemical fields, thanks to its advantages in miniaturization and high throughput. Current mLSI is designed manually, which is time-consuming and error-prone. In recent years, design automation research for mLSI has evolved rapidly, aiming to replace manual labor with computers. However, previous design automation approaches designed each microfluidic layer separately and over-simplified the layer interactions to various degrees, resulting in a gap between realistic requirements and automatically generated designs. In this paper, we propose a module model library to accurately model microfluidic components involving layer interactions, and a co-layout synthesis tool, Columba, which generates AutoCAD-compatible designs that fulfill all design rules and can be directly used for mask fabrication. Columba takes plain-text netlist descriptions as inputs, and performs simultaneous placement and routing for multiple layers while ensuring the planarity of each layer. We validate Columba by fabricating two of its output designs. Columba is the first design automation tool that can seamlessly synchronize with the manufacturing flow.

45 citations


Posted Content
TL;DR: The 55th Design Automation Conference (DAC) held its first System Design Contest (SDC) in 2018, featuring a low-power object detection challenge that attracted more than 110 entries from 12 countries; the dataset includes 95 categories and 150k images, and the hardware platforms include Nvidia's TX2 and Xilinx's PYNQ Z1.
Abstract: The 55th Design Automation Conference (DAC) held its first System Design Contest (SDC) in 2018. SDC'18 features a low-power object detection challenge (LPODC) on designing and implementing novel algorithms for object detection in images taken from unmanned aerial vehicles (UAVs). The dataset includes 95 categories and 150k images, and the hardware platforms include Nvidia's TX2 and Xilinx's PYNQ Z1. DAC-SDC'18 attracted more than 110 entries from 12 countries. This paper presents the dataset and evaluation procedure in detail. It further discusses the methods developed by some of the entries as well as representative results. The paper concludes with directions for future improvements.
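Object detection challenges like LPODC typically score detections by intersection-over-union (IoU) against ground-truth boxes; the contest's exact scoring script may differ, but the core metric can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)   # clamp: no overlap -> 0
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counting as correct above some IoU threshold (commonly 0.5) is then combined with the measured power and frame rate to rank entries.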

40 citations


Journal ArticleDOI
TL;DR: An open source standard cell library for design automation of large-scale transistor-level M3-D ICs is developed, facilitating future research on the critical aspects of M3-D technology, and the effect of the number of routing tracks on area, power, and delay characteristics is investigated.
Abstract: Monolithic 3-D (M3-D) integrated circuits (ICs) provide vertical interconnects with comparable size to on-chip metal vias, and therefore, achieve ultra-high density device integration. This fine-grained connectivity enabled by monolithic inter-tier vias reduces the silicon area, overall wirelength, and power consumption. An open source standard cell library for design automation of large-scale transistor-level M3-D ICs is developed, thereby facilitating future research on the critical aspects of M3-D technology. The cell library is based on full-custom design of each standard cell and is fully characterized by using existing design automation tools. The proposed open source cell library is utilized to demonstrate the M3-D implementation of several benchmark circuits of various sizes ranging from 2.7-K gates to 1.6-M gates. Both power and timing characteristics of the M3-D ICs are quantified. Several versions of the cell library are developed with different numbers of routing tracks to better understand the issue of routing congestion in the M3-D ICs. The effect of the number of routing tracks on area, power, and delay characteristics is investigated. Finally, the primary clock tree characteristics of the M3-D ICs are discussed.

38 citations


Proceedings ArticleDOI
27 May 2018
TL;DR: A novel technology mapping tool, called SFQmap, is presented, which provides optimization methods for minimizing first the circuit depth and path balancing overhead and then the worst-case stage delay of mapped SFQ circuits.
Abstract: Single flux quantum (SFQ) logic is a promising candidate to replace CMOS logic for high speed and low power applications due to its superiority in providing high performance and energy efficient circuits. However, developing effective Electronic Design Automation (EDA) tools that cater to the special characteristics and requirements of SFQ circuits, such as depth minimization and path balancing, is essential to automate the whole process of designing large SFQ circuits. In this paper, a novel technology mapping tool, called SFQmap, is presented, which provides optimization methods for minimizing first the circuit depth and path balancing overhead and then the worst-case stage delay of mapped SFQ circuits. Compared with the state-of-the-art technology mappers, SFQmap reduces the depth and path balancing overhead by an average of 14% and 31%, respectively.
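The two costs SFQmap minimizes can be illustrated on a toy netlist. Since every SFQ gate is clocked, all fanins of a gate must arrive at the same stage, and a fanin arriving early must be delayed by balancing D-flip-flops. The sketch below is not SFQmap's algorithm; it only computes the logic depth and DFF overhead of a given mapping, the quantities the mapper optimizes.

```python
def depth_and_balancing_dffs(netlist, primary_inputs):
    """Compute logic depth and path-balancing D-flip-flop overhead.

    netlist: {gate: [fanin, ...]} describing a DAG. A fanin sitting d stages
    earlier than its gate's inputs needs d balancing DFFs.
    """
    levels = {pi: 0 for pi in primary_inputs}

    def level(node):
        if node not in levels:
            levels[node] = 1 + max(level(f) for f in netlist[node])
        return levels[node]

    dffs = 0
    for gate, fanins in netlist.items():
        g = level(gate)
        for f in fanins:
            dffs += (g - 1) - level(f)   # stages this fanin must wait
    return max(level(n) for n in netlist), dffs
```

For the two-gate netlist in the test below, the primary input feeding the second-level gate directly needs one balancing DFF, which is exactly the kind of overhead that grows quickly with unbalanced mappings.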

37 citations


Journal ArticleDOI
26 Jun 2018
TL;DR: The lessons learned in the design and implementation of an experimental design automation tool suite, OpenMETA, for complex CPS in the vehicle domain are described, along with experience gained by using OpenMETA in drivetrain design and by adapting OpenMETA to substantially different CPS application domains.
Abstract: Design methods and tools evolved to support the principle of "separation of concerns" in order to manage engineering complexity. Accordingly, most engineering tool suites are vertically integrated but have limited support for integration across disciplinary boundaries. Cyber–physical systems (CPSs) challenge these established boundaries between disciplines, and thus, the status quo on the tools market. The question is how to create the foundations and technologies for semantically precise model and tool integration that enable reuse of existing commercial and open source tools in domain-specific design flows. In this paper, we describe the lessons learned in the design and implementation of an experimental design automation tool suite, OpenMETA, for complex CPS in the vehicle domain. The conceptual foundation for the integration approach is platform-based design: OpenMETA is architected by introducing two key platforms: the model integration platform and the tool integration platform. The model integration platform includes methods and tools for the precise representation of semantic interfaces among modeling domains. The key new components of the model integration platform are model integration languages and the mathematical framework and tool for the compositional specification of their semantics. The tool integration platform is designed for executing highly automated design-space exploration. Key components of the platform are tools for constructing design spaces and model composers for analytics workflows. The paper concludes with describing experience and lessons learned by using OpenMETA in drivetrain design and by adapting OpenMETA to substantially different CPS application domains.

Journal ArticleDOI
TL;DR: The digital design and manufacturing workflow is validated by designing, fabricating, and testing a series of structures that illustrate capabilities, show how it empowers the exploitation of new design freedom, and even challenges traditional design principles relating form, structure, and function.
Abstract: The integration of emerging technologies into a complete digital thread promises to disrupt design and manufacturing workflows throughout the value chain to enable efficiency and productivity transformation, while unlocking completely new design freedom. A particularly appealing aspect involves the simultaneous design and manufacture of the macroscale structural topology and material microstructure of a product. Here we demonstrate such a workflow that digitally integrates: design automation – conception and automation of a design problem based on multiscale topology optimization; material compilation – computational geometry algorithms that create spatially-variable, physically-realizable multimaterial microstructures; and digital fabrication – fabrication of multiscale optimized components via voxel-based additive manufacturing with material jetting of multiple photo-curable polymers. We validate the digital design and manufacturing workflow by designing, fabricating, and testing a series of structures that illustrate capabilities, show how it empowers the exploitation of new design freedom, and even challenges traditional design principles relating form, structure, and function.

Journal ArticleDOI
TL;DR: An And-Inverter Graph (AIG)-based automated logic synthesis algorithm is described as an example implementation of the EO logic, offering guidance for the design automation of high-speed integrated optical computing circuits.
Abstract: Integrated optical computing has attracted increasing interest recently as Moore's law approaches its physical limits. Among the approaches to integrated optical computing, directed logic, which takes full advantage of integrated photonics and electronics, has received much investigation since its introduction in 2007. Meanwhile, as integrated photonics matures, it has become critical to develop automated methods for synthesizing optical devices for large-scale optical designs. In this paper, we propose a general electro-optic (EO) logic at a higher level to explore its potential in integrated computing. Compared to directed logic, the EO logic leads to a more compact design with shorter optical paths and fewer components. A comprehensive gate library based on EO logic is then summarized. Finally, an And-Inverter Graph (AIG)-based automated logic synthesis algorithm is described as an example implementation of the EO logic, which offers guidance for the design automation of high-speed integrated optical computing circuits.
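A minimal And-Inverter Graph, the data structure underlying such synthesis algorithms, can be sketched as follows. This is a generic AIG with complemented edges, not the paper's specific implementation: every node is a 2-input AND, inversion is free metadata on edges, and OR falls out via De Morgan's law.

```python
class AIG:
    """Minimal And-Inverter Graph: nodes are 2-input ANDs, edges may be inverted."""
    def __init__(self):
        self.nodes = {}      # id -> (fanin0, fanin1); literals are (id, inverted)
        self.next_id = 1

    def input(self):
        i = self.next_id; self.next_id += 1
        self.nodes[i] = None          # primary input: no fanins
        return (i, False)

    def and_gate(self, a, b):
        i = self.next_id; self.next_id += 1
        self.nodes[i] = (a, b)
        return (i, False)

    def invert(self, lit):
        return (lit[0], not lit[1])   # inversion costs nothing: flip the edge

    def or_gate(self, a, b):          # OR via De Morgan: ~(~a & ~b)
        return self.invert(self.and_gate(self.invert(a), self.invert(b)))

    def evaluate(self, lit, values):
        i, inv = lit
        if self.nodes[i] is None:
            v = values[i]
        else:
            a, b = self.nodes[i]
            v = self.evaluate(a, values) and self.evaluate(b, values)
        return (not v) if inv else v
```

Synthesis tools rewrite such graphs (balancing, structural hashing, rewriting) before mapping nodes onto a gate library, which is presumably where an EO gate library would plug in.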

Proceedings ArticleDOI
26 Aug 2018
TL;DR: A design automation process combining GANs and topology optimization is proposed and applied to automobile wheel design, showing that an aesthetically superior and technically meaningful design can be generated automatically without human intervention.
Abstract: Recent advances in deep learning enable machines to learn existing designs by themselves and to create new designs. Generative adversarial networks (GANs) are widely used to generate new images and data by unsupervised learning. Certain limitations exist in applying GANs directly to product design: they require a large amount of data, produce uneven output quality, and do not guarantee engineering performance. To solve these problems, this paper proposes a design automation process that combines GANs and topology optimization. The suggested process has been applied to the wheel design of automobiles and has shown that an aesthetically superior and technically meaningful design can be generated automatically without human intervention.

Journal ArticleDOI
TL;DR: A novel integrated physical co-design methodology that seamlessly integrates the flow-layer and control-layer design stages is presented, allowing for iterative placement refinement based on routing feedback.
Abstract: Flow-based microfluidic biochips are attracting increasing attention with successful applications in biochemical experiments, point-of-care diagnosis, etc. Existing works in design automation consider the flow-layer design and control-layer design separately, lacking a global optimization and hence resulting in degraded routability and reliability. This paper presents a novel integrated physical co-design methodology, which seamlessly integrates the flow-layer and control-layer design stages. In the flow-layer design stage, a sequence-pair-based placement method is presented, which allows for an iterative placement refinement based on routing feedback. In the control-layer design stage, the minimum cost flow formulation is adopted to further improve the routability. In addition, effective placement adjustment strategies are proposed to iteratively enhance the solution quality of the overall control-layer design. Experimental results show that compared with the existing work, the proposed design flow obtains an average reduction of 40.44% in flow-channel crossings, 31.95% in total chip area, and 22.02% in total flow-channel length. Moreover, all the valves are successfully routed in the control-layer design stage.
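The sequence-pair representation used in the flow-layer placement stage can be illustrated with a small decoder. This sketch follows the classic interpretation (module a is left of b if a precedes b in both sequences, below b if a follows b in the first but precedes it in the second), not the paper's exact refinement procedure.

```python
def sequence_pair_to_coords(s1, s2, widths, heights):
    """Decode a sequence pair into a non-overlapping placement.

    Coordinates follow from longest paths in the implied horizontal and
    vertical constraint graphs; iterating in s2 order guarantees every
    left/below predecessor is already placed.
    """
    pos1 = {m: i for i, m in enumerate(s1)}
    x, y = {}, {}
    for b in s2:
        x[b] = max([x[a] + widths[a] for a in x
                    if pos1[a] < pos1[b]], default=0)   # a left of b
        y[b] = max([y[a] + heights[a] for a in y
                    if pos1[a] > pos1[b]], default=0)   # a below b
    return x, y
```

Placement search then perturbs the two sequences (swaps, rotations) and re-decodes, which is what makes the representation convenient for the iterative refinement the abstract describes.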

Proceedings ArticleDOI
12 Mar 2018
TL;DR: An approach to automate the implementation, optimization, and verification of TMR circuits in commercial technologies is proposed; results show the area and performance overhead of the different TMR implementations and the efficiency of the verification process under different logic and performance optimizations.
Abstract: This work addresses the issues encountered when using commercial EDA tools to design fault-tolerant circuits based on the Triple Modular Redundancy (TMR) technique. Circuit optimizations performed by the tools tend to remove the added redundant logic or force the application of further constraints, leading to non-optimal fault-tolerant designs. This work therefore proposes an approach to automate the implementation, optimization, and verification of TMR circuits in commercial technologies. Three steps are added to the front-end design of ASICs. First, the TMR technique is applied to a post-synthesis netlist; according to the desired granularity level, three different TMR versions of the circuit can be implemented automatically. Second, gate sizing is performed on the resulting circuit to improve performance. Third, equivalence checking is used to verify both the correct functionality and the fault-tolerance capability of the TMR circuit with respect to the original circuit. The proposed approach is applied to a set of architectures of a case-study circuit. Results show the area and performance overhead of the different TMR implementations and the efficiency of the verification process under different logic and performance optimizations.
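The first step, netlist triplication with majority voting, can be sketched on a toy netlist representation. The {name: (op, fanins)} format and the 'maj' voter cell are hypothetical stand-ins for illustration, not a real EDA netlist API; primary inputs are shared across the three copies here.

```python
def apply_tmr(netlist, outputs):
    """Triplicate a gate-level netlist and add a majority voter per output.

    netlist: {gate_name: (op, [fanins])}. Produces three suffixed copies,
    distinctly named so a synthesis tool cannot trivially merge them, plus
    a 'maj' voter on each listed output.
    """
    tmr = {}
    for copy in ('_A', '_B', '_C'):
        for gate, (op, fanins) in netlist.items():
            # rename internal fanins per copy; primary inputs stay shared
            tmr[gate + copy] = (op, [f + copy if f in netlist else f
                                     for f in fanins])
    for out in outputs:
        tmr[out + '_voted'] = ('maj', [out + c for c in ('_A', '_B', '_C')])
    return tmr
```

The hard part the paper tackles begins after this step: keeping optimization from collapsing the three identical copies, which is why constraints or equivalence checking are needed downstream.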

Proceedings ArticleDOI
27 May 2018
TL;DR: Because some of the existing ML accelerators have used asynchronous design, the state of the art in asynchronous CAD support is reviewed, and opportunities for ML within these flows are identified.
Abstract: The rise of machine learning (ML) has introduced many opportunities for computer-aided-design, VLSI design, and their intersection. Related to computer-aided design, we review several classical CAD algorithms which can benefit from ML, outline the key challenges, and discuss promising approaches. In particular, because some of the existing ML accelerators have used asynchronous design, we review the state-of-the-art in asynchronous CAD support, and identify opportunities for ML within these flows.

Proceedings ArticleDOI
22 Jan 2018
TL;DR: This paper describes several near-term challenges and opportunities, along with concrete existence proofs, for application of learning-based methods within the ecosystem of commercial EDA, IC design, and academic research.
Abstract: Design-based equivalent scaling now bears much of the burden of continuing the semiconductor industry's trajectory of Moore's-Law value scaling. In the future, reductions of design effort and design schedule must comprise a substantial portion of this equivalent scaling. In this context, machine learning and deep learning in EDA tools and design flows offer enormous potential for value creation. Examples of opportunities include: improved design convergence through prediction of downstream flow outcomes; margin reduction through new analysis correlation mechanisms; and use of open platforms to develop learning-based applications. These will be the foundations of future design-based equivalent scaling in the IC industry. This paper describes several near-term challenges and opportunities, along with concrete existence proofs, for application of learning-based methods within the ecosystem of commercial EDA, IC design, and academic research.


Journal ArticleDOI
TL;DR: A means to foster the transition of academic methods to industrial practice through a comprehensible and comprehensive design automation task categorisation that allows practitioners to grasp the opportunities state-of-the-art design automation offers.
Abstract: Engineering design automation has been an active field of research and application for more than five decades. Despite a multitude of available methods stemming from research fields such as Knowled...

Proceedings ArticleDOI
25 Mar 2018
TL;DR: This paper focuses on commercial FPGA based logic emulation and presents various challenging problems in this area for the academic community.
Abstract: Functional verification is an important aspect of electronic design automation. Traditionally, simulation at the register transfer level has been the mainstream functional verification approach. Formal verification and various static analysis checkers have been used to complement specific corners of logic simulation. However, as the size of IC designs grows exponentially, all of the above approaches fail to scale with the design growth. In recent years, logic emulation has gained popularity in functional verification, partly due to its performance and scalability benefits. There are two main approaches to logic emulation: ASIC and commercial field-programmable gate array (FPGA). In this paper, we focus on commercial FPGA based logic emulation and present various challenging problems in this area for the academic community.

Journal ArticleDOI
01 Sep 2018
TL;DR: It is argued that seemingly unrelated research challenges, such as in machine learning and security, could also profit from the methods and superior modeling capabilities of self-aware systems.
Abstract: Future cyber–physical systems will host a large number of coexisting distributed applications on hardware platforms with thousands to millions of networked components communicating over open networks. These applications and networks are subject to continuous change. The current separation of design process and operation in the field will be superseded by a life-long design process of adaptation, infield integration, and update. Continuous change and evolution, application interference, environment dynamics and uncertainty lead to complex effects which must be controlled to serve a growing set of platform and application needs. Self-adaptation based on self-awareness and self-configuration has been proposed as a basis for such a continuous in-field process. Research is needed to develop automated in-field design methods and tools with the required safety, availability, and security guarantees. The paper shows two complementary use cases of self-awareness in architectures, methods, and tools for cyber–physical systems. The first use case focuses on safety and availability guarantees in self-aware vehicle platforms. It combines contracting mechanisms, tool based self-analysis and self-configuration. A software architecture and a runtime environment executing these tools and mechanisms autonomously are presented including aspects of self-protection against failures and security threats. The second use case addresses variability and long term evolution in networked MPSoC integrating hardware and software mechanisms of surveillance, monitoring, and continuous adaptation. The approach resembles the logistics and operation principles of manufacturing plants which gave rise to the metaphoric term of an Information Processing Factory that relies on incremental changes and feedback control. Both use cases are investigated by larger research groups. 
Despite their different approaches, both use cases face similar design and design automation challenges, which are summarized at the end. We argue that seemingly unrelated research challenges, such as those in machine learning and security, could also profit from the methods and superior modeling capabilities of self-aware systems.

Proceedings ArticleDOI
24 Jun 2018
TL;DR: Future design tools and flows that never require iteration demand new paradigms and core algorithms for parallel, cloud-based design automation, and the EDA and design ecosystem must develop new infrastructure for ML model development and sharing.
Abstract: To reduce time and effort in IC implementation, fundamental challenges must be solved. First, the need for (expensive) humans must be removed wherever possible. Humans are skilled at predicting downstream flow failures, evaluating key early decisions such as RTL floorplanning, and deciding tool/flow options to apply to a given design. Achieving human-quality prediction, evaluation and decision-making will require new machine learning-centric models of both tools and designs. Second, to reduce design schedule, focus must return to the long-held dream of single-pass design. Future design tools and flows that never require iteration (i.e., that never fail, but without undue conservatism) demand new paradigms and core algorithms for parallel, cloud-based design automation. Third, learning-based models of tools and flows must continually improve with additional design experiences. Therefore, the EDA and design ecosystem must develop new infrastructure for ML model development and sharing.

Proceedings ArticleDOI
24 Jun 2018
TL;DR: Experiments show that Columba S is able to generate mLSI designs with more than 200 functional units within three minutes, which enables the design of a platform for large and complex applications.
Abstract: Microfluidic large-scale integration (mLSI) is a promising platform for high-throughput biological applications. Design automation for mLSI has made much progress in recent years. Columba and its successor Columba 2.0 proposed a mathematical modeling method that enables automatic design of manufacturing-ready chips within minutes. However, current approaches suffer from a huge computational load as designs grow larger. Thus, in this work, we propose Columba S with a focus on scalability. Columba S applies a new architectural framework and a straight-channel routing discipline, and synthesizes multiplexers for efficient and reconfigurable valve control. Experiments show that Columba S is able to generate mLSI designs with more than 200 functional units within three minutes, which enables the design of a platform for large and complex applications.
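The abstract does not say which multiplexer scheme Columba S synthesizes; the classic binary-tree microfluidic multiplexer (one control-channel pair, a bit and its complement, per address bit) gives a feel for why multiplexed valve control scales to hundreds of units. A sketch under that assumption:

```python
import math

def mux_control_lines(n_flow_channels):
    """Control channels needed by a binary-tree microfluidic multiplexer:
    2 * ceil(log2(n)), i.e. one (bit, complement) pair per address bit."""
    if n_flow_channels < 2:
        return 0
    return 2 * math.ceil(math.log2(n_flow_channels))
```

Under this scheme, the 200+ functional units mentioned above would need only 16 control channels, which is what makes large reconfigurable designs practical.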

Proceedings ArticleDOI
01 Oct 2018
TL;DR: In this study, EDA and numerical analysis tools are operated collaboratively and sequentially to create an automated, co-simulation-based iterative design and optimization methodology for wideband power amplifier design.
Abstract: Design and optimization of wideband nonlinear RF circuits is a time-consuming and challenging task. Both electronic design automation (EDA) and numerical analysis tools can be used to design microwave circuits, but they have different capabilities regarding calculation and optimization. In this study, EDA and numerical analysis tools are operated collaboratively and sequentially to create an automated, co-simulation-based iterative design and optimization methodology for wideband power amplifier design. Co-simulation-based optimization provides an uninterrupted design and optimization process that requires no manual intervention. Consequently, the effort and time required to achieve a high-performance circuit are reduced dramatically. Keysight ADS is used for nonlinear analysis of the circuits, while MATLAB is used to control the process and to perform the optimization. Initial circuit parameters were generated by the simplified real frequency technique (SRFT). A Class-AB X-band high-power amplifier based on a GaN HEMT was designed and optimized with the proposed methodology.
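The abstract gives the tool roles (ADS simulates, MATLAB steers) but not the optimization algorithm. The sketch below mimics only the loop structure: a mock cost function stands in for an ADS harmonic-balance run, and a simple coordinate search stands in for the (unspecified) optimizer. The parameter names and the quadratic response are invented for illustration.

```python
def mock_simulate(params):
    """Stand-in for an EDA-tool run: returns a scalar cost to minimize.
    Hypothetical quadratic response around a fictitious optimum."""
    l_match, c_match = params
    return (l_match - 2.2) ** 2 + (c_match - 0.8) ** 2

def cosim_optimize(simulate, params, step=0.5, tol=1e-4, max_iter=200):
    """Greedy coordinate search: perturb one parameter at a time,
    keep any change the simulator reports as an improvement, and
    shrink the step when no move helps."""
    best = simulate(params)
    for _ in range(max_iter):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                cost = simulate(trial)
                if cost < best:
                    params, best, improved = trial, cost, True
        if not improved:
            step *= 0.5          # refine once no coarse move improves
            if step < tol:
                break
    return params, best
```

In the actual flow, `simulate` would invoke ADS on the candidate circuit and parse the resulting performance metrics, so the loop runs unattended end to end.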

Journal ArticleDOI
TL;DR: This study aims to investigate the level of integration between digital building models and crowd simulation, within the scope of design automation, and presents a methodology in which existing ontology tools facilitate knowledge representation and mining throughout the process.

Journal ArticleDOI
TL;DR: A hybrid design automation tool for asynchronous successive approximation register analog-to-digital converters (SAR ADCs) in Internet-of-Things applications is presented and the circuit design-driven tool uses a top-down design approach and generates circuits from specification to layout automatically.
Abstract: In this paper, a hybrid design automation tool for asynchronous successive approximation register analog-to-digital converters (SAR ADCs) in Internet-of-Things applications is presented. The circuit-design-driven tool uses a top-down design approach and generates circuits from specification to layout automatically. A hybrid approach is introduced for the different circuits of a SAR ADC: fully synthesized control logic; a script-based flow combining equations, library, and template-based design for the digital-to-analog converter; a lookup-table approach combined with selective simulation-based fine tuning and template-based layout generation for the sample-and-hold; and library-based comparator design with script-based layout generation. By balancing automation and manual effort, the circuit design time is reduced from days down to minutes while maintaining ADC performance. The proposed flow generated two ADC prototypes in 40-nm CMOS, an 8-bit 32 MS/s and a 12-bit 1 MS/s SAR ADC, with excellent power efficiency. The two ADCs consume 187 and 16.7 µW at a 1-V supply voltage, achieving 30.7 and 18.1 fJ/conversion-step, respectively.

Proceedings ArticleDOI
27 May 2018
TL;DR: In this paper, the authors present the hardware design and synthesis of a purely combinational BNN for ultra-low power near-sensor processing, which leverages the major opportunities raised by BNN models, which consist mostly of logical bit-wise operations and integer counting and comparisons.
Abstract: Design automation in general, and logic synthesis in particular, can play a key role in enabling the design of application-specific Binarized Neural Networks (BNNs). This paper presents the hardware design and synthesis of a purely combinational BNN for ultra-low-power near-sensor processing. We leverage the major opportunities raised by BNN models, which consist mostly of logical bit-wise operations and integer counting and comparisons, to push ultra-low-power deep learning circuits close to the sensor and to couple them with binarized mixed-signal image sensor data. We analyze area, power, and energy metrics of BNNs synthesized as combinational networks. Our synthesis results in GlobalFoundries 22 nm SOI technology show a silicon area of 2.61 mm² for a combinational BNN with a 32×32 binary-input sensor receptive field and weight parameters fixed at design time. This is 2.2× smaller than a synthesized network with reconfigurable parameters. Compared with other techniques for deep learning near-sensor processing, our approach features a 10× higher energy efficiency.
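The "logical bit-wise operations and integer counting and comparisons" that make BNNs synthesizable as combinational logic reduce, per neuron, to an XNOR / popcount / threshold pattern. A minimal sketch, with 0/1 bits standing in for the ±1 values of a trained BNN (the weights and threshold below are invented):

```python
def bnn_neuron(inputs, weights, threshold):
    """One binarized neuron: XNOR each input bit with its weight bit,
    count the matches (popcount), and compare against a threshold.
    In hardware this maps directly to XNOR gates plus an adder tree."""
    matches = sum(1 for x, w in zip(inputs, weights) if x == w)
    return 1 if matches >= threshold else 0
```

Because each neuron is a fixed Boolean function of its inputs, a whole network with design-time-frozen weights can be flattened into one combinational netlist, which is what the synthesized-area comparison above measures.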

Proceedings ArticleDOI
22 Jan 2018
TL;DR: It is shown how measurements performed in the lab can be accurately modeled and integrated into the design automation tool flow in the form of a Process Design Kit (PDK).
Abstract: Printed electronics offers certain technological advantages over its silicon-based counterparts, such as mechanical flexibility, low process temperatures, and a maskless, additive manufacturing process, leading to extremely low-cost manufacturing. However, to be exploited in applications such as smart sensors, the Internet of Things, and wearables, it is essential that the printed devices operate at low supply voltages. Electrolyte-gated field-effect transistors (EGFETs) using solution-processed inorganic materials, fully printed with inkjet printers at low temperatures, are very promising candidates to provide such solutions. In this paper, we discuss the technology, process, modeling, fabrication, and design aspects of circuits based on EGFETs. We show how measurements performed in the lab can be accurately modeled and integrated into the design automation tool flow in the form of a Process Design Kit (PDK). We also review some of the remaining challenges of this technology and discuss our future directions to address them.
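Turning lab measurements into a PDK compact model usually means fitting device equations to measured I-V points. The abstract does not give the EGFET model actually used; purely as an illustration, the sketch below fits a plain saturation square law, Id = k*(Vgs - Vth)^2, by linearizing with a square root and applying ordinary least squares by hand:

```python
import math

def fit_square_law(vgs_points, id_points):
    """Fit Id = k*(Vgs - Vth)^2 to measured (Vgs, Id) pairs.
    Linearize as sqrt(Id) = a*Vgs + b with a = sqrt(k), b = -sqrt(k)*Vth,
    then solve the ordinary-least-squares line fit in closed form."""
    ys = [math.sqrt(i) for i in id_points]
    n = len(vgs_points)
    sx = sum(vgs_points)
    sy = sum(ys)
    sxx = sum(v * v for v in vgs_points)
    sxy = sum(v * y for v, y in zip(vgs_points, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a * a, -b / a  # (k, Vth)
```

A real PDK model card would be richer (subthreshold slope, contact resistance, variability corners), but the extraction step has this same shape: measured points in, model parameters out.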

Book ChapterDOI
01 Jan 2018
TL;DR: A methodology for building a KBE system for design automation of product variants is presented, comprising five main stages: identification, knowledge acquisition, system design, building system components, and implementation; it is validated on the example of a plumbing-fittings design process.
Abstract: The paper presents a methodology for building a KBE system for design automation of product variants. The system is based on a web-based architecture in which configuration of the product variant is performed by the customer through a special user interface. Configuration data are transformed automatically in CAD software, which allows the technical documentation to be prepared without the participation of qualified engineers. The methodology is presented as a procedure comprising five main stages of building a KBE system: identification, knowledge acquisition, system design, building system components, and implementation. The paper presents practical validation of the methodology on the example of a plumbing-fittings design process.
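A web-based configurator of the kind described maps customer choices to CAD-ready parameters through engineering rules. A toy sketch of that mapping (the thread diameters are standard BSP major diameters, but the wall-thickness rule and all names are invented, not taken from the paper):

```python
# Hypothetical knowledge base: one parameter record per catalog size.
RULES = {
    "1/2in": {"thread_d_mm": 20.95, "wall_mm": 2.0},
    "3/4in": {"thread_d_mm": 26.44, "wall_mm": 2.3},
}

def configure_fitting(size, material):
    """Map customer choices (size, material) to the parameter set that
    would drive a parametric CAD model of the fitting."""
    if size not in RULES:
        raise ValueError(f"unsupported size: {size}")
    params = dict(RULES[size])
    # Illustrative knowledge rule: brass variants get a thicker wall.
    if material == "brass":
        params["wall_mm"] += 0.5
    params["material"] = material
    return params
```

In the full system, the returned parameter set would be pushed into CAD templates to regenerate geometry and technical documentation for the chosen variant.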