
Showing papers on "Electronic design automation published in 2020"


Journal ArticleDOI
01 Jul 2020
TL;DR: The development of neuro-inspired computing chips and their key benchmarking metrics are reviewed, a co-design tool chain and a future electronic design automation tool chain are provided, and a roadmap for large-scale chips is proposed.
Abstract: The rapid development of artificial intelligence (AI) demands the rapid development of domain-specific hardware specifically designed for AI applications. Neuro-inspired computing chips integrate a range of features inspired by neurobiological systems and could provide an energy-efficient approach to AI computing workloads. Here, we review the development of neuro-inspired computing chips, including artificial neural network chips and spiking neural network chips. We propose four key metrics for benchmarking neuro-inspired computing chips — computing density, energy efficiency, computing accuracy, and on-chip learning capability — and discuss co-design principles, from the device to the algorithm level, for neuro-inspired computing chips based on non-volatile memory. We also provide a future electronic design automation tool chain and propose a roadmap for the development of large-scale neuro-inspired computing chips. This Review Article examines the development of neuro-inspired computing chips and their key benchmarking metrics, providing a co-design tool chain and proposing a roadmap for future large-scale chips.

303 citations


Journal ArticleDOI
TL;DR: This study developed gates for yeast that are connected using RNA polymerase flux as the signal carrier and are insulated from each other and from host regulation; nine NOT/NOR gates were constructed with nearly identical response functions and a 400-fold dynamic range.
Abstract: Cells can be programmed to monitor and react to their environment using genetic circuits. Design automation software maps a desired circuit function to a DNA sequence, a process that requires units of gene regulation (gates) that are simple to connect and behave predictably. This poses a challenge for eukaryotes due to their complex mechanisms of transcription and translation. To this end, we have developed gates for yeast (Saccharomyces cerevisiae) that are connected using RNA polymerase flux as the signal carrier and are insulated from each other and host regulation. They are based on minimal constitutive promoters (~120 base pairs), for which rules are developed to insert operators for DNA-binding proteins. Using this approach, we constructed nine NOT/NOR gates with nearly identical response functions and 400-fold dynamic range. In circuits, they are transcriptionally insulated from each other by placing ribozymes downstream of terminators to block nuclear export of messenger RNAs resulting from RNA polymerase readthrough. Based on these gates, Cello 2.0 was used to build circuits with up to 11 regulatory proteins. A simple dynamic model predicts the circuit response over days. Genetic circuit design automation for eukaryotes simplifies the construction of regulatory networks as part of cellular engineering projects, whether it be to stage processes during bioproduction, serve as environmental sentinels or guide living therapeutics. This study describes design automation and predictable gene regulatory network engineering in a eukaryotic microorganism.
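Gates with near-identical response functions and a 400-fold dynamic range are commonly modeled with a repressive Hill function; a minimal sketch, with illustrative parameter values (not the fitted values from the paper):

```python
# Hill-function model of a transcriptional NOT gate's steady-state response.
# y_min, y_max, K, and n are illustrative assumptions, not fitted values.

def not_gate_response(x, y_min=1.0, y_max=400.0, K=0.5, n=2.0):
    """Output promoter activity for input promoter activity x (RNAP flux)."""
    return y_min + (y_max - y_min) / (1.0 + (x / K) ** n)

def dynamic_range(y_min=1.0, y_max=400.0):
    """Fold change between the fully-on and fully-off output levels."""
    return y_max / y_min
```

A high input drives the output toward y_min; zero input returns the full y_max, giving the quoted fold change.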

76 citations


Journal ArticleDOI
TL;DR: Advances are presented that enable the complete encoding of an electronic chip in the DNA carried by Escherichia coli, an exemplar of design automation pushing engineering beyond that achievable “by hand”, essential for realizing the potential of biology.
Abstract: Synthetic genetic circuits offer the potential to wield computational control over biology, but their complexity is limited by the accuracy of mathematical models. Here, we present advances that enable the complete encoding of an electronic chip in the DNA carried by Escherichia coli (E. coli). The chip is a binary-coded digit (BCD) to 7-segment decoder, associated with clocks and calculators, to turn on segments to visualize 0-9. Design automation is used to build seven strains, each of which contains a circuit with up to 12 repressors and two activators (totaling 63 regulators and 76,000 bp DNA). The inputs to each circuit represent the digit to be displayed (encoded in binary by four molecules), and output is the segment state, reported as fluorescence. Implementation requires an advanced gate model that captures dynamics, promoter interference, and a measure of total power usage (RNAP flux). This project is an exemplar of design automation pushing engineering beyond that achievable "by hand", essential for realizing the potential of biology.

49 citations


Proceedings ArticleDOI
09 Mar 2020
TL;DR: The proposed recognition scheme organically detects layout constraints, such as symmetry and matching, whose identification is essential for high-quality hierarchical layout, and demonstrates a high degree of accuracy over a wide range of analog designs.
Abstract: Automated subcircuit identification and annotation enables the creation of hierarchical representations of analog netlists, and can facilitate a variety of design automation tasks such as circuit layout and optimization. Subcircuit identification must navigate the numerous alternative structures that can implement any analog function, but traditional graph-based methods cannot easily identify the large number of such structural variants. The novel approach in this paper is based on the use of a trained graph convolutional neural network (GCN) that identifies netlist elements for circuit blocks at upper levels of the design hierarchy. Structures at lower levels of hierarchy are identified using graph-based algorithms. The proposed recognition scheme organically detects layout constraints, such as symmetry and matching, whose identification is essential for high-quality hierarchical layout. The subcircuit identification method demonstrates a high degree of accuracy over a wide range of analog designs, successfully identifies larger circuits that contain sub-blocks such as OTAs, LNAs, mixers, oscillators, and band-pass filters, and provides hierarchical decompositions of such circuits.
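As a toy illustration of the graph abstraction such recognition operates on, the sketch below turns a netlist into a device adjacency graph (two devices are neighbours if they share a net); the device names and feature-free encoding are hypothetical simplifications, not the paper's actual GCN input:

```python
# Represent an analog netlist as a graph: devices are nodes, shared nets
# induce edges. Netlist contents below are illustrative assumptions.
from collections import defaultdict

def netlist_to_graph(devices):
    """devices: list of (name, type, [nets]). Returns a sorted adjacency
    map in which two devices are neighbours if they share a net."""
    net_to_devices = defaultdict(set)
    for name, _, nets in devices:
        for net in nets:
            net_to_devices[net].add(name)
    adj = defaultdict(set)
    for members in net_to_devices.values():
        for a in members:
            adj[a] |= members - {a}
    return {k: sorted(v) for k, v in adj.items()}

# A hypothetical differential pair sharing a tail net:
diff_pair = [
    ("M1", "nmos", ["inp", "out1", "tail"]),
    ("M2", "nmos", ["inn", "out2", "tail"]),
    ("M3", "nmos", ["bias", "tail", "gnd"]),
]
```

On this graph, structural symmetry (M1 vs. M2 around the shared tail) is exactly the kind of constraint the recognition scheme detects.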

48 citations


Journal ArticleDOI
TL;DR: This study proposes an efficient gradient search algorithm with numerical derivatives that is competitive with both the reference trust-region algorithm and its recently reported accelerated versions.
Abstract: Electromagnetic (EM) simulation tools are of primary importance in the design of contemporary antennas. The necessity of accurate performance evaluation of complex structures is a reason why the final tuning of antenna dimensions, aimed at improvement of electrical and field characteristics, needs to be based on EM analysis. Design automation is highly desirable and can be achieved by coupling EM solvers with numerical optimisation routines. Unfortunately, its computational overhead may be impractically high for conventional algorithms. This study proposes an efficient gradient search algorithm with numerical derivatives. The acceleration of the optimisation process is obtained by means of two mechanisms developed to suppress some of the finite-differentiation-based updates of the antenna response sensitivities, which involve monitoring and quantifying the gradient changes as well as design relocation between consecutive algorithm iterations. Both methods considerably reduce the need for finite differentiation, leading to significant computational savings. At the same time, excellent reliability and repeatability are maintained, which is demonstrated through statistics over multiple algorithm runs with random initial designs. The proposed approach is validated using a benchmark set of wideband antennas. The proposed algorithm is competitive with both the reference trust-region algorithm and its recently reported accelerated versions.
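The gist of suppressing finite-differentiation updates can be sketched as follows: reuse a cached sensitivity for a coordinate instead of spending another costly simulation on it. The reuse rule and the `reuse_tol` threshold below are illustrative assumptions, not the paper's exact monitoring criteria:

```python
# Finite-difference gradient with selective suppression: a coordinate's
# derivative is recomputed only when its cached component is deemed
# significant; otherwise the cached value is reused, saving one function
# (EM-simulation) evaluation per skipped coordinate. Illustrative only.

def fd_gradient(f, x, prev_grad=None, h=1e-6, reuse_tol=0.0):
    grad, evals = [], 0
    fx = f(x); evals += 1
    for i in range(len(x)):
        if prev_grad is not None and abs(prev_grad[i]) <= reuse_tol:
            grad.append(prev_grad[i])   # reuse cached sensitivity
            continue
        xp = list(x); xp[i] += h
        grad.append((f(xp) - fx) / h); evals += 1
    return grad, evals
```

Each reused coordinate saves one forward-difference evaluation, which is where the computational savings come from when f is an EM solve.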

42 citations


Journal ArticleDOI
TL;DR: An automatic and circuit-independent design framework generates approximate circuits with dynamically reconfigurable accuracy at runtime, improving energy by up to 41% for a 2% error bound and by 17.5% under a pessimistic scenario that assumes a full-accuracy requirement for 33% of the runtime.
Abstract: Leveraging the inherent error tolerance of a vast number of application domains that are rapidly growing, approximate computing arises as a design alternative to improve the efficiency of our computing systems by trading accuracy for energy savings. However, the requirement for computational accuracy is not fixed. Controlling the applied level of approximation dynamically at runtime is key to effectively optimizing energy, while still containing and bounding the induced errors at runtime. In this paper, we propose and implement an automatic and circuit-independent design framework that generates approximate circuits with dynamically reconfigurable accuracy at runtime. The generated circuits feature varying accuracy levels, supporting also accurate execution. Extensive experimental evaluation, using an industry-strength flow and circuits, demonstrates that our generated approximate circuits improve energy by up to 41% for a 2% error bound and by 17.5% on average under a pessimistic scenario that assumes a full-accuracy requirement for 33% of the runtime. To demonstrate further the efficiency of our framework, we considered two state-of-the-art technology libraries: a conventional 7 nm FinFET and an emerging technology that boosts performance at the high cost of increased dynamic power.
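As a software analogy for accuracy-reconfigurable approximation, the sketch below models an adder whose approximation level k can be tuned at runtime (k = 0 restores exact execution); the paper's framework operates on gate-level circuits, so this is purely illustrative:

```python
# Runtime-reconfigurable approximate adder model: truncate the k
# least-significant bits of each operand before adding. k = 0 is exact.
# A hardware realization would gate off the corresponding adder stages.

def approx_add(a, b, k=0):
    """Add non-negative ints a and b, ignoring the k low bits of each."""
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)
```

Raising k saves switching activity (energy) in exchange for a bounded error of at most 2 * (2**k - 1), mirroring the accuracy/energy trade the framework exposes.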

40 citations


Journal ArticleDOI
TL;DR: The second Design Automation for Power Electronics (DAPE) workshop was held in the beautiful historical city of Genova, Italy, on 6 September 2019 and provided outstanding presentations covering the wide field of design automation for power electronics.
Abstract: The second Design Automation for Power Electronics (DAPE) workshop was held in the beautiful historical city of Genova, Italy, on 6 September 2019, one day after the successful European Conference on Power Electronics and Applications (EPE ECCE). With approximately 40 participants and a good mix of industry and academic affiliations, the workshop provided outstanding presentations covering the wide field of design automation for power electronics. Breakout sessions enabled a novel exchange of views, and online voting technology used to rapidly poll the attendees on prepared questions of interest in DA provided valuable information (Figure 1).

30 citations


Proceedings ArticleDOI
02 Nov 2020
TL;DR: JKQ, a set of tools for quantum computing developed at the Johannes Kepler University (JKU) Linz, utilizes design automation expertise to offer complementary approaches for many design problems in quantum computing, such as simulation, compilation, or verification.
Abstract: With quantum computers on the brink of practical applicability, there is a lively community that develops toolkits for the design of corresponding quantum circuits. Many of the problems to be tackled here are similar to design problems from the classical realm for which sophisticated design automation tools have been developed in the previous decades. In this paper, we present JKQ---a set of tools for quantum computing developed at the Johannes Kepler University (JKU) Linz which utilizes this design automation expertise. By this, we offer complementary approaches for many design problems in quantum computing such as simulation, compilation, or verification. In the following, we provide an introduction of the tools for potential users who would like to work with them as well as potential developers aiming to extend them.

29 citations


15 Mar 2020
TL;DR: SkyNet is a hardware-efficient neural network that delivers state-of-the-art detection accuracy and speed for embedded systems; instead of following the common top-down flow for compact DNN (Deep Neural Network) design, it is designed bottom-up.
Abstract: Object detection and tracking are challenging tasks for resource-constrained embedded systems. While these tasks are among the most compute-intensive tasks from the artificial intelligence domain, they are only allowed to use limited computation and memory resources on embedded devices. In the meanwhile, such resource-constrained implementations are often required to satisfy additional demanding requirements such as real-time response, high-throughput performance, and reliable inference accuracy. To overcome these challenges, we propose SkyNet, a hardware-efficient neural network to deliver the state-of-the-art detection accuracy and speed for embedded systems. Instead of following the common top-down flow for compact DNN (Deep Neural Network) design, SkyNet provides a bottom-up DNN design approach with comprehensive understanding of the hardware constraints at the very beginning to deliver hardware-efficient DNNs. The effectiveness of SkyNet is demonstrated by winning the competitive System Design Contest for low power object detection in the 56th IEEE/ACM Design Automation Conference (DAC-SDC), where our SkyNet significantly outperforms all other 100+ competitors: it delivers 0.731 Intersection over Union (IoU) and 67.33 frames per second (FPS) on a TX2 embedded GPU; and 0.716 IoU and 25.05 FPS on an Ultra96 embedded FPGA. The evaluation of SkyNet is also extended to GOT-10K, a recent large-scale high-diversity benchmark for generic object tracking in the wild. For state-of-the-art object trackers SiamRPN++ and SiamMask, where ResNet-50 is employed as the backbone, implementations using our SkyNet as the backbone DNN are 1.60X and 1.73X faster with better or similar accuracy when running on a 1080Ti GPU, and 37.20X smaller in terms of parameter size for significantly better memory and storage footprint.
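The IoU accuracy metric quoted above is the standard overlap measure for detection; for axis-aligned boxes given as (x1, y1, x2, y2) it can be computed as:

```python
# Intersection over Union for two axis-aligned bounding boxes.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A perfect detection scores 1.0; the contest's 0.731 means the predicted and ground-truth boxes overlap by roughly three quarters of their union on average.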

27 citations


Journal ArticleDOI
TL;DR: A high-fidelity geometry definition methodology enabling Multidisciplinary Design, Analysis and Optimization of aircraft configurations is presented, along with a use case of the geometric modeling API in which an automated aerodynamic analysis workflow constructs a prediction model for canard-wing configurations.

25 citations


Journal ArticleDOI
TL;DR: This work proposes design automation techniques for architecting efficient neural networks given a target hardware platform, demonstrating that such learning-based, automated design achieves better performance and efficiency than rule-based human design.
Abstract: Efficient deep learning inference requires algorithm and hardware codesign to enable specialization: we usually need to change the algorithm to reduce memory footprint and improve energy efficiency. However, the extra degree of freedom from the neural architecture design makes the design space much larger: it is not only about designing the hardware architecture but also codesigning the neural architecture to fit the hardware architecture. It is difficult for human engineers to exhaust the design space by heuristics. We propose design automation techniques for architecting efficient neural networks given a target hardware platform. We investigate automatically designing specialized and fast models, auto channel pruning, and auto mixed-precision quantization. We demonstrate that such learning-based, automated design achieves better performance and efficiency than rule-based human design. Moreover, we shorten the design cycle by 200× compared to previous work, so that we can afford to design specialized neural network models for different hardware platforms.
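As one illustrative piece of this design space, the sketch below performs magnitude-based channel pruning (rank channels by L1 norm, drop the weakest fraction); it is a simplified stand-in for the automated channel pruning the paper investigates, not the authors' method:

```python
# Magnitude-based channel pruning on plain nested lists: a channel's
# importance is its L1 norm, and the weakest `ratio` fraction is removed.
# In the paper this selection itself is learned, not hand-scored.

def prune_channels(channels, ratio):
    """channels: list of weight lists. Keep the top (1 - ratio) by L1 norm."""
    scored = sorted(channels, key=lambda w: sum(abs(v) for v in w),
                    reverse=True)
    keep = max(1, int(round(len(channels) * (1 - ratio))))
    return scored[:keep]
```

Pruning half the channels of a layer roughly halves its multiply-accumulate count, which is the memory/energy lever the automated search tunes per platform.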

Journal ArticleDOI
TL;DR: A methodology based on hierarchical multilevel bottom-up design approaches is presented, where multiobjective optimization algorithms are used to design an entire RF system from the passive component level up to the system level.
Abstract: In recent years there has been a growing interest in electronic design automation methodologies for the optimization-based design of radio frequency (RF) circuits and systems. While for simple circuits several successful methodologies have been proposed, these very same methodologies exhibit significant deficiencies when the complexity of the circuit is increased. The majority of the published methodologies that can tackle RF systems are either based on high-level system specification tools or use models to estimate the system performances. Hence, such approaches do not usually provide the desired accuracy for RF systems. In this paper, a methodology based on hierarchical multilevel bottom-up design approaches is presented, where multiobjective optimization algorithms are used to design an entire RF system from the passive component level up to the system level. Furthermore, each level of the hierarchy is simulated with the highest accuracy possible: electromagnetic simulation accuracy at device-level and electrical simulations at circuit/system-level.

Posted Content
TL;DR: The LLHD multi-level intermediate representation (IR) is designed as a simple, unambiguous reference description of a digital circuit that can be used to transport designs through modern circuit design flows.
Abstract: Modern Hardware Description Languages (HDLs) such as SystemVerilog or VHDL are, due to their sheer complexity, insufficient to transport designs through modern circuit design flows. Instead, each design automation tool lowers HDLs to its own Intermediate Representation (IR). These tools are monolithic and mostly proprietary, disagree in their implementation of HDLs, and while many redundant IRs exist, no IR today can be used through the entire circuit design flow. To solve this problem, we propose the LLHD multi-level IR. LLHD is designed as a simple, unambiguous reference description of a digital circuit, yet fully captures existing HDLs. We show this with our reference compiler on designs as complex as full CPU cores. LLHD comes with lowering passes to a hardware-near structural IR, which readily integrates with existing tools. LLHD establishes the basis for innovation in HDLs and tools without redundant compilers or disjoint IRs. For instance, we implement an LLHD simulator that runs up to 2.4x faster than commercial simulators but produces equivalent, cycle-accurate results. An initial vertically-integrated research prototype is capable of representing all levels of the IR, implements lowering from the behavioural to the structural IR, and covers a sufficient subset of SystemVerilog to support a full CPU design.

Book
01 Jan 2020
TL;DR: This book presents methodologies for embedded systems design, using field programmable gate array (FPGA) devices, for the most modern applications.
Abstract: This book presents methodologies for embedded systems design, using field programmable gate array (FPGA) devices, for the most modern applications. Coverage includes state-of-the-art research from academia and industry on a wide range of topics, including applications, advanced electronic design automation (EDA), novel system architectures, embedded processors, arithmetic, and dynamic reconfiguration.

Proceedings ArticleDOI
Yuzhe Ma1, Zhuolun He1, Wei Li1, Lu Zhang1, Bei Yu1 
30 Mar 2020
TL;DR: A set of key techniques for conducting machine learning on graphs is discussed and two case studies are presented to demonstrate the potential of graph learning on EDA applications.
Abstract: As the scale of integrated circuits keeps increasing, research in electronic design automation (EDA) has surged to keep technology-node scaling on track. Graphs are of great significance in this technology evolution, since they are among the most natural abstractions for fundamental objects in EDA problems, such as netlists and layouts; hence many EDA problems are essentially graph problems. Traditional approaches for solving these problems are mostly based on analytical solutions or heuristic algorithms, which require substantial efforts in designing and tuning. With the emergence of learning techniques, dealing with graph problems with machine learning or deep learning has become a potential way to further improve the quality of solutions. In this paper, we discuss a set of key techniques for conducting machine learning on graphs. In particular, a few challenges in applying graph learning to EDA applications are highlighted. Furthermore, two case studies are presented to demonstrate the potential of graph learning on EDA applications.

Proceedings ArticleDOI
Hao Chen1, Keren Zhu1, Mingjie Liu1, Xiyuan Tang1, Nan Sun1, David Z. Pan1 
02 Nov 2020
TL;DR: Experimental results demonstrate the efficiency and effectiveness of the approach in optimizing circuit performance while satisfying the specified constraints and post-layout simulations prove that the detailed routing results can achieve sign-off quality.
Abstract: Detailed routing is an intricate and tedious procedure in design automation and has become a crucial step for advanced node enablement. Compared with its advances in digital design, detailed routing for analog/mixed-signal (AMS) integrated circuits (ICs) is still heavily manual. In AMS designs, the sensitive net coupling issues and analog-specific constraints make detailed routing even more challenging. This work presents a novel and efficient detailed routing framework for automated AMS layout synthesis considering industrial design rules as well as analog-specific geometric and electrical constraints. Experimental results demonstrate the efficiency and effectiveness of our approach in optimizing circuit performance while satisfying the specified constraints. Post-layout simulations further prove that our detailed routing results can achieve sign-off quality.

Journal ArticleDOI
TL;DR: The role of electronic design automation (EDA) tools is demonstrated by fully supporting the complex design of a ULP complementary Class-B/C hybrid-mode VCO, performing a worst-case-corner, worst-case-tuning sizing optimization over a 108-dimensional performance space.
Abstract: Optimal voltage-controlled oscillator (VCO) design for ultralow-power (ULP) radios has to fulfill simultaneously multiple requirements such as frequency tuning range, phase noise, power consumption, and frequency pushing. The manual design struggles to approach the full potential that a given topology can achieve. In this work, we prove the role of electronic design automation (EDA) tools by fully supporting the complex design of a ULP complementary Class-B/C hybrid-mode VCO. In the 1st step of the EDA-assisted flow, we perform a worst-case-corner, worst-case-tuning sizing optimization over a 108-dimensional performance space, offering sizing solutions with power consumption down to 145 μW at the worst case. In the 2nd step, we introduce an automatic layout generation tool to offer valuable insights into the post-layout design space and devise a ready-for-tape-out fine optimization strategy. The hybrid-mode VCO prototyped in 65-nm CMOS occupies a die area of 0.165 mm² and dissipates 297 μW from a 0.8 V supply at 5.1 GHz. The phase noise at 1 MHz offset is −110.1 dBc/Hz, resulting in a competitive Figure-of-Merit (FoM) of 189.4 dBc/Hz well-suited for ULP applications.
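The quoted FoM follows the standard oscillator Figure-of-Merit, which can be checked against the reported measurements:

```python
import math

# Standard oscillator Figure-of-Merit:
# FoM = -PN + 20*log10(f0 / offset) - 10*log10(P / 1 mW), in dBc/Hz.

def vco_fom(pn_dbc_hz, f0_hz, offset_hz, p_watts):
    return (-pn_dbc_hz
            + 20 * math.log10(f0_hz / offset_hz)
            - 10 * math.log10(p_watts / 1e-3))

# Reported values: -110.1 dBc/Hz at 1 MHz offset, 5.1 GHz carrier, 297 uW.
```

Plugging in the reported numbers reproduces the stated 189.4 dBc/Hz to within rounding of the measured quantities.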

Journal ArticleDOI
TL;DR: In this article, the authors present a computational design synthesis framework to automate the design of complex-shaped multi-flow nozzles, providing design elements that can be used as building blocks to generate finished 3D part geometries.
Abstract: Additive manufacturing (AM) enables highly complex-shaped and functionally optimized parts. To leverage this potential the creation of part designs is necessary. However, as today’s computer-aided design (CAD) tools are still based on low-level, geometric primitives, the modeling of complex geometries requires many repetitive, manual steps. As a consequence, the need for an automated design approach is emphasized and regarded as a key enabler to quickly create different concepts, allow iterative design changes, and customize parts at reduced effort. Topology optimization exists as a computational design approach but usually demands a manual interpretation and redesign of a CAD model and may not be applicable to problems such as the design of parts with multiple integrated flows. This work presents a computational design synthesis framework to automate the design of complex-shaped multi-flow nozzles. The framework provides AM users a toolbox with design elements, which are used as building blocks to generate finished 3D part geometries. The elements are organized in a hierarchical architecture and implemented using object-oriented programming. As the layout of the elements is defined with a visual interface, the process is accessible to non-experts. As a proof of concept, the framework is applied to successfully generate a variety of customized AM nozzles that are tested using co-extrusion of clay. Finally, the work discusses the framework’s benefits and limitations, the impact on product development and novel AM applications, and the transferability to other domains.

Journal ArticleDOI
TL;DR: A novel abstract representation of applications and their associated configuration spaces is formulated, a similarity metric is introduced to compare quantitatively the configuration spaces of different applications, and a method to infer actionable information from a source space to a target space is presented.
Abstract: High-Level Synthesis (HLS) tools allow the generation of a large variety of hardware implementations from the same specification by setting different optimization directives. Each combination of HLS directives returns an implementation of the target application that is based on a particular microarchitecture. Designers are interested only in the subset of implementations that correspond to Pareto-optimal points in the performance versus cost design space. Finding this subset is hard because the relationship between the HLS directives and the Pareto-optimal implementations cannot be foreseen. Hence, designers must default to an exploration of the design space through many time-consuming HLS runs. We present a methodology that infers knowledge from past design explorations to identify high-quality directives for new target applications. To this end, we formulate a novel abstract representation of applications and their associated configuration spaces, introduce a similarity metric to compare quantitatively the configuration spaces of different applications, and a method to infer actionable information from a source space to a target space. The experimental results with the MachSuite benchmarks show that our approach retrieves close approximations of the Pareto frontier of best-performing implementations for the target application, in exchange for a small number of HLS runs.
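The Pareto-optimal subset the exploration targets is well defined; a minimal sketch for two minimized objectives (e.g. latency and area, in hypothetical units):

```python
# Extract the Pareto front from (latency, area) design points, where
# lower is better on both axes. A point survives if no other point is
# at least as good on both objectives.

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

In the paper's setting each point is one HLS run; the methodology's value is approximating this front while actually evaluating only a few candidate configurations.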

Proceedings ArticleDOI
15 Mar 2020
TL;DR: A review of the most recent progress in antenna design optimization with a focus on methods which address the challenges of efficiency and optimization capability via machine learning techniques.
Abstract: Antenna design optimization continues to attract a lot of interest. This is mainly because traditional antenna design methodologies are exhaustive and have no guarantee of yielding successful outcomes due to the complexity of contemporary antennas in terms of topology and performance requirements. Though design automation via optimization complements conventional antenna design approaches, antenna design optimization still presents a number of challenges. The major challenges in antenna design optimization include the efficiency and optimization capability of available methods to address a broad scope of antenna design problems considering the growing stringent specifications of modern antennas. This paper presents a review of the most recent progress in antenna design optimization with a focus on methods which address the challenges of efficiency and optimization capability via machine learning techniques. The methods highlighted in this paper will likely have an impact on the future development of antennas for a multiplicity of applications.

Journal ArticleDOI
TL;DR: The review concludes with perspectives on the future of computer-aided microfluidics design, including the introduction of cloud computing, machine learning, new ideation processes, and hybrid optimization.
Abstract: Microfluidic devices developed over the past decade feature greater intricacy, increased performance requirements, new materials, and innovative fabrication methods. Consequentially, new algorithmic and design approaches have been developed to introduce optimization and computer-aided design to microfluidic circuits: from conceptualization to specification, synthesis, realization, and refinement. The field includes the development of new description languages, optimization methods, benchmarks, and integrated design tools. Here, recent advancements are reviewed in the computer-aided design of flow-, droplet-, and paper-based microfluidics. A case study of the design of resistive microfluidic networks is discussed in detail. The review concludes with perspectives on the future of computer-aided microfluidics design, including the introduction of cloud computing, machine learning, new ideation processes, and hybrid optimization.
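The resistive-network case study rests on the standard hydraulic-electrical analogy: pressure-driven flow obeys Q = ΔP / R, with channel resistances combining like electrical resistors. A minimal sketch (units and values illustrative):

```python
# Hydraulic resistance algebra for resistive microfluidic networks.
# Flow rate Q for a pressure drop dP across resistance R mirrors Ohm's law.

def series(*rs):
    """Total resistance of channels connected end to end."""
    return sum(rs)

def parallel(*rs):
    """Total resistance of channels sharing both junctions."""
    return 1.0 / sum(1.0 / r for r in rs)

def flow_rate(delta_p, resistance):
    """Volumetric flow for a pressure drop across a resistance."""
    return delta_p / resistance
```

Designing a resistive network then amounts to choosing channel geometries (which set the individual R values) so that each branch receives its target flow.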

Proceedings ArticleDOI
09 Mar 2020
TL;DR: In this article, the authors introduce hardware security for the Electronic Design Automation (EDA) community and review prior (academic) art for EDA-driven security evaluation and implementation of countermeasures and discuss strategies and challenges for advancing research and development toward secure composition of circuits and systems.
Abstract: Modern electronic systems become evermore complex, yet remain modular, with integrated circuits (ICs) acting as versatile hardware components at their heart. Electronic design automation (EDA) for ICs has focused traditionally on power, performance, and area. However, given the rise of hardware-centric security threats, we believe that EDA must also adopt related notions like secure by design and secure composition of hardware. Despite various promising studies, we argue that some aspects still require more efforts, for example: effective means for compilation of assumptions and constraints for security schemes, all the way from the system level down to the "bare metal"; modeling, evaluation, and consideration of security-relevant metrics; or automated and holistic synthesis of various countermeasures, without inducing negative cross-effects. In this paper, we first introduce hardware security for the EDA community. Next we review prior (academic) art for EDA-driven security evaluation and implementation of countermeasures. We then discuss strategies and challenges for advancing research and development toward secure composition of circuits and systems.

Proceedings ArticleDOI
16 Jun 2020
TL;DR: Back-end-of-line (BEOL) integration of multi-tier logic and memory established within a commercial foundry is reported, enabled by a low-temperature BEOL-compatible complementary carbon nanotube (CNT) field-effect transistor (CNFET) logic technology, alongside a BEOL-compatible Resistive RAM (RRAM) technology.
Abstract: The inevitable slowing of two-dimensional scaling is motivating efforts to continue scaling along a new physical axis: the 3rd dimension. Here we report back-end-of-line (BEOL) integration of multi-tier logic and memory established within a commercial foundry. This is enabled by a low-temperature BEOL-compatible complementary carbon nanotube (CNT) field-effect transistor (CNFET) logic technology, alongside a BEOL-compatible Resistive RAM (RRAM) technology. All vertical layers are fabricated sequentially over the same starting substrate, using conventional BEOL nano-scale inter-layer vias (ILVs) as vertical interconnects (e.g., monolithic 3D integration, rather than chip-stacking and bonding). In addition, we develop the entire VLSI design infrastructure required for a foundry technology offering, including an industry-practice monolithic 3D process design kit (PDK) as well as a complete monolithic 3D standard cell library. The initial foundry process integrates 4 device tiers (2 tiers of complementary CNFET logic and 2 tiers of RRAM memory) with 15 metal layers at a ~130 nm technology node. We fabricate and experimentally validate the standard cell library across all monolithic 3D tiers, as well as a range of sub-systems including memories (BEOL SRAM, 1T1R memory arrays) as well as logic (including the compute core of a 16-bit microprocessor) - all of which is fabricated in the foundry within the BEOL interconnect stack. All fabrication is VLSI-compatible and leverages existing silicon CMOS infrastructure, and the entire design flow is compatible with existing commercial electronic design automation tools.

Proceedings ArticleDOI
11 Jun 2020
TL;DR: LLHD is designed as a simple, unambiguous reference description of a digital circuit, yet fully captures existing HDLs, and establishes the basis for innovation in HDLs and tools without redundant compilers or disjoint IRs.
Abstract: Modern Hardware Description Languages (HDLs) such as SystemVerilog or VHDL are, due to their sheer complexity, insufficient to transport designs through modern circuit design flows. Instead, each design automation tool lowers HDLs to its own Intermediate Representation (IR). These tools are monolithic and mostly proprietary, disagree in their implementations of HDLs, and while many redundant IRs exist, no IR today can be used through the entire circuit design flow. To solve this problem, we propose the LLHD multi-level IR. LLHD is designed as a simple, unambiguous reference description of a digital circuit, yet fully captures existing HDLs. We show this with our reference compiler on designs as complex as full CPU cores. LLHD comes with lowering passes to a hardware-near structural IR, which readily integrates with existing tools. LLHD establishes the basis for innovation in HDLs and tools without redundant compilers or disjoint IRs. For instance, we implement an LLHD simulator that runs up to 2.4× faster than commercial simulators but produces equivalent, cycle-accurate results. An initial vertically-integrated research prototype is capable of representing all levels of the IR, implements lowering from the behavioural to the structural IR, and covers a sufficient subset of SystemVerilog to support a full CPU design.
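The abstract's central idea, a multi-level IR with lowering passes from a behavioural to a structural level, can be illustrated with a toy sketch. The node types and the `INV` cell name below are illustrative assumptions for this example, not LLHD's actual instruction set:

```python
from dataclasses import dataclass

# Toy two-level IR in the spirit of a multi-level hardware IR.
@dataclass
class BehaviouralAssign:
    """Behavioural level: an assignment such as 'dst = !src' inside a process."""
    dst: str
    op: str
    src: str

@dataclass
class StructuralInstance:
    """Structural level: an instantiated gate with named port connections."""
    cell: str
    ports: dict

def lower(node):
    """Lowering pass: map a behavioural assignment to a structural gate instance."""
    if isinstance(node, BehaviouralAssign) and node.op == "not":
        return StructuralInstance(cell="INV", ports={"A": node.src, "Y": node.dst})
    raise NotImplementedError(f"no lowering rule for {node!r}")

# Lower 'q = !d' into an inverter instance driving q from d.
inst = lower(BehaviouralAssign(dst="q", op="not", src="d"))
```

A real lowering pass would cover many node kinds and preserve timing and signal semantics; the point here is only the shape of the transformation between IR levels.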

Proceedings ArticleDOI
30 Mar 2020
TL;DR: This paper provides detailed discussions and fair power-performance-area (PPA) comparisons of state-of-the-art pseudo-3D design flows, and extends a partitioning-first scheme into a partitioning-last design flow that increases design freedom with tolerable PPA degradation.
Abstract: Despite the recent academic efforts to develop Electronic Design Automation (EDA) algorithms for 3D ICs, the current market does not have commercial 3D computer-aided design (CAD) tools. Instead, pseudo-3D alternative design flows have been devised, which utilize commercial 2D CAD engines with tricks that help them operate as fairly efficient 3D CAD tools. In this paper, we provide detailed discussions and fair power-performance-area (PPA) comparisons of state-of-the-art pseudo-3D design flows. We also analyze the limitations of each design flow and provide solutions with better PPA and various design options. Our experiments using a commercial PDK, GDS layouts, and sign-off simulations demonstrate that we achieve up to 26% wirelength and 10% power consumption reductions for pseudo-3D design flows. We also extend a partitioning-first scheme into a partitioning-last design flow, which increases design freedom with tolerable PPA degradation.

Proceedings ArticleDOI
TL;DR: This paper introduces hardware security for the EDA community, and reviews prior (academic) art for EDA-driven security evaluation and implementation of countermeasures.
Abstract: Modern electronic systems become ever more complex, yet remain modular, with integrated circuits (ICs) acting as versatile hardware components at their heart. Electronic design automation (EDA) for ICs has focused traditionally on power, performance, and area. However, given the rise of hardware-centric security threats, we believe that EDA must also adopt related notions like secure by design and secure composition of hardware. Despite various promising studies, we argue that some aspects still require more effort, for example: effective means for compilation of assumptions and constraints for security schemes, all the way from the system level down to the "bare metal"; modeling, evaluation, and consideration of security-relevant metrics; or automated and holistic synthesis of various countermeasures, without inducing negative cross-effects. In this paper, we first introduce hardware security for the EDA community. Next, we review prior (academic) art for EDA-driven security evaluation and implementation of countermeasures. We then discuss strategies and challenges for advancing research and development toward secure composition of circuits and systems.

Proceedings ArticleDOI
Cunxi Yu
02 Nov 2020
TL;DR: A generic, end-to-end, high-performance, domain-specific, multi-stage multi-armed bandit framework for Boolean logic optimization that outperforms both hand-crafted flows and ML-explored flows in quality of results, and is orders of magnitude faster than ML-based approaches.
Abstract: Recent years have seen increasing employment of decision intelligence in electronic design automation (EDA), which aims to reduce the manual effort and boost the design closure process in modern toolflows. However, existing approaches either require a large amount of labeled data for training or are limited in practical EDA toolflow integration due to computation overhead. This paper presents a generic, end-to-end, high-performance, domain-specific, multi-stage multi-armed bandit framework for Boolean logic optimization. This framework addresses optimization problems on a) And-Inverter Graphs (# nodes), b) Conjunctive Normal Form (CNF) minimization (# clauses) for Boolean Satisfiability, c) post static timing analysis (STA) delay and area optimization for standard-cell technology mapping, and d) FPGA technology mapping for 6-input LUT architectures. Moreover, the proposed framework has been integrated with ABC [1], Yosys [2], VTR [3], and industrial tools. The experimental results demonstrate that our framework outperforms both hand-crafted flows [1] and ML-explored flows [4], [5] in quality of results, and is orders of magnitude faster than ML-based approaches [4], [5].
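The multi-armed bandit approach to flow tuning can be sketched with a minimal UCB1 loop. Note this is a hedged illustration, not the paper's actual framework: the pass names (`rewrite`, `refactor`, `balance`) echo common logic-synthesis commands, and `apply_pass` simulates their effect with random node-count reductions rather than running a real synthesis engine:

```python
import math
import random

def apply_pass(pass_name, node_count):
    """Simulated optimization pass: removes a random fraction of AIG nodes."""
    strength = {"rewrite": 0.10, "refactor": 0.07, "balance": 0.03}[pass_name]
    return int(node_count * (1.0 - random.uniform(0, strength)))

def ucb1_flow(passes, start_nodes, budget=50, c=1.4):
    """Select passes with the UCB1 rule; reward = fractional node reduction."""
    counts = {p: 0 for p in passes}
    rewards = {p: 0.0 for p in passes}
    nodes = start_nodes
    for t in range(1, budget + 1):
        untried = [p for p in passes if counts[p] == 0]
        if untried:
            # Play every arm once before trusting the UCB estimates.
            choice = untried[0]
        else:
            # Exploit mean reward plus an exploration bonus for rarely tried arms.
            choice = max(passes, key=lambda p: rewards[p] / counts[p]
                         + c * math.sqrt(math.log(t) / counts[p]))
        new_nodes = apply_pass(choice, nodes)
        reward = (nodes - new_nodes) / max(nodes, 1)
        counts[choice] += 1
        rewards[choice] += reward
        nodes = new_nodes
    return nodes, counts

random.seed(0)  # deterministic demo
final_nodes, pulls = ucb1_flow(["rewrite", "refactor", "balance"], start_nodes=10000)
```

In a real integration the reward would come from the tool's own metric (node count, clause count, delay, or LUT count), and the multi-stage aspect would run separate bandits per flow stage.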

Proceedings ArticleDOI
11 Oct 2020
TL;DR: A machine learning (ML) enabled constrained multi-objective optimization solver is proposed to drastically reduce the number of design iterations required for Pareto set discovery for power systems.
Abstract: With the rise of more-electric and all-electric aviation power systems, engineering efforts in system optimization shift to the electrical domain. A substantial amount of time and resources is dedicated to finding the best system architecture and design specifications to meet energy efficiency goals and physical constraints. Current processes utilize models of power system components to determine the optimal designs. However, such modeling is computationally expensive, as numerous iterations are required to settle on an optimal design. This paper proposes a machine learning (ML) enabled constrained multi-objective optimization solver to drastically reduce the number of design iterations required for Pareto set discovery for power systems. The process contributes significantly to design automation. A heavy-duty vertical-takeoff-and-landing (VTOL) unmanned aerial vehicle (UAV) power system is selected to demonstrate the efficacy and limitations of ML-enabled optimization. Two extreme trials were run: 1) a search throughout the entire design space, with only 9% of designs valid within constraints; 2) a search throughout the valid design space. While Trial 1 was unsuccessful in discovering the Pareto front, Trial 2 uncovered all Pareto optimal designs with a 99% reduction in iterations compared to a brute-force method.
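The core operation behind Pareto set discovery, filtering candidate designs down to the non-dominated, constraint-satisfying set, can be sketched as below. The objectives (`mass`, `loss`), the power constraint, and all numbers are hypothetical stand-ins for this example, not data from the paper:

```python
def pareto_front(designs, constraint):
    """Return non-dominated designs (minimizing both objectives) that
    satisfy the constraint; dominated or invalid designs are discarded."""
    valid = [d for d in designs if constraint(d)]
    front = []
    for d in valid:
        # d is dominated if some valid design is no worse in both
        # objectives and strictly better in at least one.
        dominated = any(o["mass"] <= d["mass"] and o["loss"] <= d["loss"]
                        and (o["mass"] < d["mass"] or o["loss"] < d["loss"])
                        for o in valid)
        if not dominated:
            front.append(d)
    return front

# Hypothetical VTOL power-system candidates: mass [kg], loss [W], peak power [kW].
designs = [
    {"mass": 12.0, "loss": 300, "power": 45},
    {"mass": 10.5, "loss": 340, "power": 44},
    {"mass": 11.0, "loss": 310, "power": 38},  # violates the power constraint
    {"mass": 13.0, "loss": 290, "power": 46},
    {"mass": 13.5, "loss": 320, "power": 47},  # dominated by the first design
]
front = pareto_front(designs, constraint=lambda d: d["power"] >= 40)
```

The ML component in the paper replaces the exhaustive dominance check over all candidates with a learned surrogate that steers the search toward the valid, non-dominated region, which is where the 99% iteration reduction comes from.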