Showing papers on "Electronic design automation published in 2019"


Journal ArticleDOI
Sangeun Oh1, Yongsu Jung2, Seongsin Kim1, Ikjin Lee2, Namwoo Kang1 
TL;DR: In this article, an artificial intelligence (AI)-based design automation framework is proposed that is capable of generating numerous design options which are not only aesthetic but also optimized for engineering performance.
Abstract: Deep learning has recently been applied to various research areas of design optimization. This study presents the need and effectiveness of adopting deep learning for the generative design (or design exploration) research area. This work proposes an artificial intelligence (AI)-based design automation framework that is capable of generating numerous design options which are not only aesthetic but also optimized for engineering performance. The proposed framework integrates topology optimization and deep generative models (e.g., generative adversarial networks (GANs)) in an iterative manner to explore new design options, thus generating a large number of designs starting from limited previous design data. In addition, anomaly detection can evaluate the novelty of generated designs, thus helping designers choose among design options. The 2D wheel design problem is applied as a case study for validation of the proposed framework. The framework manifests better aesthetics, diversity, and robustness of generated designs than previous generative design methods.

171 citations
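
The core of the framework above is an iterate-and-augment loop between a deep generative model and topology optimization. The sketch below illustrates that loop in schematic form only; every function body, array shape, and constant here is an invented placeholder standing in for the paper's actual GAN, optimizer, and anomaly detector.

```python
# Schematic of the iterative exploration loop described in the abstract.
# All function bodies are illustrative placeholders, not the authors' code.
import numpy as np

def topology_optimize(design: np.ndarray) -> np.ndarray:
    """Placeholder: refine a density field for engineering performance."""
    return np.clip(design, 0.0, 1.0)  # a real SIMP/level-set solver goes here

def train_generator(designs: np.ndarray):
    """Placeholder: fit a deep generative model (e.g., a GAN) on designs."""
    mean = designs.mean(axis=0)
    return lambda n: mean + 0.1 * np.random.randn(n, *mean.shape)

def novelty_score(design: np.ndarray, reference: np.ndarray) -> float:
    """Placeholder anomaly-detection proxy: distance from the data mean."""
    return float(np.linalg.norm(design - reference.mean(axis=0)))

designs = np.random.rand(32, 64, 64)           # limited previous design data
for _ in range(3):                             # iterative exploration loop
    sample = train_generator(designs)          # 1. fit generative model
    candidates = sample(16)                    # 2. propose new design options
    refined = np.stack([topology_optimize(c) for c in candidates])  # 3. optimize
    designs = np.concatenate([designs, refined])                    # 4. augment

scores = [novelty_score(d, designs) for d in designs[-16:]]  # rank by novelty
```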


Proceedings ArticleDOI
02 Jun 2019
TL;DR: The planned Alpha release of OpenROAD, an open-source end-to-end silicon compiler, will help realize the goal of “democratization of hardware design” by reducing cost, expertise, schedule and risk barriers that confront system designers today.
Abstract: We describe the planned Alpha release of OpenROAD, an open-source end-to-end silicon compiler. OpenROAD will help realize the goal of "democratization of hardware design", by reducing cost, expertise, schedule and risk barriers that confront system designers today. The development of open-source, self-driving design tools is in and of itself a "moon shot" with numerous technical and cultural challenges. The open-source flow incorporates a compatible open-source set of tools that span logic synthesis, floorplanning, placement, clock tree synthesis, global routing and detailed routing. The flow also incorporates analysis and support tools for static timing analysis, parasitic extraction, power integrity analysis, and cloud deployment. We also note several observed challenges, or "lessons learned", with respect to development of open-source EDA tools and flows.

87 citations


Journal ArticleDOI
TL;DR: A review of recent research in design for additive manufacturing (DfAM), including additive manufacturing terminology, trends, methods, classification of DfAM methods and applications is presented in this article.
Abstract: Purpose: This paper aims to review recent research in design for additive manufacturing (DfAM), including additive manufacturing (AM) terminology, trends, methods, classification of DfAM methods, and applications.

82 citations


Proceedings ArticleDOI
02 Jun 2019
TL;DR: Experimental results show the proposed GCN model has superior accuracy to classical machine learning models on difficult-to-observe node prediction, and compared with commercial testability analysis tools, the proposed observation point insertion flow achieves similar fault coverage.
Abstract: Applications of deep learning to electronic design automation (EDA) have recently begun to emerge, although they have mainly been limited to processing of regular structured data such as images. However, many EDA problems require processing irregular structures, and it can be non-trivial to manually extract important features in such cases. In this paper, a high-performance graph convolutional network (GCN) model is proposed for the purpose of processing irregular graph representations of logic circuits. A GCN classifier is first trained to predict observation point candidates in a netlist. The GCN classifier is then used as part of an iterative process to propose observation point insertion based on the classification results. Experimental results show the proposed GCN model has superior accuracy to classical machine learning models on difficult-to-observe node prediction. Compared with commercial testability analysis tools, the proposed observation point insertion flow achieves similar fault coverage with an 11% reduction in observation points and a 6% reduction in test pattern count.

76 citations
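
To make the graph-based formulation concrete, here is a minimal, untrained sketch of a graph convolution over a toy netlist. The adjacency, features, and weights are random placeholders; a real model would be trained on labeled netlists (e.g., with SCOAP-style testability features per gate).

```python
# Minimal sketch of the idea, not the authors' model: a graph convolution
# aggregates neighbor features over a netlist graph, and a classifier head
# flags difficult-to-observe nodes as observation-point candidates.
import numpy as np

def gcn_layer(A_hat: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One propagation step: normalized adjacency x features x weights, ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

n, f, h = 6, 4, 8                               # toy netlist: 6 gates, 4 features
A = np.random.rand(n, n) < 0.3                  # random gate connectivity
A = np.maximum(A, A.T) | np.eye(n, dtype=bool)  # symmetric, with self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt             # symmetric normalization

X = np.random.randn(n, f)                       # per-gate features (placeholders)
W1, W2 = np.random.randn(f, h), np.random.randn(h, 1)  # untrained weights
H = gcn_layer(A_hat, X, W1)                     # embed each gate with its neighbors
logits = (A_hat @ H @ W2).ravel()               # classifier head
candidates = np.where(1 / (1 + np.exp(-logits)) > 0.5)[0]
print("observation-point candidates:", candidates)
```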


Journal ArticleDOI
TL;DR: In this paper, a complete design and manufacturing workflow that simultaneously integrates material design, structural design, and product fabrication of functionally graded material (FGM) structures based on digital materials is presented.
Abstract: Voxel-based multimaterial jetting additive manufacturing allows fabrication of digital materials (DMs) at the meso-scale (∼1 mm) by controlling the deposition patterns of soft elastomeric and rigid glassy polymers at the voxel-scale (∼90 μm). The digital materials can then be used to create heterogeneous functionally graded material (FGM) structures at the macro-scale (∼10 mm) programmed to behave in a predefined manner. This offers huge potential for the design and fabrication of novel and complex bespoke mechanical structures. This paper presents a complete design and manufacturing workflow that simultaneously integrates material design, structural design, and product fabrication of FGM structures based on digital materials. This is enabled by a regression analysis of the experimental data on mechanical performance of the DMs, i.e., Young’s modulus, tensile strength, and elongation at break. This allows us to express the material behavior simply as a function of the microstructural descriptors (in this case, just volume fraction) without having to understand the underlying microstructural mechanics, while simultaneously connecting it to the process parameters. Our proposed design and manufacturing approach is then demonstrated and validated in two series of design exercises to devise complex FGM structures. First, we design, computationally predict, and experimentally validate the behavior of prescribed designs of FGM tensile structures with different material gradients. Second, we present a design automation approach for optimal FGM structures. The comparison between the simulations and the experiments with the FGM structures shows that the presented design and fabrication workflow, based on our modeling approach for DMs at the meso-scale, can be effectively used to design and predict the performance of FGMs at the macro-scale.

58 citations
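
The material-model step of this workflow reduces to fitting a regression of measured DM properties against a single descriptor (volume fraction) and inverting it to place material along a gradient. Here is a hedged sketch of that step; the sample data below are invented round numbers, not the paper's measurements.

```python
# Fit a simple regression of a digital material's Young's modulus against the
# rigid-phase volume fraction, then invert it to pick fractions for a target
# stiffness gradient. Data and model form are illustrative only.
import numpy as np

vf = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # rigid-polymer volume fraction
E = np.array([1.2, 15.0, 120.0, 600.0, 2000.0]) # modulus [MPa] (made-up values)

coeffs = np.polyfit(vf, np.log(E), deg=2)       # log-space quadratic fit
modulus = lambda v: np.exp(np.polyval(coeffs, v))

# Invert the model on a dense grid: target modulus -> volume fraction.
grid = np.linspace(0.0, 1.0, 1001)
def vf_for_target(E_target: float) -> float:
    return float(grid[np.argmin(np.abs(modulus(grid) - E_target))])

# Example: a linear stiffness gradient along a 10-segment FGM tensile bar.
targets = np.linspace(10.0, 1000.0, 10)         # desired modulus per segment
print([round(vf_for_target(t), 3) for t in targets])
```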


Journal ArticleDOI
TL;DR: An overview of the current and planned activities related to the ColdFlux project is presented and the design assumptions and decisions that were made to allow the development of design tools for million-gate circuits are justified.
Abstract: The IARPA SuperTools program requires the development of superconducting electronic design automation (S-EDA) and superconducting technology computer-aided design (S-TCAD) tools aimed at enabling the reliable design of complex superconducting digital circuits with millions of Josephson junctions. Within the SuperTools program, the ColdFlux project addresses S-EDA and S-TCAD tool research and development in four areas: 1) RTL synthesis, architectures and verification; 2) analog design and layout synthesis; 3) physical design and test; and 4) device and process modeling/simulation and cell library design. Capabilities include, but are not limited to, the following: device-level modeling and simulation of Josephson junctions, modeling and simulation of the superconducting manufacturing processes, powerful new electrical circuit simulation, parameterized schematic and layout libraries, optimization, compact SPICE-like model extraction, timing analysis, behavioral, register-transfer-level and logic syntheses, clock tree synthesis, placement and routing, layout-versus-schematic extraction, functional verification, and the evaluation of designs in the presence of magnetic fields and trapped flux. ColdFlux consists of six research groups from four continents. Here, we present an overview of the current and planned activities related to the project and justify the design assumptions and decisions that were made to allow the development of design tools for million-gate circuits.

54 citations


Posted Content
TL;DR: Instead of following the common top-down flow for compact DNN (Deep Neural Network) design, SkyNet provides a bottom-up DNN design approach with comprehensive understanding of the hardware constraints at the very beginning to deliver hardware-efficient DNNs.
Abstract: Object detection and tracking are challenging tasks for resource-constrained embedded systems. While these tasks are among the most compute-intensive tasks from the artificial intelligence domain, they are only allowed to use limited computation and memory resources on embedded devices. Meanwhile, such resource-constrained implementations are often required to satisfy additional demanding requirements such as real-time response, high-throughput performance, and reliable inference accuracy. To overcome these challenges, we propose SkyNet, a hardware-efficient neural network to deliver state-of-the-art detection accuracy and speed for embedded systems. Instead of following the common top-down flow for compact DNN (Deep Neural Network) design, SkyNet provides a bottom-up DNN design approach with comprehensive understanding of the hardware constraints at the very beginning to deliver hardware-efficient DNNs. The effectiveness of SkyNet is demonstrated by winning the competitive System Design Contest for low-power object detection in the 56th IEEE/ACM Design Automation Conference (DAC-SDC), where our SkyNet significantly outperforms all other 100+ competitors: it delivers 0.731 Intersection over Union (IoU) and 67.33 frames per second (FPS) on a TX2 embedded GPU, and 0.716 IoU and 25.05 FPS on an Ultra96 embedded FPGA. The evaluation of SkyNet is also extended to GOT-10K, a recent large-scale high-diversity benchmark for generic object tracking in the wild. For the state-of-the-art object trackers SiamRPN++ and SiamMask, where ResNet-50 is employed as the backbone, implementations using our SkyNet as the backbone DNN are 1.60X and 1.73X faster with better or similar accuracy when running on a 1080Ti GPU, and 37.20X smaller in terms of parameter size for a significantly better memory and storage footprint.

49 citations


Journal ArticleDOI
TL;DR: GAGP achieves very high predictive power, matching (and in some cases exceeding) that of state-of-the-art supervised learning methods, making it particularly useful in engineering design with big data.
Abstract: We introduce a novel method for Gaussian process (GP) modeling of massive datasets called globally approximate Gaussian process (GAGP). Unlike most large-scale supervised learners such as neural networks and trees, GAGP is easy to fit and its model behavior can be interpreted, making it particularly useful in engineering design with big data. The key idea of GAGP is to build a collection of independent GPs that use the same hyperparameters but randomly distribute the entire training dataset among themselves. This is based on our observation that the GP hyperparameter approximations change negligibly as the size of the training data exceeds a certain level, which can be estimated systematically. For inference, the predictions from all GPs in the collection are pooled, allowing the entire training dataset to be efficiently exploited for prediction. Through analytical examples, we demonstrate that GAGP achieves very high predictive power, matching (and in some cases exceeding) that of state-of-the-art supervised learning methods. We illustrate the application of GAGP in engineering design with a problem on data-driven metamaterials, using it to link reduced-dimension geometrical descriptors of unit cells and their properties. Searching for new unit cell designs with desired properties is then achieved by employing GAGP in inverse optimization.

48 citations
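
The GAGP recipe in the abstract is simple enough to sketch directly: estimate hyperparameters once on a modest subset, freeze them, fit independent GPs on random shards of the full dataset, and pool the predictions. The sketch below uses scikit-learn and synthetic data; the shard count, subset size, and kernel are arbitrary illustrative choices, not the paper's.

```python
# Hedged sketch of the GAGP idea as described in the abstract (not the
# authors' code): shared hyperparameters, independent shards, pooled output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(3000)

# Step 1: learn hyperparameters on a subset large enough for them to stabilize.
sub = rng.choice(len(X), size=500, replace=False)
pilot = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X[sub], y[sub])

# Step 2: shard the full dataset among independent GPs with the kernel frozen.
shards = np.array_split(rng.permutation(len(X)), 6)
experts = [
    GaussianProcessRegressor(kernel=pilot.kernel_, optimizer=None).fit(X[s], y[s])
    for s in shards
]

# Step 3: pool the experts' predictions (simple average here).
X_test = rng.uniform(-3, 3, size=(5, 2))
y_pred = np.mean([gp.predict(X_test) for gp in experts], axis=0)
print(y_pred)
```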


Journal ArticleDOI
TL;DR: A simulation framework for evaluating the effect of location-dependent variability in photonic integrated circuits is presented, and it is shown how variability-aware design can be essential for future photonic circuit design, especially in a fabless ecosystem where details of the foundry processes are not available to the designers.
Abstract: We present a simulation framework for evaluating the effect of location-dependent variability in photonic integrated circuits. The framework combines a fast circuit simulator with circuit layout information and wafer maps of waveguide width and layer thickness variations to estimate the statistics of the circuit performance through Monte Carlo simulations. We illustrate this with ring resonator filters, a design sweep of Mach–Zehnder lattice filters, and the tolerance optimization of a Mach–Zehnder interferometer, and show how variability-aware design can be essential for future photonic circuit design, especially in a fabless ecosystem where details of the foundry processes are not available to the designers.

41 citations
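
A minimal illustration of the Monte Carlo idea, not the authors' simulator: sample a circuit's wafer location, read width and thickness deviations from a (toy) wafer map, linearize the effective index, and collect resonance statistics for a ring filter. The sensitivities, sigmas, and ring parameters are invented round numbers.

```python
# Toy location-dependent variability analysis of a ring resonator.
import numpy as np

rng = np.random.default_rng(1)
n_eff0, R, m = 2.4, 10e-6, 96                  # nominal index, radius, order
dneff_dw, dneff_dt = 2.0e-3, 4.0e-3            # index sensitivity per nm (made up)

def wafer_map(x_mm, y_mm, sigma_nm, corr_scale=20.0):
    """Toy deviation map [nm]: smooth wafer-level trend plus local noise."""
    trend = 0.5 * sigma_nm * np.sin(x_mm / corr_scale) * np.cos(y_mm / corr_scale)
    return trend + sigma_nm * rng.standard_normal()

resonances = []
for _ in range(2000):                          # Monte Carlo over die placements
    x, y = rng.uniform(0, 100, size=2)         # circuit location on the wafer
    dw = wafer_map(x, y, sigma_nm=5.0)         # width deviation at that spot
    dt = wafer_map(x, y, sigma_nm=2.0)         # thickness deviation
    n_eff = n_eff0 + dneff_dw * dw + dneff_dt * dt
    resonances.append(2 * np.pi * R * n_eff / m)  # resonance: m*lambda = L*n_eff

lam = np.array(resonances)
print(f"mean {lam.mean()*1e9:.2f} nm, sigma {lam.std()*1e9:.3f} nm")
```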


Proceedings ArticleDOI
01 Nov 2019
TL;DR: LSOracle is the first to exploit state-of-the-art And-Inverter Graph (AIG) and Majority-Inverter Graph (MIG) logic optimizers and relies on a Deep Neural Network (DNN) to automatically decide which optimizer should handle different portions of the circuit.
Abstract: The increasing complexity of modern Integrated Circuits (ICs) leads to systems composed of various different Intellectual Property (IP) blocks, known as System-on-Chip (SoC). Such complexity requires strong expertise from engineers, who rely on expensive commercial EDA tools. To overcome such a limitation, an automated open-source logic synthesis flow is required. In this context, this work proposes LSOracle: a novel automated mixed logic synthesis framework. LSOracle is the first to exploit state-of-the-art And-Inverter Graph (AIG) and Majority-Inverter Graph (MIG) logic optimizers and relies on a Deep Neural Network (DNN) to automatically decide which optimizer should handle different portions of the circuit. To do so, LSOracle applies k-way partitioning to split a DAG into multiple partitions and uses a DNN to choose the best-fit optimizer. Post-tech-mapping ASIC results, targeting the 7nm ASAP standard cell library, for a set of mixed-logic circuits, show an average improvement in area-delay product of 6.87% (up to 10.26%) and 2.70% (up to 6.27%) when compared to AIG and MIG, respectively. In addition, we show that for the considered circuits, LSOracle achieves an area close to AIGs (which delivered smaller circuits) with performance similar to that of MIGs, which delivered faster circuits.
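
Conceptually, the LSOracle flow is: partition the netlist DAG k-ways, featurize each partition, and let a classifier route it to the AIG or MIG engine. The sketch below shows that control flow only; the partitioner, the features, and the threshold standing in for the trained DNN are all invented placeholders.

```python
# Conceptual sketch of the flow described above, with stand-in components.
import numpy as np

def kway_partition(num_nodes: int, k: int) -> list:
    """Placeholder partitioner; a real flow would use hypergraph partitioning."""
    return np.array_split(np.arange(num_nodes), k)

def features(part: np.ndarray) -> np.ndarray:
    """Invented per-partition features, e.g., size and AND/MAJ-friendliness."""
    return np.array([len(part), np.random.rand()])

def choose_optimizer(x: np.ndarray) -> str:
    """Stand-in for the trained DNN deciding which optimizer fits better."""
    return "MIG" if x[1] > 0.5 else "AIG"      # threshold replaces the network

parts = kway_partition(num_nodes=10_000, k=8)
plan = {i: choose_optimizer(features(p)) for i, p in enumerate(parts)}
print(plan)  # each partition is then optimized by its assigned engine and merged
```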

Journal ArticleDOI
TL;DR: It is shown that all electronics CAD tools—high-level synthesis, logic synthesis, physical design, verification, test, and post-silicon validation—are potential threat vectors to different degrees.
Abstract: Fabless semiconductor companies design system-on-chips (SoC) by using third-party intellectual property (IP) cores and fabricate them in offshore, potentially untrustworthy foundries. Owing to the globally distributed electronics supply chain, security has emerged as a serious concern. In this article, we explore electronics computer-aided design (CAD) software as a threat vector that can be exploited to introduce vulnerabilities into the SoC. We show that all electronics CAD tools—high-level synthesis, logic synthesis, physical design, verification, test, and post-silicon validation—are potential threat vectors to different degrees. We have demonstrated CAD-based attacks on several benchmarks, including the commercial ARM Cortex M0 processor [1].

Journal ArticleDOI
TL;DR: A comprehensive survey on well-developed methodologies and tools for data plane verification, control plane verification, data plane testing, and control plane testing is performed.
Abstract: Networks have grown increasingly complicated. Violations of intended policies can compromise network availability and network reliability. Network operators need to ensure that their policies are correctly implemented. This has inspired a research field, network verification and testing, that enables users to automatically detect bugs and systematically reason about their networks. Furthermore, techniques ranging from formal modeling to verification and testing have been applied to help operators build reliable systems in electronic design automation and software. Inspired by this success, network verification has recently seen increased attention in the academic and industrial communities. As an area of current interest, it is an interdisciplinary subject (with fields including formal methods, mathematical logic, programming languages, and networks), making it daunting for a nonprofessional. We perform a comprehensive survey on well-developed methodologies and tools for data plane verification, control plane verification, data plane testing, and control plane testing. This survey also provides lessons gained from existing solutions and a perspective on future research developments.

Journal ArticleDOI
TL;DR: A workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards is described, providing synthetic biologists with easy-to-use tools to create predictable biological systems, hiding away the complexity of building computational models.
Abstract: Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy-to-use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.

Journal ArticleDOI
TL;DR: Modern integrated power systems need to solve multiphysics problems within a shorter design time and require multivariable optimization to obtain higher efficiencies and more compact designs; as a result, design automation is bringing more integration to design tools for power electronics.
Abstract: From component modeling and schematic capture to circuit simulation and practical implementation, design tools have long played a crucial role in the design and development of power electronics converters and motor drive systems. As the switching speeds rise and power densities increase, the pressure to improve automation in these design tools is mounting. Modern integrated power systems need to solve multiphysics problems within a shorter design time, and require multivariable optimization to obtain higher efficiencies and more compact designs. As a result, design automation is bringing more integration to design tools for power electronics.

Journal ArticleDOI
TL;DR: The learnings and best practices embodied in the design and fabrication capability of commercially deployed monolithically integrated coherent optical communication SOCs are leveraged to develop an optimized and scalable integration platform for a turn-key foundry process.
Abstract: We review the state-of-the-art in monolithically integrated InP-based system-on-chip (SOC) photonic integrated circuits (PICs) and the extension of this capability to a foundry offering. The learnings and best practices embodied in the design and fabrication capability of commercially deployed monolithically integrated coherent optical communication SOCs are leveraged to develop an optimized and scalable integration platform for a turn-key foundry process. The design automation and infrastructure required to enable a consistent, reproducible InP-based foundry offering are summarized.

Journal ArticleDOI
TL;DR: To gain a deep understanding of the emerging reliability issues, it is intuitive to explore the origin of this increasing threat.
Abstract: Reliability assurance is of great importance for any commercial product, and the integrated circuit (IC) is no exception. Unlike the yield issues that are time-independent and can be screened through burn-in before taping out, reliability issues introduce time-dependent aging in performance and eventually cause the malfunction of ICs. Such a process can take several months or even years to happen and thus cannot be detected before entering the market. For many years, reliability was not a big concern, and design rule check during physical verification was usually sufficient. However, as technology migrates to advanced nodes, reliability is acting as a showstopper and challenging the entire IC industry [1]. To gain a deep understanding of the emerging reliability issues, it is intuitive to explore the origin of this increasing threat.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the enhanced prefix adder synthesis algorithm can achieve a Pareto frontier of high quality over a wide design space, bridging the gap between architectural and physical designs.
Abstract: In spite of the maturity of modern electronic design automation (EDA) tools, optimized designs at the architectural stage may become suboptimal after going through the physical design flow. Adder design has long been a fundamental problem in the very large-scale integration industry, yet designers cannot achieve optimal solutions by running EDA tools on the set of available prefix adder architectures. In this paper, we enhance a state-of-the-art prefix adder synthesis algorithm to obtain a much wider solution space in the architectural domain. On top of that, a machine learning-based design space exploration methodology is applied to predict the Pareto frontier of the adders in the physical domain, which is infeasible by exhaustively running EDA tools for innumerable architectural solutions. Considering the high cost of obtaining the true values for learning, an active learning algorithm is proposed to select the representative data during the learning process, which uses less labeled data while achieving better quality of the Pareto frontier. Experimental results demonstrate that our framework can achieve a Pareto frontier of high quality over a wide design space, bridging the gap between architectural and physical designs. Source code and data are available at https://github.com/yuzhe630/adder-DSE .
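
The active-learning loop the abstract describes can be sketched in a few lines: label only the designs where a surrogate is least certain, so few expensive EDA runs are needed to map the design space. This is a generic sketch with a stand-in cost function, not the code released at the paper's repository.

```python
# Generic uncertainty-driven active learning over a pool of candidate designs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
pool = rng.uniform(0, 1, size=(500, 3))        # architectural knobs per adder

def run_eda(x: np.ndarray) -> float:
    """Stand-in for a full synthesis/place-and-route run returning delay."""
    return float(x[0] ** 2 + 0.5 * x[1] - 0.2 * x[2] + 0.02 * rng.standard_normal())

labeled = list(rng.choice(len(pool), size=10, replace=False))
y = {i: run_eda(pool[i]) for i in labeled}

for _ in range(20):                            # active-learning iterations
    gp = GaussianProcessRegressor().fit(pool[labeled], [y[i] for i in labeled])
    _, std = gp.predict(pool, return_std=True)
    std[labeled] = -1.0                        # never re-query labeled designs
    pick = int(np.argmax(std))                 # most informative candidate
    labeled.append(pick)
    y[pick] = run_eda(pool[pick])              # one more (expensive) EDA label

print(f"labeled {len(labeled)} of {len(pool)} designs")
```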

Proceedings ArticleDOI
15 Apr 2019
TL;DR: An integrated toolchain is introduced that supports the architectural modeling of CPS with LECs, and also has extensive support for the engineering and integration of LECs, including support for training data collection, LEC training, LEC evaluation and verification, and system software deployment.
Abstract: Recent advances in machine learning have led to the appearance of Learning-Enabled Components (LECs) in Cyber-Physical Systems. LECs are being evaluated and used for various complex functions, including perception and control. However, very little tool support is available for design automation in such systems. This paper introduces an integrated toolchain that supports the architectural modeling of CPS with LECs, but also has extensive support for the engineering and integration of LECs, including support for training data collection, LEC training, LEC evaluation and verification, and system software deployment. Additionally, the toolsuite supports the modeling and analysis of safety cases - a critical part of the engineering process for mission- and safety-critical systems.

Journal ArticleDOI
TL;DR: This paper focuses on major developments in three important aspects of hardware/software partitioning, which has a great effect on the performance of embedded systems.
Abstract: In electronic design automation, hardware/software co-design significantly reduces the time-to-market and improves the performance of embedded systems. With the increasing scale of applications and complexity of hardware architecture of embedded systems, hardware/software co-design is still a research hotspot. As hardware/software co-design is a wide topic, this paper focuses on major developments in three important aspects of hardware/software partitioning, which has a great effect on the performance of embedded systems. Firstly, various partitioning models, including hardware architectures and abstract models, are surveyed. Secondly, classical and new algorithms for hardware/software partitioning are classified and analyzed. Thirdly, existing parallel algorithms for hardware/software co-design are discussed in detail. Finally, possible research directions are pointed out in conclusion.
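
To make the partitioning problem concrete, here is a toy greedy heuristic in the spirit of the simplest classical algorithms such a survey covers: move tasks to hardware by time-saved-per-area until an area budget is exhausted. The task numbers are invented for illustration.

```python
# Toy greedy hardware/software partitioning under an area budget.
tasks = [  # (name, software time, hardware time, hardware area)
    ("fft",    9.0, 1.5, 40.0),
    ("filter", 4.0, 1.0, 15.0),
    ("crc",    1.0, 0.2, 10.0),
    ("parse",  2.0, 1.8,  5.0),
]
area_budget = 50.0

# Rank tasks by time saved per unit of hardware area, best first.
ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
hw, used = [], 0.0
for name, t_sw, t_hw, area in ranked:
    if used + area <= area_budget and t_hw < t_sw:
        hw.append(name)
        used += area

total = sum(t[2] if t[0] in hw else t[1] for t in tasks)
print(f"hardware: {hw}, area used: {used}, total time: {total}")
```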

Journal ArticleDOI
TL;DR: The generic optimized finger design (GOFD) method is proposed, which automates the design process of single- and multi-function finger grippers and substantially reduces the design time of fingers.

Proceedings ArticleDOI
21 Jan 2019
TL;DR: This work presents a design method which - for the first time - allows for the scalable design of FCN circuits that satisfy dedicated constraints of these technologies.
Abstract: Field-coupled Nanocomputing (FCN) technologies are considered a solution to overcome physical boundaries of conventional CMOS approaches. But despite groundbreaking advances regarding their physical implementation, e.g., Quantum-dot Cellular Automata (QCA), Nanomagnet Logic (NML), and many more, there is an unsettling lack of methods for large-scale design automation of FCN circuits. In fact, design automation for this class of technologies is still in its infancy - heavily relying either on manual labor or automatic methods which are applicable only to rather small functionality. This work presents a design method which - for the first time - allows for the scalable design of FCN circuits that satisfy dedicated constraints of these technologies. The proposed scheme is capable of handling around 40000 gates within seconds, while the current state-of-the-art takes hours to handle around 20 gates. This is confirmed by experimental results on the layout level for various established benchmark libraries.

Journal ArticleDOI
01 Jul 2019
TL;DR: In this paper, inductor performance properties are exploited to develop a two-step surrogate modeling strategy in order to evaluate the behavior of inductors with high efficiency and accuracy and an automated design flow for radiofrequency circuits using this surrogate modeling of passive components is presented.
Abstract: The knowledge-intensive nature of radiofrequency circuit design and the scarce design automation support work against the increasingly stringent time-to-market demands. Optimization algorithms are starting to play a crucial role; however, their effectiveness is dramatically limited by the accuracy of the evaluation functions of objectives and constraints. Accurate performance evaluation of radiofrequency passive elements, e.g., inductors, is provided by electromagnetic simulators, but their computational cost makes their use within iterative optimization loops unaffordable. Surrogate modeling strategies, e.g., Kriging, support vector machines, artificial neural networks, etc., arise as a promising modeling alternative. However, their limited accuracy in this kind of application has prevented widespread use. In this paper, inductor performance properties are exploited to develop a two-step surrogate modeling strategy in order to evaluate the behavior of inductors with high efficiency and accuracy. An automated design flow for radiofrequency circuits using this surrogate modeling of passive components is presented. The methodology couples a circuit simulator with evolutionary computation algorithms such as particle swarm optimization, genetic algorithms, or the non-dominated sorting genetic algorithm (NSGA-II). This methodology ensures optimal performance within short computation times by avoiding electromagnetic simulations of inductors during the entire optimization process and using a surrogate model that has less than 1% error in inductance and quality factor when compared against electromagnetic simulations. Numerous real-life experiments of single-objective and multi-objective low-noise amplifier design demonstrate the accuracy and efficiency of the proposed strategies.
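
The two ingredients of such a flow, a cheap surrogate in place of the EM simulator and an evolutionary search over geometry, can be sketched as below. Everything here is invented for illustration: the closed-form "surrogate", the (width, turns) parameterization, and the swarm constants. A real flow would fit the surrogate (Kriging, SVM, ANN, ...) to EM-simulated samples.

```python
# Minimal particle-swarm search over inductor geometry using a toy surrogate.
import numpy as np

rng = np.random.default_rng(3)

def surrogate_L_and_Q(geom: np.ndarray):
    """Stand-in surrogate: inductance [nH] and quality factor from geometry."""
    w, n = geom[..., 0], geom[..., 1]          # turn width [um], num. turns
    return 0.8 * n**2 / (1 + 0.05 * w), 15 * w / (w + n**2)

def cost(geom: np.ndarray, L_target: float = 5.0) -> np.ndarray:
    L, Q = surrogate_L_and_Q(geom)
    return np.abs(L - L_target) - 0.1 * Q      # hit L target, reward high Q

pos = rng.uniform([2.0, 1.0], [10.0, 6.0], size=(30, 2))  # bounded swarm
vel = np.zeros_like(pos)
best_p, best_c = pos.copy(), cost(pos)
for _ in range(100):
    g = best_p[np.argmin(best_c)]              # global best particle
    vel = 0.7 * vel + 1.5 * rng.random((30, 1)) * (best_p - pos) \
                    + 1.5 * rng.random((30, 1)) * (g - pos)
    pos = np.clip(pos + vel, [2.0, 1.0], [10.0, 6.0])
    c = cost(pos)
    improved = c < best_c
    best_p[improved], best_c[improved] = pos[improved], c[improved]

print("best geometry (width_um, turns):", best_p[np.argmin(best_c)])
```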

Journal ArticleDOI
TL;DR: A new formal structure is introduced that generalizes and consolidates a variety of well-known structures, including many forms of plans, planning problems, and filters, into a single data structure called a procrustean graph, and gives these graph structures semantics in terms of ideas grounded in formal language theory.
Abstract: We address problems underlying the algorithmic question of automating the co-design of robot hardware in tandem with its apposite software. Specifically, we consider the impact that degradations of...

Posted Content
TL;DR: This work proposes design automation techniques for efficient neural networks; it investigates automatically designing specialized fast models, auto channel pruning, and auto mixed-precision quantization, and demonstrates that such learning-based, automated design achieves performance and efficiency superior to rule-based human design.
Abstract: Efficient deep learning computing requires algorithm and hardware co-design to enable specialization: we usually need to change the algorithm to reduce memory footprint and improve energy efficiency. However, the extra degree of freedom from the algorithm makes the design space much larger: it is not only about designing the hardware but also about how to tweak the algorithm to best fit the hardware. Human engineers can hardly exhaust the design space by heuristics; manual exploration is labor-consuming and sub-optimal. We propose design automation techniques for efficient neural networks. We investigate automatically designing specialized fast models, auto channel pruning, and auto mixed-precision quantization. We demonstrate that such learning-based, automated design achieves performance and efficiency superior to rule-based human design. Moreover, we shorten the design cycle by 200x compared with previous work, so that we can afford to design specialized neural network models for different hardware platforms.
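
As a hedged illustration of one mechanism being automated here, auto channel pruning, the snippet below performs plain L1-magnitude channel pruning of a single conv layer. The learned policy in the paper would choose, per layer and under a hardware budget, the ratio that this example hard-codes.

```python
# Magnitude-based channel pruning of one convolution layer (illustrative).
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((64, 32, 3, 3))        # conv weights: out, in, kH, kW

def prune_channels(W: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the output channels with the largest L1 norms."""
    scores = np.abs(W).sum(axis=(1, 2, 3))     # one score per output channel
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.sort(np.argsort(scores)[-k:])    # surviving channels, in order
    return W[keep]

# An automated agent would search this ratio per layer under a latency budget;
# here a fixed 50% stands in for the learned decision.
W_pruned = prune_channels(W, keep_ratio=0.5)
print(W.shape, "->", W_pruned.shape)
```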

Journal ArticleDOI
TL;DR: A multiscale topology optimization approach for design automation, new computational geometry algorithms for material compilation, and voxel-based material jetting for digital fabrication are developed and experimentally validated.

Proceedings ArticleDOI
15 Jul 2019
TL;DR: Exploratory research using artificial neural networks (ANNs) is conducted to automate the placement task of analog IC layout design; the approach abstracts the need to explicitly deal with topological constraints by learning reusable design patterns from validated legacy layout designs.
Abstract: In this paper, exploratory research using artificial neural networks (ANNs) is conducted to automate the placement task of analog IC layout design. The proposed methodology abstracts the need to explicitly deal with topological constraints by learning reusable design patterns from validated legacy layout designs. The ANNs are trained on a dataset of an analog amplifier containing thousands of placement solutions for 12 different and conflicting layout styles/guidelines, and then used to output different placement alternatives, for sizing solutions outside the training set, at push-button speed. Ultimately, the methodology can offer the opportunity to reuse all the existing legacy layout information, whether generated by layout designers or EDA tools.
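
A minimal analogue of the approach: a small neural network learns a mapping from device sizing vectors to placement coordinates using "legacy" layouts. The data generator, shapes, and network size below are illustrative stand-ins, not the paper's amplifier dataset.

```python
# Learn sizing -> placement coordinates from synthetic legacy layouts.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_devices = 12                                  # e.g., the amplifier's devices
sizings = rng.uniform(0.5, 10.0, size=(5000, 2 * n_devices))  # W/L per device

def legacy_placement(s: np.ndarray) -> np.ndarray:
    """Stand-in for validated legacy layouts: (x, y) per device from sizing."""
    return np.tanh(s.reshape(-1, 2) @ np.array([[0.3], [0.1]])).repeat(2, 1).ravel()

targets = np.stack([legacy_placement(s) for s in sizings])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
model.fit(sizings[:4500], targets[:4500])       # train on legacy solutions

# Push-button inference for an unseen sizing solution.
new_sizing = rng.uniform(0.5, 10.0, size=(1, 2 * n_devices))
coords = model.predict(new_sizing).reshape(n_devices, 2)
print(coords[:3])                               # (x, y) for the first devices
```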

Journal ArticleDOI
TL;DR: A new timing characterization method is presented here for SFQ logic cells, which relies on low-dimensional lookup tables (LUTs) to store the clock-to-output delay, setup, and hold times of clocked cells and the input-to-output delay of nonclocked cells in an SFQ standard cell library.
Abstract: Single flux quantum (SFQ) logic families require the development of electronic design automation tools to generate large-scale circuits. The available methodologies or tools for performing timing analysis of SFQ circuits do not have a load-dependent timing characterization method for calculating the context-dependent delay of cells, such as the nonlinear delay model for complementary metal–oxide–semiconductor (CMOS) circuits. A new timing characterization method is presented here for SFQ logic cells, which relies on low-dimensional lookup tables (LUTs) to store the clock-to-output delay, setup, and hold times of clocked cells and the input-to-output delay of nonclocked cells in an SFQ standard cell library. Although the delay of Josephson junction-based logic cells depends on many parameters, this paper shows that it is possible to reduce this dependency to only a small number of well-chosen parameters. All LUTs are obtained from JSIM simulations for a given target process technology. The accuracy of the proposed LUT-based timing characterization method is compared against JSIM simulations, which shows a maximum error of only 2.1% for the tested clocked cells with different loads.
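
The LUT idea lends itself to a direct sketch: store a clocked cell's clock-to-output delay on a small grid of parameters and interpolate at query time. The axes and delay values below are invented, not JSIM-derived.

```python
# 2-D lookup table with bilinear interpolation for a clocked cell's delay.
import numpy as np

loads = np.array([1.0, 2.0, 4.0, 8.0])         # fan-out load axis (a.u.)
bias = np.array([0.7, 0.8, 0.9, 1.0])          # bias-current axis (a.u.)
# delay[i, j] = clk->q delay [ps] at loads[i], bias[j]; made-up smooth values.
delay = 5.0 + 0.6 * loads[:, None] + 3.0 * (1.0 - bias[None, :])

def lut_delay(load: float, b: float) -> float:
    """Bilinear interpolation over the 2-D lookup table."""
    i = int(np.clip(np.searchsorted(loads, load) - 1, 0, len(loads) - 2))
    j = int(np.clip(np.searchsorted(bias, b) - 1, 0, len(bias) - 2))
    u = (load - loads[i]) / (loads[i + 1] - loads[i])
    v = (b - bias[j]) / (bias[j + 1] - bias[j])
    return float((1-u)*(1-v)*delay[i, j]   + u*(1-v)*delay[i+1, j]
               + (1-u)*v*delay[i, j+1]     + u*v*delay[i+1, j+1])

print(f"clk->q at load 3.0, bias 0.85: {lut_delay(3.0, 0.85):.2f} ps")
```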

Proceedings ArticleDOI
21 Jan 2019
TL;DR: This work aims to provide an automatic and dedicated design scheme that explicitly takes recent findings in this domain into account, and presents automated methods that dedicatedly realize the desired function as an adiabatic circuit.
Abstract: Adiabatic circuits are heavily investigated since they allow for computations with asymptotically close to zero energy dissipation per operation---serving as an alternative technology for many scenarios where energy efficiency is preferred over fast execution. Their concepts are motivated by the fact that the information lost from conventional circuits results in an entropy increase which causes energy dissipation. To overcome this issue, computations are performed in a (conditionally) reversible fashion and, additionally, have to satisfy switching rules that are different from conventional circuitry---crying out for dedicated design automation solutions. While previous approaches either focus on their electrical realization (resulting in small, hand-crafted circuits only) or on designing fully reversible building blocks (an unnecessary overhead), this work aims to provide an automatic and dedicated design scheme that explicitly takes the recent findings in this domain into account. To this end, we review the theoretical and technical background of adiabatic circuits and present automated methods that dedicatedly realize the desired function as an adiabatic circuit. The resulting methods are further optimized---leading to an automatic and efficient design automation for this promising technology. Evaluations confirm the benefits and applicability of the proposed solution.

Proceedings ArticleDOI
25 Nov 2019
TL;DR: A comparative study of the formulations and algorithms for reliability-based co-design is conducted, where the co-design problem is integrated with the RBDO framework to yield solutions consisting of an optimal system design and the corresponding control trajectory that satisfy all reliability constraints in the presence of parameter uncertainties.
Abstract: While integrated physical and control system co-design has been demonstrated successfully on several engineering system design applications, it has been primarily applied in a deterministic manner without considering uncertainties. An opportunity exists to study non-deterministic co-design strategies, taking into account various uncertainties in an integrated co-design framework. Reliability-based design optimization (RBDO) is one such method that can be used to ensure an optimized system design being obtained that satisfies all reliability constraints considering particular system uncertainties. While significant advancements have been made in co-design and RBDO separately, little is known about methods where reliability-based dynamic system design and control design optimization are considered jointly. In this article, a comparative study of the formulations and algorithms for reliability-based co-design is conducted, where the co-design problem is integrated with the RBDO framework to yield solutions consisting of an optimal system design and the corresponding control trajectory that satisfy all reliability constraints in the presence of parameter uncertainties. The presented study aims to lay the groundwork for the reliability-based co-design problem by providing a comparison of potential design formulations and problem-solving strategies. Specific problem formulations and probability analysis algorithms are compared using two numerical examples. In addition, the practical efficacy of the reliability-based co-design methodology is demonstrated via a horizontal-axis wind turbine structure and control design problem.
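
For readers unfamiliar with RBDO, a generic statement of the reliability-based co-design problem being compared might look as follows; the symbols are generic (plant variables d, control trajectory u(t), random parameters U, state ξ), not the paper's exact notation.

```latex
\begin{aligned}
\min_{d,\,u(t)} \quad & \mathbb{E}\left[ f\big(d, u(t), U\big) \right] \\
\text{s.t.} \quad & \Pr\left[ g_i\big(d, u(t), U\big) \le 0 \right] \ge R_i,
  \quad i = 1, \dots, m, \\
& \dot{\xi}(t) = h\big(\xi(t), u(t), d, U\big), \qquad \xi(0) = \xi_0 .
\end{aligned}
```

Here each constraint g_i must hold with at least reliability R_i under the uncertainty in U, while the dynamics h couple the plant design d and the control u(t); that coupling is what distinguishes co-design from sequential plant-then-controller optimization.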