
Showing papers on "Electronic design automation published in 2010"


01 Jan 2010
TL;DR: This journal special section will cover recent progress on parallel CAD research, including algorithm foundations, programming models, parallel architecture-specific optimization, and verification, as well as other topics relevant to the design of parallel CAD algorithms and software tools.
Abstract: High-performance parallel computer architectures and systems have been improving at a phenomenal rate. In the meantime, VLSI computer-aided design (CAD) software for multibillion-transistor IC design has become increasingly complex and requires prohibitively high computational resources. Recent studies have shown that numerous CAD problems, with their high computational complexity, can greatly benefit from the fast-increasing parallel computation capabilities. However, parallel programming poses significant challenges for CAD applications. Fully exploiting the computational power of emerging general-purpose and domain-specific multicore/many-core processor systems calls for fundamental research and engineering practice across every stage of parallel CAD design, from algorithm exploration, programming models, and design-time and run-time environments, to CAD applications such as verification, optimization, and simulation. This journal special section will cover recent progress on parallel CAD research, including algorithm foundations, programming models, parallel architecture-specific optimization, and verification. More specifically, papers with in-depth and extensive coverage of the following topics will be considered, as well as other topics relevant to the design of parallel CAD algorithms and software tools.
1. Parallel algorithm design and specification for CAD applications
2. Parallel programming models and languages of particular use in CAD
3. Runtime support and performance optimization for CAD applications
4. Parallel architecture-specific design and optimization for CAD applications
5. Parallel program debugging and verification techniques particularly relevant for CAD
The papers should be submitted via the Manuscript Central website and should adhere to standard ACM TODAES formatting requirements (http://todaes.acm.org/). The page count limit is 25.

459 citations


Book
29 Oct 2010
TL;DR: TLM: An Overview and Brief History and TLM Modeling Techniques.
Abstract: TLM: An Overview and Brief History.- Transaction Level Modeling.- TLM Modeling Techniques.- Embedded Software Development.- Functional Verification.- Architecture Analysis and System Debugging.- Design Automation.

343 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: An overview of the post-silicon validation problem and how it differs from traditional pre-silicon verification and manufacturing testing is provided.
Abstract: Post-silicon validation is used to detect and fix bugs in integrated circuits and systems after manufacture. Due to sheer design complexity, it is nearly impossible to detect and fix all bugs before manufacture. Post-silicon validation is a major challenge for future systems. Today, it is largely viewed as an art with very few systematic solutions. As a result, post-silicon validation is an emerging research topic with several exciting opportunities for major innovations in electronic design automation. In this paper, we provide an overview of the post-silicon validation problem and how it differs from traditional pre-silicon verification and manufacturing testing. We also discuss major post-silicon validation challenges and recent advances.

147 citations


Journal ArticleDOI
TL;DR: This paper presents a multiagent control approach for a baggage handling system (BHS) using IEC 61499 Function Blocks and focuses on demonstrating a decentralized control system that is scalable, reconfigurable, and fault tolerant.
Abstract: Airport baggage handling is a field of automation systems that is currently dependent on centralized control systems and conventional automation programming techniques. In this and other areas of manufacturing and materials handling, these legacy automation technologies increasingly fall short of the growing demand for systems that are reconfigurable, fault tolerant, and easy to maintain. IEC 61499 Function Blocks is an emerging architectural framework for the design of distributed industrial automation systems and their reusable components. A number of architectures have been suggested for multiagent and holonic control systems that incorporate function blocks. This paper presents a multiagent control approach for a baggage handling system (BHS) using IEC 61499 Function Blocks. In particular, it focuses on demonstrating a decentralized control system that is scalable, reconfigurable, and fault tolerant. The design follows the automation object approach and produces a function block component representing a single section of conveyor. In accordance with holonic principles, this component is autonomous and collaborative, such that the structure and the behavior of a BHS can be entirely defined by the interconnection of these components within the function block design environment. Simulation is used to demonstrate the effectiveness of the agent-based control system, and a utility is presented for real-time viewing of these systems. Tests on a physical conveyor test system demonstrated deployment to embedded control hardware.

116 citations


Journal ArticleDOI
TL;DR: This paper claims that a far superior result can be achieved by moving the design-to-manufacturing interface from design rules to a higher level of abstraction based on a defined set of pre-characterized layout templates and demonstrates how this methodology can simplify optical proximity correction and lithography processes for sub-32 nm technology nodes.
Abstract: The financial backbone of the semiconductor industry is based on doubling the functional density of integrated circuits every two years at fixed wafer costs and die yields. The increasing demands for 'computational' rather than 'physical' lithography to achieve the aggressive density targets, along with the complex device-engineering solutions needed to maintain the power density objectives, have caused a rapid escalation in systematic yield limiters that threaten scaling. Specifically, the traditional contract between design and manufacturing based solely on design rules is no longer sufficient to guarantee functional silicon and instead requires a convoluted set of restrictions that force complex modifications to the already costly design flows. In this paper, we claim that a far superior result can be achieved by moving the design-to-manufacturing interface from design rules to a higher level of abstraction based on a defined set of pre-characterized layout templates. We will demonstrate how this methodology can simplify optical proximity correction and lithography processes for sub-32 nm technology nodes, along with various digital block design examples for synthesized intellectual property (IP) cores. Furthermore, with a cost-per-good-die analysis we will show that this methodology will extend economical scaling to sub-32 nm technology nodes.

107 citations


Proceedings ArticleDOI
08 Mar 2010
TL;DR: The design of a low-power asynchronous Network-on-Chip which is implemented in a bottom-up approach using optimized hard-macros and achieves a 550Mflit/s throughput on silicon, and exhibits 86% power reduction compared to an equivalent synchronous NoC version.
Abstract: New communication infrastructures must deliver more bandwidth at reasonable power consumption while guaranteeing performance during physical integration. In this paper, we propose the design of a low-power asynchronous Network-on-Chip which is implemented in a bottom-up approach using optimized hard-macros. This architecture is fully testable, and a new design flow is proposed to overcome CAD tool limitations regarding asynchronous logic. The proposed architecture has been successfully implemented in CMOS 65nm in a complete circuit. It achieves a 550Mflit/s throughput on silicon and exhibits 86% power reduction compared to an equivalent synchronous NoC version.

107 citations


Journal ArticleDOI
TL;DR: A new design automation tool is presented, based on a modified genetic algorithm kernel, to improve the efficiency of the analog IC design cycle; the resulting optimization tool and the improvement in design productivity are demonstrated for the design of CMOS operational amplifiers.

99 citations


Journal ArticleDOI
TL;DR: A novel automatic design approach for large BASs that covers the device selection, interoperability evaluation, and composition of BASs and follows a continuous top-down design with different levels of abstraction.
Abstract: The design of large building automation systems (BASs) with thousands of devices is a laborious task with a lot of recurrent work for identical automated rooms. The usage of prefabricated off-the-shelf devices and design patterns simplifies this task nowadays but creates new interoperability problems. As a result, the selection of devices is essential for a good system design but is often limited by a lack of information. This paper introduces a novel automatic design approach for large BASs that covers the device selection, interoperability evaluation, and composition of BASs. It follows a continuous top-down design with different levels of abstraction, starting at requirement engineering and ending at a fully developed and industry-spanning BAS design.

87 citations


Book
22 Oct 2010
TL;DR: This book presents a new design automation methodology based on a modified genetic algorithm kernel to improve the efficiency of the analog IC design cycle; the resulting optimization tool and the improvement in design productivity are demonstrated.
Abstract: The microelectronics market trends present an ever-increasing level of complexity with special emphasis on the production of complex mixed-signal systems-on-chip. Strict economic and design pressures have driven the development of new methods to automate the analog design process. However, and despite some significant research efforts, the essential act of design at the transistor level is still performed by trial-and-error interaction between the designer and the simulator. This book presents a new design automation methodology based on a modified genetic algorithm kernel to improve the efficiency of the analog IC design cycle. The proposed approach combines robust optimization with corner analysis, machine-learning techniques, and distributed processing, and is able to deal with multi-objective and constrained optimization problems. The resulting optimization tool and the improvement in design productivity are demonstrated for the design of CMOS operational amplifiers.

79 citations
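The optimization kernel described above can be sketched in miniature: a genetic loop with elitist selection, averaging crossover, and Gaussian mutation over device sizes. The two-parameter quadratic `evaluate` is a toy stand-in for a circuit simulator, and all names and constants here are illustrative; the book's actual kernel adds corner analysis, machine learning, and distributed evaluation.

```python
import random

def evaluate(sizes):
    # placeholder for a circuit simulation (e.g. a SPICE run scoring
    # gain/power trade-offs); the optimum of this toy is (3.0, 1.5)
    w1, w2 = sizes
    return (w1 - 3.0) ** 2 + (w2 - 1.5) ** 2

def genetic_sizing(pop_size=30, generations=60, bounds=(0.5, 10.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)                 # rank by fitness
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(p, q)]  # crossover
            if random.random() < 0.3:                    # mutation
                i = random.randrange(len(child))
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        pop = parents + children
    return min(pop, key=evaluate)
```

In the real tool each `evaluate` call is an expensive simulation, which is exactly why the book distributes evaluations and constrains the search.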


Proceedings ArticleDOI
13 Jun 2010
TL;DR: The Electronic Design Automation (EDA) industry is viewed as a key enabler to help bridge the gaps between pre-silicon and post-silicon validation, and extend the considerable intellectual wealth in pre-silicon tools to the post-silicon validation area.
Abstract: The challenges of post-silicon validation are continuously increasing, driven by higher levels of integration, increased circuit complexity, and platform performance requirements. The pressure of maintaining aggressive launch schedules and containing the increasing cost of validation and debug requires a holistic approach to the entire design and validation process. Post-silicon validation is very diverse, and the work starts well before first silicon is available---for example, emulation, design-for-validation (DFV) features, specialized content development, etc. This will require enhancing pre-tape-out validation to have healthier first silicon, developing more standard interfaces to our validation hooks, developing more predictive tools for circuit and platform simulation and post-silicon debug, adding more formal coverage methods, and improving survivability to mitigate in-the-field issues. We view the Electronic Design Automation (EDA) industry as a key enabler to help us bridge the gaps between pre-silicon and post-silicon validation, and extend the considerable intellectual wealth in pre-silicon tools to the post-silicon validation area.

74 citations


Journal ArticleDOI
TL;DR: This paper presents an exemplar-based method that provides an intuitive way for users to generate 3D human body shapes from semantic parameters, involving simpler computation than non-linear methods while maintaining quality outputs.

Journal ArticleDOI
TL;DR: A design methodology to layout photonic devices within standard electronic complementary metal-oxide-semiconductor (CMOS) foundry data preparation flows is described and has enabled the fabrication of designs in three foundry scaled-CMOS processes from two semiconductor manufacturers.
Abstract: A design methodology to layout photonic devices within standard electronic complementary metal-oxide-semiconductor (CMOS) foundry data preparation flows is described. This platform has enabled the fabrication of designs in three foundry scaled-CMOS processes from two semiconductor manufacturers.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is shown that the correlation of the configurations within the multi-processor design space can be modeled successfully with analytical functions and, thus, speed up the overall exploration phase.
Abstract: Given the increasing complexity of multi-processor systems-on-chip, a wide range of parameters must be tuned to find the best trade-offs in terms of the selected system figures of merit (such as energy, delay and area). This optimization phase is called Design Space Exploration (DSE) consisting of a Multi-Objective Optimization (MOO) problem. In this paper, we propose an iterative design space exploration methodology exploiting the statistical properties of known system configurations to infer, by means of a correlation-based analysis, the next design points to be analyzed with low-level simulations. In fact, the knowledge of few design points is used to predict the expected improvement of unknown configurations. We show that the correlation of the configurations within the multi-processor design space can be modeled successfully with analytical functions and, thus, speed up the overall exploration phase. This makes the proposed methodology a model-assisted heuristic that, for the first time, exploits the correlation about architectural configurations to converge to the solution of the multi-objective problem.
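The multi-objective core of such a DSE flow can be illustrated with a minimal Pareto filter: among simulated configurations scored on two figures of merit, keep only the non-dominated ones. This is a generic MOO sketch under the assumption of two minimized objectives (e.g. energy and delay), not the paper's correlation-based prediction; the tuples and names are illustrative.

```python
def pareto_front(points):
    # points: list of distinct (energy, delay) tuples, lower is better in
    # both; keep p unless some other point q is at least as good in every
    # objective (i.e. q dominates p)
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]
```

A model-assisted heuristic like the one described would use predictions to decide which candidate points are worth simulating before a filter like this is ever applied.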

Proceedings ArticleDOI
01 Dec 2010
TL;DR: The development and distribution of RapidSmith is described, a software library to facilitate the manipulation of XDL designs and upon which a complete CAD system can be based.
Abstract: Designer productivity for FPGA design is significantly limited by the time-consuming nature of the FPGA compilation process (synthesis, map, placement, and routing). However, experimentation on alternative CAD tools for this purpose for Xilinx devices has been somewhat limited. This paper describes the development and distribution of RapidSmith, a software library to facilitate the manipulation of XDL designs and upon which a complete CAD system can be based. The demonstration portion of this paper will show prototypes of representative CAD tools which can be easily built on top of the RapidSmith system.

Journal ArticleDOI
TL;DR: This paper introduces common expression sharing and a complexity analysis of odd-term polynomials to achieve a lower gate bound than previous ASIC analyses, together with efficient FPGA implementations of bit-parallel mixed Karatsuba-Ofman multipliers over GF(2^m).
Abstract: This paper presents complexity analysis [both in application-specific integrated circuits (ASICs) and on field-programmable gate arrays (FPGAs)] and efficient FPGA implementations of bit-parallel mixed Karatsuba-Ofman multipliers (KOM) over GF(2^m). By introducing the common expression sharing and the complexity analysis on odd-term polynomials, we achieve a lower gate bound than previous ASIC discussions. The analysis is extended by using 4-input/6-input lookup tables (LUT) on FPGAs. For an arbitrary bit-depth, the optimum iteration step is shown. The optimum iteration steps differ for ASICs, 4-input LUT-based FPGAs, and 6-input LUT-based FPGAs. We evaluate the LUT complexity and area-time product tradeoffs on FPGAs with different computer-aided design (CAD) tools. Furthermore, the experimental results on FPGAs for bit-parallel modular multipliers are shown and compared with previous implementations. To the best of our knowledge, our bit-parallel multipliers consume the least resources among known FPGA implementations to date.
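The Karatsuba-Ofman recursion analyzed here works over GF(2), where addition is XOR and there are no carries. A minimal software sketch follows, with bit i of an integer holding the coefficient of x^i; the `n <= 4` cut-off stands in for the paper's optimum iteration step, which in hardware depends on the target ASIC or LUT architecture.

```python
def gf2_schoolbook(a, b):
    # carry-less (GF(2)) polynomial multiplication: bit i = coeff of x^i
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_karatsuba(a, b, n):
    # one Karatsuba step on n-bit operands; recursion depth plays the
    # role of the iteration step in the mixed KOM analysis
    if n <= 4:
        return gf2_schoolbook(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = gf2_karatsuba(a0, b0, h)
    hi = gf2_karatsuba(a1, b1, n - h)
    mid = gf2_karatsuba(a0 ^ a1, b0 ^ b1, n - h)
    # over GF(2) subtraction is XOR, so the middle term is mid ^ lo ^ hi
    return lo ^ ((mid ^ lo ^ hi) << h) ^ (hi << 2 * h)
```

Each level trades one multiplication for extra XORs, which is why the best number of recursion levels differs between ASIC gates and 4-/6-input LUTs.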


28 Mar 2010
TL;DR: In this article, the authors provide comprehensive knowledge of structural and algorithmic solutions that can be used to alleviate excessive power dissipation during manufacturing test, and show how low-power circuits and systems can be tested safely without affecting yield and reliability.
Abstract: Managing the power consumption of circuits and systems is now considered as one of the most important challenges for the semiconductor industry. Elaborate power management strategies, such as voltage scaling, clock gating or power gating techniques, are used today to control the power dissipation during functional operation. The usage of these strategies has various implications on manufacturing test, and power-aware test is therefore increasingly becoming a major consideration during design-for-test and test preparation for low-power devices. This tutorial provides the fundamental and advanced knowledge in this area. It is organized into three main parts. The first one gives necessary background and discusses issues arising from excessive power dissipation during manufacturing test. The second part provides comprehensive knowledge of structural and algorithmic solutions that can be used to alleviate such problems. The last part surveys low-power design techniques and shows how low-power circuits and systems can be tested safely without affecting yield and reliability. Electronic Design Automation (EDA) solutions for testing low-power devices are also covered in the last part of the tutorial.

Journal ArticleDOI
01 Jul 2010
TL;DR: A worst-case performance model of the authors' CA is proposed so that the performance of the CA-based platform can be analyzed before its implementation, and a fully automated design flow to generate communication assist (CA) based multi-processor systems (CA-MPSoC) is presented.
Abstract: Future embedded systems demand multi-processor designs to meet real-time deadlines. The large number of applications in these systems generates an exponential number of use-cases. The key design automation challenges are designing systems for these use-cases and fast exploration of software and hardware implementation alternatives with accurate performance evaluation of these use-cases. These challenges cannot be overcome by current design methodologies, which are semi-automated, time consuming, and error prone. In this paper, we present a fully automated design flow to generate communication assist (CA) based multi-processor systems (CA-MPSoC). A worst-case performance model of our CA is proposed so that the performance of the CA-based platform can be analyzed before its implementation. The design flow provides performance estimates and timing guarantees for both hard real-time and soft real-time applications, provided the task to processor mappings are given by the user. The flow automatically generates a super-set hardware that can be used in all use-cases of the applications. The software for each of these use-cases is also generated, including the configuration of the communication architecture and interfacing with application tasks. CA-MPSoC has been implemented on Xilinx FPGAs for evaluation. Further, it is made available online for the benefit of the research community; in this paper, it is used for performance analysis of two real-life applications, Sobel and JPEG encoder, executing concurrently. The CA-based platform generated by our design flow records a maximum error of 3.4% between analyzed and measured periods. Our tool can also merge use-cases to generate a super-set hardware, which accelerates the evaluation of these use-cases. In a case study with six applications, use-case merging results in a speedup of 18 compared to evaluating each use-case individually.

BookDOI
01 Jul 2010
TL;DR: A holistic view of FPGA security is presented, from formal top level specification to low level policy enforcement mechanisms, and this perspective integrates recent advances in the fields of computer security theory, languages, compilers, and hardware.
Abstract: The purpose of Handbook of FPGA Design Security is to provide a practical approach to managing security in FPGA designs for researchers and practitioners in the electronic design automation (EDA) and FPGA communities, including corporations, industrial and government research labs, and academics. Handbook of FPGA Design Security combines theoretical underpinnings with a practical design approach and worked examples for combating real world threats. To address the spectrum of lifecycle and operational threats against FPGA systems, a holistic view of FPGA security is presented, from formal top level specification to low level policy enforcement mechanisms. This perspective integrates recent advances in the fields of computer security theory, languages, compilers, and hardware. The net effect is a diverse set of static and runtime techniques that, working in cooperation, facilitate the composition of robust, dependable, and trustworthy systems using commodity components.

Journal ArticleDOI
TL;DR: It is demonstrated how TIGUAN can be combined with conventional structural ATPG to extract the full benefit of the intrinsic strengths of both approaches, efficiently utilizing the inherent parallelism of multi-core architectures.
Abstract: Efficient utilization of the inherent parallelism of multi-core architectures is a grand challenge in the field of electronic design automation (EDA). One EDA algorithm associated with a high computational cost is automatic test pattern generation (ATPG). We present the ATPG tool TIGUAN, based on a thread-parallel SAT solver. Due to a tight integration of the SAT engine into the ATPG algorithm and a carefully chosen mix of various optimization techniques, multi-million-gate industrial circuits are handled without aborts. TIGUAN supports both conventional single stuck-at faults and sophisticated conditional multiple stuck-at faults, which allows patterns to be generated for non-standard fault models. We demonstrate how TIGUAN can be combined with conventional structural ATPG to extract the full benefit of the intrinsic strengths of both approaches.
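The essence of SAT-based ATPG is finding an input assignment under which the fault-free and faulty versions of a circuit produce different outputs. The brute-force sketch below conveys that idea on a toy single stuck-at fault; a SAT solver such as TIGUAN's engine replaces the enumeration on real circuits, and the circuit and names here are illustrative.

```python
from itertools import product

def find_test_vector(good, faulty, n_inputs):
    # search for an input vector on which the "miter" of the two circuits
    # differs; any such vector detects the injected fault
    for vec in product((0, 1), repeat=n_inputs):
        if good(*vec) != faulty(*vec):
            return vec
    return None  # no distinguishing vector: the fault is untestable

# fault-free toy circuit: y = (a AND b) OR c
good = lambda a, b, c: (a & b) | c
# same circuit with input b stuck-at-0
faulty = lambda a, b, c: (a & 0) | c
```

The exponential enumeration is exactly what makes ATPG expensive and why encoding the miter as a SAT instance pays off on multi-million-gate designs.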

Journal ArticleDOI
TL;DR: This work proposes a succinct QBF encoding for modeling sequential circuit behavior, which shows memory reductions on the order of 90 percent and demonstrates competitive runtimes compared to state-of-the-art SAT techniques.
Abstract: Formal CAD tools operate on mathematical models describing the sequential behavior of a VLSI design. With the growing size and state-space of modern digital hardware designs, the conciseness of this mathematical model is of paramount importance in extending the scalability of those tools, provided that the compression does not come at the cost of reduced performance. Quantified Boolean Formula satisfiability (QBF) is a powerful generalization of Boolean satisfiability (SAT). It also belongs to the same complexity class as many CAD problems dealing with sequential circuits, which makes it a natural candidate for encoding such problems. This work proposes a succinct QBF encoding for modeling sequential circuit behavior. The encoding is parametrized, and further compression is achieved using time-frame windowing. Comprehensive hardware constructions are used to illustrate the proposed encodings. Three notable CAD problems, namely bounded model checking, design debugging, and sequential test pattern generation, are encoded as QBF instances to demonstrate the robustness and practicality of the proposed approach. Extensive experiments on OpenCore circuits show memory reductions on the order of 90 percent and demonstrate competitive runtimes compared to state-of-the-art SAT techniques. Furthermore, the number of solved instances is increased by 16 percent. Ultimately, this work encourages further research in the use of QBF in CAD for VLSI.
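Of the three encoded problems, bounded model checking is the easiest to sketch: unroll the transition relation over k time frames and ask whether a bad state is reachable. The explicit-state toy below conveys only the unrolling idea; the paper's contribution is performing this search symbolically and succinctly via SAT/QBF encodings, and the counter example here is illustrative.

```python
def bounded_check(init, trans, bad, inputs, k):
    # explicit time-frame unrolling: track every state reachable within
    # k steps and report the first depth at which a "bad" state appears
    frontier = {init}
    for depth in range(k + 1):
        if any(bad(s) for s in frontier):
            return depth            # counterexample of this length exists
        frontier = {trans(s, i) for s in frontier for i in inputs}
    return None                     # property holds up to the bound k

# toy design: a 2-bit counter that increments when the input is 1
counter = lambda s, i: (s + i) % 4
```

The set `frontier` blows up exponentially in general, which is precisely what the copies-of-the-transition-relation SAT encoding, and the even more succinct QBF encoding, are designed to avoid.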

Journal ArticleDOI
TL;DR: In this paper, a topological and parametric tune and prune ((TP)^2) algorithm is proposed to solve the problem of sheet metal manufacturing with a graph-based approach.
Abstract: This paper describes an approach to automate the design of sheet metal parts that satisfy multiple objective functions, such as material cost and manufacturability. Unlike commercial software tools such as PRO/SHEETMETAL, which aid the user in finalizing and determining the sequence of manufacturing operations for a specified component, our approach starts with spatial constraints in order to create the component geometries and thus helps the designer design. While there is an infinite set of parts that can feasibly be generated with sheet metal, it is difficult to define this space systematically. To solve this problem, we have created 108 design rules covering five basic sheet metal operations: slitting, notching, shearing, bending, and punching. A recipe of the operations for a final optimal design is then presented to the manufacturing engineers, saving them time and cost. The technique revealed in this paper represents candidate solutions as a graph of nodes and arcs, where each node is a rectangular patch of sheet metal, and modifications are progressively made to the sheet to maintain the part's manufacturability. This paper also discusses a new topological optimization technique to solve graph-based engineering design problems by decoupling parameter and topology changes. It presents topological and parametric tune and prune ((TP)^2) as a topology optimization method developed specifically for domains representable by a graph grammar schema. The method is stochastic and incorporates distinct phases for modifying topologies and modifying parameters stored within topologies. Thus far, on the abovementioned sheet metal problem, (TP)^2 has proven better than a genetic algorithm in terms of the quality of solutions and the time taken to acquire them.

Journal ArticleDOI
TL;DR: This paper presents an efficient MCML optimization program that can be used to properly size MCML gates and has reduced the number of variables to N+1, in comparison to 7N+1 in the most recent work on this topic.
Abstract: MOS current-mode logic (MCML) is a low-noise alternative to CMOS logic. The lack of MCML automation tools, however, has deterred designers from applying MCML to complex digital functions. This paper presents an efficient MCML optimization program that can be used to properly size MCML gates. The delay model accuracy is adjusted by fitting measured gate delays by means of technology-dependent parameters. For N logic gates, the proposed mathematical program reduces the number of variables to N+1, in comparison to 7N+1 in the most recent work on this topic. The program has been implemented to efficiently optimize a 4-bit ripple carry adder and an 8-bit decoder in 0.18-μm CMOS technology.

Journal ArticleDOI
TL;DR: A summary of the state-of-the-art of MEMS-specific modeling techniques and validation of new models for a parametric component library can be found in this paper.
Abstract: This paper provides a brief summary of the state-of-the-art of MEMS-specific modeling techniques and describes the validation of new models for a parametric component library. Two recently developed 3D modeling tools are described in more detail. The first one captures a methodology for designing MEMS devices and simulating them together with integrated electronics within a standard electronic design automation (EDA) environment. The MEMS designer can construct the MEMS model directly in a 3D view. The resulting 3D model differs from a typical feature-based 3D CAD modeling tool in that there is an underlying behavioral model and parametric layout associated with each MEMS component. The model of the complete MEMS device that is shared with the standard EDA environment can be fully parameterized with respect to manufacturing- and design-dependent variables. Another recent innovation is a process modeling tool that allows accurate and highly realistic visualization of the step-by-step creation of 3D micro-fabricated devices. The novelty of the tool lies in its use of voxels (3D pixels) rather than conventional 3D CAD techniques to represent the 3D geometry. Case studies for experimental devices are presented showing how the examination of these virtual prototypes can reveal design errors before mask tape out, support process development before actual fabrication and also enable failure analysis after manufacturing.

Journal ArticleDOI
TL;DR: A methodology that can generatively reproduce variations to a design specification based on preset inputs which offer variety in layout without the loss of design aesthetics is examined.

Proceedings ArticleDOI
08 Mar 2010
TL;DR: A suite of optimizations for equivalence checking of RTL generated through behavioral synthesis is presented; the optimizations exploit the high-level structure of the ESL description to ameliorate verification complexity.
Abstract: Behavioral synthesis is the compilation of an Electronic system-level (ESL) design into an RTL implementation. We present a suite of optimizations for equivalence checking of RTL generated through behavioral synthesis. The optimizations exploit the high-level structure of the ESL description to ameliorate verification complexity. Experiments on representative benchmarks indicate that the optimizations can handle equivalence checking of synthesized designs with tens of thousands of lines of RTL.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Power-CAD uses an electrothermal simulation methodology, a parasitic extraction tool, and an optimization algorithm to achieve an optimal layout for a discrete power electronic module (PEM).
Abstract: Power Electronic Module (PEM) design requires simultaneous analysis of thermal, electrical, and mechanical parameters to design an optimal layout. The current design process being used by package designers involves a sequential procedure instead of a simultaneous process. Each design step involves the analysis of the thermal, electrical or mechanical aspects of the design. As a result, the designer has to iterate between the various design process steps in order to achieve an optimal design. This causes a substantial increase in the design cycle time. A new methodology has been developed and implemented in this work that helps to automate and optimize the PEM design process. Power-CAD uses an electrothermal simulation methodology, a parasitic extraction tool, and an optimization algorithm that helps to achieve an optimal layout for a discrete PEM. This approach promises to save time and money for the PEM design industry by significantly reducing the number of design cycles.

Proceedings ArticleDOI
14 Apr 2010
TL;DR: An approach called window optimization is described that does not consider the circuit as a whole but rather smaller sub-circuits of it (so-called windows); applying the proposed optimizations leads to significant reductions in circuit cost.
Abstract: This paper considers the optimization of reversible and quantum circuits. Both represent the basis for emerging technologies, e.g. in the areas of quantum computation and low-power design. An approach called window optimization is described that does not consider the circuit as a whole, but smaller sub-circuits of it (so-called windows). Two schemes for extracting the windows and three approaches for their optimization are considered. Application scenarios show that applying the proposed optimizations leads to significant reductions in circuit cost.
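The windowing idea can be illustrated with a toy peephole pass: split the gate list into fixed-size windows and optimize each window locally, here by cancelling adjacent identical self-inverse gates (NOT, CNOT, and Toffoli are their own inverses). This is an illustrative stand-in, not the paper's extraction schemes or cost model; the gate tuples are made up.

```python
def cancel_pairs(gates):
    # stack-based scan: two adjacent identical self-inverse gates equal
    # the identity, and cascading cancellations are caught automatically
    out = []
    for g in gates:
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return out

def window_optimize(gates, size=4):
    # optimize fixed-size windows independently instead of the whole circuit
    out = []
    for i in range(0, len(gates), size):
        out.extend(cancel_pairs(gates[i:i + size]))
    return out
```

Partitioning keeps each local optimization cheap, but it can miss cancellations that straddle a window boundary, which is why the choice of window extraction scheme matters.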

Book
21 Dec 2010
TL;DR: This book presents a meta-modelling architecture for mixed-signal, embedded system design automation, streamlining the labor-intensive, and therefore time-consuming and expensive, process of designing and implementing mixed-signal systems.
Abstract: 1. An Overview of Mixed-Signal, Embedded System Design.- 2. Microcontroller Architecture.- 3. Hardware and Software Subsystems of Mixed-Signal Architectures.- 4. Performance Improvement by Customization.- 5. Programmable Data Communication Blocks.- 6. Continuous-Time, Analog Building Blocks.- 7. Switched Capacitor Blocks.- 8. Analog and Digital Filters.- 9. Analog to Digital Converters.- 10. Future Directions in Mixed-Signal Design Automation

Proceedings ArticleDOI
01 Dec 2010
TL;DR: A computer-aided design (CAD) tool for automated sizing and optimization of analog integrated circuits (ICs) using artificial neural networks (ANNs) in order to deduce the device sizes that optimize the performance objectives while satisfying the constraint specifications.
Abstract: This paper presents a computer-aided design (CAD) tool for automated sizing and optimization of analog integrated circuits (ICs). This tool uses artificial neural networks (ANNs) in order to deduce the device sizes that optimize the performance objectives while satisfying the constraint specifications. Neural networks can learn and generalize from data, allowing model development even when component formulas are unavailable. The training data are obtained by various simulations in the HSPICE design environment with TSMC 0.18 μm CMOS process parameters. To evaluate the tool, one practical example is presented in 0.18 μm CMOS technology. The simulation results verify the effectiveness of the proposed method for analog circuit sizing.
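The regression idea, learning device sizes from simulation data, can be sketched with the simplest possible model: a least-mean-squares linear fit of one size against one specification. The actual tool trains an ANN on HSPICE sweeps; the linear model, the training loop, and the data below are purely illustrative stand-ins.

```python
def train_linear(samples, lr=0.05, epochs=500):
    # least-mean-squares fit of size = w * spec + b, trained by
    # stochastic gradient descent on simulator-generated (spec, size)
    # pairs; a one-weight stand-in for the paper's neural network
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b
```

Once fitted, the model inverts the usual simulation direction: instead of simulating a candidate sizing, the designer queries the learned mapping from target specifications to device sizes.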