
Showing papers on "Electronic design automation published in 2006"


Journal ArticleDOI
TL;DR: Efficient quantum-logic circuits for two tasks are discussed: 1) implementing generic quantum computations and 2) initializing quantum registers; the proposed circuits are asymptotically optimal for their respective tasks.
Abstract: The pressure of fundamental limits on classical computation and the promise of exponential speedups from quantum effects have recently brought quantum circuits (Proc. R. Soc. Lond. A, Math. Phys. Sci., vol. 425, p. 73, 1989) to the attention of the electronic design automation community (Proc. 40th ACM/IEEE Design Automation Conf., 2003), (Phys. Rev. A, At. Mol. Opt. Phy., vol. 68, p. 012318, 2003), (Proc. 41st Design Automation Conf., 2004), (Proc. 39th Design Automation Conf., 2002), (Proc. Design, Automation, and Test Eur., 2004), (Phys. Rev. A, At. Mol. Opt. Phy., vol. 69, p. 062321, 2004), (IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 22, p. 710, 2003). Efficient quantum-logic circuits that perform two tasks are discussed: 1) implementing generic quantum computations, and 2) initializing quantum registers. In contrast to conventional computing, the latter task is nontrivial because the state space of an n-qubit register is not finite and contains exponential superpositions of classical bitstrings. The proposed circuits are asymptotically optimal for their respective tasks and improve earlier published results by at least a factor of 2. The circuits for generic quantum computation constructed by the algorithms are the most efficient known today in terms of the number of most expensive gates [quantum controlled-NOTs (CNOTs)]. They are based on an analog of the Shannon decomposition of Boolean functions and a new circuit block, called quantum multiplexor (QMUX), which generalizes several known constructions. A theoretical lower bound implies that the circuits cannot be improved by more than a factor of 2. It is additionally shown how to accommodate the severe architectural limitation of using only nearest-neighbor gates, which is representative of current implementation technologies. This increases the number of gates by almost an order of magnitude, but preserves the asymptotic optimality of gate counts.
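The multiplexor idea can be illustrated with a hedged numpy sketch (a toy verification, not the authors' construction): a single-control multiplexed Rz, "apply Rz(a) when the control is |0> and Rz(b) when it is |1>", can be demultiplexed into two CNOTs and two ordinary Rz rotations, in the spirit of the recursive decomposition the abstract alludes to.

```python
# Hedged numpy sketch (not the paper's algorithm): demultiplexing a single-control
# multiplexed Rz, i.e. "Rz(a) if control=0, Rz(b) if control=1", into 2 CNOTs + 2 Rz gates.
import numpy as np

def rz(theta):
    # Single-qubit z-rotation.
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # control = first qubit, target = second

def multiplexed_rz(a, b):
    # The ideal multiplexor: block-diagonal Rz(a) / Rz(b), selected by the control qubit.
    out = np.zeros((4, 4), dtype=complex)
    out[:2, :2] = rz(a)
    out[2:, 2:] = rz(b)
    return out

def demultiplexed_rz(a, b):
    # Circuit: Rz((a+b)/2) on target, CNOT, Rz((a-b)/2) on target, CNOT.
    t1 = np.kron(I2, rz((a + b) / 2))
    t2 = np.kron(I2, rz((a - b) / 2))
    return CNOT @ t2 @ CNOT @ t1

a, b = 0.7, -1.3
print(np.allclose(multiplexed_rz(a, b), demultiplexed_rz(a, b)))  # True
```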

545 citations


Proceedings ArticleDOI
06 Mar 2006
TL;DR: This work develops the first systematic droplet routing method that can be integrated with biochip synthesis, which minimizes the number of cells used for droplet routing, while satisfying constraints imposed by throughput considerations and fluidic properties.
Abstract: Recent advances in microfluidics are expected to lead to sensor systems for high-throughput biochemical analysis. CAD tools are needed to handle increased design complexity for such systems. Analogous to classical VLSI synthesis, a top-down design automation approach can shorten the design cycle and reduce human effort. We focus here on the droplet routing problem, which is a key issue in biochip physical design automation. We develop the first systematic droplet routing method that can be integrated with biochip synthesis. The proposed approach minimizes the number of cells used for droplet routing, while satisfying constraints imposed by throughput considerations and fluidic properties. A real-life biochemical application is used to evaluate the proposed method.
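For intuition only, here is a hedged toy breadth-first router on the biochip's electrode array (not the authors' method): it finds a shortest droplet path between two cells while treating cells occupied by other droplets, plus their neighbors, as blocked, a crude stand-in for the fluidic non-adjacency constraint mentioned above.

```python
from collections import deque

def route_droplet(rows, cols, src, dst, occupied):
    """Toy BFS droplet router on a grid of electrodes (illustrative only).

    Cells occupied by other droplets and their 8 neighbors are blocked,
    a crude stand-in for the static fluidic-constraint rule."""
    blocked = set()
    for (r, c) in occupied:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                blocked.add((r + dr, c + dc))
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        cell, path = queue.popleft()
        if cell == dst:
            return path
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no feasible route

print(route_droplet(8, 8, (0, 0), (7, 7), occupied=[(3, 3), (4, 5)]))
```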

228 citations


Journal ArticleDOI
TL;DR: This paper proves the feasibility and effectiveness of the proposed approach to desynchronization by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
Abstract: Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
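As background, a hedged and purely sequential sketch of the four-phase (return-to-zero) handshake mentioned above; real desynchronized controllers are concurrent handshake circuits, not Python loops, so this only shows the event ordering per transfer.

```python
def four_phase_channel(items):
    """Trace of a four-phase (return-to-zero) handshake for a list of data items.

    Illustrative only: each transfer is req rise, ack rise, req fall, ack fall;
    data is held valid from the request until it is acknowledged."""
    trace = []
    for item in items:
        trace.append(("req=1", f"data={item} valid"))      # sender asserts request
        trace.append(("ack=1", "receiver latched data"))    # receiver latches, acknowledges
        trace.append(("req=0", "sender releases request"))  # return-to-zero phase begins
        trace.append(("ack=0", "channel idle again"))       # ready for the next item
    return trace

for event in four_phase_channel(["d0", "d1"]):
    print(event)
```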

194 citations


Book
01 Jun 2006
TL;DR: This paper reviews and compares hybrid system tools by highlighting their differences in terms of their underlying semantics, expressive power and mathematical mechanisms, and suggests the need for a unifying approach to hybrid systems design.
Abstract: The explosive growth of embedded electronics is bringing information and control systems of increasing complexity to every aspect of our lives. The most challenging designs are safety-critical systems, such as transportation systems (e.g., airplanes, cars, and trains), industrial plants and health care monitoring. The difficulties reside in accommodating constraints on both functionality and implementation. The correct behavior must be guaranteed under diverse states of the environment and potential failures; implementation has to meet cost, size, and power consumption requirements. The design is therefore subject to extensive mathematical analysis and simulation. However, traditional models of information systems do not interface well with the continuously evolving nature of the environment in which these devices operate. Thus, in practice, different mathematical representations have to be mixed to analyze the overall behavior of the system. Hybrid systems are a particular class of mixed models that focus on the combination of discrete and continuous subsystems. There is a wealth of tools and languages that have been proposed over the years to handle hybrid systems. However, each tool makes different assumptions on the environment, resulting in somewhat different notions of hybrid system. This makes it difficult to share information among tools. Thus, the community cannot maximally leverage the substantial amount of work that has been directed to this important topic. In this paper, we review and compare hybrid system tools by highlighting their differences in terms of their underlying semantics, expressive power and mathematical mechanisms. We conclude our review with a comparative summary, which suggests the need for a unifying approach to hybrid systems design. As a step in this direction, we make the case for a semantic-aware interchange format, which would enable the use of joint techniques, make a formal comparison between different approaches possible, and facilitate exporting and importing design representations.
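To make the discrete/continuous mix concrete, a hedged minimal hybrid-automaton example (unrelated to any specific tool reviewed): a bouncing ball whose continuous state (height, velocity) evolves by an ODE, with a discrete transition, velocity reversal with damping, whenever the guard "height reaches zero" is hit.

```python
def simulate_bouncing_ball(h0=10.0, v0=0.0, g=9.81, damping=0.8, dt=1e-3, t_end=5.0):
    """Forward-Euler simulation of a classic hybrid system (continuous flow + discrete jumps)."""
    h, v, t, bounces = h0, v0, 0.0, 0
    while t < t_end:
        # Continuous flow: dh/dt = v, dv/dt = -g.
        h += v * dt
        v += -g * dt
        # Discrete transition: guard h <= 0 while falling, reset v := -damping * v.
        if h <= 0.0 and v < 0.0:
            h = 0.0
            v = -damping * v
            bounces += 1
        t += dt
    return h, v, bounces

print(simulate_bouncing_ball())
```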

188 citations


Journal ArticleDOI
TL;DR: A taxonomy for ESL tools and methodologies is presented that combines UC Berkeley's platform-based design terminologies with Dan Gajski's Y-chart work to help stem the tide of confusion in the ESL world.
Abstract: This article presents a taxonomy for ESL tools and methodologies that combines UC Berkeley's platform-based design terminologies with Dan Gajski's Y-chart work. This is timely and necessary because in the ESL world we seem to be building tools without first establishing an appropriate design flow or methodology, thereby creating a lot of confusion. This taxonomy can help stem the tide of confusion.

173 citations


Journal ArticleDOI
TL;DR: The design flow is utilized in the integration of state-of-the-art technology approaches, including a wireless terminal architecture, a network-on-chip, and multiprocessing utilizing RTOS in a SoC.
Abstract: This paper describes a complete design flow for multiprocessor systems-on-chips (SoCs) covering the design phases from system-level modeling to FPGA prototyping. The design of complex heterogeneous systems is enabled by raising the abstraction level and providing several system-level design automation tools. The system is modeled in a UML design environment following a new UML profile that specifies the practices for orthogonal application and architecture modeling. The design flow tools are governed in a single framework that combines the subtools into a seamless flow and visualizes the design process. Novel features also include an automated architecture exploration based on the system models in UML, as well as the automatic back and forward annotation of information in the design flow. The architecture exploration is based on the global optimization of systems that are composed of subsystems, which are then locally optimized for their particular purposes. As a result, the design flow produces an optimized component allocation, task mapping, and scheduling for the described application. In addition, it implements the entire system on an FPGA prototyping board. As a case study, the design flow is utilized in the integration of state-of-the-art technology approaches, including a wireless terminal architecture, a network-on-chip, and multiprocessing utilizing RTOS in an SoC. In this study, a central part of a WLAN terminal is modeled, verified, optimized, and prototyped with the presented framework.
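The allocation/mapping step can be pictured with a hedged toy exhaustive explorer (far simpler than the global optimization described above, and using made-up task costs): every assignment of tasks to processing elements is evaluated and the mapping with the smallest makespan under a trivial load model wins.

```python
from itertools import product

def explore_mappings(task_costs, num_pes):
    """Brute-force task-to-PE mapping, minimizing makespan (toy model: costs simply add per PE)."""
    tasks = list(task_costs)
    best_map, best_makespan = None, float("inf")
    for assignment in product(range(num_pes), repeat=len(tasks)):
        loads = [0.0] * num_pes
        for task, pe in zip(tasks, assignment):
            loads[pe] += task_costs[task]
        makespan = max(loads)
        if makespan < best_makespan:
            best_makespan = makespan
            best_map = dict(zip(tasks, assignment))
    return best_map, best_makespan

# Hypothetical task costs, purely illustrative.
task_costs = {"fft": 4.0, "viterbi": 3.0, "mac": 2.0, "crc": 1.0, "ctrl": 1.5}
print(explore_mappings(task_costs, num_pes=2))
```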

171 citations


Journal ArticleDOI
TL;DR: Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis, and successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in a 0.18-μm CMOS process.
Abstract: Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in a 0.18-μm CMOS process.
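The intuition behind the dynamic and differential (dual-rail, precharged) logic style can be shown with a hedged toy model: on a single wire the number of transitions depends on the data, while a precharged dual-rail pair makes a constant number of transitions per evaluation cycle regardless of the value, which is what removes the data-dependent component of the power trace (ignoring capacitance mismatch between the two rails).

```python
import random

def single_rail_toggles(bits):
    # Transitions on one wire that carries the raw value: clearly data dependent.
    return sum(b1 != b0 for b0, b1 in zip(bits, bits[1:]))

def dual_rail_precharge_toggles(bits):
    # Toy dual-rail encoding: every cycle is precharge (0,0) then evaluate (b, not b),
    # so each signal pair makes a constant number of transitions per cycle.
    toggles, prev = 0, (0, 0)
    for b in bits:
        for state in ((0, 0), (b, 1 - b)):        # precharge phase, then evaluation phase
            toggles += (state[0] != prev[0]) + (state[1] != prev[1])
            prev = state
    return toggles

random.seed(1)
for _ in range(3):
    data = [random.randint(0, 1) for _ in range(16)]
    print(single_rail_toggles(data), dual_rail_precharge_toggles(data))
```

For random 16-bit sequences the single-rail toggle count varies run to run, while the dual-rail count is identical every time.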

159 citations


Journal ArticleDOI
TL;DR: A proposed four-phase design flow assists with computations by transforming a quantum algorithm from a high-level language program into precisely scheduled physical actions.
Abstract: Compilers and computer-aided design tools are essential for fine-grained control of nanoscale quantum-mechanical systems. A proposed four-phase design flow assists with computations by transforming a quantum algorithm from a high-level language program into precisely scheduled physical actions.

151 citations


Book
25 Oct 2006
TL;DR: All major steps in the FPGA design flow are covered, including routing and placement, circuit clustering, technology mapping and architecture-specific optimization, physical synthesis, RT-level and behavior-level synthesis, and power optimization.
Abstract: Design automation or computer-aided design (CAD) for field programmable gate arrays (FPGAs) has played a critical role in the rapid advancement and adoption of FPGA technology over the past two decades. The purpose of this paper is to meet the demand for an up-to-date comprehensive survey/tutorial for FPGA design automation, with an emphasis on the recent developments within the past 5-10 years. The paper focuses on the theory and techniques that have been, or most likely will be, reduced to practice. It covers all major steps in the FPGA design flow, including routing and placement, circuit clustering, technology mapping and architecture-specific optimization, physical synthesis, RT-level and behavior-level synthesis, and power optimization. We hope that this paper can be used both as a guide for beginners who are embarking on research in this relatively young yet exciting area, and as a useful reference for established researchers in this field.
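As a pocket-sized illustration of one of those steps, a hedged sketch in the spirit of annealing-based FPGA placement (not any particular tool): cells are assigned to grid sites and pairwise swaps are accepted or rejected with a temperature-controlled probability, minimizing total half-perimeter wirelength (HPWL).

```python
import math, random

def hpwl(nets, pos):
    # Half-perimeter wirelength of all nets for a given cell placement.
    total = 0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_place(cells, nets, grid, temp=5.0, cooling=0.95, moves_per_temp=200, seed=0):
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(sites)
    pos = dict(zip(cells, sites))          # random initial placement, one cell per site
    cost = hpwl(nets, pos)
    while temp > 0.01:
        for _ in range(moves_per_temp):
            a, b = rng.sample(cells, 2)    # propose swapping two cells
            pos[a], pos[b] = pos[b], pos[a]
            new = hpwl(nets, pos)
            if new < cost or rng.random() < math.exp((cost - new) / temp):
                cost = new                 # accept: always downhill, sometimes uphill
            else:
                pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
        temp *= cooling
    return pos, cost

cells = list("abcdef")
nets = [("a", "b", "c"), ("c", "d"), ("d", "e", "f"), ("a", "f")]
print(anneal_place(cells, nets, grid=4)[1])
```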

147 citations


Proceedings ArticleDOI
22 Feb 2006
TL;DR: A preliminary evaluation of the performance of a cell-FPGA-like architecture for future hybrid "CMOL" circuits, which will combine a semiconductor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (programmable diodes) formed at each crosspoint, shows that CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude.
Abstract: This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid "CMOL" circuits. Such circuits will combine a semiconductor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (programmable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of "tiles". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to the high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.

117 citations



Journal ArticleDOI
TL;DR: The results indicate that the multilevel single-objective optimization engine locates near-optimal spiral inductor geometries with significantly fewer function evaluations than current techniques, whereas the overall synthesis methodology efficiently optimizes inductor designs with an improvement of up to 51% in key design constraints while reducing the impact of process variation and modeling error.
Abstract: To successfully design spiral inductors in increasingly complex and integrated mixed-signal systems, effective design automation techniques must be created. In this paper, the authors develop an automated synthesis methodology for integrated spiral inductors to efficiently generate Pareto-optimal designs based on application requirements. At its core, the synthesis approach employs a scalable multilevel single-objective optimization engine that integrates the flexibility of deterministic pattern search optimization with the rapid convergence of local nonlinear convex optimization. Multiobjective optimization techniques and surrogate functions are utilized to approximate Pareto surfaces in the design space to locate Pareto-optimal spiral inductor designs. Using the synthesis methodology, the authors also demonstrate how to reduce the impact of process variation and other sources of modeling error on spiral inductors. The results indicate that the multilevel single-objective optimization engine locates near-optimal spiral inductor geometries with significantly fewer function evaluations than current techniques, whereas the overall synthesis methodology efficiently optimizes inductor designs with an improvement of up to 51% in key design constraints while reducing the impact of process variation and modeling error.
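The Pareto-surface idea can be illustrated with a hedged two-objective sketch (maximize quality factor Q, minimize area; the numbers are made up, not inductor models): keep exactly those candidate designs that no other candidate beats on both objectives.

```python
def pareto_front(designs):
    """Return the non-dominated designs for objectives (maximize q, minimize area).

    designs: dict name -> (q_factor, area_mm2). Illustrative values only."""
    front = []
    for name, (q, area) in designs.items():
        dominated = any(q2 >= q and a2 <= area and (q2, a2) != (q, area)
                        for q2, a2 in designs.values())
        if not dominated:
            front.append(name)
    return front

candidates = {
    "L1": (8.0, 0.20), "L2": (10.0, 0.35), "L3": (9.0, 0.50),
    "L4": (12.0, 0.60), "L5": (7.0, 0.15),
}
print(pareto_front(candidates))  # L3 is dominated by L2 and drops out
```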

Proceedings ArticleDOI
01 Sep 2006
TL;DR: The xPilot system, being developed at UCLA, is presented; it provides novel behavioral synthesis capability for automatically generating efficient RTL code from a C or SystemC description for a given system platform while optimizing the logic, interconnects, performance, and power simultaneously.
Abstract: With the rapid increase of complexity in system-on-a-chip (SoC) design, the electronic design automation (EDA) community is moving from RTL (Register Transfer Level) synthesis to behavioral-level and system-level synthesis. System-level verification and software/hardware co-design also favor behavior-level executable specifications, such as C or SystemC. In this paper we present the platform-based synthesis system, named xPilot, being developed at UCLA. The first objective of xPilot is to provide novel behavioral synthesis capability for automatically generating efficient RTL code from a C or SystemC description for a given system platform and optimizing the logic, interconnects, performance, and power simultaneously. The second objective of xPilot is to provide a platform-based system-level synthesis capability, including both synthesis for application-specific configurable processors and heterogeneous multi-core systems. Preliminary experiments on FPGAs demonstrate the efficacy of our approach on a wide range of applications and its value in exploring various design tradeoffs.
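One small ingredient of behavioral synthesis, scheduling a dataflow graph, can be sketched as follows (a hedged illustration using plain ASAP scheduling, not xPilot's actual algorithms): each operation starts as soon as all of its predecessors have finished, given per-operation latencies.

```python
def asap_schedule(ops, deps):
    """As-soon-as-possible scheduling of a dataflow graph.

    ops:  dict op -> latency in cycles
    deps: dict op -> list of predecessor ops
    Returns dict op -> start cycle (assumes the graph is acyclic)."""
    start = {}

    def schedule(op):
        if op not in start:
            preds = deps.get(op, [])
            start[op] = max((schedule(p) + ops[p] for p in preds), default=0)
        return start[op]

    for op in ops:
        schedule(op)
    return start

# Hypothetical operation latencies and dependencies, purely for illustration.
ops = {"ld_a": 2, "ld_b": 2, "mul": 3, "add": 1, "st": 2}
deps = {"mul": ["ld_a", "ld_b"], "add": ["mul"], "st": ["add"]}
print(asap_schedule(ops, deps))  # ld_a/ld_b at 0, mul at 2, add at 5, st at 6
```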

Proceedings ArticleDOI
06 Mar 2006
TL;DR: This paper proposes and evaluates various automated single- and multi-objective optimizations for exploring out-of-order processor designs and concludes that the newly proposed genetic local search algorithm outperforms all other search algorithms in terms of accuracy.
Abstract: Previous work on efficient customized processor design primarily focused on in-order architectures. However, with the recent introduction of out-of-order processors for high-end high-performance embedded applications, researchers and designers need to address how to automate the design process of customized out-of-order processors. Because of the parallel execution of independent instructions in out-of-order processors, in-order processor design methodologies that subdivide the search space into independent components are unlikely to be effective in terms of accuracy for designing out-of-order processors. In this paper we propose and evaluate various automated single- and multi-objective optimizations for exploring out-of-order processor designs. We conclude that the newly proposed genetic local search algorithm outperforms all other search algorithms in terms of accuracy. In addition, we propose two-phase simulation in which the first phase explores the design space through statistical simulation; a region of interest is then simulated through detailed simulation in the second phase. We show that simulation time speedups of a factor of 2.2× to 7.3× can be obtained using two-phase simulation.
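The flavor of a genetic local search can be conveyed with a hedged toy sketch (made-up parameter space and cost function, not the authors' simulator-driven objective): a small population of configurations evolves by crossover and mutation, and each offspring is additionally improved by hill-climbing over single-parameter changes.

```python
import random

# Hypothetical out-of-order-style design space: each axis is a list of allowed values.
SPACE = {"rob": [32, 64, 128, 256], "width": [2, 4, 8], "l1_kb": [16, 32, 64], "alus": [2, 4, 6]}

def cost(cfg):
    # Stand-in objective (pretend CPI * area); a real flow would run detailed simulation here.
    cpi = 1.0 + 8.0 / cfg["rob"] + 1.0 / cfg["width"] + 4.0 / cfg["l1_kb"] + 0.5 / cfg["alus"]
    area = cfg["rob"] * 0.01 + cfg["width"] * 0.5 + cfg["l1_kb"] * 0.02 + cfg["alus"] * 0.4
    return cpi * area

def local_search(cfg):
    # Hill-climb over single-parameter moves until no further improvement.
    improved = True
    while improved:
        improved = False
        for key, values in SPACE.items():
            for v in values:
                cand = dict(cfg, **{key: v})
                if cost(cand) < cost(cfg):
                    cfg, improved = cand, True
    return cfg

def genetic_local_search(pop_size=8, generations=15, seed=0):
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in SPACE}   # crossover
            key = rng.choice(list(SPACE))
            child[key] = rng.choice(SPACE[key])                     # mutation
            children.append(local_search(child))                    # memetic local search step
        pop = parents + children
    return min(pop, key=cost)

print(genetic_local_search())
```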

01 Jan 2006
TL;DR: In this paper, the authors present a set of tools for analog and mixed-signal integrated circuits, including layout tools for Analog ICs and Mixed-Signal SoCs.
Abstract: Design Flows. Logic Synthesis. Power Analysis and Optimization from Circuit to Register Transfer Levels. Equivalence checking. Digital Layout - Placement. Static Timing Analysis. Structured Digital Design. Routing. Exploring Challenges of Libraries for Electronic Design. Design Closure. Tools for Chip-Package Codesign. Design Databases. FPGA Synthesis and Physical Design. Simulation of Analog and Radio Frequency Circuits and Systems. Simulation and Modeling for Analog and Mixed-Signal Integrated Circuits. Layout Tools for Analog ICs and Mixed-Signal SoCs. Design Rule Checking. Resolution Enhancement Technology and Mask Data Preparation. Design for Manufacturability in the Nanometer Era. Power Supply Network Design and Analysis. Noise Considerations in Digital ICs. Layout Extraction. Mixed-Signal Noise Coupling in System-on-Chip Design: Modeling, Analysis and Validation. Process Simulation. Device Modeling: From Physics to Electrical Parameter Extraction. High-Accuracy Parasitic Extraction.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: This is the first work that adopts a topological representation, a tree-based structure called the T-tree, to solve the placement problem of digital microfluidic biochips.
Abstract: Droplet-based microfluidic biochips have recently gained much attention and are expected to revolutionize biological laboratory procedures. As biochips are adopted for the complex procedures in molecular biology, their complexity is expected to increase due to the need for multiple and concurrent assays on a chip. In this paper, we formulate the placement problem of digital microfluidic biochips with a tree-based topological representation, called T-tree. To the best of the authors' knowledge, this is the first work that adopts a topological representation to solve the placement problem of digital microfluidic biochips. Experimental results demonstrate that our approach is much more efficient and effective, compared with the previous unified synthesis and placement framework.

Proceedings ArticleDOI
06 Mar 2006
TL;DR: This work proposes an integrated approach where state-of-the-art platform modeling infrastructures, at the IP core level and at the system level, meet to provide the designer with maximum openness and flexibility in terms of design space exploration.
Abstract: In recent years, increasing manufacturing density has allowed the development of Multi-Processor Systems-on-Chip (MPSoCs). Application-Specific Instruction Set Processors (ASIPs) stand out as one of the most efficient design paradigms and could be especially effective as SoC computing engines. However, multiple hurdles that are hindering the productivity of SoC designers and researchers must be solved first. Among them is the difficulty of thoroughly exploring the design space by simultaneously sweeping axes like processing elements, memory hierarchies and chip interconnect fabrics. We tackle this challenge by proposing an integrated approach where state-of-the-art platform modeling infrastructures, at the IP core level and at the system level, meet to provide the designer with maximum openness and flexibility in terms of design space exploration.

Journal ArticleDOI
TL;DR: New benchmark-based design strategies for single-chip heterogeneous multiprocessors, referred to as scenario-oriented design, are described; they are motivated by the needs of this hybrid class of computing.
Abstract: Single-chip heterogeneous multiprocessors (SCHMs) are arising to meet the computational demands of portable and handheld devices. These computing systems are neither fully custom designs traditionally targeted by the design automation community, general-purpose designs traditionally targeted by the computer architecture community, nor pure embedded designs traditionally targeted by the real-time community. An entirely new design philosophy will be needed for this hybrid class of computing. The programming of the device will be drawn from a narrower set of applications, with execution that persists in the system over a longer period of time than for general-purpose programming. However, the devices will still be programmable, not only at the level of the individual processing element, but across multiple processing elements and even the entire chip. The design of other programmable single-chip computers has enjoyed an era where the design tradeoffs could be captured in simulators such as SimpleScalar and performance could be evaluated against the SPEC benchmarks. Motivated by this, we describe new benchmark-based design strategies for SCHMs which we refer to as scenario-oriented design. We include an example and results.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: A tools perspective is presented, covering the primary effects for which EDA tools are available (such as HCI, NBTI, and EM), the types of tools, and the reliability infrastructure and flows that have been working in practice; developing areas and future opportunities are also addressed.
Abstract: Recent progress in EDA tools allows IC designs to be accurately verified, with consequent improvements in yield and performance through reduced guard bands. This paper presents a tools perspective, covering the primary effects for which EDA tools are available (such as HCI, NBTI, and EM), the types of tools (dynamic simulation vs. static rule checking), and the reliability infrastructure and flows that have been working in practice. Finally, developing areas and future opportunities are addressed.

Proceedings ArticleDOI
24 Jan 2006
TL;DR: MEVA-3D, an automated physical design and architecture performance estimation flow for 3D architectural evaluation which includes 3D floorplanning, routing, interconnect pipelining and automated thermal via insertion, and associated die size, performance, and thermal modeling capabilities, is developed.
Abstract: Although the emerging three-dimensional integration technology can significantly reduce interconnect delay, chip area, and power dissipation in nanometer technologies, its impact on overall system performance is still poorly understood due to the lack of tools and systematic flows to evaluate 3D microarchitectural designs. The contribution of this paper is the development of MEVA-3D, an automated physical design and architecture performance estimation flow for 3D architectural evaluation which includes 3D floorplanning, routing, interconnect pipelining and automated thermal via insertion, and associated die size, performance, and thermal modeling capabilities. We apply this flow to a simple, out-of-order superscalar microprocessor to evaluate the performance and thermal behavior in 2D and 3D designs, and demonstrate the value of MEVA-3D in providing quantitative evaluation results to guide 3D architecture designs. In particular, we show that it is feasible to manage thermal challenges with a combination of thermal vias and double-sided heat sinks, and report modest system performance gains in 3D designs for these simple test examples.
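A back-of-the-envelope view of why thermal vias help (a hedged sketch using textbook slab formulas and assumed material constants, not MEVA-3D's thermal model): filling a fraction of an inter-die dielectric layer with copper vias lowers the layer's vertical thermal resistance roughly as the parallel combination of the two materials' conductances.

```python
def layer_thermal_resistance(thickness_m, area_m2, via_fill, k_dielectric=0.5, k_via=400.0):
    """Vertical thermal resistance (K/W) of a bonding/dielectric layer with a fraction
    'via_fill' of its area occupied by copper thermal vias.

    Parallel-slab approximation: conductances of the dielectric and via regions add.
    Material constants are rough textbook values (W/m/K), purely for illustration."""
    g_dielectric = k_dielectric * area_m2 * (1.0 - via_fill) / thickness_m
    g_vias = k_via * area_m2 * via_fill / thickness_m
    return 1.0 / (g_dielectric + g_vias)

# A 10 um thick layer under a 1 mm x 1 mm hotspot, with 0%, 2% and 10% via fill.
for fill in (0.0, 0.02, 0.10):
    r = layer_thermal_resistance(10e-6, 1e-6, fill)
    print(f"via fill {fill:4.0%}: R = {r:7.2f} K/W")
```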

Journal ArticleDOI
TL;DR: An architecture that combines VLIW (very long instruction word) processing with the capability to introduce application-specific customized instructions and highly parallel combinational hardware functions for the acceleration of signal processing applications is presented.
Abstract: This paper presents an architecture that combines VLIW (very long instruction word) processing with the capability to introduce application-specific customized instructions and highly parallel combinational hardware functions for the acceleration of signal processing applications. To support this architecture, a compilation and design automation flow is described for algorithms written in C. The key contributions of this paper are as follows: (1) a 4-way VLIW processor implemented in an FPGA, (2) large speedups through hardware functions, (3) a hardware/software interface with zero overhead, (4) a design methodology for implementing signal processing applications on this architecture, (5) tractable design automation techniques for extracting and synthesizing hardware functions. Several design tradeoffs for the architecture were examined, including the number of VLIW functional units and register file size. The architecture was implemented on an Altera Stratix II FPGA. The Stratix II device was selected because it offers a large number of high-speed DSP (digital signal processing) blocks that execute multiply-accumulate operations. Using the MediaBench benchmark suite, we tested our methodology and architecture to accelerate software. Our combined VLIW processor with hardware functions was compared to software executing on a RISC processor, specifically the soft-core embedded NIOS II processor. For software kernels converted into hardware functions, we show hardware speedups of up to 230× over software, with an average of 63×. For the entire application, in which only a portion of the software is converted to hardware, the performance improvement is as much as 30× over the nonaccelerated application, with a 12× improvement on average.
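The gap between kernel-level and application-level speedups quoted above is governed by Amdahl's law; a hedged one-liner makes the relationship explicit (the sample fractions below are illustrative, not taken from the paper's benchmarks).

```python
def overall_speedup(accel_fraction, kernel_speedup):
    """Amdahl's law: application speedup when only a fraction of the runtime is accelerated."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / kernel_speedup)

# Even a 63x kernel speedup yields far less at the application level
# once the unaccelerated portion starts to dominate.
for fraction in (0.5, 0.9, 0.99):
    print(f"{fraction:.0%} of runtime accelerated 63x -> {overall_speedup(fraction, 63):.1f}x overall")
```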

Journal ArticleDOI
TL;DR: Experimental results show that this new layout synthesis tool is capable of producing high quality layouts comparable to those manually done by layout experts but with much less design time.
Abstract: In this paper, a layout synthesis tool for the design of analog integrated circuits (ICs) is presented. This tool offers great flexibility that allows analog circuit designers to bring their special design knowledge and experiences into the synthesis process to create high-quality analog circuit layouts. Different from conventional layout systems that are limited to the optimization of single devices, our layout generation tool attempts to optimize more complex modules. This tool includes a complete tool suite that covers the following three major analog physical design stages. 1) Module Generation: designers can develop and maintain their own technology- and application-independent module generators for subcircuits using an in-house developed description language. 2) Placement: a two-stage placement technique, tailored for the analog placement design, is proposed. In particular, this placement algorithm features a novel genetic placement stage followed by a fast simulated reannealing scheme. 3) Routing: the minimum-Steiner-tree-based global routing is developed, and it is integrated into the placement procedure to improve reliability and routability of the placement solutions. Following the global routing, a compaction-based constructive detailed routing finally completes the interconnection of the entire layout. Several testing circuits have been applied to demonstrate the design efficiency and the effectiveness of this tool. Experimental results show that this new layout tool is capable of producing high-quality layouts comparable to those manually done by layout experts but with much less design time.

Proceedings ArticleDOI
05 Nov 2006
TL;DR: The proposed approach integrates high-level synthesis and statistical static timing analysis into a simulated annealing engine to simultaneously explore the solution space while meeting design objectives; results show that the area reduction averages 14% when a 95% performance yield is imposed under the same total completion time constraint.
Abstract: Meeting timing constraints is one of the most important issues for modern design automation tools. This situation is exacerbated by the existence of process variation. Current high-level synthesis tools, performing task scheduling, resource allocation and binding, may result in unexpected performance discrepancies because they ignore the impact of process variation, which requires a shift in the design paradigm from today's deterministic design to statistical or probabilistic design. In this paper, we present a variation-aware, performance-yield-guaranteed high-level synthesis algorithm. The proposed approach integrates high-level synthesis and statistical static timing analysis into a simulated annealing engine to simultaneously explore the solution space while meeting design objectives. Our results show that the area reduction averages 14% when a 95% performance yield is imposed under the same total completion time constraint.
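The performance-yield notion can be illustrated with a hedged statistical-timing sketch (independent Gaussian stage delays, a simplification; real SSTA also handles correlation and max operations): the yield is the probability that the end-to-end delay of the scheduled datapath stays within the timing budget.

```python
import math

def path_yield(stage_delays, constraint):
    """P(total delay <= constraint) for a chain of independent Gaussian stage delays.

    stage_delays: list of (mean_ns, sigma_ns) per scheduled operation (toy model)."""
    mu = sum(m for m, _ in stage_delays)
    sigma = math.sqrt(sum(s * s for _, s in stage_delays))
    z = (constraint - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical per-operation delay distributions (mean, sigma) in ns.
stages = [(1.2, 0.10), (0.8, 0.08), (1.5, 0.12), (0.9, 0.05)]
for t in (4.4, 4.6, 4.8):
    print(f"timing budget {t} ns -> performance yield {path_yield(stages, t):.1%}")
```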

BookDOI
01 Jul 2006
TL;DR: A Hierarchical approach to Assess the Impacts of Transport Policies and a method for Estimating Land Use Transition Probability using Raster Data are studied.
Abstract: Land Use Simulation and Visualisation.- Can Decision Making Processes Benefit from a User Friendly Land Use and Transport Interaction Model?.- Development of a Hierarchical Approach to Assess the Impacts of Transport Policies.- Development of a Support System for Community-Based Disaster Mitigation Planning Integrated with a Fire Spread Simulation Model Using CA.- Transition Rule Elicitation Methods for Urban Cellular Automata Models.- A Method for Estimating Land Use Transition Probability Using Raster Data.- Linking Land Use Modelling and 3D Visualisation.- Multi-Agent Models for Movement Simulation.- Crowd Modeling and Simulation.- Exploring Heuristics Underlying Pedestrian Shopping Decision Processes.- Scale A street case library for environmental design with agent interfaces.- Approach to Design Behavioural Models for Traffic Network Users.- Shape Morphing of Intersection Layouts Using Curb Side Oriented Driver Simulation.- Multi-Agent Models for Urban Development.- Gentrification Waves in the Inner-City of Milan.- Multi-Agent Model to Multi-Process Transformation.- Research on New Residential Areas Using GIS.- A Comparison Study of the Allocation Problem of Undesirable Facilities Based on Residential Awarenes.- Decision-Making on Olympic Urban Development.- Usage of Planning Support Systems.- Managing and Deploying Design Knowledge.- Sieving Pebbles and Growing Profiles.- Concept Formation in a Design Optimization Tool.- A Framework for Situated Design Optimization.- Learning from Main Streets.- Culturally Accepted Green Architecture Toolbox.- Urban Decision-Making.- An Urban Decision Room Based on Mathematical Optimisation.- Forms of Participation in Urban Redevelopment Projects.- The Neighbourhood Wizard.- Design Interactivity and Design Automation.- A Proposal for Morphological Operators to Assist Architectural Design.- Generative Design in an Evolutionary Procedure.- Interactive Rule-Based Design.- Automatic Semantic Comparison of STEP Product Models.- Virtual Environments and Augmented Reality.- Design Tools for Pervasive Computing in Urban Environments.- 1:1 Spatially Augmented Reality Design Environment.

Book
01 Mar 2006
TL;DR: The Electronic Design Automation Handbook; Formal Verification; Machine Intelligence in Design Automation; Three-Dimensional Integrated Circuit Design; Algorithms for VLSI Design Automation; Control Circuits in Power Electronics; VLSI Physical Design: From Graph Partitioning to Timing Closure.
Abstract: The Electronic Design Automation Handbook; Formal Verification; Machine Intelligence in Design Automation; Three-Dimensional Integrated Circuit Design; Algorithms for VLSI Design Automation; Control Circuits in Power Electronics; VLSI Physical Design: From Graph Partitioning to Timing Closure; Optical Integrated Circuits; Analog Layout Synthesis; Understanding Fabless IC Technology; Compact Models for Integrated Circuit Design (Open Access); Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation; Digital VLSI Chip Design with Cadence and Synopsys CAD Tools; Essential Electronic Design Automation (EDA); Nanoelectronics and Photonics; Open Source TCAD/EDA for Compact Modeling; EDA for IC Implementation, Circuit Design, and Process Technology; Electronic Design Automation for Integrated Circuits Handbook, 2 Volume Set; Digital Integrated Circuits; CMOS IC Layout; Digital Systems Design with FPGAs and CPLDs; Counterfeit Integrated Circuits; Design with Operational Amplifiers and Analog Integrated Circuits; Electronic Design Automation for IC Implementation, Circuit Design, and Process Technology; Integrated Circuit Test Engineering; EDA for IC System Design, Verification, and Testing; Analog/RF and Mixed-Signal Circuit Systematic Design; Electronic Design Automation; Fundamentals of Layout Design for Electronic Circuits; VLSI Circuit Design Methodology Demystified; CMOS; Design Automation Techniques for Approximation Circuits; Enabling the Internet of Things; Silicon Photonics Design; Machine Learning in VLSI Computer-Aided Design; VLSI Design Methodology Development; Analog Design and Simulation Using OrCAD Capture and PSpice; Mixed-Signal Methodology Guide; Using Artificial Neural Networks for Analog Integrated Circuit Design Automation; Computer-Aided Design of Analog Integrated Circuits and Systems.

Proceedings ArticleDOI
25 Jun 2006
TL;DR: In this article, a real-time Health Monitoring System (HEMS) based on Sensor-Embedded Radio Frequency Identification (SE-RFID) is proposed and analyzed using EDA software.
Abstract: Sensor-Embedded Radio Frequency Identification (SE-RFID) is introduced to enhance the sensing functions of the current RFID systems. Two innovative architectures for SE-RFID systems are proposed and analyzed; the preliminary simulation of the proposed SE-RFID systems based on EDA software has been conducted successfully. An effort to design, simulate and develop a real-time Health Monitoring System (HEMS) based on SE-RFID is now under way.

Proceedings ArticleDOI
24 Jan 2006
TL;DR: How OA improves interoperability among applications in an EDA flow is described and how OA benefits developers of both EDA tools and flows is detailed.
Abstract: The OpenAccess database provides a comprehensive open standard data model and robust implementation for IC design flows. This paper describes how it improves interoperability among applications in an EDA flow. It details how OA benefits developers of both EDA tools and flows. Finally, it outlines how OA is being used in the industry, at semiconductor design companies, EDA tool vendors, and universities.

Proceedings ArticleDOI
05 Nov 2006
TL;DR: The decade of the 1990s saw the first wave of practical "post-SPICE" tools for analog designs; this paper offers some pragmatic prognostications for what the next wave might (or, more bluntly, should) focus on, as pressure to improve AMS design productivity grows.
Abstract: The decade of the 1990s saw the first wave of practical "post-SPICE" tools for analog designs. A range of synthesis, optimization, layout and modeling techniques made their way from academic prototypes to first-generation commercial offerings. We offer some pragmatic prognostications for what the next wave might (or, more bluntly, should) focus on, as pressure to improve AMS design productivity grows.

Journal ArticleDOI
TL;DR: An XML-based standard is developed for describing electronic intellectual property - that is, blocks of electronic logic suitable for inclusion in complex integrated circuits, commonly known as systems on chips (SoCs).
Abstract: The paper aims to develop an XML-based standard for describing electronic intellectual property - that is, blocks of electronic logic suitable for inclusion in complex integrated circuits, commonly known as systems on chips (SoCs). This work, which is based on the Spirit Consortium's IP-XACT specification, has been transferred to the IEEE for standardization. The IP-XACT specification provides a metadata schema for describing IP, enabling it to be compatible with automated integration techniques, and an API for tool access to this schema. Tools that implement the standard would be able to automatically interpret, configure, integrate, and manipulate IP blocks that are delivered with metadata conforming to the proposed IP metadata description, and the IP-XACT API provides a standard method for linking multiple tools through a single exchange-metadata format. This automatic integration of tools and IP from multiple vendors creates an IP-XACT-enabled environment.
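To give a feel for what machine-readable IP metadata enables, a hedged sketch follows; the element names below are simplified stand-ins, not the normative IP-XACT schema. A tool can locate a block by its vendor/library/name/version identifier and read its bus interfaces directly from XML.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative IP description; real IP-XACT uses a richer, namespaced schema.
IP_XML = """
<component>
  <vendor>example.org</vendor><library>peripherals</library>
  <name>uart</name><version>1.2</version>
  <busInterfaces>
    <busInterface name="apb_slave" busType="APB"/>
    <busInterface name="irq" busType="interrupt"/>
  </busInterfaces>
</component>
"""

def read_ip_metadata(xml_text):
    root = ET.fromstring(xml_text)
    vlnv = tuple(root.findtext(tag) for tag in ("vendor", "library", "name", "version"))
    buses = [(b.get("name"), b.get("busType")) for b in root.iter("busInterface")]
    return vlnv, buses

print(read_ip_metadata(IP_XML))
# (('example.org', 'peripherals', 'uart', '1.2'), [('apb_slave', 'APB'), ('irq', 'interrupt')])
```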

Proceedings ArticleDOI
J. Friedman1
06 Mar 2006
TL;DR: Engineers need to decouple algorithm development and verification from the availability of hardware, which means OEMs and suppliers around the world are switching to model-based design.
Abstract: Automotive systems are becoming increasingly difficult and expensive to design successfully as the market demands increasing complexity. Body electronics are particularly affected by this trend, a good example being power windows design. This seemingly mundane area involves meeting market and legislative requirements, which means creating a control system that combines the input from several sensors and follows complex behavioral rules (Prabhu and Mosterman, 2004). Traditional design methodologies involve writing a text specification and implementing algorithms in C. However, algorithms cannot be verified without hardware. This approach leaves the engineer in the unenviable position of waiting for the last piece of hardware to arrive to enable them to test their system. To avoid these problems, engineers need to decouple algorithm development and verification from the availability of hardware. To address this need, OEMs and suppliers around the world are switching to model-based design