
Showing papers on "Electronic design automation published in 2001"


Proceedings ArticleDOI
04 Nov 2001
TL;DR: A model is proposed that allows a designer to predict the battery time-to-failure for a given load and provides a cost metric for lifetime optimization algorithms and allows for a tradeoff between the accuracy and the amount of computation performed.
Abstract: Once the battery becomes fully discharged, a battery-powered portable electronic system goes off-line. Therefore, it is important to take the battery behavior into account. A system designer needs an adequate high-level model in order to make on-line battery-aware decisions that maximize the system's lifetime. We propose such a model: it allows a designer to predict the battery time-to-failure for a given load and provides a cost metric for lifetime optimization algorithms. Our model also allows for a tradeoff between the accuracy and the amount of computation performed. The quality of the proposed model is evaluated using a detailed low-level simulation of a lithium-ion electrochemical cell.
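The kind of load-dependent lifetime prediction this abstract describes can be illustrated with a much simpler, Peukert-style estimate. This is a hedged sketch only — the paper's actual high-level model is more detailed, and the function name and default exponent here are illustrative assumptions:

```python
# Hypothetical sketch of load-dependent battery time-to-failure.
# Peukert-style model: effective capacity shrinks at higher discharge
# rates. This is NOT the paper's model, only an illustration of the
# kind of prediction a high-level battery model provides.

def time_to_failure(capacity_ah, load_a, peukert_k=1.2):
    """Estimate hours until full discharge under a constant load.

    capacity_ah: rated capacity at 1 A discharge (ampere-hours)
    load_a:      constant discharge current (amperes)
    peukert_k:   Peukert exponent; values > 1 model the reduced
                 effective capacity at higher discharge rates
    """
    if load_a <= 0:
        raise ValueError("load must be positive")
    return capacity_ah / (load_a ** peukert_k)
```

With an exponent above 1, doubling the load more than halves the predicted lifetime — the qualitative battery behavior the abstract says a designer must account for.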

263 citations


Patent
28 Dec 2001
TL;DR: In this article, the authors present a system that takes an RTL model of an electronic design and maps it into an efficient, high-level hierarchical representation of the hardware implementation of the design.
Abstract: An electronic design automation system provides optimization of RTL models of electronic designs, to produce detailed constraints and data precisely defining the requirements for the back-end flows leading to design fabrication. The system takes an RTL model of an electronic design and maps it into an efficient, high-level hierarchical representation of the hardware implementation of the design. Automatic partitioning partitions the hardware representation into functional partitions, and creates a fully characterized performance envelope for a range of feasible implementations for each of the partitions, using accurate placement-based wire load models. Chip-level optimization selects and refines physical implementations of the partitions to produce compacted, globally routed floorplans. Chip-level optimization iteratively invokes re-partitioning passes to refine the partitions and to recompute the feasible implementations. In this fashion, a multiple-pass process converges on an optimal selection of physical implementations for all partitions for the entire chip that meet minimum timing requirements and other design goals. The system outputs specific control and data files which thoroughly define the implementation details of the design through the entire back-end flow process, thereby guaranteeing that the fabricated design meets all design goals without costly and time-consuming design iterations.

249 citations


Proceedings ArticleDOI
Preeti Ranjan Panda1
30 Sep 2001
TL;DR: The features of SystemC that make it an attractive language for design specification, verification, and synthesis at different levels of abstraction are outlined, with particular emphasis on the new features included in SystemC 2.0 that support system-level design.
Abstract: SystemC is a C++ based modeling platform supporting design abstractions at the register-transfer, behavioral, and system levels. Consisting of a class library and a simulation kernel, the language is an attempt at standardization of a C/C++ design methodology, and is supported by the Open SystemC Initiative (OSCI), a consortium of a wide range of system houses, semiconductor companies, intellectual property (IP) providers, embedded software developers, and design automation tool vendors. The advantages of SystemC include the establishment of a common design environment consisting of C++ libraries, models and tools, thereby setting up a foundation for hardware-software co-design; the ability to exchange IP easily and efficiently; and the ability to reuse test benches across different levels of modeling abstraction. We outline the features of SystemC that make it an attractive language for design specification, verification, and synthesis at different levels of abstraction, with particular emphasis on the new features included in SystemC 2.0 that support system-level design.

212 citations


Patent
26 Feb 2001
TL;DR: In this article, a multi-faceted portal site acts as a server in the context of an n-tier client/server network, and connects electronic designers and design teams on one side to design and verification tool and service providers on the other through a single portal site.
Abstract: A multi-faceted portal site acts as a server in the context of an n-tier client/server network, and connects electronic designers and design teams on one side to design and verification tool and service providers on the other through a single portal site. Tools and services accessible to users through the portal site include electronic design automation (EDA) software tools, electronic component information, electronic component databases of parts (or dynamic parts), computing and processing resources, virtual circuit blocks, design expert assistance, and integrated circuit fabrication. Such tools and services may be provided in whole or part by suppliers connected to the portal site. Users accessing the portal site are presented with options in a menu or other convenient format identifying the tools and services available, and are able to more rapidly complete circuit designs by having access to a wide variety of tools and services in a single locale. The portal site may facilitate purchase, lease or other acquisition of the tools and services offered through it. The portal site tracks the movements of users through the portal site in order to learn about the design preferences and design approaches of users individually and in the aggregate. Previous actions taken by the user and by similarly-situated users may be considered in determining which information to present to the user, and in what order to present it, thereby providing contextually-driven access.

197 citations


Patent
27 Jun 2001
TL;DR: In this paper, the authors present a microfluidic circuit design method that includes developing synthesizable computer code for a design, including a plurality of symbols for micro-fluid components.
Abstract: The present invention generally relates to microfluidics and more particularly to the design of customized microfluidic systems using a microfluidic computer aided design system. In one embodiment of the present invention a microfluidic circuit design method is provided. The method includes developing synthesizable computer code for a design. Next, a microfluidic circuit schematic, including a plurality of symbols for microfluidic components, is generated either interactively or using the synthesizable computer code. The microfluidic circuit schematic is then functionally simulated. The microfluidic components are placed and routed on a template to form a physical layout. Then the physical layout is physically simulated using dynamic simulation models of the microfluidic components; and the physical layout is written to a layout file.

158 citations


Journal ArticleDOI
01 Apr 2001
TL;DR: The interconnect parasitic effects are reviewed and their impact on circuit behavior and their increase due to lithography reduction are examined, with special emphasis on propagation delay, lateral coupling, and crosstalk-induced delay.
Abstract: Advances in interconnect technologies, such as the increase in the number of metal layers, stacked vias, and the reduced routing pitch, have played a key role in continuously improving integrated circuit design and operating speed. However, several parasitic effects jeopardize the benefits of scale-down. Understanding and predicting interconnect behavior is vital for designing high-performance integrated circuits. Our paper first reviews the interconnect parasitic effects and examines their impact on circuit behavior and their increase due to lithography reduction, with special emphasis on propagation delay, lateral coupling, and crosstalk-induced delay. The problem of signal integrity characterization is then discussed. In our review of the different well-established measurement methodologies such as direct probing, S-parameters, e-beam sampling, and on-chip sampling, we point out the weaknesses, frequency ranges, drawbacks, and overall performance of these techniques. Subsequently, the on-chip sampling system is described. It features a precise time-domain characterization of the voltage waveform directly within the interconnect, and its application to the accurate evaluation of propagation delay, crosstalk, and crosstalk-induced delay along interconnects in deep-submicrometer technology is shown. The sensor parts are described in detail, together with signal integrity patterns and their implementation in 0.18-μm CMOS technology. Measurements obtained with this technique are presented. In the third part, we discuss simulation issues, describe the two- and three-dimensional interconnect modeling problems, and review the active device models applicable to deep-submicrometer technologies in order to reconcile measurements and simulations. These studies result in a set of guidelines concerning the choice of interconnect models.
The last part outlines the design rules to be used by designers and their implementation within computer-aided design (CAD) tools to achieve signal integrity compliance. Critical variables such as the crosstalk tolerance margin, the maximum coupling length, and the criteria for adding a signal repeater are derived from a 0.18-μm technology. From these, values for low-dielectric and copper interconnects have been selected.

153 citations


Proceedings ArticleDOI
Shekhar Borkar1
30 Jan 2001
TL;DR: In this paper, the authors discuss a few techniques that reduce active and leakage power, and deliver higher performance, and point out some potential paradigm shifts in the design of circuits beyond 0.18 micron.
Abstract: Technology scaling will become difficult beyond 0.18 micron. For continued growth in performance, transistor density, and reduced energy per computation, circuit design will have to employ a new set of design techniques, with adequate design automation tools support. This paper discusses a few such techniques that reduce active and leakage power, and deliver higher performance. It concludes by pointing out some of the potential paradigm shifts.

108 citations


Journal ArticleDOI
TL;DR: This work presents the first technique that leverages the unique characteristics of field-programmable gate arrays (FPGAs) to protect commercial investment in intellectual property through fingerprinting.
Abstract: As current computer-aided design (CAD) tool and very large scale integration technology capabilities create a new market of reusable digital designs, the economic viability of this new core-based design paradigm is pending on the development of techniques for intellectual property protection. This work presents the first technique that leverages the unique characteristics of field-programmable gate arrays (FPGAs) to protect commercial investment in intellectual property through fingerprinting. A hidden encrypted mark is embedded into the physical layout of a digital circuit when it is placed and routed onto the FPGA. This mark uniquely identifies both the circuit origin and original circuit recipient, yet is difficult to detect and/or remove, even via recipient collusion. While this approach imposes additional constraints on the backend CAD tools for circuit place and route, experiments indicate that the performance and area impacts are minimal.

103 citations


Journal ArticleDOI
TL;DR: Large on-chip memories are desirable but difficult to implement, and challenges range from design automation to fabrication to test algorithms and memory redundancy and repair.
Abstract: Large on-chip memories are desirable but difficult to implement. Challenges range from design automation to fabrication to test algorithms and memory redundancy and repair.

101 citations


Journal ArticleDOI
TL;DR: This work uses integer linear programming to minimize the processing time by automatically extracting parallelism from a biochemical assay and applies the optimization method to the polymerase chain reaction, an important step in many lab-on-a-chip biochemical applications.
Abstract: We present an architectural design and optimization methodology for performing biochemical reactions using two-dimensional (2-D) electrowetting arrays. We define a set of basic microfluidic operations and leverage electronic design automation principles for system partitioning, resource allocation, and operation scheduling. Fluidic operations are carried out through the electrostatic configuration of a set of grid points. While concurrency is desirable to minimize processing time, the size of the 2-D array limits the number of concurrent operations of any type. Furthermore, functional dependencies between the operations also limit concurrency. We use integer linear programming to minimize the processing time by automatically extracting parallelism from a biochemical assay. As a case study, we apply our optimization method to the polymerase chain reaction, which is an important step in many lab-on-a-chip biochemical applications.
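The paper formulates operation scheduling as an integer linear program; as a simplified illustrative stand-in (not the authors' method), a greedy list scheduler makes the two constraints in the abstract concrete — functional dependencies between operations and a cap on concurrent operations imposed by the 2-D array size. All operation names below are hypothetical:

```python
# Simplified stand-in for the ILP formulation described above: a list
# scheduler that assigns unit-time fluidic operations to time steps
# subject to (a) functional dependencies and (b) a cap on concurrent
# operations. Assumes the dependency graph is acyclic.

def schedule(ops, deps, max_concurrent):
    """ops: list of operation names; deps: {op: set of prerequisites};
    returns {op: time_step}, treating every operation as unit-time."""
    done, time, result = set(), 0, {}
    while len(done) < len(ops):
        ready = [o for o in ops
                 if o not in done and deps.get(o, set()) <= done]
        step = ready[:max_concurrent]      # concurrency limit of the array
        for o in step:
            result[o] = time
        done.update(step)
        time += 1
    return result

# Example: two mixes can run in parallel; the dilution needs both.
plan = schedule(["mix1", "mix2", "dilute"],
                {"dilute": {"mix1", "mix2"}}, max_concurrent=2)
```

An ILP solver, as used in the paper, would additionally prove the schedule length optimal; the greedy pass above only respects the constraints.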

99 citations


Journal ArticleDOI
TL;DR: A modular approach for enhancing instruction level simulators with cycle-accurate simulation of energy dissipation in embedded systems and a profiler that relates energy consumption to the source code is presented.
Abstract: Energy-efficient design of battery-powered systems demands optimizations in both hardware and software. We present a modular approach for enhancing instruction level simulators with cycle-accurate simulation of energy dissipation in embedded systems. Our methodology has tightly coupled component models thus making our approach more accurate. Performance and energy computed by our simulator are within a 5% tolerance of hardware measurements on the SmartBadge. We show how the simulation methodology can be used for hardware design exploration aimed at enhancing the SmartBadge with real-time MPEG video feature. In addition, we present a profiler that relates energy consumption to the source code. Using the profiler we can quickly and easily redesign the MP3 audio decoder software to run in real time on the SmartBadge with low energy consumption. Performance increase of 92% and energy consumption decrease of 77% over the original executable specification have been achieved.

Journal ArticleDOI
TL;DR: Assembling a system on a chip using IP blocks is an error-prone, labor-intensive, and time-consuming process but emerging high-level tools can help by automating many of the design tasks.
Abstract: Assembling a system on a chip using IP blocks is an error-prone, labor-intensive, and time-consuming process. Emerging high-level tools can help by automating many of the design tasks.

Journal ArticleDOI
TL;DR: Its underlying cognitive model serves as a framework to analyse six CBD systems and to identify gaps in CBD research, and the findings may be relevant for other design domains as well.
Abstract: In the 1990s, Case-Based Design (CBD) seemed an appealing approach to develop intelligent design support. Based on an alternative view of human cognition, CBD systems find new design solutions by adapting similar experiences from the past. Although several CBD applications have been built, a convincing breakthrough by these systems has yet to come. In search of reasons for this limited success, this article embarks on a critical review of the CBD approach. Its underlying cognitive model serves as a framework to analyse six CBD systems and to identify gaps in CBD research. The article focuses primarily on CBD applications for architecture, yet the findings may be relevant for other design domains as well.

Journal ArticleDOI
01 Mar 2001
TL;DR: Limits to how design technology can enable the implementation of single-chip microelectronic systems that take full advantage of manufacturing technology with respect to such criteria as layout density, performance, and power dissipation are explored.
Abstract: As manufacturing technology moves toward fundamental limits of silicon CMOS processing, the ability to reap the full potential of available transistors and interconnect is increasingly important. Design technology (DT) is concerned with the automated or semi-automated conception, synthesis, verification, and eventual testing of microelectronic systems. While manufacturing technology faces fundamental limits inherent in physical laws or material properties, design technology faces fundamental limitations inherent in the computational intractability of design optimizations and in the broad and unknown range of potential applications within various design processes. In this paper, we explore limitations to how design technology can enable the implementation of single-chip microelectronic systems that take full advantage of manufacturing technology with respect to such criteria as layout density, performance, and power dissipation.

Journal ArticleDOI
TL;DR: A new algorithm for the recognition of features specific to cooling system design is developed and used to solve the initial design problem of a plastic part with a complex shape that is decomposed into simpler shape features.
Abstract: Most existing work on the design of cooling systems of plastic injection moulds has been focused on the detailed analysis or the optimization of the cooling system. However, before a cooling system can be analysed or optimized, an initial design has to be developed. We explore a new design synthesis approach to solve this initial design problem. A plastic part with a complex shape is decomposed into simpler shape features. The cooling systems of the individual features are first obtained, they are then combined to form the cooling system of the entire part. Decomposing a complex shape into shape features is a feature recognition problem. A new algorithm for the recognition of features specific to cooling system design is developed. Design examples generated by the design synthesis process are analysed by C-Mold to verify the feasibility of the approach.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper is not intended as a comprehensive review, but rather as a starting point for understanding power-aware design methodologies and techniques targeted toward embedded systems.
Abstract: Power-efficient design requires reducing power dissipation in all parts of the design and during all stages of the design process, subject to constraints on the system performance and quality of service (QoS). Power-aware high-level language compilers, dynamic power management policies, memory management schemes, bus encoding techniques, and hardware design tools are needed to meet these often-conflicting design requirements. This paper reviews techniques and tools for power-efficient embedded system design, considering the hardware platform, the application software, and the system software. Design examples from an Intel StrongARM based system are provided to illustrate the concepts and the techniques. This paper is not intended as a comprehensive review, but rather as a starting point for understanding power-aware design methodologies and techniques targeted toward embedded systems.

Journal ArticleDOI
29 Oct 2001
TL;DR: A new technique to reduce the order of transmission line circuits simultaneously with respect to multiple parameters is presented, based on multi-dimensional congruence transformation.
Abstract: This paper presents a new technique to reduce the order of transmission line circuits simultaneously with respect to multiple parameters. The reduction is based on multi-dimensional congruence transformation. The proposed algorithm provides efficient means to estimate the response of large distributed circuits simultaneously as a function of frequency and other design parameters.

Proceedings ArticleDOI
01 Jul 2001
TL;DR: This work integrates a modified functional hazard assessment method and Use cases, which generates valuable results used as design requirements and dependability analysis input.
Abstract: Mass-produced products are becoming more and more complex, which forces designers to model functionality early in the design process. UML use cases were found to be a useful method for this purpose at Volvo Cars and are currently used for modeling all functions implemented in the electrical network. When using use cases in the design of complex safety-critical systems, there is still an uncovered demand for early hazard analysis at a functional level. This work integrates a modified functional hazard assessment method and use cases. The analysis generates valuable results that are used as design requirements and as input to dependability analysis. The method's results have exceeded our expectations. An example is included, showing how the method works.

Proceedings ArticleDOI
12 Jul 2001
TL;DR: An intrinsic approach to hardware evolution of analog electronic circuits using a Field Programmable Transistor Array (FPTA), where all environmental conditions present on the device under test have to be taken into account by the evolutionary algorithm.
Abstract: This paper describes and discusses an intrinsic approach to hardware evolution of analog electronic circuits using a Field Programmable Transistor Array (FPTA). The FPTA is fabricated in a 0.6-μm CMOS process and consists of 16×16 transistor cells. The chip allows configuration of the gate geometry as well as the connectivity of each of the 256 transistors. Evolutionary algorithms run on a commercial PC to produce the new circuit configurations, which are downloaded to the chip via a PCI card. In contrast to extrinsic hardware evolution, all environmental conditions present on the device under test have to be taken into account by the evolutionary algorithm. Thus a selection pressure is raised towards solutions that actually work on real dice.

Proceedings ArticleDOI
22 Jun 2001
TL;DR: This paper starts from an already existing system running a set of applications, and the design problem is to implement new functionality so that the already running applications are not disturbed and there is a good chance that, later, new functionality can be added to the resulting system.
Abstract: In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality so that the already running applications are not disturbed and there is a good chance that, later, new functionality can easily be added to the resulting system. The mapping and scheduling problems are considered in the context of a realistic communication model based on a TDMA protocol.

Proceedings ArticleDOI
22 Jun 2001
TL;DR: This paper presents a methodology targeted at standard-cell or structured-custom design styles, starting from standard cells created in a manner that effectively considers all issues regarding generation of AltPSM; these cells are then used in a typical cell-based (synthesis, automatic place and route) flow to produce design layouts that are ready for cost-effective silicon manufacturing.
Abstract: As the semiconductor industry enters the subwavelength era, where silicon features are much smaller than the wavelength of the light used to create them, a number of "subwavelength" technologies such as Optical Proximity Correction (OPC) and Phase-Shifting Masks (PSM) have been introduced to produce integrated circuits (ICs) with acceptable yields. An effective approach to subwavelength IC production includes a combination of these techniques, including OPC and PSM. Nevertheless, as we approach silicon features of 0.10 μm and below, Alternating PSM (AltPSM) becomes a critical part of the technology portfolio needed to achieve IC requirements. An effective EDA methodology that generates AltPSM ICs must guarantee correct generation of AltPSM layouts, maintain or improve today's design productivity, and leverage existing tools and flows. The implementation of such a methodology becomes more complex as phase shifting is applied to all critical features, including those outside of transistor gates. In this paper, we present a methodology targeted at standard-cell or structured-custom design styles. We also present examples of designs that start from standard cells created in a manner in which all issues regarding generation of AltPSM are effectively considered, and are used in a typical cell-based (synthesis, automatic place and route) flow to produce design layouts that are ready for cost-effective silicon manufacturing.

Journal ArticleDOI
TL;DR: In this research, a conventional design process for a TV glass design has been improved by an axiomatic approach, and a software system is designed for the automation of the design process.
Abstract: The automation concept is being applied to many areas as automation systems in the manufacturing field work more efficiently. Automation of the design process is also very important for reducing overall engineering cost, and can be achieved by an excellent design process and software development. Design axioms have been proposed as a general theoretical framework for all design fields. Application of the design axioms is investigated, and automation is obtained by computer programs. The design process can be analyzed and newly defined to satisfy the axioms. A software system can be designed according to the newly defined design process. In this research, a conventional design process for a TV glass design has been improved by an axiomatic approach, and a software system is designed for the automation of the design process. It is found that the conventional process is coupled, and the coupling causes inefficiencies. A new process is established by the application of the axioms. A software design is conducted based on the new process, and software development is carried out according to the software design. The developed software performs well in real design work.

Dissertation
01 May 2001
TL;DR: This work demonstrates how multiple sources of information can be combined for feature detection in strokes and applies this technique using two approaches to signal processing, one using simple average based thresholding and a second using scale space.
Abstract: Freehand sketching is both a natural and crucial part of design, yet is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for feature detection in strokes and apply this technique using two approaches to signal processing, one using simple average based thresholding and a second using scale space. Thesis Supervisor: Randall Davis, Department of Electrical Engineering and Computer Science.
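The "simple average based thresholding" approach the abstract mentions can be sketched as follows. This is a hedged illustration, not the thesis code; the threshold factor is an assumed parameter. The idea: mark a stroke point as a corner candidate when the turning angle there exceeds a multiple of the stroke's mean turning angle.

```python
import math

# Illustrative sketch of average-based thresholding for feature-point
# (corner) detection in a digitized pen stroke. The factor of 2.0 is an
# assumption, not a value taken from the thesis.

def turning_angle(p0, p1, p2):
    """Unsigned change of direction at p1, in [0, pi]."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def corner_indices(points, factor=2.0):
    """Indices of interior points whose turning angle exceeds
    factor * (mean turning angle over the stroke)."""
    angles = [turning_angle(points[i - 1], points[i], points[i + 1])
              for i in range(1, len(points) - 1)]
    mean = sum(angles) / len(angles)
    return [i + 1 for i, a in enumerate(angles) if a > factor * mean]

# An L-shaped stroke: the 90-degree bend at (1, 0) should be detected.
pts = [(0, 0), (0.5, 0), (1, 0), (1, 0.5), (1, 1)]
```

The thesis combines this kind of signal with others (e.g. pen speed) and with a scale-space variant; the sketch above shows only the single-signal thresholding step.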

Book ChapterDOI
01 Nov 2001
TL;DR: OBDDs are the state-of-the-art data structure for representing switching functions in various branches of electronic design automation.
Abstract: Ordered Binary Decision Diagrams (OBDDs) play a key role in the automated synthesis and formal verification of digital systems. They are the state-of-the-art data structure for representing switching functions in various branches of electronic design automation. In the following we discuss the properties of this data structure, characterize its algorithmic behavior, and describe some prominent applications.
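As an illustration of the data structure this chapter discusses, a toy reduced OBDD can be built by Shannon expansion with the two standard reduction rules: merging isomorphic nodes via a unique table and eliminating redundant tests. This is a sketch for intuition, not how production OBDD packages are implemented (they build diagrams bottom-up with apply operations, not by enumerating assignments):

```python
# Toy reduced OBDD construction by Shannon expansion over a fixed
# variable order x0 < x1 < ... < x(n-1). Node ids 0 and 1 are the
# terminal nodes; internal nodes are hash-consed via a unique table.

def build_bdd(f, n):
    """f: tuple of n bits -> bool. Returns (root_id, unique_table),
    where unique_table maps (var, low_child, high_child) -> node id."""
    unique = {}        # merging rule: identical (var, lo, hi) share a node
    counter = [2]      # next free node id (0/1 reserved for terminals)

    def mk(var, lo, hi):
        if lo == hi:   # elimination rule: skip redundant tests
            return lo
        key = (var, lo, hi)
        if key not in unique:
            unique[key] = counter[0]
            counter[0] += 1
        return unique[key]

    def rec(var, bits):
        if var == n:
            return int(f(bits))
        lo = rec(var + 1, bits + (0,))   # cofactor with x_var = 0
        hi = rec(var + 1, bits + (1,))   # cofactor with x_var = 1
        return mk(var, lo, hi)

    return rec(0, ()), unique

# 3-input XOR: its reduced OBDD has 2n - 1 = 5 internal nodes.
root, table = build_bdd(lambda b: (b[0] ^ b[1] ^ b[2]) == 1, 3)
```

The enumeration is exponential in n, so this only works for small examples, but the resulting node counts match the well-known OBDD sizes (e.g. 2n − 1 internal nodes for n-input parity under any order).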

Patent
28 Aug 2001
TL;DR: In this paper, a method is presented for design validation of complex ICs using a combination of electronic design automation (EDA) tools and a design test station at high speed and low cost.
Abstract: A method for design validation of complex ICs uses a combination of electronic design automation (EDA) tools and a design test station at high speed and low cost. The EDA tools and device simulator are linked to the event-based test system to execute the original design simulation vectors and testbench, and to make modifications in the testbench and event-based test vectors until satisfactory results are obtained. The event-based test vectors are test vectors in an event format, in which an event is any change in a signal described by its timing, and the event-based test system is a test system for testing an IC by utilizing the event-based test vectors. Because the EDA tools are linked with the event-based test system, these modifications are captured to generate a final testbench that provides satisfactory results.

Journal ArticleDOI
TL;DR: The distinct requirements of embedded computing, coupled with emerging technologies, will stimulate system and processor specialization, customization and computer architecture automation.
Abstract: The distinct requirements of embedded computing, coupled with emerging technologies, will stimulate system and processor specialization, customization and computer architecture automation.

Patent
05 Sep 2001
TL;DR: In this article, a method correlates a timing target for electronic design automation (EDA) design tools by comparing slack distributions, which can include calculating and comparing autocorrelation functions of slack distributions.
Abstract: A method correlates a timing target for electronic design automation (EDA) design tools by comparing slack distributions. A method of designing an integrated circuit can include designing an integrated circuit by RTL synthesis with embedded timing analysis and optimization, and placement of cells with embedded timing analysis and optimization. The method can also include designing an integrated circuit by routing with embedded timing analysis and optimization; performing reference timing analysis; and performing reference timing analysis and embedded timing analysis using a parasitic estimation model. The method can also include comparing at least two slack distributions resulting from timing analyses. The method can include calculating and comparing autocorrelation functions of slack distributions. The method can include calculating intercorrelation functions of slack distributions. An embodiment teaches an integrated circuit designed by the method taught. Another embodiment teaches a computer program product according to the method taught.

Proceedings ArticleDOI
30 Apr 2001
TL;DR: This work has developed and employed a full three-dimensional-motion-characterization system for MEMS to observe the response of a gimballed microactuator, a multi-degree-of-freedom microdevice.
Abstract: Advanced testing methods for the dynamics of microdevices are necessary to develop reliable marketable microelectromechanical systems (MEMS). The main purpose for MEMS testing is to provide feedback to the design-and-simulation process in an engineering development effort. This feedback should include device behavior, system parameters, and material properties. An essential part of more effective microdevice development is high-speed visualization of the dynamics of MEMS structures. We have developed and employed a full three-dimensional-motion-characterization system for MEMS to observe the response of a gimballed microactuator, a multi-degree-of-freedom microdevice.

Proceedings ArticleDOI
William N. N. Hung1, Xiaoyu Song
23 Sep 2001
TL;DR: This work is the first successful application of the scatter search approach in the design automation area, and the approach can be applied to many design automation applications.
Abstract: Reduced ordered binary decision diagrams (BDDs) are a data structure for the representation and manipulation of Boolean functions, frequently used in VLSI design automation. The variable ordering largely influences the size of the BDD, which varies from linear to exponential in the number of variables. In this paper we study the BDD minimization problem based on scatter search optimization. The results we obtained are very encouraging in comparison with other heuristics (genetic algorithms and simulated annealing). This work is the first successful application of the scatter search approach in the design automation area. The approach can be applied to many design automation applications.
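A skeleton of the scatter-search loop over variable orderings might look like the following. The cost function is a hypothetical placeholder — in the paper's setting it would be the size of the BDD built under the given order — and the combination operator shown is one common order-preserving choice, not necessarily the authors':

```python
import itertools
import random

# Skeleton of scatter search over permutations (variable orderings).
# cost() is a placeholder; for BDD minimization it would return the
# node count of the BDD built under the given variable order.

def cost(order):
    # Hypothetical stand-in: penalize variables far from their index.
    return sum(abs(pos - var) for pos, var in enumerate(order))

def combine(a, b):
    """Order-preserving crossover: keep a prefix of a, fill from b."""
    cut = len(a) // 2
    head = list(a[:cut])
    return tuple(head + [v for v in b if v not in head])

def scatter_search(n_vars, ref_size=5, iters=50, seed=0):
    rng = random.Random(seed)
    base = list(range(n_vars))
    # Reference set: a small pool of candidate orderings.
    refset = [tuple(rng.sample(base, n_vars)) for _ in range(ref_size)]
    best = min(refset, key=cost)
    for _ in range(iters):
        # Combine every pair in the reference set; keep improvements.
        for a, b in itertools.combinations(refset, 2):
            child = combine(a, b)
            if cost(child) < cost(best):
                best = child
        refset = sorted(set(refset) | {best}, key=cost)[:ref_size]
    return best
```

A full scatter-search implementation would also maintain diversity in the reference set and apply a local improvement step to each child; the sketch keeps only the combine-and-update core.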