
Showing papers on "Electronic design automation published in 2003"


Journal ArticleDOI
TL;DR: Based on a metamodel with formal semantics that developers can use to capture designs, Metropolis provides an environment for complex electronic-system design that supports simulation, formal analysis, and synthesis.
Abstract: Today, the design chain lacks adequate support, with most system-level designers using a collection of unlinked tools. The implementation then proceeds with informal techniques involving numerous human-language interactions that create unnecessary and unwanted iterations among groups of designers in different companies or different divisions. The move toward programmable platforms shifts the design implementation task toward embedded software design. When embedded software reaches the complexity typical of today's designs, the risk that the software will not function correctly increases exponentially. The Metropolis project seeks to develop a unified framework that can cope with this challenge. Based on a metamodel with formal semantics that developers can use to capture designs, Metropolis provides an environment for complex electronic-system design that supports simulation, formal analysis, and synthesis.

549 citations


Journal ArticleDOI
Wayne Wolf1
TL;DR: The term hardware/software codesign, coined about 10 years ago, describes a confluence of problems in integrated circuit design; multiple disciplines inform it, including computer architecture, which tells us about the performance and energy consumption of single CPUs and multiprocessors.
Abstract: The term hardware/software codesign, coined about 10 years ago, describes a confluence of problems in integrated circuit design. By the 1990s, it became clear that microprocessor-based systems would be an important design discipline for IC designers as well. Large 16- and 32-bit microprocessors had already been used in board-level designs, and Moore's law ensured that chips would soon be large enough to include both a CPU and other subsystems. Multiple disciplines inform hardware/software codesign. Computer architecture tells us about the performance and energy consumption of single CPUs and multiprocessors. Real-time system theory helps analyze the deadline-driven performance of embedded systems. Computer-aided design assists hardware cost evaluation and design space exploration.

271 citations


Journal ArticleDOI
TL;DR: An overview of methods to automatically generate posynomial response surface models for the performance characteristics of analog integrated circuits based on numerical simulation data, capable of generating posynomial performance expressions for both linear and nonlinear circuits and circuit characteristics, at SPICE-level accuracy.
Abstract: This paper presents an overview of methods to automatically generate posynomial response surface models for the performance characteristics of analog integrated circuits based on numerical simulation data. The methods are capable of generating posynomial performance expressions for both linear and nonlinear circuits and circuit characteristics, at SPICE-level accuracy. This approach allows for the automatic generation of an accurate sizing model for a circuit, from which a geometric program that fully describes the analog circuit sizing problem can be composed. The automatic generation avoids the time-consuming and approximate nature of handcrafted analytic model generation. The methods are based on techniques from design of experiments and response surface modeling. Attention is paid to estimating the relative "goodness-of-fit" of the generated models. Experimental results illustrate the capabilities and effectiveness of the presented methods.
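As a rough illustration of the posynomial-fitting idea (a minimal sketch under assumed data, not the authors' method), the snippet below fits a single monomial term c·x1^a1·x2^a2·x3^a3 to simulated samples by linear least squares in log space and reports a simple goodness-of-fit measure; all variable names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical simulation samples: rows are design points (e.g., transistor
# widths, bias current), y is a measured performance (e.g., gain-bandwidth).
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(50, 3))                 # design variables, all > 0
y = 2.5 * X[:, 0]**1.2 * X[:, 1]**-0.7 * X[:, 2]**0.3    # synthetic "simulator"
y *= rng.lognormal(sigma=0.02, size=50)                   # small multiplicative noise

# Fit a monomial y ~ c * x1^a1 * x2^a2 * x3^a3 by least squares in log space:
# log y = log c + a1*log x1 + a2*log x2 + a3*log x3
A = np.column_stack([np.ones(len(y)), np.log(X)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
c, exponents = np.exp(coef[0]), coef[1:]
print("c =", c, "exponents =", exponents)

# Goodness of fit (relative error), echoing the paper's emphasis on
# estimating model quality before using it in a geometric program.
y_hat = c * np.prod(X**exponents, axis=1)
print("max relative error:", np.max(np.abs(y_hat - y) / y))
```

A full posynomial model would sum several such monomial terms; the same log-space idea extends to that case with more elaborate fitting.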

171 citations


Proceedings ArticleDOI
02 Jun 2003
TL;DR: The vision of some of the key changes that will emerge in the design of complex Systems-on-a-Chip for nanometer-scale semiconductor technologies, and of their impact on design automation requirements, is presented from the perspective of a broad-range SoC supplier.
Abstract: In this paper, we analyze the emerging trends in the design of complex Systems-on-a-Chip for nanometer-scale semiconductor technologies and their impact on design automation requirements, from the perspective of a broad range SoC supplier. We present our vision of some of the key changes that will emerge in the next five years. This vision is characterized by two major paradigm changes. The first is that SoC design will become divided into four mostly non-overlapping distinct abstraction levels. Very different competences and design automation tools will be needed at each level. The second paradigm change is the emergence of domain-specific S/W programmable SoC platforms consisting of large, heterogeneous sets of embedded processors. These will be complemented by embedded reconfigurable hardware and networks-on-chip. A key enabler for the effective use of these flexible SoC platforms is a high-level parallel programming model supporting automatic specification-to-platform mapping.

150 citations


Journal ArticleDOI
TL;DR: A new method is described which gives the designer access to the design space boundaries of a circuit topology, all with transistor-level accuracy; using multiobjective genetic optimization, the hypersurface of Pareto-optimal design points is calculated.
Abstract: A new method is described which gives the designer access to the design space boundaries of a circuit topology, all with transistor-level accuracy. Using multiobjective genetic optimization, the hypersurface of Pareto-optimal design points is calculated. Tradeoff analysis of competing performances at the design space boundaries is made possible by the application of multivariate regression techniques. This new methodology is illustrated with the presentation of the design space for two different types of circuits: a Miller-compensated operational transconductance amplifier and an LC-tank voltage-controlled oscillator.
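The hypersurface of Pareto-optimal points rests on a dominance test; as a minimal sketch (not the authors' genetic algorithm), the following function keeps only non-dominated candidates for two competing performances, assuming both objectives are minimized. The candidate numbers are made up.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows, assuming all objectives are minimized."""
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # A point is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominated = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical sized-circuit candidates: columns are (power [mW], -GBW [MHz]),
# with gain-bandwidth negated so that both columns are minimized.
candidates = np.array([[1.0, -100.0], [1.5, -180.0], [2.0, -150.0], [0.8, -60.0]])
print(pareto_front(candidates))   # [0 1 3]: the third point is dominated by the second
```

Objectives to be maximized are handled by negating them, as done for the gain-bandwidth column above.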

136 citations


Journal ArticleDOI
TL;DR: It is hypothesized that design space exploration for network processors should be separated into multiple stages, each having a different level of abstraction, and that it would be appropriate to use analytical evaluation frameworks during the initial stages, resorting to simulation techniques only when a relatively small set of potential architectures has been identified.

115 citations


Journal ArticleDOI
TL;DR: In this paper, interfacing knowledge-oriented tools and CAD applications is identified as a technical gap for intelligent product development, and the concept of the associative feature is introduced.

112 citations


Journal ArticleDOI
01 May 2003
TL;DR: By identifying the human operator's mental models and using them as templates for automating different tasks, the authors experimentally support the hypothesis that this model-based approach facilitates natural and safe interaction between the human operator and automation.
Abstract: Engineers, business managers, and governments are increasingly aware of the importance and difficulty of integrating technology and humans. The presence of technology can enhance human comfort, efficiency, and safety, but the absence of human-factors analysis can lead to uncomfortable, inefficient, and unsafe systems. Systematic human-centered design requires a basic understanding of how humans generate and manage tasks. A very useful model of human behavior generation can be obtained by recognizing the task-specific role of mental models in not only guiding execution of skills but also managing initiation and termination of these skills. By identifying the human operator's mental models and using them as templates for automating different tasks, we experimentally support the hypothesis that natural and safe interaction between human operator and automation is facilitated by this model-based human-centered approach. The design of adaptive cruise control (ACC) systems is used as a case study in the design of model-based task automation systems. Such designs include identifying ecologically appropriate perceptual states, identifying perceptual triggering events for managing transitions between skilled behaviors, and coordinating the actions of automation and operator.
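As a toy sketch of the idea of perceptual triggering events managing transitions between skilled behaviors (not the authors' identified mental models), the snippet below encodes an ACC-like controller as a small state machine; the modes, thresholds, and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    lead_present: bool      # is there a vehicle ahead in the lane?
    time_headway_s: float   # distance to lead vehicle divided by own speed

def next_mode(mode: str, p: Percept,
              follow_threshold: float = 2.5,
              brake_threshold: float = 1.0) -> str:
    """Transition between skilled behaviors on perceptual triggering events.

    Hypothetical modes: 'cruise' holds the set speed, 'follow' regulates
    headway to the lead vehicle, 'brake' sheds speed when headway is short.
    """
    if not p.lead_present:
        return "cruise"
    if p.time_headway_s < brake_threshold:
        return "brake"
    if p.time_headway_s < follow_threshold:
        return "follow"
    return "cruise"

# Example: a lead vehicle cutting in with shrinking headway.
mode = "cruise"
for headway in (4.0, 2.2, 0.8):
    mode = next_mode(mode, Percept(lead_present=True, time_headway_s=headway))
    print(headway, "->", mode)
```

Making each transition depend on an explicitly named perceptual state is what lets such automation be matched against the operator's own mental model of the task.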

106 citations


Proceedings ArticleDOI
01 Jun 2003
TL;DR: A standard-cell library for MOSIS scalable CMOS rules has been developed, intended for use with Synopsys Design Compiler, Cadence Silicon Ensemble, and Cadence Virtuoso or Magic, targeted at the AMI 0.5 µm process.
Abstract: A standard-cell library for MOSIS scalable CMOS rules has been developed. It is intended for use with Synopsys Design Compiler, Cadence Silicon Ensemble, and Cadence Virtuoso or Magic. The library is targeted at the AMI 0.5 µm process, which currently offers the smallest feature size in the MOSIS educational program. The library also includes I/O pad cells, and a padframe can be fully placed and routed if desired. All steps in the design flow are fully automated with only three scripts and have been tested successfully in a large VLSI design class at the Illinois Institute of Technology. Customizing and running these three scripts for a given design typically takes less than five minutes, since all details are transparent to the students, allowing them to focus on the design instead of worrying about the tools.

104 citations


Book
01 Jan 2003
TL;DR: A textbook on deep submicron digital IC design, covering MOS devices, fabrication and layout, logic and memory circuit design, interconnect, and power grid and clock design, with appendices on SPICE and bipolar transistors and circuits.
Abstract: Contents: 1. Deep Submicron Digital IC Design; 2. MOS Transistors; 3. Fabrication, Layout and Simulation; 4. MOS Inverter Circuits; 5. Static MOS Gate Circuits; 6. High-Speed CMOS Logic Design; 7. Transfer Gate and Dynamic Logic Design; 8. Semiconductor Memory Design; 9. Additional Topics in Memory Design; 10. Interconnect Design; 11. Power Grid and Clock Design; Appendix A: A Brief Introduction to SPICE; Appendix B: Bipolar Transistors and Circuits.

98 citations


Journal ArticleDOI
TL;DR: Research directions at various levels of design abstraction for handling the interconnect challenges are described, including new analytical methods for interconnect, physical design techniques, and ways to face these challenges earlier, at higher levels of the design process.
Abstract: The migration to ultra deep submicron (UDSM) processes, 0.25 µm or below, necessitates new design methodologies and EDA tools to address the new design challenges. One of the main challenges is noise. All the different types of deep submicron noise, such as crosstalk, leakage, supply noise and process variations, are obstacles in the way of achieving the desired level of noise immunity without giving up the improvements achieved in performance and energy efficiency. This article describes research directions at various levels of design abstraction for handling the interconnect challenges. These directions include new analytical methods for interconnects, physical design techniques, and ways to face these challenges earlier, at higher levels of the design process.

Proceedings ArticleDOI
Raghavan Kumar1
02 Jun 2003
TL;DR: This paper describes the key challenges, design methods, CAD and learnings in the area of interconnect and noise immunity design for the Intel Pentium 4 processor, and describes a proprietary noise simulator and noise robust cell library that were critical to noise robustness.
Abstract: This paper describes the key challenges, design methods, CAD and learnings in the area of interconnect and noise immunity design for the Intel Pentium® 4 processor. This high-frequency (currently at 3 GHz with a 6 GHz execution core) design required aggressive domino, pulsed and other novel high-speed circuit families that are very noise sensitive. Controlling interconnect delay, capacitive and inductive coupling is of paramount importance at such high frequencies and edge rates, made more difficult by the die cost pressures of a high-volume chip. We first describe our wire/repeater design methods and silicon results. We then describe a proprietary noise simulator (NoisePad) and our noise-robust cell library, both of which were critical to noise robustness. Finally, our test chip results and use of a distributed power grid to manage inductance are described.

01 Jan 2003
TL;DR: It is shown that this class of generative representations improves the scalability of evolutionary design systems by automatically learning inductive bias of the design problem thereby capturing design dependencies and better enabling search of large design spaces.
Abstract: Generative Representations for Evolutionary Design Automation. A dissertation presented to the Faculty of the Graduate School of Arts and Sciences of Brandeis University, Waltham, Massachusetts, by Gregory S. Hornby. In this thesis the class of generative representations is defined and it is shown that this class of representations improves the scalability of evolutionary design systems by automatically learning inductive bias of the design problem thereby capturing design dependencies and better enabling search of large design spaces. First, properties of representations are identified as: combination, control-flow, and abstraction. Using these properties, representations are classified as non-generative, or generative. Whereas non-generative representations use elements of encoded artifacts at most once in translation from encoding to actual artifact, generative representations have the ability to reuse parts of the data structure for encoding artifacts through control-flow (using iteration) and/or abstraction (using labeled procedures). Unlike non-generative representations, which do not scale with design complexity because they cannot capture design dependencies in their structure, it is argued that evolution with generative representations can better scale with design complexity because of their ability to hierarchically create assemblies of modules for reuse, thereby enabling better search of large design spaces. Second, GENRE, an evolutionary design system using a generative representation, is described. Using this system, a non-generative and a generative representation are compared on four classes of designs: three-dimensional static structures constructed from voxels; neural networks; actuated robots controlled by oscillator networks; and neural network controlled robots. Results from evolving designs in these substrates show that the evolutionary design system is capable of finding solutions of higher fitness with the generative representation than with the non-generative representation. This improved performance is shown to be a result of the generative representation's ability to
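To make the generative/non-generative distinction concrete, here is a toy sketch (not GENRE): an encoding that reuses a labeled procedure through iteration expands a handful of tokens into a much larger artifact, whereas a non-generative encoding would have to list every element explicitly. The token syntax is invented for illustration.

```python
# A toy generative encoding: named procedures (abstraction) plus repetition
# (control flow) expand into a flat list of build actions. A non-generative
# encoding would store the expanded action list directly, using each element
# at most once.
genome = {
    "procedures": {
        "beam": ["place_voxel", "place_voxel", "place_voxel"],
        "column": ["rotate_up", "call beam"],
    },
    "body": ["repeat 4 call column"],   # iteration reuses 'column', which reuses 'beam'
}

def expand(tokens, procedures):
    actions = []
    for tok in tokens:
        if tok.startswith("repeat"):
            _, count, rest = tok.split(maxsplit=2)
            actions += expand([rest], procedures) * int(count)
        elif tok.startswith("call"):
            actions += expand(procedures[tok.split()[1]], procedures)
        else:
            actions.append(tok)
    return actions

artifact = expand(genome["body"], genome["procedures"])
print(len(artifact), "actions expanded from a genome of",
      sum(len(v) for v in genome["procedures"].values()) + len(genome["body"]), "tokens")
```

Because a mutation to the reused "beam" procedure changes every place it is called, the encoding captures a design dependency that a flat, element-by-element encoding cannot.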

Proceedings ArticleDOI
09 Apr 2003
TL;DR: An application framework is discussed for developing CCM-based applications beyond just the hardware configuration; it allows dynamic circuit configurations that include data-folding optimizations based on user input, and the resulting system aids in creating applications that are potentially more intuitive, easier to develop, and better performing.
Abstract: FPGA-based (field programmable gate array) configurable computing machines (CCMs) offer powerful and flexible general-purpose computing platforms. However, development for FPGA-based designs using modern CAD (computer aided design) tools is geared mainly toward an ASIC-like process. This is inadequate for the needs of CCM application development. This paper discusses an application framework for developing CCM-based applications beyond just the hardware configuration. This framework leverages the advantages of CCMs (availability, programmability, visibility, and controllability) to help create CCM-based applications throughout the entire development process (i.e. design, debug, and deploy). The framework itself is deployed with the final application, thus permitting dynamic circuit configurations that include data folding optimizations based on user input. The resulting system aids in creating applications that are potentially more intuitive, easier to develop, and better performing. An example application demonstrates the use of the application framework and the potential benefits.

Proceedings ArticleDOI
01 Jan 2003
TL;DR: A data visualization interface that facilitates a design by shopping paradigm, allowing a decision-maker to form a preference by viewing a rich set of good designs and use this preference to choose an optimal design.
Abstract: We have developed a data visualization interface that facilitates a design by shopping paradigm, allowing a decision-maker to form a preference by viewing a rich set of good designs and use this preference to choose an optimal design. Design automation has allowed us to implement this paradigm, since a large number of designs can be synthesized in a short period of time. The interface allows users to visualize complex design spaces by using multi-dimensional visualization techniques that include customizable glyph plots, parallel coordinates, linked views, brushing, and histograms. As is common with data mining tools, the user can specify upper and lower bounds on the design space variables, assign variables to glyph axes and parallel coordinate plots, and dynamically brush variables. Additionally, preference shading for visualizing a user's preference structure and algorithms for visualizing the Pareto frontier have been incorporated into the interface to help shape a decision-maker's preference. Use of the interface is demonstrated using a satellite design example by highlighting different preference structures and resulting Pareto frontiers. The capabilities of the design by shopping interface were driven by real industrial customer needs, and the interface was demonstrated at a spacecraft design conducted by a team at Lockheed Martin, consisting of Mars spacecraft design experts.
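A minimal sketch of the shopping workflow using off-the-shelf Python tooling (pandas/matplotlib), not the authors' interface: synthesize many candidate designs, brush them with upper and lower bounds, and view the survivors on parallel coordinates. The design variables and bounds are hypothetical.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Synthesize a "rich set" of hypothetical satellite designs.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "mass_kg": rng.uniform(200, 800, 300),
    "power_w": rng.uniform(100, 600, 300),
    "cost_m$": rng.uniform(50, 400, 300),
})

# Brushing: keep only designs inside user-specified bounds.
brushed = (df["mass_kg"] < 500) & (df["cost_m$"] < 250)
df["brushed"] = np.where(brushed, "in bounds", "out of bounds")

# Normalize each variable so the parallel axes are comparable, then plot.
cols = ["mass_kg", "power_w", "cost_m$"]
norm = df.copy()
norm[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())
parallel_coordinates(norm, "brushed", cols=cols, alpha=0.3)
plt.title("Brushed candidate designs (hypothetical data)")
plt.show()
```

In the same spirit, the Pareto frontier of the brushed set could be highlighted with a dominance filter like the one sketched earlier in this list.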

Proceedings ArticleDOI
04 Jan 2003
TL;DR: The task graph extraction tool described in this paper reduces the potential for error and the time required to design an embedded system by automating the task graph extraction process, and can drastically improve designer productivity.
Abstract: Consumer demand and improvements in hardware have caused distributed real-time embedded systems to rapidly increase in complexity. As a result, designers faced with time-to-market constraints are forced to rely on intelligent design tools to enable them to keep up with demand. These tools are continually being used earlier in the design process when the design is at higher levels of abstraction. At the highest level of abstraction are hardware/software co-synthesis tools which take a system specification as input. Although many embedded systems are described in C, the system specifications for many of these tools are often in the form of one or more task graphs. These tools are very effective at solving the co-synthesis problem using task graphs but require that designers manually transform the specification from C code to task graphs, a tedious and error-prone job. The task graph extraction tool described in this paper reduces the potential for error and the time required to design an embedded system by automating the task graph extraction process. Such a tool can drastically improve designer productivity. As far as we know, this is the first tool of its kind. It has been made available on the web.
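The paper does not spell out its output format here; as a hedged sketch of what a co-synthesis-style task graph might look like, the snippet below builds a small graph of tasks and data dependencies and derives one legal execution order. The task names, attributes, and example dependencies are assumptions.

```python
from graphlib import TopologicalSorter

# Hypothetical task graph extracted from a C specification: nodes are tasks
# (with per-task attributes a co-synthesis tool might need), edges are data
# dependencies expressed as predecessor sets.
tasks = {
    "read_sensor":  {"wcet_us": 40,  "deadline_us": 1000},
    "filter":       {"wcet_us": 120, "deadline_us": 1000},
    "detect_event": {"wcet_us": 200, "deadline_us": 1000},
    "actuate":      {"wcet_us": 60,  "deadline_us": 1000},
}
deps = {                       # task -> set of tasks it consumes data from
    "filter": {"read_sensor"},
    "detect_event": {"filter"},
    "actuate": {"detect_event"},
}

# A valid task graph must be acyclic; TopologicalSorter raises CycleError
# otherwise, and its static order is one legal execution order for the tasks.
order = list(TopologicalSorter(deps).static_order())
print("one legal order:", order)
print("total WCET on one processing element:",
      sum(tasks[t]["wcet_us"] for t in order), "us")
```

A co-synthesis tool would take such a graph (typically with periods, deadlines, and communication volumes on the edges) and map it onto candidate hardware/software architectures.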

Journal ArticleDOI
TL;DR: The 40th anniversary of the Design Automation Conference is marked with a keynote lecture intended to place in perspective the most relevant research results presented at DAC over the years and to identify trends and challenges for the future of electronic design automation (EDA).
Abstract: The 40th anniversary of the Design Automation Conference is marked with a keynote lecture intended to place in perspective the most relevant research results presented at DAC in all these years and to identify trends and challenges for the future of electronic design automation (EDA). EDA is a unique, wonderful field where research, innovation, and business have come together for many years, as demonstrated by its accomplishments over the past 40 years.

Journal ArticleDOI
TL;DR: The Flux-1 chip is an RSFQ implementation of a small general-purpose processing engine with a target clock frequency of 20 GHz and over 5000 gates connected in an irregular pattern; lessons learned from this effort, mostly concerning chip physical design, are presented.
Abstract: The Flux-1 chip is an RSFQ implementation of a small general-purpose processing engine with target clock frequency of 20 GHz and over 5000 gates (over 60 K Josephson junctions) connected in an irregular pattern. The scale of this design task forced us to re-think conventional RSFQ design methodology and implement new approaches suitable for digital systems of this level of complexity and beyond. This paper presents lessons learned from the Flux-1 effort, mostly concentrating on chip physical design. Here we discuss our approach to the circuit design and verification of individual gates, gate interconnect using passive transmission lines and use of CAD tools for design automation and verification.

Journal ArticleDOI
01 Feb 2003
TL;DR: Relative timing (RT) is introduced as a method for asynchronous design, enabling improvements in performance, area, power, and functional testability of up to a factor of 3× in all three example circuits.
Abstract: Relative timing (RT) is introduced as a method for asynchronous design. Timing requirements of a circuit are made explicit using relative timing. Timing can be directly added, removed, and optimized using this style. RT synthesis and verification are demonstrated on three example circuits, facilitating transformations from speed-independent circuits to burst-mode and pulse-mode circuits. Relative timing enables improved performance, area, power, and functional testability of up to a factor of 3× in all three cases. This method is the foundation of optimized timed circuit designs used in an industrial test chip, and may be formalized and automated.

Journal ArticleDOI
TL;DR: A consistent prototyping environment for carrying a design from first idea to final implementation is presented, and the design of a prototype for a MIMO system with four transmit and four receive antennas, based on the current UMTS FDD downlink standard, is reported.

Journal ArticleDOI
TL;DR: This work introduces local watermarks, an IP protection technique which facilitates watermark detection in many realistic design and adversarial scenarios, while satisfying the demand for low overhead and design transparency.
Abstract: Recently, the electronic design automation industry has adopted the intellectual property (IP) business model as a dominant system-on-chip development platform. Since copyright fraud has been recognized as the most devastating obstruction to this model, a number of techniques for IP protection have been introduced. Most of them rely on a selection of a global solution to a design optimization problem according to a unique user-specific digital signature. Although such techniques provide strong proof of authorship, they fail to provide an effective procedure for watermark detection when a protected core design is augmented into a larger design. To address this fundamental issue, we introduce local watermarks, an IP protection technique which facilitates watermark detection in many realistic design and adversarial scenarios, while satisfying the demand for low overhead and design transparency. We demonstrate the efficiency of the new IP protection paradigm by applying its principles to a set of behavioral synthesis tasks such as operation scheduling and template matching.
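As a rough illustration of the general signature-constrained-optimization idea behind such watermarking (not the paper's local-watermarking scheme), the toy sketch below derives a bit stream from an author signature and uses it to break ties among interchangeable choices in a trivial ordering problem, so the chosen solution encodes the signature.

```python
import hashlib

def signature_bits(author: str):
    """Infinite stream of pseudo-random bits derived from the author's signature."""
    counter = 0
    while True:
        digest = hashlib.sha256(f"{author}:{counter}".encode()).digest()
        for byte in digest:
            for k in range(8):
                yield (byte >> k) & 1
        counter += 1

def watermarked_order(ops, author):
    """Toy scheduler: whenever two ready operations are interchangeable, a
    signature bit picks one, so the final order carries the watermark while
    remaining an ordinary, valid solution."""
    bits = signature_bits(author)
    ready, order = sorted(ops), []
    while ready:
        i = next(bits) if len(ready) >= 2 else 0   # choose between the two front candidates
        order.append(ready.pop(i))
    return order

print(watermarked_order(["add1", "add2", "mul1", "mul2", "sub1"],
                        author="alice@example.com"))
```

A real scheme constrains choices in behavioral synthesis tasks such as operation scheduling or template matching, and the proof of authorship comes from how improbable the signature-consistent choices are by chance.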

Proceedings ArticleDOI
03 Mar 2003
TL;DR: This paper shows that it is a better EDA system architecture to implement reflection/introspection at a meta-layer in a design framework, and that there are relatively unexplored territories of design automation, such as behavioral typing of component interfaces, corresponding type theory, and their implication in automating component composition, interface synthesis, and validation, which can be better incorporated if the introspection is implemented at a meta-layer.
Abstract: Reflection and automated introspection of a design in system level design frameworks are seen as necessities for the CAD tools to manipulate the designs within the tools. These features are also useful for debuggers, class and object browsers, design analyzers, composition validation, type checking, compatibility checking, etc. However, the central question is whether such features should be integrated into the language, or if we should build frameworks which feature these capabilities in a meta-layer, leaving the system-level language intact. In our recent interactions with designers, we have found differing opinions. Especially in the context of SystemC, the temptation to integrate reflective APIs into the language is great, because C++ is expressive, and already has type introspective packages available. In this paper, we analyze this issue and show that (i) it is a better EDA system architecture to implement reflection/introspection at a meta-layer in a design framework (ii) there are relatively unexplored territories of design automation, such as behavioral typing of component interfaces, corresponding type-theory, and their implication in automating component composition, interface synthesis, and validation, which can be better incorporated if the introspection is implemented at a meta-layer.

Journal ArticleDOI
TL;DR: This paper presents the detailed object definition, design, and implementation of a Standard Component Library within a mould design software package, QuickMould, whose advantages are a compressed data structure, ease of use, and simple customisation.
Abstract: Very often, outsourced components (known as standard components) are used for reducing design effort and product cost in mould design. They are usually manufactured by specialised suppliers. 3D CAD parametric models of these components will significantly reduce mould design lead-time and cost, and enhance design flexibility. This paper presents the detailed object definition, design, and implementation of a Standard Component Library within a mould design software package, QuickMould. With many components from different suppliers implemented, it is believed that the object design has a generic nature and can be expanded to include most mechanical components in a collaborative environment. The advantages of this implementation are its compressed data structure, ease of use, and simple customisation.

Patent
11 Apr 2003
TL;DR: A manufacturing process for LSIs that uses an event tester to avoid prototype hold is described; it includes designing an LSI in an EDA (electronic design automation) environment to produce design data, and performing logic simulation on a device model of the design with a testbench to produce an event-format test vector file as a result of the logic simulation.
Abstract: A manufacturing process for LSIs uses an event tester to avoid prototype hold. The LSI manufacturing method includes the steps of: designing an LSI under an EDA (electronic design automation) environment to produce design data of a designed LSI; performing logic simulation on a device model of the LSI design in the EDA environment with use of a testbench and producing a test vector file of an event format as a result of the logic simulation; verifying simulation data files with use of the design data and the testbench by operating an event tester simulator; producing a prototype LSI through a fabrication provider by using the design data; and testing the prototype LSI with an event tester by using the test vector file and the simulation data files and feeding back test results to the EDA environment or the fabrication provider.

Patent
05 May 2003
TL;DR: In this article, a machine implemented, design automation method that assists a designer in the understanding of a software and/or hardware source code specification by transforming the source code into a simplified specification called a program slice is presented.
Abstract: The present invention is a machine-implemented design automation method that assists a designer in the understanding of a software and/or hardware source code specification by transforming the source code into a simplified specification called a program slice. The present invention extends graph-based program slicing to the hardware-software interface that is commonly found in embedded systems. In addition to the known benefits of program slicing applied to pure software or pure hardware, the present invention aids a designer in understanding the complex interaction between software procedures and hardware processing elements in the context of a codesign methodology.
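Program slicing itself is a standard technique; the following minimal backward-slice sketch (not the patented method) collects every statement that the slicing criterion transitively depends on, over a hypothetical hand-built dependence graph spanning a small hardware/software fragment.

```python
from collections import deque

# Hypothetical program dependence graph: statement -> statements it depends
# on (data or control dependences), for a small HW/SW interface fragment.
depends_on = {
    "s1: x = read_reg(STATUS)": [],
    "s2: y = x & MASK":          ["s1: x = read_reg(STATUS)"],
    "s3: z = compute(y)":        ["s2: y = x & MASK"],
    "s4: log(x)":                ["s1: x = read_reg(STATUS)"],
    "s5: write_reg(CTRL, z)":    ["s3: z = compute(y)"],
}

def backward_slice(criterion, graph):
    """All statements that can affect the slicing criterion."""
    seen, work = {criterion}, deque([criterion])
    while work:
        for dep in graph[work.popleft()]:
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

# Slice on the hardware register write: the unrelated log statement (s4)
# falls out of the simplified specification.
for stmt in sorted(backward_slice("s5: write_reg(CTRL, z)", depends_on)):
    print(stmt)
```

The patent's contribution, per the abstract, is extending this kind of dependence-graph slicing across the hardware-software boundary rather than the slicing algorithm itself.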

Journal ArticleDOI
TL;DR: The design issues and tradeoffs of a high-speed, high-accuracy Nyquist-rate analog-to-digital (A/D) converter are described, covering the complete design flow from specifications to verified layout.
Abstract: The design issues and tradeoffs of a high-speed high-accuracy Nyquist-rate analog-to-digital (A/D) converter are described. The presented design methodology covers the complete flow from specifications to verified layout and is supported by both commercial and internally developed computer-aided design tools. The major decisions to be made during the converter's design at both the architectural and the circuit level are described and the tradeoffs are elaborated. The approach is demonstrated for a real-life test case, where a Nyquist-rate 8-bit 200-MS/s 4-2 interpolating/averaging A/D converter was developed in a 0.35-µm CMOS technology. The signal-to-noise-plus-distortion ratio at 40 MHz is 42.7 dB and the total power consumption is 655 mW.
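A quick worked calculation (not taken from the paper) relates the reported 42.7 dB signal-to-noise-plus-distortion ratio to effective resolution using the standard ENOB formula:

```python
# Effective number of bits from the reported signal-to-noise-plus-distortion
# ratio, using the standard formula ENOB = (SNDR - 1.76) / 6.02.
sndr_db = 42.7                          # reported at a 40 MHz input
enob = (sndr_db - 1.76) / 6.02
print(f"ENOB at 40 MHz: {enob:.2f} bits")   # about 6.8 of the nominal 8 bits
```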

Proceedings ArticleDOI
09 Nov 2003
TL;DR: The loss/benefit in quality of results due to hierarchical approaches is investigated, and some of the design automation problem formulations and solutions needed for FPGAs are compared with known standard-cell ASIC approaches.
Abstract: The recent past has seen a tremendous increase in the size of design circuits that can be implemented in a single FPGA. These large design sizes significantly impact cycle time due to design automation software runtimes and an increased number of performance-based iterations. New FPGA physical design approaches need to be utilized to alleviate some of these problems. Hierarchical approaches to divide and conquer the design, early estimation tools for design exploration, and physical optimizations are some of the key methodologies that have to be introduced in the FPGA physical design tools. This paper will investigate the loss/benefit in quality of results due to hierarchical approaches and compare and contrast some of the design automation problem formulations and solutions needed for FPGAs versus known standard-cell ASIC approaches.

Proceedings ArticleDOI
09 Nov 2003
TL;DR: A formal probabilistic approach to optimization in design automation is presented in which wirelengths are captured as probability distributions and the optimization criterion is probabilistic too; many of the resulting problems are proved NP-complete, and although this work considers wirelength prediction inaccuracies, the probabilistic strategy could be extended trivially to consider fabrication variability in wire parasitics.
Abstract: This work presents a formal probabilistic approach for solving optimization problems in design automation. Prediction accuracy is very low, especially at high levels of the design flow. This can be attributed mainly to unawareness of low-level layout information and variability in the fabrication process. Hence a traditional deterministic design automation approach, where each cost function is represented as a fixed value, becomes obsolete. A new approach is gaining attention in which the cost functions are represented as probability distributions and the optimization criterion is probabilistic too. This design optimization philosophy is demonstrated through the classic buffer insertion problem. Formally, we capture wirelengths as probability distributions (as compared to the traditional approach, which considers wirelengths as fixed values) and present several strategies for optimizing the probabilistic criteria. During the course of this work many problems are proved to be NP-complete. Comparisons are made with the van Ginneken "optimal under fixed wirelength" algorithm. Results show that the van Ginneken approach generated delay distributions at the root of the fanout wiring tree which had a large probability (0.91 in the worst case and 0.55 on average) of violating the delay constraint. Our algorithms could achieve 100% probability of satisfying the delay constraint with a similar buffer penalty. Although this work considers wirelength prediction inaccuracies, our probabilistic strategy could be extended trivially to consider fabrication variability in wire parasitics.
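To make the probabilistic criterion concrete, here is a small Monte Carlo sketch (not the paper's algorithms): treat wirelength as a distribution, push it through a simple Elmore-style delay model for an unbuffered versus a buffered wire, and estimate the probability that each option meets a delay constraint. All technology constants and distributions are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Wirelength as a probability distribution (mm), reflecting prediction
# uncertainty at early design stages; every constant here is hypothetical.
L = rng.normal(loc=4.0, scale=1.0, size=100_000).clip(min=0.1)

r, c = 100.0, 0.2                 # wire resistance (ohm/mm) and capacitance (pF/mm)
t_buf, constraint = 30.0, 250.0   # buffer delay and timing constraint (ps)

def elmore_unbuffered(length):
    return 0.5 * r * c * length**2             # delay grows quadratically with wirelength

def elmore_one_buffer(length):
    # One buffer at the midpoint: two half-length segments plus the buffer delay.
    return 2 * 0.5 * r * c * (length / 2)**2 + t_buf

for name, delay in [("no buffer", elmore_unbuffered(L)),
                    ("one buffer", elmore_one_buffer(L))]:
    p = np.mean(delay <= constraint)
    print(f"{name}: P(delay <= {constraint} ps) = {p:.2f}")
```

A probabilistic buffer-insertion criterion of the kind the paper describes would choose the option whose delay distribution meets the constraint with the required probability, rather than the one that is best for a single assumed wirelength.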

Book
01 Jan 2003
TL;DR: The Electronic Design Automation Handbook, as discussed by the authors, describes tools and techniques for high-performance ASIC design, as well as the best practices for creating reusable designs in an SoC design methodology.
Abstract: The Electronic Design Automation Handbook carefully details design tools and techniques for high-performance ASIC design. It shows the best practices for creating reusable designs in an SoC design methodology. The Electronic Design Automation Handbook was developed by colleagues from the Universities of Applied Sciences, Germany, who are engaged in the design of integrated electronics in education and research and who form the MPC Group of the Universities of Applied Sciences of Baden-Württemberg, Germany. MPC works as a network of partners to industry and is able, due to the widely varying experiences of the institutes involved, to cover the entire range of modern-day circuit design. Each year more than 600 students are educated in the laboratories of MPC members. Our personal experience from student and industry projects ensures authenticity. The practical and theoretical experience from our projects has been used as the basis of this handbook.

Journal ArticleDOI
TL;DR: The technique of Case-Informed Reasoning has been developed and applied to the task of fluid power circuit design, a configuration design task, and offers a practical approach to conceptual design automation in domains where design knowledge is lacking.
Abstract: Conceptual design is considered to be the most difficult phase of engineering design, with success dependent to a great extent on the expertise of the designer. Automation of some aspects of this phase would be of immense practical benefit. It is suggested that the generation of design solutions can be brought about through the application of heuristic knowledge. However, this sort of knowledge is in short supply and has proven difficult to acquire for computer systems. Nevertheless, actual examples of designers' work are more readily available, and the heuristics applied by designers may be considered to be implicit in these examples. The technique of Case-Informed Reasoning has been developed to try to exploit this potential source of knowledge, and applied to the task of fluid power circuit design, a configuration design task. This technique offers a practical approach to conceptual design automation in domains where design knowledge is lacking.
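As a rough sketch of the case-based idea behind Case-Informed Reasoning (not the authors' exact technique), the snippet below retrieves the nearest stored fluid power circuit case for a new set of requirements; the case base, features, and solutions are hypothetical.

```python
import numpy as np

# Hypothetical case base: past fluid power designs described by requirement
# features (flow l/min, pressure bar, number of actuators) and their solutions.
cases = [
    {"features": [20.0, 150.0, 1], "solution": "single pump, one cylinder, relief valve"},
    {"features": [60.0, 210.0, 2], "solution": "pump + accumulator, two cylinders, sequence valve"},
    {"features": [10.0, 80.0, 1],  "solution": "small gear pump, single-acting cylinder"},
]

def retrieve(requirements, case_base):
    """Return the stored case closest to the new requirements (range-normalized L2)."""
    X = np.array([c["features"] for c in case_base], dtype=float)
    q = np.asarray(requirements, dtype=float)
    scale = X.max(axis=0) - X.min(axis=0)
    scale[scale == 0] = 1.0
    dists = np.linalg.norm((X - q) / scale, axis=1)
    return case_base[int(np.argmin(dists))]

print(retrieve([55.0, 200.0, 2], cases)["solution"])
```

The point of the case-informed approach is that the heuristics a designer applied are implicit in such stored examples, so retrieval and adaptation can stand in for hard-to-acquire explicit design rules.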