
Showing papers on "Electronic design automation published in 2011"


Journal ArticleDOI
TL;DR: This article presents an industrial-strength asynchronous ASIC CAD flow that enables the automatic synthesis and physical design of high-level specifications into GHz silicon, greatly reducing design time and enabling far wider use of asynchronous technology.
Abstract: Editors' note: The high-performance benefits of asynchronous design have hitherto been obtained only using full-custom design. This article presents an industrial-strength asynchronous ASIC CAD flow that enables the automatic synthesis and physical design of high-level specifications into GHz silicon, greatly reducing design time and enabling far wider use of asynchronous technology.

79 citations


Book
03 Nov 2011
TL;DR: In this paper, the authors investigated the performance of adiabatic logic in terms of energy saving potential and optimum operating frequency, as well as degradation related issues, and proposed a power-clock gating mechanism.
Abstract: Adiabatic logic is a potential successor for static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments like the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts will change the gate level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of the technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as well as degradation related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS also in productivity. On a 130nm test chip, a large scale test vehicle containing an FIR filter was implemented in adiabatic logic, utilizing a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values gained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to its compatibility not only with CMOS technology, but also with electronic design automation (EDA) tools developed for static CMOS system design.

74 citations


Journal ArticleDOI
TL;DR: Novel Ontology-based Device Descriptions are presented along with a layered ontology architecture, a specific ontology view approach with virtual properties, a generic access interface, a triple store-based database backend, and a generic search mask GUI with underlying query generation algorithm that enables a formal, unified, and extensible specification of building automation devices.
Abstract: Device descriptions play an important role in the design and commissioning of modern building automation systems and help reduce design time and costs. However, all established device descriptions are specialized for certain purposes and suffer from several weaknesses. This hinders further design automation, which is strongly needed for increasingly complex building automation systems. To overcome these problems, this paper presents novel Ontology-based Device Descriptions (ODDs) along with a layered ontology architecture, a specific ontology view approach with virtual properties, a generic access interface, a triple store-based database backend, and a generic search mask GUI with an underlying query generation algorithm. It enables a formal, unified, and extensible specification of building automation devices, ensures their comparability, and facilitates a computer-enabled retrieval, selection, and interoperability evaluation, which is essential for an automated design. The scalability of the approach to several tens of thousands of devices is demonstrated.

70 citations


Proceedings ArticleDOI
14 Mar 2011
TL;DR: This paper is the first to describe a P1687 design automation tool which constructs and optimizes P1687 networks; the tool considers the concurrent and sequential access schedule types and is demonstrated in experiments on industrial SOCs, reporting total and average access times.
Abstract: The IEEE P1687 (IJTAG) standard proposal aims at standardizing the access to embedded test and debug logic (instruments) via the JTAG TAP. P1687 specifies a component called Segment Insertion Bit (SIB) which makes it possible to construct a multitude of alternative P1687 instrument access networks for a given set of instruments. Finding the best access network with respect to instrument access time and the number of SIBs is a time-consuming task in the absence of EDA support. This paper is the first to describe a P1687 design automation tool which constructs and optimizes P1687 networks. Our EDA tool, called PACT, considers the concurrent and sequential access schedule types, and is demonstrated in experiments on industrial SOCs, reporting total access time and average access time.
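To make the difference between the two schedule types concrete, the toy cost model below compares a sequential schedule (one SIB open at a time) with a concurrent one (all SIBs open) for a handful of made-up instruments. It is a rough sketch that counts shift cycles only and ignores the TAP protocol and SIB programming overhead; it is not the PACT tool or the exact P1687 timing rules.

```python
# Toy comparison of sequential vs. concurrent instrument access schedules
# behind SIBs. Simplified cost model (scan shift cycles only, uniform SIB
# overhead), not the PACT tool or the exact IEEE P1687 timing rules.

def sequential_access_time(instruments, sib_count):
    """Open one SIB at a time; each instrument is accessed on its own."""
    total = 0
    for length, accesses in instruments:
        # each access shifts the instrument register plus every SIB bit on the path
        total += accesses * (length + sib_count)
    return total

def concurrent_access_time(instruments, sib_count):
    """Open all SIBs at once; every scan shifts the whole concatenated chain."""
    chain_length = sum(length for length, _ in instruments) + sib_count
    scans = max(accesses for _, accesses in instruments)
    return scans * chain_length

# hypothetical instruments: (register length in bits, number of accesses needed)
instruments = [(16, 10), (128, 2), (8, 50)]
print("sequential:", sequential_access_time(instruments, sib_count=3))
print("concurrent:", concurrent_access_time(instruments, sib_count=3))
```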

61 citations


Journal ArticleDOI
TL;DR: MOJITO is a system that performs structural synthesis of analog circuits, returning designs that are trustworthy by construction, and generalizes to other problem domains which have accumulated structural domain knowledge, such as robotic structures, car assemblies, and modeling biological systems.
Abstract: This paper presents MOJITO, a system that performs structural synthesis of analog circuits, returning designs that are trustworthy by construction. The search space is defined by a set of expert-specified, trusted, hierarchically-organized analog building blocks, which are organized as a parameterized context-free grammar. The search algorithm is a multiobjective evolutionary algorithm that uses an age-layered population structure to balance exploration versus exploitation. It is validated with experiments to search across >100,000 different one-stage and two-stage opamp topologies, returning human-competitive results. The runtime is orders of magnitude faster than open-ended systems, and unlike the other evolutionary algorithm approaches, the resulting circuits are trustworthy by construction. The approach generalizes to other problem domains which have accumulated structural domain knowledge, such as robotic structures, car assemblies, and modeling biological systems.
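As a rough illustration of how a hierarchically organized building-block grammar spans a topology search space, the sketch below counts the distinct derivations of a toy grammar. The block names and alternatives are invented for illustration; they are not MOJITO's actual library.

```python
# Minimal sketch of a hierarchically organised building-block "grammar" and a
# count of the distinct topologies it spans. Block names are illustrative only.

GRAMMAR = {
    # nonterminal: list of alternative implementations,
    # each implementation is a list of sub-blocks (nonterminals or terminals)
    "amplifier": [["one_stage"], ["one_stage", "second_stage"]],
    "one_stage": [["input_pair", "load"], ["folded_cascode_core"]],
    "second_stage": [["common_source"], ["class_ab_output"]],
    "input_pair": [["nmos_pair"], ["pmos_pair"]],
    "load": [["resistive_load"], ["current_mirror_load"], ["cascode_load"]],
}

def count_topologies(block):
    """Count structurally distinct derivations rooted at `block`."""
    if block not in GRAMMAR:          # terminal block: exactly one structure
        return 1
    total = 0
    for alternative in GRAMMAR[block]:
        combo = 1
        for sub in alternative:
            combo *= count_topologies(sub)
        total += combo
    return total

print(count_topologies("amplifier"))  # 21 for the toy grammar above
```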

54 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel stateful logic pipeline architecture based on memristive switches, and addresses some of the issues, in particular logic representation using OR-inverter graphs, two-level optimization synthesis strategy, data synchronization with data forwarding, stall-free pipelined finite state machines, and constraints for synthesis and mapping onto the fabric.
Abstract: Recently, researchers have demonstrated that memristive switches can be used to implement logic and latches as well as memory and programmable interconnects. In this paper, we propose a novel stateful logic pipeline architecture based on memristive switches. The proposed architecture mapped to the field programmable nanowire interconnect fabric produces a field programmable stateful logic array, in which general-purpose computation functions can be implemented by configuring only nonvolatile nanowire crossbar switches. CMOS control switches are used to isolate stateful logic units so that multiple operations can be executed in parallel. Since the basic operation of stateful logic, namely material implication, cannot fan out, a new basic AND operation that can duplicate its output is proposed. The basic unit of the proposed architecture is designed to execute multiple basic operations concurrently in a step so that each basic unit implements a large fan-in OR or NOR gate. The fine-grain ultradeep constant-throughput pipeline properties pose new design automation problems. We address some of the issues, in particular logic representation using OR-inverter graphs, two-level optimization synthesis strategy, data synchronization with data forwarding, stall-free pipelined finite state machines, and constraints for synthesis and mapping onto the fabric.
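For readers unfamiliar with stateful logic, the following minimal sketch simulates material implication (IMPLY) on plain bits and builds a NAND from a clear step and two IMPLY steps. It illustrates only the logic primitive discussed above, not the proposed crossbar or pipeline architecture.

```python
# Tiny simulation of "stateful" logic with material implication (IMPLY),
# the basic memristive operation the paper builds on. States are modelled as
# plain bits; the crossbar/pipeline architecture itself is not modelled.

def imply(state, p, q):
    """q <- (NOT p) OR q, computed in place on the state dictionary."""
    state[q] = (not state[p]) or state[q]

def clear(state, q):
    """Unconditionally reset a memristor to logic 0 (the FALSE operation)."""
    state[q] = False

def nand(a, b):
    """NAND(a, b) using one work memristor, one clear and two IMPLY steps."""
    s = {"a": a, "b": b, "w": None}
    clear(s, "w")          # w = 0
    imply(s, "a", "w")     # w = NOT a
    imply(s, "b", "w")     # w = NOT b OR NOT a = NAND(a, b)
    return s["w"]

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(nand(a, b)))
```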

54 citations


Proceedings ArticleDOI
21 Nov 2011
TL;DR: This paper proposes a contribution for reasoning about automation designs using a model-based approach that exploits refined task models, which describe operations in enough detail to reason about automation and to rationalize automation designs.
Abstract: Designing systems in such a way that as many functions as possible are automated has been the driving direction of research and engineering in aviation, space, and more generally in computer science for many years. In the 1990s, many studies (e.g., [12], related to the notion of mode confusion) demonstrated that fully automated systems are out of the grasp of current technologies and that, additionally, migrating functions [2] from the operator to the system might have a disastrous impact on operations, both in terms of safety and usability. In order to design automation with a holistic view of the involved factors (safety, usability, reliability, …), a complete understanding of operators' tasks is required prior to considering migrating them to the system side. This paper proposes a contribution for reasoning about automation designs using a model-based approach exploiting refined task models. These models describe operations in enough detail to reason about automation and to rationalize automation designs. In this paper we present how such representations can support the assessment of alternative design options for automation. The proposed approach is applied to satellite ground segments.

54 citations


Patent
Ru-Gun Liu, Lai Chih-Ming, Wen-Chun Huang, Boren Luo, I-Chang Shin, Yao-Ching Ku, Cliff Hou
26 May 2011
TL;DR: In this paper, a secure method for providing semiconductor fabrication processing parameters to a design facility is described, where a set of processing parameters of a fabrication facility is provided and a model from the set of parameters is generated.
Abstract: Methods and systems for providing processing parameters in a secure format are disclosed. In one aspect, a method for providing semiconductor fabrication processing parameters to a design facility is disclosed. The method comprises providing a set of processing parameters of a fabrication facility; creating a model from the set of processing parameters; converting the model into a corresponding set of kernels; converting the set of kernels into a corresponding set of matrices; and communicating the set of matrices to the design facility. In another aspect, a method for providing semiconductor fabrication processing parameters is disclosed. The method comprises providing a set of processing parameters of a fabrication facility; creating a processing model from the set of processing parameters; encrypting the processing model into a format for use with a plurality of EDA tools; and communicating the encrypted processing model format to a design facility.

44 citations


Journal ArticleDOI
TL;DR: An EDA tool is introduced that quickly and accurately estimates the reliability of any CMOS gate by taking into consideration the gate's topology, the reliability of the individual devices, the applied input vector, as well as the noise margins.
Abstract: Scaling complementary metal oxide semiconductor (CMOS) devices has been a method used very successfully over the last four decades to improve the performance and the functionality of very large scale integrated (VLSI) designs. Still, scaling is heading towards several fundamental limits as the feature size is being decreased towards 10 nm and less. One of the challenges associated with scaling is the expected increase of static and dynamic parameter fluctuations and variations, as well as intrinsic and extrinsic noises, with significant effects on reliability. Therefore, there is a clear, growing need for electronic design automation (EDA) tools that can predict the reliability of future massive nano-scaled designs with very high accuracy. Such tools are essential to help VLSI designers optimize the conflicting tradeoffs between area-power-delay and reliability requirements. In this paper, we introduce an EDA tool that quickly and accurately estimates the reliability of any CMOS gate. The tool improves the accuracy of the reliability calculation at the gate level by taking into consideration the gate's topology, the reliability of the individual devices, the applied input vector, as well as the noise margins. It can also be used to estimate the effects of different types of faults and defects, and to estimate the effects of enhancing the reliability of individual devices on the gate's overall reliability.
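The sketch below illustrates the general idea of combining per-device reliabilities per input vector, using a simplified switch-level model of a NAND2 in which each transistor is independently OK, stuck-open, or stuck-on with assumed probabilities. It ignores the noise margins and analogue effects that the tool described above accounts for.

```python
from itertools import product

# Simplified switch-level reliability estimate for a CMOS NAND2 under a given
# input vector. Per-device fault probabilities below are assumptions.

P_OK, P_OPEN, P_ON = 0.98, 0.01, 0.01
STATES = [("ok", P_OK), ("open", P_OPEN), ("on", P_ON)]

def conducts(nominal_on, fault):
    if fault == "open":
        return False
    if fault == "on":
        return True
    return nominal_on

def nand2_output(a, b, faults):
    fpa, fpb, fna, fnb = faults            # two PMOS (parallel), two NMOS (series)
    pull_up = conducts(a == 0, fpa) or conducts(b == 0, fpb)
    pull_down = conducts(a == 1, fna) and conducts(b == 1, fnb)
    if pull_up and not pull_down:
        return 1
    if pull_down and not pull_up:
        return 0
    return "X"                             # contention or floating node

def gate_reliability(a, b):
    nominal = nand2_output(a, b, ("ok",) * 4)
    reliable = 0.0
    for combo in product(STATES, repeat=4):
        faults = tuple(name for name, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        if nand2_output(a, b, faults) == nominal:
            reliable += prob
    return reliable

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"input {a}{b}: P(correct) = {gate_reliability(a, b):.4f}")
```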

43 citations


Journal ArticleDOI
01 Jan 2011
TL;DR: An extensive comparison of the state-of-the-art of MOEA approaches with an approach based on fuzzy approximation to speed up the evaluation of a candidate system configuration is proposed, performed in a real case study: optimization of the performance and power dissipation of embedded architectures based on a Very Long Instruction Word microprocessor in a mobile multimedia application domain.
Abstract: Multi-objective evolutionary algorithms (MOEAs) have received increasing interest in industry because they have proved to be powerful optimizers. Despite the great success achieved, however, MOEAs have also encountered many challenges in real-world applications. One of the main difficulties in applying MOEAs is the large number of fitness evaluations (objective calculations) that are often needed before an acceptable solution can be found. There are, in fact, several industrial situations in which fitness evaluations are computationally expensive and the time available is very short. In these applications efficient strategies to approximate the fitness function have to be adopted, looking for a trade-off between optimization performance and efficiency. This is the case in designing a complex embedded system, where it is necessary to define an optimal architecture in relation to certain performance indexes while respecting strict time-to-market constraints. This activity, known as design space exploration (DSE), is still a great challenge for the EDA (electronic design automation) community. One of the most important bottlenecks in the overall design flow of an embedded system is due to simulation. Simulation occurs at every phase of the design flow and is used to evaluate a system which is a candidate for implementation. In this paper we focus on system level design, proposing an extensive comparison of the state-of-the-art of MOEA approaches with an approach based on fuzzy approximation to speed up the evaluation of a candidate system configuration. The comparison is performed in a real case study: optimization of the performance and power dissipation of embedded architectures based on a Very Long Instruction Word (VLIW) microprocessor in a mobile multimedia application domain. The results of the comparison demonstrate that the fuzzy approach outperforms in terms of both performance and efficiency the state of the art in MOEA strategies applied to DSE of a parameterized embedded system.
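The following sketch shows the general surrogate-assisted evaluation idea: a few expensive simulations seed a cheap predictor, which then screens the remaining configurations so that only the most promising ones are simulated. A nearest-neighbour predictor and a made-up cost function stand in for the paper's fuzzy approximator and VLIW simulator.

```python
import random

# Sketch of surrogate-assisted design space exploration. The "simulator" is a
# made-up cost function and the predictor is 1-nearest-neighbour, standing in
# for the cycle-accurate simulator and fuzzy approximator of the paper.

def simulate(cfg):                       # expensive evaluation (placeholder)
    issue_width, cache_kb = cfg
    latency = 100.0 / issue_width + 50.0 / cache_kb
    power = 2.0 * issue_width + 0.05 * cache_kb
    return latency + power               # scalarised objective for brevity

def nearest_neighbour_predict(archive, cfg):
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(c, cfg))
    best_cfg = min(archive, key=dist)
    return archive[best_cfg]

random.seed(0)
space = [(w, c) for w in (1, 2, 4, 8) for c in (8, 16, 32, 64, 128)]

# 1) seed the surrogate with a handful of real simulations
archive = {cfg: simulate(cfg) for cfg in random.sample(space, 5)}

# 2) rank the remaining configurations with the cheap predictor
candidates = sorted((c for c in space if c not in archive),
                    key=lambda c: nearest_neighbour_predict(archive, c))

# 3) spend the remaining simulation budget only on the top-ranked candidates
for cfg in candidates[:3]:
    archive[cfg] = simulate(cfg)

best = min(archive, key=archive.get)
print("best configuration found:", best, "cost:", round(archive[best], 2))
```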

29 citations



Proceedings ArticleDOI
20 Nov 2011
TL;DR: Experimental results show that the user can perform automated 3D-DfT insertion through existing EDA tools with negligible area costs, and verify the proposed DfT by test pattern generation and simulation.
Abstract: Using Through-Silicon Vias (TSVs) in three-dimensional stacked ICs (3D-SICs) has benefits in terms of interconnect density, performance, and power dissipation. For 3D-SICs, an extension of the Design-for-Test architecture based on die-level wrappers is required to enable pre-bond die testing as well as modular post-bond die and interconnect testing. This paper presents an approach that automates the insertion of die wrappers. Experimental results show that the user can perform automated 3D-DfT insertion through existing EDA tools with negligible area costs, and verify the proposed DfT by test pattern generation and simulation.

Proceedings ArticleDOI
22 Dec 2011
TL;DR: This study introduces a swarm intelligence based methodology for the optimal sizing of a CMOS operational amplifier that not only meets the design specifications and satisfies the design constraints but also minimizes the total MOS area compared with a convex optimization method.
Abstract: An efficient design of an optimal operational amplifier is a cornerstone of an analog design environment. This study introduces a swarm intelligence based methodology for the optimal sizing of a CMOS operational amplifier. Analog sizing is a constructive procedure that aims at mapping the circuit specifications into design parameter values. With the design constraints specified, the conflicting design specifications are introduced to the optimization algorithm as a constrained problem, and the total CMOS transistor area is minimized. Simulation results demonstrate that the proposed method not only meets the design specifications and satisfies the design constraints but also minimizes the total MOS area compared with a convex optimization method.
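A minimal particle swarm optimisation loop of the kind alluded to above is sketched below: it minimises a stand-in for total transistor area with a penalty term for a violated specification. The area and gain models are invented for illustration; a real flow would evaluate gain, phase margin, slew rate, and so on from the circuit.

```python
import random

# Minimal PSO sketch for "sizing": minimise a proxy for total MOS area subject
# to one made-up specification, handled with a quadratic penalty term.

random.seed(1)
DIM, SWARM, ITERS = 4, 20, 100           # 4 transistor widths (um)
LO, HI = 1.0, 100.0

def cost(w):
    area = sum(w)                                     # proxy for total area
    gain_proxy = 10.0 * (w[0] * w[1]) ** 0.5          # hypothetical spec model
    penalty = max(0.0, 200.0 - gain_proxy) ** 2       # require gain_proxy >= 200
    return area + penalty

pos = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = min(HI, max(LO, pos[i][d] + vel[i][d]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

print("widths:", [round(w, 1) for w in gbest], "cost:", round(cost(gbest), 1))
```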

Proceedings ArticleDOI
15 May 2011
TL;DR: This paper presents some examples and comparisons between the standard cell approach and the network-of-transistors approach, which can reduce the number of transistors needed to implement a circuit, reducing the power consumption and in particular the leakage power, which is proportional to the number of components.
Abstract: The power optimization of integrated circuits must be addressed at all levels of abstraction of the design flow. The traditional standard cell flow does not really take care of power minimization at the physical level, because a cell library offers only a limited number of logical functions, as well as a limited number of sizing versions. To really obtain an optimization at the physical level, the use of any possible logical function must be allowed, including complex cells (static CMOS complex gates, SCCG) that are not available in a cell library. To have this "freedom" in the logic design step, a set of EDA tools is needed that allows the automatic design of any transistor network (even with different numbers of P and N transistors). This approach can reduce the number of transistors needed to implement a circuit, reducing the power consumption, mainly the leakage power, which is proportional to the number of components (transistors). This paper presents some examples and comparisons between the standard cell approach and the network-of-transistors approach. The flexibility of the approach also lets designers define layout parameters to cope with problems such as tolerance to transient effects, yield improvement, printability, and DFM. The designer can also manage the sizing of transistors to reduce power consumption without compromising the clock frequency.

Journal ArticleDOI
TL;DR: This paper studies the power delivery problem in voltage island designs, and proposes to consider voltage drop during the floorplanning process to reduce design iterations and obtain more robust low power design within reasonable runtime.
Abstract: Voltage island has become a very effective design style for power saving in low-power design. However, the new design style also brings forward new challenges, especially to the designers of power/ground (P/G) networks. In this paper, we study the power delivery problem in voltage island designs, and propose to consider voltage drop during the floorplanning process to reduce design iterations. Our analysis shows that it is unnecessary to consider the pitch of the P/G network in the floorplan stage. By using the simplified searching strategy in floorplanning, we can obtain more robust low power design within reasonable runtime. Experimental results have demonstrated the effectiveness of our approach.

Patent
16 Mar 2011
TL;DR: In this paper, a printed circuit board (PCB) virtual manufacturing system for the electronic design automation (EDA) of electronic products, and a method for realizing it, are presented.
Abstract: The invention relates to a printed circuit board (PCB) virtual manufacturing system for the electronic design automation (EDA) of electronic products, and a method for realizing it. The method comprises the following steps: EDA design files of all types are extracted and converted into EDA simulation files with a unified format, and the data are entered into a database; static three-dimensional (3D) simulation of the PCB and 3D simulation of the assembly are performed under the VC++ 6.0 environment; and finally PCB manufacturability analysis is performed to extract the simulation information and the judgement parameters of each element, the judgement parameters of each element are checked one by one against the manufacturability design criteria in a rule database, and the PCB physical parameter errors and assembly errors are displayed and filed immediately. By adopting the method of the invention, the relationship between PCB design and manufacturing, which are otherwise isolated islands, can be built, and a visual basis can be provided for modifying the EDA data of an optimal design in the shortest time, thus the development cycle and cost can be optimized and production efficiency maximized.

Proceedings ArticleDOI
01 May 2011
TL;DR: A tool flow is presented, which automatically generates homogeneous hard macros for Xilinx FPGAs starting from a high-level description, such as VHDL, which aims at maintaining the homogeneity of the resulting hard macro.
Abstract: The regularity of resources found in FPGAs is a unique feature, which can be utilized in a number of applications, e.g., in timing critical applications or applications with a demand for homogeneous routing. Current synthesis tools do not support an automatic generation of homogeneous FPGA designs, such that a time-consuming hand-crafted design is required. We present a tool flow, which automatically generates homogeneous hard macros for Xilinx FPGAs starting from a high-level description, such as VHDL. Key functionalities of the tool flow are a homogeneous placer and a suitable routing algorithm, which aim at maintaining the homogeneity of the resulting hard macro. The place and route tools use a resource library that is automatically generated for the target FPGA family by extracting relevant information from the vendor tools. The tool chain is demonstrated for the design of hard macros for a time-to-digital converter and a tiled partially reconfigurable region. The resulting designs are evaluated with respect to resource requirements and timing constraints.

Journal ArticleDOI
TL;DR: It is suggested that the overall learning process is improved, and students gain a better knowledge of modern technologies and design methods if they are given full time access to programmable logic boards.
Abstract: This paper presents the benefits and costs of providing students with unlimited access to programmable boards in digital design education, allowing hands-on experiences outside traditional laboratory settings. Studies were conducted at three universities in two different countries (Rose-Hulman Institute of Technology, Terre Haute, IN; Washington State University, Pullman; and the Technical University of Cluj-Napoca, Romania) to measure the effect on student learning and student performance of students having their own programmable hardware systems and unrestricted access to the most commonly used design tools. The results of the studies, supported by assessment data from various sources, suggest that the overall learning process is improved, and students gain a better knowledge of modern technologies and design methods, if they are given full-time access to programmable logic boards.

Patent
28 Jan 2011
TL;DR: In this article, performance metrics of servers in the public cloud infrastructure and performance history of a user's past EDA tasks are maintained to estimate operation parameters such as runtime of a new EDA task.
Abstract: Provisioning resources in public cloud infrastructure to perform at least part of electronic design automation (EDA) tasks on the public cloud infrastructure. Performance metrics of servers in the public cloud infrastructure and performance history of a user's past EDA tasks are maintained to estimate operation parameters such as runtime of a new EDA task. Based on the estimation, a user can provision appropriate types and amounts of resources in the public cloud infrastructure in a cost-efficient manner. Also, a plurality of EDA tasks are assigned to computing resources in a manner that minimizes the overall cost for performing the EDA tasks.
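A possible reading of the provisioning idea is sketched below: the runtime of a new EDA task is extrapolated from the user's task history on a reference machine, scaled by each server type's performance metric, and the cheapest type meeting the deadline is selected. The instance names, prices, and speed factors are hypothetical.

```python
# Sketch of cost-aware provisioning: fit runtime vs. design size from the
# user's history, scale by per-server speed, pick the cheapest feasible type.
# All server names, prices, and speed factors below are hypothetical.

HISTORY = [  # (design size in cells, runtime in hours on the reference machine)
    (100_000, 1.0), (250_000, 2.4), (500_000, 5.1), (1_000_000, 10.5),
]

SERVERS = {  # type: (speed factor vs. reference, price per hour in $)
    "small":  (0.5, 0.10),
    "medium": (1.0, 0.25),
    "large":  (2.0, 0.60),
}

def estimate_reference_runtime(size):
    """Linear fit through the task history (least squares, no intercept)."""
    num = sum(s * t for s, t in HISTORY)
    den = sum(s * s for s, _ in HISTORY)
    return size * num / den

def cheapest_server(size, deadline_hours):
    runtime_ref = estimate_reference_runtime(size)
    feasible = []
    for name, (speed, price) in SERVERS.items():
        runtime = runtime_ref / speed
        if runtime <= deadline_hours:
            feasible.append((runtime * price, name, runtime))
    return min(feasible) if feasible else None

print(cheapest_server(size=750_000, deadline_hours=6.0))
```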

Patent
08 Jul 2011
TL;DR: The optical proximity correction (OPC) process calculates, improves, and optimizes one or more features on an exposure mask (used in semiconductor or other processing) so that a resulting structure realized on an integrated circuit or chip meets desired design and performance requirements as mentioned in this paper.
Abstract: Computationally intensive electronic design automation operations are accelerated with algorithms utilizing one or more graphics processing units. The optical proximity correction (OPC) process calculates, improves, and optimizes one or more features on an exposure mask (used in semiconductor or other processing) so that a resulting structure realized on an integrated circuit or chip meets desired design and performance requirements. When a chip has billions of transistors or more, each with many fine structures, the computational requirements for OPC can be very large. This processing can be accelerated using one or more graphics processing units.

Journal ArticleDOI
31 Aug 2011
TL;DR: This work proposes an approach for automating the design debugging procedures by integrating SAT-based debugging with test bench based verification, and shows that this approach is as accurate as exact formal debugging in 71% of the experiments.
Abstract: Debugging is one of the major bottlenecks in the current VLSI design process as design size and complexity increase. Efficient automation of debugging procedures helps to reduce debugging time and to increase diagnosis accuracy. This work proposes an approach for automating the design debugging procedures by integrating SAT-based debugging with test bench based verification. The diagnosis accuracy increases by iterating debugging and counterexample generation, i.e., the total number of fault candidates decreases. The experimental results show that our approach is as accurate as exact formal debugging in 71% of the experiments.
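The toy loop below illustrates the integration of debugging with test-bench-driven counterexample generation: each new counterexample tightens the set of gates whose function could be replaced to explain all counterexamples collected so far. A brute-force search over replacement truth tables stands in for the SAT-based debugger, and the netlist with its seeded bug is made up.

```python
from itertools import product

# Toy counterexample-driven debugging: a gate stays a fault candidate only if
# some replacement function at that gate reproduces the expected output for
# every counterexample seen so far, so the candidate set shrinks over time.

GATES = {  # gate -> (implemented function, input nets); spec wants g1 = AND
    "g1": (lambda x, y: x | y, ("a", "b")),   # seeded bug: OR instead of AND
    "g2": (lambda x, y: x & y, ("c", "d")),
    "g3": (lambda x, y: x | y, ("g1", "g2")),
}
OUTPUT = "g3"
spec = lambda a, b, c, d: (a & b) | (c & d)   # golden reference

def simulate(inputs, replace=None):
    values = dict(inputs)
    for name, (func, ins) in GATES.items():   # gates listed in topological order
        f = replace[1] if replace and replace[0] == name else func
        values[name] = f(values[ins[0]], values[ins[1]])
    return values[OUTPUT]

def is_candidate(gate, counterexamples):
    """Does one replacement truth table at `gate` explain all counterexamples?"""
    for tt in product((0, 1), repeat=4):
        f = lambda x, y, tt=tt: tt[2 * x + y]
        if all(simulate(ins, (gate, f)) == exp for ins, exp in counterexamples):
            return True
    return False

candidates, counterexamples = set(GATES), []
for a, b, c, d in product((0, 1), repeat=4):               # test bench loop
    inputs, expected = {"a": a, "b": b, "c": c, "d": d}, spec(a, b, c, d)
    if simulate(inputs) != expected:                        # counterexample
        counterexamples.append((inputs, expected))
        candidates = {g for g in candidates if is_candidate(g, counterexamples)}
        print(f"after counterexample {a}{b}{c}{d}: {sorted(candidates)}")
```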

Journal ArticleDOI
TL;DR: A multiscale variation-aware optimization technique based on integer linear programming is proposed for the lab-on-a-chip component placement and demonstrates that without considering variations, the technique always satisfies the design constraints and largely outperforms the state-of-the-art approach.
Abstract: The invention of microfluidic lab-on-a-chip alleviates the burden of traditional biochemical laboratory procedures which are often very expensive. Device miniaturization and increasing design complexity have mandated a shift in digital microfluidic lab-on-a-chip design from traditional manual design to computer-aided design (CAD) methodologies. As an important procedure in the lab-on-a-chip layout CAD, the lab-on-a-chip component placement determines the physical location and the starting time of each operation such that the overall completion time is minimized while satisfying nonoverlapping constraint, resource constraint, and scheduling constraint. In this paper, a multiscale variation-aware optimization technique based on integer linear programming is proposed for the lab-on-a-chip component placement. The simulation results demonstrate that without considering variations, our technique always satisfies the design constraints and largely outperforms the state-of-the-art approach, with up to 65.9% reduction in completion time. When considering variations, the variation-unaware design has the average yield of 2%, while our variation-aware technique always satisfies the yield constraint with only 7.7% completion time increase.

Journal ArticleDOI
TL;DR: The paper shows that the CPOG model is a very convenient formalism for efficient representation of processor instruction sets and provides a ground for a concise formulation of several encoding problems, which are reducible to the Boolean satisfiability (SAT) problem and can be efficiently solved by modern SAT solvers.
Abstract: There is a critical need for design automation in microarchitectural modelling and synthesis. One of the areas which lacks the necessary automation support is synthesis of instruction codes targeting various design optimality criteria. This paper aims to fill this gap by providing a set of formal methods and a software tool for synthesis of instruction codes given the description of a processor as a set of instructions. The method is based on the conditional partial order graph (CPOG) model, which is a formalism for efficient specification and synthesis of microcontrollers. It describes a system as a functional composition of its behavioural scenarios, or instructions, each of them being a partial order of events. In order to distinguish instructions within a CPOG they are given different encodings represented with Boolean vectors. Size and latency of the final microcontroller significantly depends on the chosen encodings, thus efficient synthesis of instruction codes is essential. The paper shows that the CPOG model is a very convenient formalism for efficient representation of processor instruction sets. It provides a ground for a concise formulation of several encoding problems, which are reducible to the Boolean satisfiability (SAT) problem and can be efficiently solved by modern SAT solvers.

Patent
12 Oct 2011
TL;DR: In this paper, the authors proposed a check rule based approach for automatic check of process data of a printed circuit board (PCB) by formulating a manufacturability check rule according to a PCB design specification.
Abstract: The invention discloses a method for the automatic check of the process data of a printed circuit board (PCB). The method comprises: formulating manufacturability check rules according to a PCB manufacturability design specification; establishing a check rule database including the manufacturability check rules and PCB manufacturer processing information; reading the PCB process data from the photoplotting and drilling files exported from an EDA tool; designing a universal data structure to integrate the process data; displaying the PCB photoplot rendering hierarchically and visually; and calling the check rules during the check process to perform a manufacturability check on the process data, performing a qualification check against the PCB manufacturer's processing capabilities, and classifying and outputting the list of check results. The method makes the user independent of the EDA (electronic design automation) design environment, and it is suitable for PCB manufacturers, professional EDA/CAD (computer-aided design) companies, and independent circuit design engineers who need an automatic check of PCB process data after a PCB design is finished and before processing and manufacturing. The method takes the processing capabilities of the PCB manufacturers into account in the manufacturability analysis, decreasing the processing rejection rate and increasing production efficiency and product quality.
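A minimal sketch of such a rule-database-driven check is given below: each rule names a feature type, a parameter, and a manufacturer-specific limit, and every feature extracted from the manufacturing data is checked against the applicable rules. The rules, limits, and example features are hypothetical, not an actual manufacturer profile.

```python
# Minimal sketch of a rule-database-driven manufacturability check. Rules,
# limits, and example features are hypothetical.

RULES = [  # (feature type, parameter, limit description, check)
    ("drill", "diameter_mm",     ">= 0.20 mm",  lambda v: v >= 0.20),
    ("track", "width_mm",        ">= 0.10 mm",  lambda v: v >= 0.10),
    ("track", "clearance_mm",    ">= 0.10 mm",  lambda v: v >= 0.10),
    ("annular_ring", "width_mm", ">= 0.125 mm", lambda v: v >= 0.125),
]

# features as they might be extracted from photoplot/drill data
features = [
    {"type": "drill", "ref": "D12", "diameter_mm": 0.15},
    {"type": "track", "ref": "NET_clk", "width_mm": 0.09, "clearance_mm": 0.12},
    {"type": "annular_ring", "ref": "VIA_33", "width_mm": 0.15},
]

def check(features, rules):
    report = []
    for feat in features:
        for ftype, param, limit, ok in rules:
            if feat["type"] == ftype and param in feat and not ok(feat[param]):
                report.append((feat["ref"], f"{param}={feat[param]} violates {limit}"))
    return report

for ref, message in check(features, RULES):
    print(f"{ref}: {message}")
```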

Journal ArticleDOI
TL;DR: A computer aided design system that brings together theories and tools from geometric modeling, image processing, and reverse engineering to overcome the traditional time-consuming manual operations of shoe design phases is described.
Abstract: In the footwear industry there is growing methodological research linking advanced computer-based technologies to the traditional manufacturing process. This paper deals with the automation of shoe design phases and describes a computer aided design system that brings together theories and tools from geometric modeling, image processing, and reverse engineering. At first, the paper reviews the current technologies used for creating new shoe models. Then the paper presents an approach based on shoe 3D virtual modeling in order to overcome the traditional time-consuming manual operations. The approach is concretized into dedicated tools able to automatically perform design of the last shape model and flattening of the shoe styling curves represented in the virtual prototype. The modeling tool uses 3D geometric rules derived from the analysis of strategies adopted by skilled manual operators, while the styling curves recognition and flattening are based on specific image processing algorithms and geometrical deformation rules. Experimental results show a good compromise between quality results and modeling time.

Proceedings ArticleDOI
15 Jun 2011
TL;DR: This contribution presents an integrated approach to verify functional and a subset of non-functional properties of manufacturing control systems and two approaches are introduced that specify functional requirements with Symbolic Timing Diagrams and non-functional ones with a Safety-Oriented Technical Language.
Abstract: Verification of control software is usually not applied in industrial practice because of additional work expenses and missing theoretical background that is necessary to apply this technique. Therefore, this contribution presents an integrated approach to verify functional and a subset of non-functional properties of manufacturing control systems. To support the user in creating a well-defined but also understandable specification of plant behavior, two approaches are introduced that specify functional requirements with Symbolic Timing Diagrams and non-functional ones with a Safety-Oriented Technical Language. These behavior descriptions are then translated to temporal logic formulas to perform model-checking of the closed-loop system of plant and controller.
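As a hedged illustration of the translation step (the requirement wording, signal names, and exact translation rules here are invented, not taken from the paper), a functional requirement captured as a timing diagram, such as "whenever the start button is pressed, the conveyor eventually starts moving", would typically become a temporal logic formula of the form

    G (start_pressed -> F conveyor_moving)

while a safety-oriented requirement such as "the press never closes while the light curtain is interrupted" becomes the invariant

    G !(press_closing && curtain_interrupted)

which a model checker can then verify against the closed-loop model of plant and controller.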

DOI
01 Jan 2011
TL;DR: This thesis describes the author's contribution to solve the modelling and simulation challenges mentioned above in three thematic phases.
Abstract: Systems on Chip (SoCs) and Systems in Package (SiPs) are key parts of a continuously broadening range of products, from chip cards and mobile phones to cars. Besides an increasing amount of digital hardware and software for data processing and storage, they integrate more and more analogue/RF circuits, sensors, and actuators to interact with their (analogue) environment. This trend towards more complex and heterogeneous systems with more intertwined functionalities is made possible by the continuous advances in the manufacturing technologies and pushed by market demand for new products and product variants. Therefore, the reuse and retargeting of existing component designs becomes more and more important. However, all these factors make the design process increasingly complex and multidisciplinary. Nowadays, the design of the individual components is usually well understood and optimised through the usage of a diversity of CAD/EDA tools, design languages, and data formats. These are based on applying specific modelling/abstraction concepts, description formalisms (also called Models of Computation (MoCs)) and analysis/simulation methods. The designer has to bridge the gaps between tools and methodologies using manual conversion of models and proprietary tool couplings/integrations, which is error-prone and time-consuming. A common design methodology and platform to manage, exchange, and collaboratively develop models of different formats and of different levels of abstraction is missing. The verification of the overall system is a big problem, as it requires the availability of compatible models for each component at the right level of abstraction to achieve satisfying results with respect to the system functionality and test coverage, but at the same time acceptable simulation performance in terms of accuracy and speed. Thus, the big challenge is the parallel integration of these very different part design processes. Therefore, the designers need a common design and simulation platform to create and refine an executable specification of the overall system (a virtual prototype) on a high level of abstraction, which supports different MoCs. This makes possible the exploration of different architecture options, estimation of the performance, validation of re-used parts, verification of the interfaces between heterogeneous components and interoperability with other systems as well as the assessment of the impacts of the future working environment and the manufacturing technologies used to realise the system. For embedded Analogue and Mixed-Signal (AMS) systems, the C++-based SystemC with its AMS extensions, to which recent standardisation the author contributed, is currently establishing itself as such a platform. This thesis describes the author's contribution to solve the modelling and simulation challenges mentioned above in three thematic phases. In the first phase, the prototype of a web-based platform to collect models from different domains and levels of abstraction together with their associated structural and semantical meta information has been developed and is called ModelLib. This work included the implementation of a hierarchical access control mechanism, which is able to protect the Intellectual Property (IP) constituted by the model at different levels of detail. 
The use cases developed for this tool show how it can support the AMS SoC design process by fostering the reuse and collaborative development of models for tasks like architecture exploration, system validation, and creation of more and more elaborated models of the system. The experiences from the ModelLib development delivered insight into which aspects need to be especially addressed throughout the development of models to make them reusable: mainly flexibility, documentation, and validation. This was the starting point for the development of an efficient modelling methodology for the top-down design and bottom-up verification of RF Systems based on the systematic usage of behavioural models in the second phase. One outcome is the developed library of well documented, parameterisable, and pin-accurate VHDL-AMS models of typical analogue/digital/RF components of a transceiver. The models offer the designer two sets of parameters: one based on the performance specifications and one based on the device parameters back-annotated from the transistor-level implementation. The abstraction level used for the description of the respective analogue/digital/RF component behaviour has been chosen to achieve a good trade-off between accuracy, fidelity, and simulation performance. The pin-accurate model interfaces facilitate the integration of transistor-level models for the validation of the behavioural models or the verification of a component implementation in the system context. These properties make the models suitable for different design tasks such as architecture exploration or overall system validation. This is demonstrated on a model of a binary Frequency-Shift Keying (FSK) transmitter parameterised to meet very different target specifications. This project showed also the limits in terms of abstraction and simulation performance of the "classical" AMS Hardware Description Languages (HDLs). Therefore, the third and last phase was dedicated to further raise the abstraction level for the description of complex and heterogeneous AMS SoCs and thus enable their efficient simulation using different synchronised MoCs. This work uses the C++-based simulation framework SystemC with its AMS extensions. New modelling capabilities going beyond the standardised SystemC AMS extensions have been introduced to describe energy conserving multi-domain systems in a formal and consistent way at a high level of abstraction. To this end, all constants, variables, and parameters of the system model, which represent a physical quantity, can now declare their dimension and associated system of units as an intrinsic part of their data type. Assignments to them need to contain besides the value also the correct measurement unit. This allows a much more precise but still compact definition of the models' interfaces and equations. Thus, the C++ compiler can check the correct assembly of the components and the coherency of the equations by means of dimensional analysis. The implementation is based on the Boost.Units library, which employs template metaprogramming techniques. A dedicated filter for the measurement units data types has been implemented to simplify the compiler messages and thus facilitate the localisation of unit errors. To ensure the reusability of models despite precisely defined interfaces, their interfaces and behaviours need to be parametrisable in a well-defined manner. 
The enabling implementation techniques for this have been demonstrated with the developed library of generic block diagram component models for the Timed Data Flow (TDF) MoC of the SystemC AMS extensions. These techniques are also the key to integrate a new MoC based on the bond graph formalism into the SystemC AMS extensions. Bond graphs facilitate the unified description of the energy conserving parts of heterogeneous systems with the help of a small set of modelling primitives parametrisable to the physical domain. The resulting models have a simulation performance comparable to an equivalent signal flow model.
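To give a flavour of the dimensional analysis described above (which the thesis obtains at compile time in C++ via Boost.Units and template metaprogramming), here is a small run-time Python analogue in which every quantity carries an SI dimension vector, so that mismatched additions are rejected and products combine dimensions. It illustrates the idea only, not the actual SystemC AMS implementation.

```python
# Run-time analogue of compile-time dimensional analysis: a quantity carries an
# SI dimension vector; additions check dimensions, multiplications combine them.

class Quantity:
    def __init__(self, value, dim):      # dim = (m, kg, s, A) exponents
        self.value, self.dim = value, dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"dimension mismatch: {self.dim} vs {other.dim}")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        dim = tuple(a + b for a, b in zip(self.dim, other.dim))
        return Quantity(self.value * other.value, dim)

    def __repr__(self):
        return f"{self.value} {self.dim}"

volt   = lambda v: Quantity(v, (2, 1, -3, -1))   # V = m^2 kg s^-3 A^-1
ampere = lambda v: Quantity(v, (0, 0, 0, 1))
WATT_DIM = (2, 1, -3, 0)

p = volt(3.3) * ampere(0.5)          # power: dimensions combine to watts
assert p.dim == WATT_DIM
print(p)                             # 1.65 (2, 1, -3, 0)

try:
    volt(1.0) + ampere(1.0)          # unit error in an equation is caught
except TypeError as err:
    print("rejected:", err)
```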

Journal ArticleDOI
TL;DR: In this article, the authors conduct a comprehensive literature review in the field of FMLD and find that the evolutionary design approach using genetic algorithms reveals a new opportunity to automate and optimise such complex FMCRLD with an explorative and generative design process embodied in a stochastic evolutionary search.
Abstract: Family mould layout design (FMLD) is demanding and experience-dependent. It involves many sub-design tasks that determine the cost and performance of a family mould in the early conceptual mould layout design phase. This paper conducts a comprehensive literature review in the field of FMLD. The review ascertains that family mould cavity and runner layout design (FMCRLD) automation and optimisation is the critical knowledge gap in FMLD, and it should be viewed as a complex combinatorial layout design optimisation problem. The second part of this paper extends the review of the literature regarding possible computational techniques for supporting FMCRLD automation and optimisation. The review discovers that the evolutionary design approach using genetic algorithms reveals a new opportunity to automate and optimise such complex FMCRLD with an explorative and generative design process embodied in a stochastic evolutionary search. However, implementation of this innovative approach is full of challenges. Further research on this topic is urgently needed.

Proceedings ArticleDOI
10 Oct 2011
TL;DR: This paper gives a comprehensive overview of the literature on hardware acceleration of string matching algorithms, takes an FPGA-based hardware exploration, and expedites the design time through a design automation technique.
Abstract: Advances in life sciences over the last few decades have led to the generation of a huge amount of biological data. Computing research has become a vital part of driving biological discovery where analysis and categorization of biological data are involved. String matching algorithms can be applied to protein/gene sequence matching, and with the phenomenal increase in the size of the string databases to be analyzed, software implementations of these algorithms seem to have hit a hard limit, so hardware acceleration is increasingly being sought. Several hardware platforms such as Field Programmable Gate Arrays (FPGA), Graphics Processing Units (GPU), and Chip Multi Processors (CMP) are being explored. In this paper, we give a comprehensive overview of the literature on hardware acceleration of string matching algorithms, take an FPGA-based hardware exploration, and expedite the design time with a design automation technique. Further, our design automation is also optimized for better hardware utilization by optimizing the number of peptides that can be represented in an FPGA tile. The results, reported in this paper, indicate significant improvements in design time and hardware utilization.

Proceedings Article
01 Jan 2011
TL;DR: The creation of computational caricatures is proposed as a design research practice that aims to advance understanding of the game design process and to develop the reusable technology for design automation.
Abstract: We propose the creation of computational caricatures as a design research practice that aims to advance understanding of the game design process and to develop the reusable technology for design automation. Computational caricatures capture and exaggerate statements about the game design process in the form of computational systems (i.e. software and hardware). In comparison with empirical interviews of game designers, arguments from established design theory, and the creation of neutral simulations of the design process, computational caricatures provide more direct access to inquiry and insight about design. Further, they tangibly demonstrate architectures and subsystems for a new generation of human-assisting design support systems and adaptive games that embed aspects of automated design in their runtime processes. In this paper, we frame the idea of computational caricature, review several existing design automation prototypes through the lens of caricature, and call for more design research to be done following this practice.