
Showing papers in "Design Automation for Embedded Systems in 1996"


Journal Article
TL;DR: This paper explores the novel technical challenges in embedded system design and presents experiences and results of work in this area using the CASTLE system, which provides a central design representation for complex embedded systems together with several analysis and visualization tools.
Abstract: In the past decade the main engine of electronic design automation has been the widespread application of ASICs (Application Specific Integrated Circuits). Present technology supports complete systems on a chip, most often used as so-called embedded systems in an increasing number of applications. Embedded systems pose new design challenges which we believe will be the driving forces of design automation in the years to come. These include the design of electronic systems hardware, embedded software, and hardware/software codesign. This paper explores the novel technical challenges in embedded system design and presents experiences and results of our work in this area using the CASTLE system. CASTLE supports the design of complex embedded systems and the design of the required tools. It provides a central design representation; Verilog, VHDL, and C/C++ frontends; hardware generation in VHDL and BLIF; a retargetable compiler backend; and several analysis and visualization tools. Two design examples, video compression and a diesel injection controller, illustrate the presented concepts.

89 citations


Journal Article
TL;DR: This paper presents a methodology for embedded system design as a co-synthesis of interacting hardware and software components, introduces operation-level timing constraints, and develops the notion of satisfiability of constraints by a given implementation in both the deterministic and the probabilistic sense.
Abstract: Embedded systems are targeted for specific applications under constraints on the relative timing of their actions. For such systems, the use of pre-designed reprogrammable components such as microprocessors provides an effective way to reduce system cost by implementing part of the functionality as a program running on the processor. However, dedicated hardware is often necessary to achieve the requisite timing performance. Analysis of timing constraints is, therefore, key to determining an efficient hardware-software implementation. In this paper, we present a methodology for embedded system design as a co-synthesis of interacting hardware and software components. We present a decomposition of the co-synthesis problem into sub-problems that is useful in building a framework for embedded system CAD. In particular, we present operation-level timing constraints and develop the notion of satisfiability of constraints by a given implementation in both the deterministic and the probabilistic sense. Constraint satisfiability analysis is then used to define the hardware and software portions of the functionality. We describe algorithms and techniques used in developing a practical co-synthesis framework, Vulcan. Examples are presented to show the utility of our approach.
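To give a rough sense of what checking operation-level timing constraints can involve, the following C sketch models operations as nodes of a small constraint graph and verifies, via a longest-path computation, that the worst-case delay between two operations stays within a maximum-delay constraint. This is a generic illustration with invented operation names, delays, and constraint values; it is not the analysis algorithm of the Vulcan framework itself.

```c
/*
 * Illustrative sketch only: checks whether the longest delay along any path
 * from a source operation to a sink operation stays within a maximum timing
 * constraint.  The graph, delays, and constraint below are hypothetical and
 * are not taken from the Vulcan framework.
 */
#include <stdio.h>

#define N_OPS 5

/* delay[i][j] > 0 means operation j starts after i finishes, taking delay[i][j] time units. */
static const int delay[N_OPS][N_OPS] = {
    {0, 3, 2, 0, 0},
    {0, 0, 0, 4, 0},
    {0, 0, 0, 1, 0},
    {0, 0, 0, 0, 2},
    {0, 0, 0, 0, 0},
};

/* Longest finish time of `sink` relative to `src`, assuming operations are
 * numbered in topological order (edges only go from lower to higher indices). */
static int longest_path(int src, int sink)
{
    int finish[N_OPS];
    for (int i = 0; i < N_OPS; i++)
        finish[i] = (i == src) ? 0 : -1;      /* -1: not reachable yet */

    for (int i = src; i <= sink; i++) {
        if (finish[i] < 0)
            continue;
        for (int j = i + 1; j < N_OPS; j++)
            if (delay[i][j] > 0 && finish[i] + delay[i][j] > finish[j])
                finish[j] = finish[i] + delay[i][j];
    }
    return finish[sink];
}

int main(void)
{
    const int max_constraint = 10;            /* hypothetical max-delay constraint */
    int worst = longest_path(0, 4);

    printf("worst-case delay op0 -> op4: %d (constraint %d): %s\n",
           worst, max_constraint,
           worst >= 0 && worst <= max_constraint ? "satisfied" : "violated");
    return 0;
}
```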

64 citations


Journal Article
TL;DR: In this article, an application of the methodology and of the various software tools embedded in the POLIS co-design system is presented in the realm of automotive electronics: a shock absorber controller, whose specification comes from an actual product.
Abstract: We present an application of the methodology and of the various software tools embedded in the POLIS co-design system. The application is in the realm of automotive electronics: a shock absorber controller, whose specification comes from an actual product. All aspects of the design process are closely examined, including high-level language specification and automatic hardware and software synthesis. We analyze different software implementation styles, compare the results, and outline the future developments of our work.

40 citations


Journal Article
TL;DR: The proposed approach aims at overcoming the problem of having two separate simulation environments by defining a VHDL-based modeling strategy for software execution, thus enabling the simulation of hardware and software modules within the same VHDL-based CAD framework.
Abstract: This paper presents a methodology for hardware/software co-design with particular emphasis on the problems related to the concurrent simulation and synthesis of the hardware and software parts of the overall system. The proposed approach aims at overcoming the problem of having two separate simulation environments by defining a VHDL-based modeling strategy for software execution, thus enabling the simulation of hardware and software modules within the same VHDL-based CAD framework. The proposed methodology is oriented towards the application field of control-dominated embedded systems implemented on a single chip.

33 citations


Journal Article
TL;DR: A codesign case study in which a computer graphics application is examined with the aim of speeding up its execution; the achieved speed-up is estimated from an analysis of profiling information for different sets of input data and various architectural options.
Abstract: This paper describes a codesign case study in which a computer graphics application is examined with the aim of speeding up its execution. The application is specified as a C program and is characterized by the lack of a simple compute-intensive kernel. The hardware/software partitioning is based on information obtained from software profiling, and the resulting design is validated through cosimulation. The achieved speed-up is estimated from an analysis of profiling information for different sets of input data and various architectural options.
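The kind of speed-up estimate based on profiling data can be illustrated with a simple Amdahl-style calculation. The profile fractions and hardware acceleration factors in the sketch below are invented for illustration; they are not the measurements or architectural options reported in the paper.

```c
/* Illustrative Amdahl-style speed-up estimate from profiling data.
 * The profile fractions and acceleration factors are hypothetical. */
#include <stdio.h>

int main(void)
{
    /* Fraction of total execution time spent in each candidate function
     * (from a hypothetical profile) and the factor by which a hardware
     * implementation is assumed to accelerate it. */
    const double fraction[]  = {0.35, 0.20, 0.10};
    const double hw_factor[] = {8.0, 5.0, 3.0};
    const int n = sizeof fraction / sizeof fraction[0];

    double new_time = 0.0, moved = 0.0;
    for (int i = 0; i < n; i++) {
        new_time += fraction[i] / hw_factor[i];   /* accelerated part */
        moved    += fraction[i];
    }
    new_time += 1.0 - moved;                      /* remaining software part is unchanged */

    printf("estimated overall speed-up: %.2f\n", 1.0 / new_time);
    return 0;
}
```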

10 citations


Journal Article
TL;DR: This paper describes the experience of applying a CoDesign process to develop a communication system that must properly combine hardware and software parts in order to satisfy the required performance constraints.
Abstract: In this paper, we describe our experience with a CoDesign process used to develop a communication system that must properly combine hardware and software parts in order to satisfy the required performance constraints. The system design process is based on the MCSE methodology, and we show its usefulness for CoDesign. CoDesign is presented as an enhancement of the implementation specification step of MCSE. System partitioning is the result of an interactive procedure based on performance and cost evaluations. The complete description of the implementation is obtained by transformations of the functional description: C or C++ for the software, VHDL for the hardware. The links between hardware and software are also synthesized. Such a procedure and its associated tools aim at producing system prototypes in an efficient and incremental manner. The described example illustrates the benefits of the proposed method, the significance of the functional level, and the specific part of a whole system for which CoDesign is appropriate.
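To make the idea of a synthesized hardware/software link more concrete, the C fragment below sketches the kind of software-side stub that could replace a function moved into hardware: arguments are written to memory-mapped registers of the hardware block and a status flag is polled. The register addresses, names, and handshake protocol are hypothetical and are not output produced by the MCSE-based tools described here.

```c
/* Hypothetical software-side stub for a function moved into hardware.
 * Addresses, register layout, and handshake protocol are invented for
 * illustration; they are not generated by the MCSE-based tools. */
#include <stdint.h>

#define HW_BASE      0x40000000u              /* hypothetical base address */
#define HW_ARG0      (*(volatile uint32_t *)(HW_BASE + 0x00))
#define HW_ARG1      (*(volatile uint32_t *)(HW_BASE + 0x04))
#define HW_START     (*(volatile uint32_t *)(HW_BASE + 0x08))
#define HW_STATUS    (*(volatile uint32_t *)(HW_BASE + 0x0C))
#define HW_RESULT    (*(volatile uint32_t *)(HW_BASE + 0x10))
#define HW_DONE_BIT  0x1u

/* The stub keeps the interface of the original C function, but its body now
 * simply drives the hardware block through memory-mapped registers. */
uint32_t filter_sample(uint32_t sample, uint32_t coeff)
{
    HW_ARG0 = sample;
    HW_ARG1 = coeff;
    HW_START = 1;
    while ((HW_STATUS & HW_DONE_BIT) == 0)
        ;                                     /* busy-wait for completion */
    return HW_RESULT;
}
```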

8 citations


Journal Article
TL;DR: This work details the design and measurement procedures used to reduce the power requirements of a low-power embedded system, a touchscreen interface device for a personal computer, designed to operate on excess power provided by unused RS232 communication lines.
Abstract: A case study in low-power system-level design is presented. We detail the design of a low-power embedded system, a touchscreen interface device for a personal computer. This device is designed to operate on excess power provided by unused RS232 communication lines. We focus on the design and measurement procedures used to reduce the power requirements of this system to less than 50 mW. Additionally, we highlight opportunities to use system-level design and analysis tools for low-power design and the obstacles that prevented using such tools in this design.
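For a rough sense of why a sub-50 mW target matters for a device powered from serial-port lines, the sketch below does the kind of back-of-the-envelope power budget one might make for energy scavenged from RS232 control lines. The voltage, current, and line-count figures are illustrative assumptions, not measurements from the paper; only the 50 mW target comes from the text above.

```c
/* Back-of-the-envelope power budget for an RS232-powered device.
 * The line voltage, per-line current, and number of lines are illustrative
 * assumptions; only the 50 mW target comes from the paper. */
#include <stdio.h>

int main(void)
{
    const double line_voltage = 9.0;    /* volts available after drops (assumed)   */
    const double line_current = 0.008;  /* amps drawn per control line (assumed)   */
    const int    n_lines      = 2;      /* e.g. two handshake lines held high (assumed) */

    double available_mw = line_voltage * line_current * n_lines * 1000.0;
    double target_mw    = 50.0;         /* design target stated in the paper       */

    printf("available: %.0f mW, target: %.0f mW, margin: %.0f mW\n",
           available_mw, target_mw, available_mw - target_mw);
    return 0;
}
```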

7 citations


Journal Article
TL;DR: This study surveys proposed solutions and concepts for estimating the timing constraints and the behavioural degradation of a controlled system when it suffers a timing failure, and shows that, except for a few cases, the current literature does not place as much emphasis on them as one might expect.
Abstract: A real-time computer system must interact with its environment in terms that are dictated by the occurrence of a significant event or simply by the passage of time. The computational activities triggered by these stimuli are expected to provide the correct results at the right time, since a real-time controller must meet the timing constraints that are dictated by its particular environment. If a computer controller fails to meet these time constraints, the controlled system may suffer a behavioural degradation from which, in some cases, a catastrophe can emerge. Thus, the correct estimation and handling of the timing constraints of a controlled system are central issues for the specification, development and testing of a real-time computer controller, a task that requires the scientific contributions of both system engineers and real-time computer designers. In this paper we survey proposed solutions and concepts for estimating the timing constraints and the behavioural degradation of a controlled system when it suffers a timing failure. Although it is universally agreed that these are central issues for the development of predictable real-time controllers, this study shows that, except for a few cases, the current literature does not place as much emphasis on them as one might expect. Moreover, a systematic method for evaluating the timing constraints of a controlled system does not yet seem to exist.

5 citations


Journal Article
TL;DR: A new method is presented for computing performance bounds of pipelined implementations given an iterative behavior, a set of resource constraints, and a target initiation interval; it derives a lower bound on the iteration time achievable by any pipelined implementation.
Abstract: The performance of pipelined datapath implementations is measured essentially by three parameters: the clock cycle length, the initiation interval between successive iterations (the inverse of the throughput), and the iteration time (turn-around time). In this paper we present a new method for computing performance bounds of pipelined implementations. The method has low complexity and handles behavioral specifications containing loop statements with inter-iteration data dependencies and timing constraints.
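The flavor of such bounds can be illustrated with the standard resource-constrained and recurrence-constrained lower bounds on the initiation interval, as used in modulo-scheduling formulations. The sketch below is a generic illustration with invented operation counts, resource counts, and dependence cycles; it is not the specific bounding method derived in the paper.

```c
/* Generic lower bounds on the initiation interval (II) of a pipelined loop:
 *   - resource bound:   ceil(#operations needing resource r / #copies of r)
 *   - recurrence bound: ceil(total delay around a dependence cycle / its iteration distance)
 * Standard modulo-scheduling style bounds with invented numbers; not the
 * specific bounding method of the paper. */
#include <stdio.h>

static int ceil_div(int a, int b) { return (a + b - 1) / b; }

int main(void)
{
    /* Hypothetical resource usage: {operations using the resource, copies available} */
    const int res[][2] = { {7, 2},     /* e.g. multiplications on 2 multipliers */
                           {9, 3} };   /* e.g. additions on 3 adders            */
    /* Hypothetical dependence cycles: {delay around the cycle, iteration distance} */
    const int rec[][2] = { {6, 2},
                           {5, 1} };

    int ii = 1;
    for (unsigned i = 0; i < sizeof res / sizeof res[0]; i++)
        if (ceil_div(res[i][0], res[i][1]) > ii)
            ii = ceil_div(res[i][0], res[i][1]);
    for (unsigned i = 0; i < sizeof rec / sizeof rec[0]; i++)
        if (ceil_div(rec[i][0], rec[i][1]) > ii)
            ii = ceil_div(rec[i][0], rec[i][1]);

    printf("lower bound on initiation interval: %d cycles\n", ii);
    return 0;
}
```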

3 citations


Journal Article
TL;DR: Design decisions from TigerSwitch are used to illustrate the difficulties faced in the partitioning and allocation of a system specification into an architecture: the critical performance paths may not be obvious from the initial specification, and it is often difficult to obtain the performance data required to allocate functions in the architecture.
Abstract: We designed TigerSwitch, a digital private branch exchange (PBX) implemented on an IBM PC-compatible platform, as an experiment in embedded system design. A telephone switching system is an interesting example of embedded system co-design because it combines a rich functionality with deadlines ranging from seconds to tenths of a millisecond. This paper uses design decisions from TigerSwitch to illustrate the difficulties faced in the partitioning and allocation of a system specification into an architecture: the critical performance paths may not be obvious from the initial specification, and it is often difficult to obtain the performance data required to allocate functions in the architecture.

2 citations


Journal Article
TL;DR: In this paper, the authors propose an approach based on Amphibious logic, which combines a high-level design LSI-CAD system with a functionally reconfigurable Field Programmable Gate Array (FPGA).
Abstract: Entropy coding, as used for lossless compression in image coding, is dominated by serial bit processing on variable-wordlength data. Digital signal processors (DSPs), in which pipeline processors play a central role, fail to yield adequate performance for this kind of application. This paper proposes a new approach that fulfills the two requirements for bit processing, the dominant task in entropy coding: high performance and functional flexibility. This approach is based on Amphibious logic, which combines a high-level design LSI-CAD system with a functionally reconfigurable Field Programmable Gate Array (FPGA). Functions are programmed via a behavioral description program in the high-level design LSI-CAD system. In order to show the effectiveness of the newly proposed Amphibious logic approach, we designed JPEG-type Huffman and arithmetic CODECs for encoding still images. A comparison with the processing speeds of DSPs and general-purpose microprocessors shows that Amphibious logic can indeed attain the dual goals of high performance and programmability. The proposed approach can be used to augment a conventional DSP by allocating the functions of numerical processing and bit-stream processing, as used in image coding algorithms, between DSPs and FPGAs.
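To illustrate why entropy coding is dominated by serial bit manipulation on variable-wordlength data, here is a minimal C sketch that packs variable-length codewords into a byte stream bit by bit. The prefix code and the message are made up; this is not the JPEG Huffman table or the CODEC design evaluated in the paper.

```c
/* Minimal bit-packer for variable-length codewords, illustrating the serial
 * bit manipulation that dominates entropy coding.  The prefix code is
 * invented; this is not the JPEG table or the CODEC from the paper. */
#include <stdio.h>
#include <stdint.h>

static uint8_t buf[64];
static unsigned bitpos = 0;                  /* next free bit position in buf */

/* Append the `len` low-order bits of `code`, most significant bit first. */
static void put_bits(uint32_t code, unsigned len)
{
    for (unsigned i = 0; i < len; i++) {
        unsigned bit = (code >> (len - 1 - i)) & 1u;
        buf[bitpos / 8] |= (uint8_t)(bit << (7 - bitpos % 8));
        bitpos++;
    }
}

int main(void)
{
    /* Hypothetical prefix code: symbol -> (codeword, length in bits) */
    const struct { uint32_t code; unsigned len; } table[4] = {
        {0x0, 1},   /* symbol 0 -> "0"   */
        {0x2, 2},   /* symbol 1 -> "10"  */
        {0x6, 3},   /* symbol 2 -> "110" */
        {0x7, 3},   /* symbol 3 -> "111" */
    };
    const int msg[] = {0, 1, 3, 2, 0, 0, 1};

    for (unsigned i = 0; i < sizeof msg / sizeof msg[0]; i++)
        put_bits(table[msg[i]].code, table[msg[i]].len);

    printf("packed %u bits into %u bytes\n", bitpos, (bitpos + 7) / 8);
    return 0;
}
```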

Journal Article
TL;DR: The joint design process leading to an ASIC chipset accelerating the execution of rule-based systems is described, and the interaction between the algorithm used for the software implementation and the parallel algorithm suited for hardware implementation is examined.
Abstract: The move towards higher levels of abstraction in hardware design begins to blur the difference between hardware and software design. Nevertheless, the attractiveness of a software implementation is still defined by the much smaller abstraction gap between specification and implementation. Hardware design, on the other hand, creates the possibility of exploiting parallelism at a very fine level of granularity and thereby achieving tremendous performance gains with a moderate expenditure of hardware. This paper describes the joint design process leading to an ASIC chipset accelerating the execution of rule-based systems. The interaction between the algorithm used for the software implementation and the parallel algorithm suited for hardware implementation is examined. An area-efficient implementation of the programmable hardware was enabled by an application-specific compiler backend. The heuristics applied by the optimising “code” generator are discussed quantitatively.
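As background on the workload being accelerated: a forward-chaining rule-based system repeatedly matches rule conditions against working memory, selects one satisfied rule, and fires it. The tiny C sketch below shows this recognize-act cycle in its simplest form, with made-up facts and rules; it reflects neither the specific software algorithm nor the parallel hardware algorithm discussed in the paper.

```c
/* Toy recognize-act cycle of a forward-chaining rule-based system, to show
 * the kind of workload such a chipset accelerates.  The facts and rules are
 * invented; this is neither the software algorithm nor the parallel hardware
 * algorithm of the paper. */
#include <stdio.h>

#define N_FACTS 8

static int fact[N_FACTS];                      /* working memory: is fact i present? */

struct rule { int cond[2]; int adds; };        /* if cond[0] and cond[1] then assert `adds` */

static const struct rule rules[] = {
    { {0, 1}, 2 },
    { {2, 3}, 4 },
    { {4, 1}, 5 },
};
#define N_RULES (int)(sizeof rules / sizeof rules[0])

int main(void)
{
    fact[0] = fact[1] = fact[3] = 1;           /* initial working memory */

    for (;;) {                                 /* recognize-act cycle */
        int fired = 0;
        for (int r = 0; r < N_RULES; r++) {    /* match phase: test every rule */
            const struct rule *p = &rules[r];
            if (fact[p->cond[0]] && fact[p->cond[1]] && !fact[p->adds]) {
                fact[p->adds] = 1;             /* act phase: assert the new fact */
                printf("rule %d fired, asserting fact %d\n", r, p->adds);
                fired = 1;
                break;                         /* trivial conflict resolution: first match wins */
            }
        }
        if (!fired)
            break;                             /* no rule applicable: stop */
    }
    return 0;
}
```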