
Electronic System-Level Synthesis Methodologies




UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl)
UvA-DARE (Digital Academic Repository)
Electronic system-level synthesis methodologies
Gerstlauer, A.; Haubelt, C.; Pimentel, A.D.; Stefanov, T.P.; Gajski, D.D.; Teich, J.
DOI: 10.1109/TCAD.2009.2026356
Publication date: 2009
Document Version: Final published version
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Citation for published version (APA):
Gerstlauer, A., Haubelt, C., Pimentel, A. D., Stefanov, T. P., Gajski, D. D., & Teich, J. (2009). Electronic system-level synthesis methodologies. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 28(10), 1517-1530. https://doi.org/10.1109/TCAD.2009.2026356
General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s)
and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open
content license (like Creative Commons).
Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please
let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material
inaccessible and/or remove it from the website. Please Ask the Library: https://uba.uva.nl/en/contact, or a letter
to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You
will be contacted as soon as possible.
Download date: 09 Aug 2022

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 28, NO. 10, OCTOBER 2009 1517
Electronic System-Level Synthesis Methodologies
Andreas Gerstlauer, Member, IEEE, Christian Haubelt, Member, IEEE, Andy D. Pimentel, Senior Member, IEEE,
Todor P. Stefanov, Member, IEEE, Daniel D. Gajski, Fellow, IEEE, and Jürgen Teich, Senior Member, IEEE
Abstract—With ever-increasing system complexities, all major
semiconductor roadmaps have identified the need for moving to
higher levels of abstraction in order to increase productivity in
electronic system design. Most recently, many approaches and
tools that claim to realize and support a design process at the
so-called electronic system level (ESL) have emerged. However,
faced with the vast complexity challenges, in most cases at best,
only partial solutions are available. In this paper, we develop and
propose a novel classification for ESL synthesis tools, and we will
present six different academic approaches in this context. Based
on these observations, we identify common principles and
needs that lead toward, and are ultimately required for,
a true ESL synthesis solution, covering the whole design process
from specification to implementation for complete systems across
hardware and software boundaries.
Index Terms—Electronic system level (ESL), methodology,
synthesis.
I. INTRODUCTION
In order to increase design productivity, raising the level
of abstraction to the electronic system level (ESL) seems
mandatory. Surely, this must be accompanied by new design
automation tools [1]. Many approaches exist today that claim
to provide ESL solutions. In [2], Densmore et al. define an
ESL classification framework that focuses on individual design
tasks by reviewing more than 90 different point tools. Many
of these tools are devoted to modeling purposes (functional
or platform) only. Other tools provide synthesis functionality
by either software code generation or C-to-RTL high-level
synthesis. However, true ESL synthesis tools show the ability to
combine design tasks under a complete flow that can generate
systems across hardware and software boundaries from an
algorithmic specification. In this paper, we therefore aim to
provide an extended classification focusing on such complete
ESL flows on top of individual point solutions.
Manuscript received February 24, 2009; revised June 6, 2009. Current
version published September 18, 2009. This paper was recommended by
Associate Editor P. Eles.
A. Gerstlauer is with the Department of Electrical and Computer En-
gineering, University of Texas at Austin, Austin, TX 78712 USA (e-mail:
gerstl@ece.utexas.edu).
C. Haubelt and J. Teich are with the Department of Computer Sci-
ence, University of Erlangen–Nuremberg, 91054 Erlangen, Germany (e-mail:
haubelt@cs.fau.de; teich@cs.fau.de).
A. D. Pimentel is with the Informatics Institute, University of Amsterdam,
1098 XG Amsterdam, The Netherlands (e-mail: a.d.pimentel@uva.nl).
T. P. Stefanov is with the Leiden Institute of Advanced Computer
Science, Leiden University, 2300 RA Leiden, The Netherlands (e-mail:
stefanov@liacs.nl).
D. D. Gajski is with the Center for Embedded Computer Systems, University
of California, Irvine, CA 92697 USA (e-mail: gajski@cecs.uci.edu).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCAD.2009.2026356
Typically, ESL synthesis tools are domain specific and rely
on powerful computational models [3] for description of de-
sired functional and nonfunctional requirements at the input
of the synthesis flow. Such well-defined rich input models
are a prerequisite for later analysis and optimization. Typical
computational models in digital system design are process
networks, dataflow models, or state machines. On the other
hand, implementation platforms for such systems are often
heterogeneous or homogeneous multiprocessor system-on-chip
(MPSoC) solutions [4]. The complexity introduced by both
input computational model and target implementation plat-
form results in a complex synthesis step, including hardware/
software partitioning, embedded software generation, and
hardware accelerator synthesis. Aside from this, at ESL, the
number of design decisions, particularly in communication
synthesis, is overwhelming compared to lower abstraction levels.
Moreover, due to the increasing number of processors
in MPSoCs, the impact of the quality of computation and
communication synthesis is ever increasing.
In this paper, we aim to provide an analysis and comparative
overview of the state of the art, current directions, and
future needs in ESL synthesis methodologies and tools. After
identifying common principles based on our observations, we
develop and propose a general framework for classification
and, eventually, comparison of different tools in Section II. In
Section III, we then present, in detail, a representative selection
of three ESL approaches developed in our groups. To provide
a more complete overview, Section IV briefly discusses three
related academic approaches. After introducing all six tools,
we follow with a comparison and discussion of future research
directions based on our classification criteria in Section V.
Finally, this paper concludes with a summary in Section VI.
II. ELECTRONIC SYSTEM DESIGN
In this section, we will identify common principles in
existing ESL synthesis methodologies and develop a novel
classification for such approaches. Later, this will enable a
comparison of different methodologies. Furthermore, based on
such observations, synergies between different approaches can
be explored, and corresponding interfaces between different
tools can be defined and established in the future.
A. Design Flow
Before deriving a model for ESL synthesis, we start by
defining the system design process in general. As nearly all
ESL synthesis methodologies follow a top–down approach,
a definition of the design process should support this view.
0278-0070/$26.00 © 2009 IEEE
Authorized licensed use limited to: UVA Universiteitsbibliotheek SZ. Downloaded on February 8, 2010 at 10:09 from IEEE Xplore. Restrictions apply.

Fig. 1. Electronic system design flow.
Furthermore, it should show the concurrent design of hardware
and software and required synthesis steps. A visualization of
this is given by the double roof model [5] shown in Fig. 1.
The double roof model defines the ideal top–down design
process for embedded hardware/software systems. One side of
the roof corresponds to the software design process, whereas
the other side corresponds to the hardware design process.
Each side is organized in different abstraction levels, e.g., task
and instruction levels or component and logic levels for the
software or hardware design processes, respectively. There is
one common level of abstraction, the ESL, at which we cannot
distinguish between hardware and software. At each level, in
a synthesis step (vertical arrow), a specification is transformed
into an implementation. Horizontal arrows indicate the step of
passing models of individual elements in the implementation
directly to the next lower level of abstraction as specifications
at its input.
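The two-sided level structure just described can be captured in a tiny data model. The level names follow the text; the representation itself (the dictionary layout and function name) is purely illustrative, not part of the double roof model:

```python
# Illustrative sketch of the double roof: each side descends through its
# own abstraction levels; only the topmost level (the ESL) is shared.
DOUBLE_ROOF = {
    "software": ["system (ESL)", "task", "instruction"],
    "hardware": ["system (ESL)", "component", "logic"],
}

def shared_levels():
    """Levels at which hardware and software cannot be distinguished."""
    return [lvl for lvl in DOUBLE_ROOF["software"]
            if lvl in DOUBLE_ROOF["hardware"]]
```

In this picture, each vertical synthesis step maps a specification at one level onto an implementation, whose components are handed horizontally to the next lower level as specifications.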
The double roof model can be seen as extending the
Y-chart [6] by an explicit separation of software and hardware
design. Furthermore, for simplicity, we do not include a third
layout roof representing a physical view of the design. Note,
however, that layout information, while traditionally being of
minor importance, is increasingly employed even at the system
level, e.g., through early floorplanning, to account for spatial
effects such as activity hot spots [7], wiring capacitances, or
distance-dependent latencies [8].
The design process represented by the double roof model
starts with an ESL specification given by a behavioral model
that is often some kind of network of processes communicating
via channels. In addition, a set of mapping constraints and
implementation constraints (maximum area, minimal through-
put, etc.) is given. The platform model at ESL is typically a
structural model consisting of architectural components such
as processors, busses, memories, and hardware accelerators.
The task of ESL synthesis is then the process of selecting an
appropriate platform architecture, determining a mapping of
the behavioral model onto that architecture, and generating a
corresponding implementation of the behavior running on the
platform. The result is a refined model containing all design
decisions and quality metrics, such as throughput, latency, or
area. If selected, components of this refined model are then used
as input to the design process at lower abstraction levels, where
each hardware or software processor in the system architecture
is further implemented separately.
Synthesis at lower levels is a similar process in which a
behavioral or functional specification is refined down into a
structural implementation. However, depending on the abstrac-
tion level, the granularity of objects handled during synthesis
differs, and some tasks might be more important than others.
For instance, at the task level on the software side, commu-
nicating processes/threads bound to the same processor must
be translated into the instruction-set architecture (ISA) of the
processor, targeted toward and running on top of an off-the-
shelf real-time operating system (RTOS) or a custom-generated
runtime environment. This software task synthesis step is typi-
cally performed using a (cross-)compiler and linker tool chain
for the selected processor and RTOS. At the instruction level,
the instruction set of programmable processors is then realized
in hardware by implementing the underlying microarchitecture.
This step results in a structural model of the processor’s data-
path organization, usually specified as a register-transfer level
(RTL) description.
On the other hand, at the component level on the hard-
ware side, processes selected to be implemented as hardware
accelerators are synthesized down to an RTL description in
the form of controller state machines that drive a datapath
consisting of functional units, register files, memories, and
interconnect. This refinement step is commonly referred to as
behavioral or high-level synthesis. Today, there are several tools
available to perform such a high-level synthesis automatically
[9], [10]. Finally, at the logic level, the granularity of the objects
considered during logic synthesis then corresponds to Boolean
formulas implemented by logic gates and flip-flops.
An important observation that can be made from Fig. 1 is
that, at the RT level, hardware and software worlds unite again,
both feeding into (traditional) logic design processes down
to the final manufacturing output. In addition, we note that
a top–down ESL design process relies on the availability of
design flows at the component or task (and eventually logic and
instruction) levels to feed into on the hardware and software
side, respectively. Lower level flows can be supplied either in
the form of corresponding synthesis tools or by providing pre-
designed intellectual property (IP) components to be plugged
into the system architecture.
B. Synthesis Process
Before identifying the main tasks in ESL synthesis, we
first develop a general synthesis framework applicable at all
levels. As discussed in the previous section, during synthesis, a
specification is generally transformed into an implementation.
This abstract view can be further refined into an X-chart as
shown in Fig. 2. With this refinement, we can start to define
terms that are essential in the context of synthesis.
A specification is composed of a behavioral model and
constraints. The behavioral model represents the intended func-
tionality of the system. Its expressibility and analyzability can
be declared by its underlying model of computation (MoC) [3].
The behavioral model is often written in some programming
language (e.g., C, C++, or JAVA), system-level description

Fig. 2. Synthesis process.
language (SLDL) (e.g., SpecC or SystemC), or a hardware
description language (HDL) (such as Verilog or VHDL).
The constraints often include an implicit or explicit plat-
form model that describes an architecture template, e.g.,
available resources, their capabilities (or services), and their
interconnections. Analogous to the classification of behavioral
models into MoCs, specific ways of describing architecture
templates can be generalized into models of architecture
(MoAs) [11]. Similar to the concept of MoCs, an MoA de-
scribes the characteristics underlying a class of platform models
in order to evaluate the richness of supported target archi-
tectures at the input of a synthesis tool. ESL architecture
templates can be coarsely subdivided based on their processing,
memory, and communication hierarchy. On the processing side,
examples include single-processor systems, hardware/software
processor/coprocessor systems, and homogeneous, symmetric
or heterogeneous, asymmetric multiprocessor/multicore systems (MPSoCs) [4].¹ Memorywise, we can distinguish shared
versus distributed memory architectures. Finally, communi-
cation architectures can be loosely grouped into shared bus-
based or network-on-chip (NoC) approaches. Aside from the
architecture template, constraints typically contain mapping re-
strictions and additional constraints on nonfunctional properties
like maximum response time or minimal throughput.
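The specification side of the X-chart vocabulary can be summarized in a hedged sketch as plain data types; the field names are our own shorthand for the concepts above (MoC instance, MoA template, mapping restrictions, nonfunctional bounds), not any tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Constraints:
    """Platform template (MoA) plus mapping and nonfunctional restrictions."""
    platform: dict                               # resources and interconnections
    mapping: dict = field(default_factory=dict)  # mapping restrictions
    bounds: dict = field(default_factory=dict)   # e.g. {"max_response_time": ...}

@dataclass
class Specification:
    """A behavioral model (some MoC instance) under a set of constraints."""
    behavior: dict                               # e.g. a process network
    constraints: Constraints
```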
The synthesis step then transforms a specification into an
implementation. An implementation consists of a structural
model and quality numbers. The structural model is a re-
fined model from the behavioral model under the constraints
given in the specification. In addition to the implementation-
independent information contained in the behavioral model,
the structural model holds information about the realization of
design decisions from the previous synthesis step, i.e., mapping
of the behavioral model onto an architecture template. As
such, a structural model is a representation of the resulting
architecture as a composition of components that are internally
described in the form of behavioral models for input to the next
synthesis step. On top of a well-defined combination of MoCs
¹ While details of supported architecture features and restrictions, as defined, e.g., by tool database formats, can differ significantly, we limit discussions and comparisons to such high-level MoA classifications in this paper.
for component-internal behavior and functional semantics, we
can hence introduce the term model of structure (MoS) for
separate classification of such implementation representations
and their architectural or structural semantics. Again, a MoS
allows characterization of the underlying abstracted semantics
of a class of structural models independent of their syntax.
Hence, MoSs can be used to compare expressibility and ana-
lyzability of specific implementation representations as realized
by different tools. For example, at many levels, a netlist con-
cept is used with semantics limited to describing component
connectivity. At the system level, pin-accurate models combine
a netlist with bus-functional component models. Furthermore,
transaction-level modeling (TLM) concepts and techniques are
employed to abstract away from pins and wires.² Similar to
behavioral models, structural models are often represented in
a programming language, SLDL, or HDL.
Quality numbers are estimated values for different imple-
mentation properties, e.g., throughput, latency, response time,
area, and power consumption. In order to get such estimates,
synthesis tools often use so-called performance models instead
of implementing each design option.³ Performance models
represent the contributions of individual elements to overall
design quality in a given implementation. Basic numbers are
composed based on specific semantics, e.g., in terms of an-
notation granularity or worst/average/best case assumptions,
such that the overall quality estimates can be obtained, e.g.,
through simulation or static analysis. To distinguish and classify
representations of quality numbers across different instances
and implementations of performance models, we introduce
the concept of an underlying model of performance (MoP).
A MoP thereby refers to the overall accuracy and granularity
in time and space. Generalizing from the detailed definitions
of specific performance models, such as timing, power, or
cost/area models, a MoP can be used to judge the accu-
racy of the quality numbers and the computational effort to
get them. Examples of simulation-based MoPs for different
classes of timing granularity are cycle-accurate performance
models (CAPMs), instruction-set-accurate performance models
(ISAPMs), or task-accurate performance models (TAPMs) [12].
Quality numbers are often used as objective values during
design-space exploration (DSE) when identifying the set of
optimal or near-optimal implementations.
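As a toy instance of a task-accurate performance model (TAPM), per-task latency annotations can be accumulated into an overall quality number; the task names and cycle counts below are made up for illustration:

```python
# Hypothetical per-task latency annotations (in cycles): a minimal
# task-accurate performance model in which a linear chain's estimated
# latency is simply the sum of its tasks' annotations.
ANNOTATIONS = {"src": 120, "filter": 540, "sink": 90}

def estimate_latency(chain, annotations):
    """Estimated latency of tasks executed back-to-back on one resource."""
    return sum(annotations[task] for task in chain)
```

Under these annotations, `estimate_latency(["src", "filter", "sink"], ANNOTATIONS)` yields 750 cycles; a finer-grained MoP (e.g., a CAPM) would trade more computational effort for more accurate numbers.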
Given a specification, the task of synthesis then generates
an implementation from the specification by decision making
and refinement (Fig. 2). At any level, synthesis is a process of
determining the order or mapping of elements in the behavioral
model in space and time, i.e., the where and when of their
realization. Decision making is hence the task of computing an
allocation of resources available in the platform model, a spatial
binding of objects in the behavioral model onto these allocated
resources, and a temporal scheduling to resolve resource con-
tention of objects in the behavioral model bound to the same
resource.
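The three decision-making tasks can be sketched over toy inputs; the greedy least-loaded heuristic is ours for illustration only, not any particular tool's algorithm:

```python
def decide(processes, platform):
    """Toy decision making: allocate every platform resource, bind each
    process to the least-loaded allocated resource, and schedule bound
    processes on each resource in binding order."""
    allocation = list(platform)                        # 1) allocation
    load = {pe: 0 for pe in allocation}
    binding = {}
    for p in processes:                                # 2) spatial binding
        pe = min(allocation, key=lambda r: load[r])
        binding[p] = pe
        load[pe] += 1
    schedule = {pe: [p for p in processes if binding[p] == pe]
                for pe in allocation}                  # 3) temporal scheduling
    return allocation, binding, schedule
```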
² Again, many definitions of specific TLM variants exist, but for simplicity, we limit discussions in this paper to a general classification.
³ We use the term "performance" in the general sense to refer to any measured property.

Refinement is the task of incorporating the made decisions
into the behavioral model resulting in a structural model, as
discussed earlier. Moreover, with these decisions, a quality
assessment of the resulting implementation can be done. The
result of this assessment is the quality numbers.
Finally, in order to optimize an implementation, DSE should
be performed. As DSE is a multiobjective optimization prob-
lem, in general, we will identify a set of optimal implemen-
tations instead of a single optimal implementation. For this
purpose, the quality numbers provided by the MoP are used. In
this paper, we define DSE as the multiobjective optimization
problem of the synthesis task. In other words, decision making
is the task of calculating a single feasible allocation, binding,
and scheduling instance, whereas DSE is the process of finding
optimal design points.
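The Pareto-dominance filtering implied by multiobjective DSE can be written in a few lines; all objectives are assumed to be minimized, and the point tuples stand in for quality-number vectors:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective, better in one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep exactly the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the points (1, 5), (2, 2), (4, 1), and (3, 3), the front keeps the first three and drops (3, 3), which (2, 2) dominates, illustrating why DSE yields a set of optimal implementations rather than a single one.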
In summary, the X-chart shown in Fig. 2 combines two
aspects: synthesis (left output) and quality assessment (right
output). For both aspects, corresponding so-called Y-charts
exist in the literature: The synthesis aspect was presented
and later refined into a first system design methodology by
Gajski et al. in [6] and [13], respectively, while the quality
assessment aspect was proposed by Kienhuis et al. in [14].
With the earlier discussion, first classification criteria for
synthesis tools can be derived.
1) Expressibility and analyzability of the specification.
a) The MoC of the behavioral model. As, in general,
expressibility can be traded against analyzability, the
MoC has a huge influence on the automation capabil-
ities of a synthesis tool.
b) The MoA of the platform model given in the con-
straints. The MoA, as used for refinement, determines
the classes of target implementations supported by a
particular tool.
2) Representations of the implementation.
a) The MoS of the structural model. As structural models
are often used for validation and virtual prototyping,
the MoS can have a large influence on issues such as
simulation performance, observability, and accuracy.
b) The MoP of the performance model given through the
quality numbers. Performance models are employed
for quality assessment, and thus, the MoP has large
impact on the synthesis quality and estimation accu-
racy.
As DSE can be performed manually or automatically, an ad-
ditional classification criterion to be considered is given in the
following.
3) Is DSE automated, i.e., does a methodology integrate
some multiobjective optimization strategy for decision
making?
C. ESL Synthesis
In general, both decision making and refinement can be
automated. However, ESL synthesis is a more complex task
compared to synthesis at lower levels of abstraction. At any
level, the tasks to be performed during decision making and
supported during refinement are computing and realizing an
allocation, binding, and scheduling. At ESL, however, these
three steps have to be performed for a design space which is
at its largest and are required for both computations and com-
munications in the behavioral model. Furthermore, compared
to lower levels where refinement is often reduced to producing
a simple netlist, generating an implementation of system-level
computation and communication decisions is a nontrivial task
that requires significant coding effort.
In computation synthesis, processing elements (PEs), e.g.,
processors, hardware accelerators, memories, and IP cores,
have to be allocated from the platform model. The resulting
allocation has to guarantee that at least each process from the
behavioral model can be bound to an allocated PE. A further
task in computation synthesis is process binding where each
process has to be bound to an allocated PE. A third task in
computation synthesis is process scheduling, i.e., a partial/total
order is imposed on the processes using a static or dynamic
scheduling strategy.
In communication synthesis, communication elements (CEs),
including busses, point-to-point connections, NoCs, bus
bridges, and transducers, have to be allocated. Here, the result-
ing topology must guarantee that each application communi-
cation channel can be bound to an ordered set of architectural
communication media and that channel accesses (transactions)
can be routed on the CEs. A second task is application chan-
nel binding to route application-level communication chan-
nels over the allocated architectural network topology. Finally,
transactions must be scheduled on the communication media
using static time-division access or dynamic, centralized, or
distributed arbitration. As is the case in process scheduling,
transaction scheduling can result in static, dynamic, or quasi-
static schedules.
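Binding an application channel to an ordered set of communication media, as described above, amounts to routing it over the allocated topology. A minimal breadth-first sketch over a hypothetical platform (two busses joined by a bridge; all names are invented):

```python
from collections import deque

def route_channel(topology, src, dst):
    """Shortest route (ordered list of components) from src to dst over
    the allocated communication topology, or None if unroutable."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical allocated topology: two busses connected by a bridge.
TOPOLOGY = {
    "cpu": ["bus0"], "bus0": ["cpu", "bridge"],
    "bridge": ["bus0", "bus1"], "bus1": ["bridge", "hw_acc"],
    "hw_acc": ["bus1"],
}
```

Here `route_channel(TOPOLOGY, "cpu", "hw_acc")` returns the ordered media set ["cpu", "bus0", "bridge", "bus1", "hw_acc"]; a failed route signals an infeasible allocation.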
It should be clearly stated that computation synthesis and
communication synthesis are, by no means, independent tasks.
Hence, an oversimplified synthesis method might result in
infeasible or suboptimal solutions only. Many approaches are
heavily biased toward either computation synthesis (e.g., [15]
and [16]) or communication synthesis (e.g., [17]–[19]), assum-
ing the counterpart to be done by a different tool. In order to
ensure feasibility and optimality, however, an ESL synthesis
methodology should support computation and communication
synthesis with all their respective subtasks.
As ESL synthesis with its subtasks can be automated in de-
cision making and/or refinement, we now can define additional
classification criteria for ESL synthesis tools.
4) Is decision making automated, and if yes, which tasks are
automated?
a) Are computation design decisions computed
automatically?
b) Are communication design decisions computed
automatically?
5) Is refinement automated, and if yes, which tasks are
performed automatically?
a) Is computation refinement automatic?
b) Is communication refinement automatic?
With all the criteria mentioned in this paper, we can classify
and compare ESL synthesis tools. In the following sections,

Citations
More filters
Journal ArticleDOI
13 May 2012
TL;DR: This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, highlighting its major research directions and achievements until today, and predicting in which direction research in codesign might evolve in the decades to come.
Abstract: Hardware/software codesign investigates the concurrent design of hardware and software components of complex electronic systems. It tries to exploit the synergy of hardware and software with the goal of optimizing and/or satisfying design constraints such as cost, performance, and power of the final product. At the same time, it aims to considerably reduce the time-to-market frame. This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, by highlighting its major research directions and achievements until today, and finally, by predicting in which direction research in codesign might evolve in the decades to come.

275 citations

Journal ArticleDOI
TL;DR: This paper presents an evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results.
Abstract: High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.

162 citations

Proceedings ArticleDOI
09 Oct 2011
TL;DR: This paper analytically proves that in more than 80% of the cases, the throughput resulting from the approach is equal to the maximum achievable throughput of an application for a set of 19 real streaming applications.
Abstract: Most of the hard-real-time scheduling theory for multiprocessor systems assumes independent periodic or sporadic tasks. Such a simple task model is not directly applicable to modern embedded streaming applications. This is because a modern streaming application is typically modeled as a directed graph where nodes represent actors (i.e. tasks) and edges represent data-dependencies. The actors in such graphs have data-dependency constraints and do not necessarily conform to the periodic or sporadic task models. Therefore, in this paper we investigate the applicability of hard-real-time scheduling theory for periodic tasks to streaming applications modeled as acyclic Cyclo-Static Dataflow (CSDF) graphs. In such graphs, the actors are data-dependent, however, we analytically prove that they (i.e. the actors) can be scheduled as implicit-deadline periodic tasks. As a result, a variety of hard-real-time scheduling algorithms for periodic tasks can be applied to schedule such applications with a certain guaranteed throughput. We compare the throughput resulting from such scheduling approach to the maximum achievable throughput of an application for a set of 19 real streaming applications. We find that in more than 80% of the cases, the throughput resulting from our approach is equal to the maximum achievable throughput.
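The paper's core idea, treating data-dependent CSDF actors as implicit-deadline periodic tasks, can be sketched as follows. The period derivation and the plain utilization check below are simplified illustrations under assumed numbers, not the paper's exact analysis:

```python
def periodic_task_set(actors, iteration_period):
    """Assign each actor an implicit-deadline periodic task: an actor that
    fires q times per graph iteration of length H gets period T = H / q."""
    return {name: (wcet, iteration_period / q) for name, (wcet, q) in actors.items()}

def utilization(tasks):
    """Total utilization U = sum of WCET/period over all tasks."""
    return sum(wcet / period for wcet, period in tasks.values())

def feasible(tasks, processors):
    """Necessary condition for scheduling implicit-deadline periodic tasks
    on `processors` identical cores: U must not exceed the core count."""
    return utilization(tasks) <= processors

# Hypothetical actors: name -> (WCET, firings per graph iteration).
actors = {"src": (2, 1), "fir": (3, 4), "snk": (1, 2)}
tasks = periodic_task_set(actors, iteration_period=20)
print(utilization(tasks))        # 2/20 + 3/5 + 1/10 = 0.8 (up to float rounding)
print(feasible(tasks, processors=1))
```

Once actors are cast as periodic tasks this way, standard hard-real-time schedulability tests and algorithms apply directly.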

63 citations

Journal ArticleDOI
TL;DR: COSMOS as mentioned in this paper is an automatic methodology for the design-space exploration (DSE) of complex accelerators, that coordinates both HLS and memory optimization tools in a compositional way.
Abstract: Hardware accelerators are key to the efficiency and performance of system-on-chip (SoC) architectures. With high-level synthesis (HLS), designers can easily obtain several performance-cost trade-off implementations for each component of a complex hardware accelerator. However, navigating this design space in search of the Pareto-optimal implementations at the system level is a hard optimization task. We present COSMOS, an automatic methodology for the design-space exploration (DSE) of complex accelerators, that coordinates both HLS and memory optimization tools in a compositional way. First, thanks to the co-design of datapath and memory, COSMOS produces a large set of Pareto-optimal implementations for each component of the accelerator. Then, COSMOS leverages compositional design techniques to quickly converge to the desired trade-off point between cost and performance at the system level. When applied to the system-level design (SLD) of an accelerator for wide-area motion imagery (WAMI), COSMOS explores the design space as completely as an exhaustive search, but it reduces the number of invocations to the HLS tool by up to 146×.

49 citations

References
Proceedings Article
01 Jan 1974
TL;DR: A simple language for parallel programming is described and its mathematical properties are studied to make a case for more formal languages for systems programming and the design of operating systems.
Abstract: In this paper, we describe a simple language for parallel programming. Its semantics is studied thoroughly. The desirable properties of this language and its deficiencies are exhibited by this theoretical study. Basic results on parallel program schemata are given. We hope in this way to make a case for a more formal (i.e., mathematical) approach to the design of languages for systems programming and the design of operating systems. There is a wide disagreement among systems designers as to what are the best primitives for writing systems programs. In this paper, we describe a simple language for parallel programming and study its mathematical properties.

1. A SIMPLE LANGUAGE FOR PARALLEL PROGRAMMING

The features of our mini-language are exhibited on the sample program S on Figure 1. The conventions are close to Algol and we only insist upon the new features. The program S consists of a set of declarations and a body. Variables of type integer channel are declared at line (1), and for any simple type σ (boolean, real, etc.) we could have declared a σ channel. Then processes f, g and h are declared, much like procedures. Aside from usual parameters (passed by value in this example, like INIT at line (3)), we can declare in the heading of the process how it is linked to other processes: at line (2), f is stated to communicate via two input lines that can carry integers, and one similar output line. The body of a process is a usual Algol program except for invocations of wait until something on an input line (e.g., at (4)) or send a variable on a line of compatible type (e.g., at (5)). The process stays blocked on a wait until something is being sent on this line by another process, but nothing can prevent a process from performing a send on a line. In other words, processes communicate via first-in first-out (FIFO) queues.

Calling instances of the processes is done in the body of the main program at line (6), where the actual names of the channels are bound to the formal parameters of the processes. The infix operator par initiates the concurrent activation of the processes. Such a style of programming is close to many systems using EVENT mechanisms ([1, 2, 3, 4]). A pictorial representation of the program is the schema P on Figure 2, where the nodes represent processes and the arcs communication channels between these processes. What sort of things would we like to prove on a program like S? Firstly, that all processes in S run forever. Secondly, more precisely, that S prints out (at line (7)) an alternating sequence of 0's and 1's forever. Thirdly, that if one of the processes were to stop at some time for an extraneous reason, the whole system would stop. The ability to state this kind of property of a parallel program formally and to prove it within a formal logical framework is the central motivation for the theoretical study of the next sections.

Begin
(1)   Integer channel X, Y, Z, T1, T2;
(2)   Process f(integer in U, V; integer out W);
      Begin integer I; logical B;
        B := true;
        Repeat Begin
(4)       I := if B then wait(U) else wait(V);
(7)       print(I);
(5)       send I on W;
          B := not B;
        End;
      End;
      Process g(integer in U; integer out V, W);
      Begin integer I; logical B;
        B := true;
        Repeat Begin
          I := wait(U);
          if B then send I on V else send I on W;
          B := not B;
        End;
      End;
(3)   Process h(integer in U; integer out V; integer INIT);
      Begin integer I;
        send INIT on V;
        Repeat Begin
          I := wait(U);
          send I on V;
        End;
      End;
      Comment: body of main program;
(6)   f(X,Y,Z) par g(X,T1,T2) par h(T1,Y,0) par h(T2,Z,1);
End;

Figure 1: Sample parallel program S.

2. PARALLEL COMPUTATION

Informally speaking, a parallel computation is organized in the following way: some autonomous computing stations are connected to each other in a network by communication lines. Computing stations exchange information through these lines. A given station computes on data coming along
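The program S translates almost directly into threads communicating over unbounded FIFO queues. A minimal Python rendering follows, wired as in the network of Fig. 2 (f merges Y and Z onto X, g distributes X onto T1 and T2, and the two h instances feed back with initial tokens 0 and 1); the `printed` queue stands in for print(I):

```python
import threading
from queue import Queue

def h(u, v, init):
    """Forward tokens from u to v, after emitting an initial token."""
    v.put(init)
    while True:
        v.put(u.get())

def g(u, v, w):
    """Distribute tokens read from u alternately onto v and w."""
    b = True
    while True:
        (v if b else w).put(u.get())
        b = not b

def f(u, v, w, printed):
    """Merge tokens alternately from u and v; record and forward on w."""
    b = True
    while True:
        i = (u if b else v).get()
        printed.put(i)        # stands in for print(I) at line (7)
        w.put(i)
        b = not b

X, Y, Z, T1, T2 = (Queue() for _ in range(5))
printed = Queue()
for target, args in [(f, (Y, Z, X, printed)), (g, (X, T1, T2)),
                     (h, (T1, Y, 0)), (h, (T2, Z, 1))]:
    threading.Thread(target=target, args=args, daemon=True).start()

print([printed.get() for _ in range(8)])   # an alternating sequence of 0s and 1s
```

A blocking `Queue.get` models wait, and an always-succeeding `Queue.put` models send, matching the paper's rule that a process may block only on input.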

2,478 citations

Journal ArticleDOI
01 Sep 1987
TL;DR: A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described, and two new efficiency techniques are introduced, static buffering and an extension to SDF to efficiently implement conditionals.
Abstract: Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors so the run-time overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for correctness of SDF graph are explained and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced, static buffering and an extension to SDF to efficiently implement conditionals.
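The a-priori rates are what make SDF statically schedulable: the per-iteration firing counts follow from the balance equations q_src · prod = q_dst · cons on every arc. A small sketch of solving them (a hypothetical helper, not the paper's software system):

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def repetition_vector(edges):
    """Solve the SDF balance equations q[a]*prod == q[b]*cons.

    edges: list of (src, dst, prod, cons). Returns the smallest positive
    integer firing counts per iteration, or raises on inconsistent rates.
    """
    adj = {}
    for a, b, p, c in edges:
        adj.setdefault(a, []).append((b, Fraction(p, c)))  # q_b = q_a * p / c
        adj.setdefault(b, []).append((a, Fraction(c, p)))
    rate = {}
    for start in adj:                      # propagate relative rates
        if start in rate:
            continue
        rate[start] = Fraction(1)
        stack = [start]
        while stack:
            u = stack.pop()
            for v, fac in adj[u]:
                r = rate[u] * fac
                if v in rate:
                    if rate[v] != r:
                        raise ValueError("inconsistent sample rates")
                else:
                    rate[v] = r
                    stack.append(v)
    lcm_den = reduce(lambda x, y: x * y // gcd(x, y),
                     (r.denominator for r in rate.values()), 1)
    q = {n: int(r * lcm_den) for n, r in rate.items()}   # clear denominators
    g = reduce(gcd, q.values())
    return {n: v // g for n, v in q.items()}             # normalize

print(repetition_vector([("A", "B", 2, 3), ("B", "C", 1, 2)]))  # {'A': 3, 'B': 2, 'C': 1}
```

With the repetition vector in hand, a static schedule simply fires each node its computed number of times per iteration, so the run-time overhead of dynamic dataflow indeed evaporates.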

1,985 citations


"Electronic System-Level Synthesis M..." refers background in this paper

  • ...Due to the complexity of many streaming applications, they often cannot be modeled as static dataflow graphs [30], [31], where consumption and production rates are known at compile time....


Book
31 May 2002
TL;DR: System Design and SystemC provides a comprehensive introduction to the powerful modeling capabilities of the SystemC language, and also provides a large and valuable set of system level modeling examples and techniques.
Abstract: The emergence of the system-on-chip (SoC) era is creating many new challenges at all stages of the design process. Engineers are reconsidering how designs are specified, partitioned and verified. With systems and software engineers programming in C/C++ and their hardware counterparts working in hardware description languages such as VHDL and Verilog, problems arise from the use of different design languages, incompatible tools and fragmented tool flows. Momentum is building behind the SystemC language and modeling platform as the best solution for representing functionality, communication, and software and hardware implementations at various levels of abstraction. The reason is clear: increasing design complexity demands very fast executable specifications to validate system concepts, and only C/C++ delivers adequate levels of abstraction, hardware-software integration, and performance. System design today also demands a single common language and modeling foundation in order to make interoperable system--level design tools, services and intellectual property a reality. SystemC is entirely based on C/C++ and the complete source code for the SystemC reference simulator can be freely downloaded from www.systemc.org and executed on both PCs and workstations. System Design and SystemC provides a comprehensive introduction to the powerful modeling capabilities of the SystemC language, and also provides a large and valuable set of system level modeling examples and techniques. Written by experts from Cadence Design Systems, Inc. and Synopsys, Inc. who were deeply involved in the definition and implementation of the SystemC language and reference simulator, this book will provide you with the key concepts you need to be successful with SystemC. System Design with SystemC thoroughly covers the new system level modeling capabilities available in SystemC 2.0 as well as the hardware modeling capabilities available in earlier versions of SystemC. 
System Design with SystemC will be of interest to designers in industry working on complex system designs, as well as students and researchers within academia. All of the examples and techniques described within this book can be used with freely available compilers and debuggers; no commercial software is needed. Instructions for obtaining the free source code for the examples contained within this book are included in the first chapter.

1,011 citations

Journal ArticleDOI
TL;DR: A denotational framework (a "meta model") within which certain properties of models of computation can be compared is given, which describes concurrent processes in general terms as sets of possible behaviors.
Abstract: We give a denotational framework (a "meta model") within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.
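The framework's basic vocabulary is easy to make concrete: events are value-tag pairs, signals are collections of events, and synchronous signals contain events with the same set of tags. A toy rendering of just that vocabulary (not the paper's full formalism):

```python
def tags(signal):
    """A signal is a set of events; each event is a (tag, value) pair."""
    return {t for t, _ in signal}

def synchronous(*signals):
    """Signals are synchronous when they all share the same set of tags."""
    return len({frozenset(tags(s)) for s in signals}) == 1

# Tags drawn from a totally ordered set (integers) give a timed model.
s1 = {(0, "a"), (1, "b")}
s2 = {(0, 10), (1, 20)}
s3 = {(0, "x")}
print(synchronous(s1, s2))  # True: both have tag set {0, 1}
print(synchronous(s1, s3))  # False: tag sets differ
```

Comparing models of computation in the framework then amounts to comparing the constraints they place on tags and behaviors, e.g. total versus partial order on tags.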

687 citations


"Electronic System-Level Synthesis M..." refers background in this paper

  • ...Furthermore, compared to lower levels where refinement is often reduced to producing a simple netlist, generating an implementation of system-level computation and communication decisions is a nontrivial task that requires significant coding effort....


  • ...Furthermore, based on such observations, synergies between different approaches can be explored, and corresponding interfaces between different tools can be defined and established in the future....


Frequently Asked Questions (14)
Q1. What are the contributions in "Electronic system-level synthesis methodologies" ?

In this paper, the authors develop and propose a novel classification for ESL synthesis tools, and they present six different academic approaches in this context. Based on these observations, the authors identify common principles and needs that lead toward, and are ultimately required for, a true ESL synthesis solution, covering the whole design process from specification to implementation for complete systems across hardware and software boundaries.

Nevertheless, no single approach currently provides a complete solution, and further research in many areas is required. In the future, the authors plan to investigate such interoperability issues using combinations of different tools presented in this paper. On the other hand, based on the common concepts and principles identified in this classification, it should be possible to define interfaces such that different point tools can be combined into an overall ESL design environment. Last, but not least, the authors would like to thank the reviewers for their helpful comments and suggestions in making this paper a much stronger contribution. 

constant methods called guards (e.g., check) can be used to test values of internal variables and data in the input channels. 

Finding a good application-to-architecture mapping is carried out during a two-phase automatic architecture exploration step consisting of static and dynamic (i.e., simulative) exploration methods using a TAPM MoP. 

Due to the highly automated design flow of Daedalus, all DSE and prototyping work was performed in only a short amount of time, five days in total. 

Examples of simulation-based MoPs for different classes of timing granularity are cycle-accurate performance models (CAPMs), instruction-set-accurate performance models (ISAPMs), or task-accurate performance models (TAPMs) [12]. 

Lower level flows can be supplied either in the form of corresponding synthesis tools or by providing predesigned intellectual property (IP) components to be plugged into the system architecture. 

The FSM controlling the communication behavior of the SysteMoC actor checks for available input data (e.g., #i1 ≥ 1) and available space on the output channels (e.g., #o1 ≥ 1) to store results. 
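That firing rule can be mimicked in a few lines. The sketch below captures only the guard semantics (tokens available on the input, free space on the output); it is not SysteMoC's actual C++ API:

```python
from collections import deque

class Actor:
    """Minimal SysteMoC-style actor: the firing guard checks token
    availability on the input (#i1 >= 1) and free space on the
    bounded output channel (#o1 >= 1)."""

    def __init__(self, i1, o1, capacity):
        self.i1, self.o1, self.capacity = i1, o1, capacity

    def can_fire(self):
        return len(self.i1) >= 1 and (self.capacity - len(self.o1)) >= 1

    def fire(self, func):
        assert self.can_fire()
        self.o1.append(func(self.i1.popleft()))

i1, o1 = deque([3, 4]), deque()
a = Actor(i1, o1, capacity=1)
a.fire(lambda x: x * x)
print(list(o1))      # [9]
print(a.can_fire())  # False: the output channel is full
```

The guard thus separates communication behavior (when the actor may fire) from the actual computation passed in as `func`.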

An important observation that can be made from Fig. 1 is that, at the RT level, hardware and software worlds unite again, both feeding into (traditional) logic design processes down to the final manufacturing output. 

Due to the complexity of many streaming applications, they often cannot be modeled as static dataflow graphs [30], [31], where consumption and production rates are known at compile time. 

The complete cellphone specification consists of about 16 000 lines of SpecC code and is refined down to 30 000 lines in the final TLM. 

These components include a variety of programmable processors, dedicated hardwired IP cores, memories, and interconnects, thereby allowing the implementation of a wide range of heterogeneous MPSoC platforms. 

KPNgen [23] allows for automatically converting a sequential (SANLP) behavioral specification written in C into a concurrent KPN [22] specification. 

Once this selection has been made, the last step of the proposed ESL design flow is the rapid prototyping of the corresponding FPGA-based implementation in terms of model refinement.