
Scheduling with bus access optimization for distributed embedded systems

TLDR
The goal is to derive a worst case delay by which the system completes execution, such that this delay is as small as possible; to generate a logically and temporally deterministic schedule; and to optimize parameters of the communication protocol such that this delay is guaranteed.
Abstract
In this paper, we concentrate on aspects related to the synthesis of distributed embedded systems consisting of programmable processors and application-specific hardware components. The approach is based on an abstract graph representation that captures, at process level, both dataflow and the flow of control. Our goal is to derive a worst case delay by which the system completes execution, such that this delay is as small as possible; to generate a logically and temporally deterministic schedule; and to optimize parameters of the communication protocol such that this delay is guaranteed. We have further investigated the impact of particular communication infrastructures and protocols on the overall performance and, especially, how the requirements of such an infrastructure have to be considered for process and communication scheduling. Not only do particularities of the underlying architecture have to be considered during scheduling but also the parameters of the communication protocol should be adapted to fit the particular embedded application. The optimization algorithm, which implies both process scheduling and optimization of the parameters related to the communication protocol, generates an efficient bus access scheme as well as the schedule tables for activation of processes and communications.



Scheduling with Bus Access Optimization for Distributed Embedded Systems
Eles, Petru; Doboli, Alex; Pop, Paul; Peng, Zebo
Published in:
IEEE Transactions on VLSI Systems
Link to article, DOI:
10.1109/92.894152
Publication date:
2000
Document Version
Publisher's PDF, also known as Version of record
Link back to DTU Orbit
Citation (APA):
Eles, P., Doboli, A., Pop, P., & Peng, Z. (2000). Scheduling with Bus Access Optimization for Distributed
Embedded Systems. IEEE Transactions on VLSI Systems, 8(5), 472-491. https://doi.org/10.1109/92.894152

Scheduling with Bus Access Optimization for
Distributed Embedded Systems
Petru Eles, Member, IEEE, Alex Doboli, Student Member, IEEE, Paul Pop, and Zebo Peng, Member, IEEE
Abstract—In this paper, we concentrate on aspects related to the synthesis of distributed embedded systems consisting of programmable processors and application-specific hardware components. The approach is based on an abstract graph representation that captures, at process level, both dataflow and the flow of control. Our goal is to derive a worst case delay by which the system completes execution, such that this delay is as small as possible; to generate a logically and temporally deterministic schedule; and to optimize parameters of the communication protocol such that this delay is guaranteed. We have further investigated the impact of particular communication infrastructures and protocols on the overall performance and, especially, how the requirements of such an infrastructure have to be considered for process and communication scheduling. Not only do particularities of the underlying architecture have to be considered during scheduling but also the parameters of the communication protocol should be adapted to fit the particular embedded application. The optimization algorithm, which implies both process scheduling and optimization of the parameters related to the communication protocol, generates an efficient bus access scheme as well as the schedule tables for activation of processes and communications.
Index Terms—Communication synthesis, distributed embedded
systems, process scheduling, real-time systems, system synthesis,
time-triggered protocol.
I. INTRODUCTION
Many embedded systems have to fulfill strict requirements in terms of performance and cost efficiency.
Emerging designs are usually based on heterogeneous archi-
tectures that integrate multiple programmable processors and
dedicated hardware components. New tools that extend design
automation to system level have to support the integrated
design of both the hardware and software components of such
systems.
During synthesis of an embedded system the designer maps
the functionality captured by the input specification on different
architectures, trying to find the most cost-efficient solution that,
at the same time, meets the design requirements. This design
process implies the iterative execution of several allocation
and partitioning steps before the hardware and software com-
ponents of the final implementation are generated. The term
“hardware/software cosynthesis” is often used to denote this
Manuscript received August 15, 1999; revised February 18, 2000.
P. Eles, P. Pop, and Z. Peng are with the Department of Computer and
Information Science, Linköping University, Sweden (e-mail: petel@ida.liu.se;
paupo@ida.liu.se; zebpe@ida.liu.se).
A. Doboli is with the Department of Electrical and Computer Engineering
and Computer Science, University of Cincinnati, Cincinnati, OH 45221 USA
(e-mail: adoboli@ececs.uc.edu).
Publisher Item Identifier S 1063-8210(00)09504-4.
system-level synthesis process. Surveys on this topic can be
found in [1]–[6].
An important characteristic of an embedded system is its per-
formance in terms of timing behavior. In this paper, we con-
centrate on several aspects related to the synthesis of systems
consisting of communicating processes, which are implemented
on multiple processors and dedicated hardware components. In
such a system, in which several processes communicate with
each other and share resources like processors and buses, sched-
uling of processes and communications is a factor with a deci-
sive influence on the performance of the system and on the way
it meets its timing constraints. Thus, process scheduling has to
be performed not only for the synthesis of the final system but
also as part of the performance estimation task.
Optimal scheduling, in even simpler contexts than that pre-
sented above, has been proven to be an NP-complete problem
[7]. Thus, it is essential to develop heuristics that produce good
quality results in a reasonable time. In our approach, we assume
that some processes can only be activated if certain conditions,
computed by previously executed processes, are fulfilled [8],
[9]. Thus, process scheduling is further complicated since at a
given activation of the system, only a certain subset of the total
amount of processes is executed, and this subset differs from
one activation to the other. This is an important contribution of
our approach because we capture both the flow of data and that
of control at the process level, which allows a more accurate and
direct modeling of a wide range of applications.
Performance estimation at the process level has been well
studied in the last years. Papers like [10]–[16] provide a good
background for derivation of execution time (or worst case
execution time) for a single process. Starting from estimated
execution times of single processes, performance estimation
and scheduling of a system consisting of several processes can
be performed. Preemptive scheduling of independent processes
with static priorities running on single-processor architectures
has its roots in [17]. The approach has been later extended
to accommodate more general computational models and has
also been applied to distributed systems [18]. The reader is
referred to [19] and [20] for surveys on this topic. In [21],
performance estimation is based on a preemptive scheduling
strategy with static priorities using rate monotonic analysis. In
[22], an earliest deadline first strategy is used for nonpreemptive
scheduling of processes with possible data dependencies.
Preemptive and nonpreemptive static scheduling are combined
in the cosynthesis environment described in [23] and [24].
Several research groups have considered hardware/software
architectures consisting of a single programmable processor
and an application-specific integrated circuit acting as a hard-
ware coprocessor. Under these circumstances, deriving a static
schedule for the software component is practically reduced to
the linearization of a dataflow graph with nodes representing
elementary operations or processes [25]. In the Vulcan system
[26], software is implemented as a set of linear threads that
are scheduled dynamically at execution. Linearization for
thread generation can be performed both by exact, exponential
complexity, algorithms and by faster urgency-based heuristics.
Given an application specified as a collection of tasks, the
tool presented in [27] automatically generates a scheduler
consisting of two parts: a static scheduler that is implemented
in hardware and a dynamic scheduler for the software tasks
running on a microprocessor.
Static cyclic scheduling of a set of data-dependent software
processes on a multiprocessor architecture has been intensively
researched [28]. Several approaches are based on list sched-
uling heuristics using different priority criteria [29]–[32] or
on branch-and-bound algorithms [33], [34]. In [35] and [36],
static scheduling and partitioning of processes, and allocation
of system components, are formulated as a mixed integer linear
programming (MILP) problem. A disadvantage of this ap-
proach is the complexity of solving the MILP model. The size
of such a model grows quickly with the number of processes
and allocated resources. In [37], a formulation using constraint
logic programming has been proposed for similar problems.
It is important to mention that in all the approaches discussed
above, process interaction is only in terms of dataflow. This is
the case also in [38], where a two-level internal representation
is introduced: control-dataflow graphs for operation-level repre-
sentation and pure dataflow graphs for representation at process
level. The representation is used as a basis for derivation and
validation of internal timing constraints for real-time embedded
systems. In [39] and [40], an internal design representation is
presented that is able to capture mixed data/control flow speci-
fications. It combines dataflow properties with finite-state ma-
chine behavior. The scheduling algorithm discussed in [39] han-
dles a subset of the proposed representation. Timing aspects
are ignored and only software scheduling on a single processor
system is considered.
In our approach, we consider embedded systems specified as
interacting processes, which have been mapped on an architec-
ture consisting of several programmable processors and ded-
icated hardware components interconnected by shared buses.
Process interaction in our model is not only in terms of dataflow
but also captures the flow of control. Considering a nonpreemp-
tive execution environment, we statically generate a schedule
table and derive a guaranteed worst case delay.
Currently, more and more real-time systems are used in phys-
ically distributed environments and have to be implemented on
distributed architectures in order to meet reliability, functional,
and performance constraints. However, researchers have often
ignored or very much simplified aspects concerning the com-
munication infrastructure. One typical approach is to consider
communication processes as processes with a given execution
time (depending on the amount of information exchanged) and
schedule them as any other process, without considering issues
like communication protocol, bus arbitration, packaging of messages, clock synchronization, etc. These aspects are, however,
essential in the context of safety-critical distributed real-time
applications, and one of our objectives is to develop a strategy
that takes them into consideration for process scheduling.
Many efforts dedicated to communication synthesis have concentrated on the synthesis support for the communication infra-
structure but without considering hard real-time constraints and
system-level scheduling aspects [41]–[45].
We have to mention here some results obtained in extending
real-time schedulability analysis so that network communica-
tion aspects can be handled. In [46], for example, the CAN
protocol is investigated, while [47] considers systems based on
the asynchronous transfer mode (ATM) protocol. These works,
however, are restricted to software systems implemented with
priority-based preemptive scheduling.
In the first part of this paper we consider a communication
model based on simple bus sharing. There we concentrate on
the aspects of scheduling with data and control dependencies,
and such a simpler communication model allows us to focus
on these issues. However, one of the goals of this paper is to
highlight how communication and process scheduling strongly
interact with each other and how system-level optimization can
only be performed by taking into consideration both aspects.
Therefore, in the second part of this paper, we introduce a par-
ticular communication model and execution environment. We
take into consideration the overheads due to communications
and to the execution environment and consider the requirements
of the communication protocol during the scheduling process.
Moreover, our algorithm performs an optimization of param-
eters defining the communication protocol, which is essential
for reduction of the execution delay. Our system architecture is
built on a communication model that is based on the time-trig-
gered protocol (TTP) [48]. TTP is well suited for safety-critical
distributed real-time control systems and represents one of the
emerging standards for several application areas, such as auto-
motive electronics [28], [49].
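As a concrete illustration of what such protocol parameters look like, the following Python sketch (ours, not from the paper) models a TDMA bus round as an ordered sequence of slots with individual lengths; the slot lengths and their order are the kind of bus access parameters that the optimization discussed in Section VI adjusts. All identifiers and numbers are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Slot:
        node: str    # processing element that owns the slot
        length: int  # slot length in time units (a protocol parameter)

    @dataclass
    class TdmaRound:
        slots: List[Slot]  # slot order is also a protocol parameter

        @property
        def length(self) -> int:
            return sum(s.length for s in self.slots)

        def next_slot_start(self, node: str, t: int) -> int:
            """Earliest start time >= t of a slot owned by `node`."""
            starts, offset = [], 0
            for s in self.slots:
                if s.node == node:
                    starts.append(offset)
                offset += s.length
            base = (t // self.length) * self.length
            candidates = [base + k * self.length + st
                          for k in (0, 1) for st in starts]
            return min(c for c in candidates if c >= t)

    # Two nodes sharing the bus: N0 owns a slot of length 4, N1 one of length 6.
    round_ = TdmaRound([Slot("N0", 4), Slot("N1", 6)])
    print(round_.length)                     # 10
    print(round_.next_slot_start("N1", 11))  # 14

Under these assumptions, a message produced on N1 at time 11 has to wait until time 14 before it can be transmitted, which is precisely why process scheduling and the bus access scheme cannot be treated in isolation.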
This paper is divided as follows. In Section II, we formulate
our basic assumptions and set the specific goals of this work.
Section III defines the formal graph-based model, which is
used for system representation, introduces the schedule table,
and creates the background needed for presentation of our
scheduling technique. The scheduling algorithm for conditional
process graphs is presented in Section IV. In Section V, we
introduce the hardware and software aspects of the TTP-based
system architecture. The mutual interaction between scheduling
and the communication protocol as well as our strategy for
scheduling with optimization of the bus access scheme are
discussed in Section VI. Section VII describes the experimental
evaluation, and Section VIII presents our conclusions.
II. PROBLEM FORMULATION
We consider a generic architecture consisting of pro-
grammable processors and application specific hardware
processors (ASICs) connected through several buses. The
buses can be shared by several communication channels con-
necting processes assigned to different processors. Only one
process can be executed at a time by a programmable processor, while a hardware processor can execute processes in parallel.¹
Processes on different processors can be executed in parallel.
Only one data transfer can be performed by a bus at a given
moment. Data transfer on buses and computation can overlap.
Each process in the specification can be, potentially, assigned
to several programmable or hardware processors, which are
able to execute that process. For each process, estimated cost and execution time on each potential host processor are given
[50]. We assume that the amount of data to be transferred during
communication between two processes has been determined
in advance. In [50], we presented algorithms for automatic
hardware/software partitioning based on iterative improvement
heuristics. The problem we are discussing in this paper con-
cerns performance estimation of a given design alternative and
scheduling of processes and communications. Thus, we assume
that each process has been assigned to a (programmable or
hardware) processor and that each communication channel,
which connects processes assigned to different processors,
has been assigned to a bus. Our goal is to derive a worst case
delay by which the system completes execution such that this
delay is as small as possible, to generate the static schedule and
optimize parameters of the communication protocol, such that
this delay is guaranteed.
For the beginning, we will consider an architecture based on
a communication infrastructure in which communication tasks
are scheduled on buses similar to the way processes are sched-
uled on programmable processors. The time needed for a given
communication is estimated depending on the parameters of
the bus to which the respective communication channel is as-
signed and the number of transferred bits. Communication time
between processes assigned to the same processor is ignored.
Based on this architectural model we introduce our approach to
process scheduling in the context of both control and data de-
pendencies.
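As a rough sketch of such an estimate (our own illustration, not the estimation procedure of the paper), the duration of a communication can be modeled from the number of transferred bits and the parameters of the bus to which the channel is assigned, with intra-processor communication costing nothing:

    def comm_time(bits: int, same_processor: bool,
                  bus_time_per_bit: float, bus_overhead: float) -> float:
        """Illustrative estimate of one communication's duration.

        bits             -- number of transferred bits (assumed known in advance)
        same_processor   -- True if sender and receiver share a processor
        bus_time_per_bit -- parameter of the bus the channel is assigned to
        bus_overhead     -- assumed fixed per-transfer overhead of that bus
        """
        if same_processor:
            return 0.0  # communication on the same processor is ignored
        return bus_overhead + bits * bus_time_per_bit

    # Example: a 64-bit message on a bus with 0.1 time units per bit.
    print(comm_time(64, False, bus_time_per_bit=0.1, bus_overhead=2.0))  # 8.4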
In the second part of the paper we introduce an architectural
model with a communication infrastructure suitable for safety
critical hard real-time systems. This allows us to further in-
vestigate the scheduling problem and to explore the impact of
the communication infrastructure on the overall system perfor-
mance. The main goal is to determine the parameters of the communication protocol so that the overall system performance is optimized and, thus, the imposed time constraints can be satisfied. We show that system optimization and, in particular, scheduling cannot be efficiently performed without taking into consideration the underlying communication infrastructure.
III. PRELIMINARIES
A. The Conditional Process Graph
We consider that an application specified as a set of interacting processes is mapped to an abstract representation consisting of a directed, acyclic, polar graph G(V, E_S, E_C), called a process graph. Each node P_i ∈ V represents one process. E_S and E_C are the sets of simple and conditional edges, respectively. E_S ∩ E_C = ∅ and E_S ∪ E_C = E, where E is the set of all edges. An edge e_ij ∈ E from P_i to P_j indicates that the output of P_i is the input of P_j. The graph is polar, which means that there are two nodes, called source and sink, that conventionally represent the first and last task. These nodes are introduced as dummy processes, with zero execution time and no resources assigned, so that all other nodes in the graph are successors of the source and predecessors of the sink, respectively.

¹In some designs, certain processes implemented on the same hardware processor can share resources and, thus, cannot execute in parallel. This situation can easily be handled in our scheduling algorithm by considering such processes in a similar way as those allocated to programmable processors. For simplicity, here we consider that processes allocated to ASICs do not share resources.
A mapped process graph G*(V*, E_S*, E_C*, M) is generated from a process graph G(V, E_S, E_C) by inserting additional processes (communication processes) on certain edges and by mapping each process to a given processing element. The mapping of processes P_i ∈ V* to processors and buses is given by a function M: V* → PE, where PE is the set of processing elements. PE = PP ∪ HP ∪ B, where PP is the set of programmable processors, HP is the set of dedicated hardware components, and B is the set of allocated buses. In certain contexts, we will call both programmable processors and hardware components simply processors. For any process P_i, M(P_i) is the processing element to which P_i is assigned for execution. In the rest of this paper, when we use the term conditional process graph (CPG), we consider a mapped process graph as defined here.

Each process P_i, assigned to a programmable or hardware processor M(P_i), is characterized by an execution time t_Pi.
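The definitions above translate directly into a small data structure. The Python sketch below is our own illustration (names, numbers, and fields are hypothetical, not taken from Fig. 1): nodes carry their mapping M(Pi) and execution time, simple edges carry no condition, and conditional edges carry the condition value under which they are taken.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional, Tuple

    @dataclass
    class Node:
        name: str
        pe: Optional[str]   # processing element M(Pi); None for the dummy source/sink
        exec_time: int      # execution time t_Pi; 0 for the dummy nodes

    @dataclass
    class Edge:
        src: str
        dst: str
        condition: Optional[Tuple[str, bool]] = None  # None => simple edge

    @dataclass
    class CPG:
        nodes: Dict[str, Node] = field(default_factory=dict)
        edges: List[Edge] = field(default_factory=list)

        def successors(self, name: str) -> List[Edge]:
            return [e for e in self.edges if e.src == name]

        def is_disjunction(self, name: str) -> bool:
            # A disjunction node has conditional edges at its output.
            return any(e.condition is not None for e in self.successors(name))

    # Hypothetical fragment: P1 computes condition C and communicates with P2
    # if C is true, with P3 otherwise.
    g = CPG()
    for n in (Node("source", None, 0), Node("P1", "pe1", 4),
              Node("P2", "pe2", 6), Node("P3", "pe3", 5), Node("sink", None, 0)):
        g.nodes[n.name] = n
    g.edges += [Edge("source", "P1"),
                Edge("P1", "P2", ("C", True)),
                Edge("P1", "P3", ("C", False)),
                Edge("P2", "sink"), Edge("P3", "sink")]
    print(g.is_disjunction("P1"))  # True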
In the CPG depicted in Fig. 1, the first and the last node are the source and sink nodes, respectively. Of the remaining 31 nodes, 17 are ordinary processes specified by the designer. They are assigned to one of the two programmable processors or to the hardware component. The remaining 14 nodes are so-called communication processes. They are represented in Fig. 1 as solid circles and are introduced during the mapping process for each connection which links processes assigned to different processors. These processes model interprocessor communication, and their execution time t_ij (where P_i is the sender and P_j the receiver process) is equal to the corresponding communication time. All communications in Fig. 1 are performed on one bus. As discussed in the previous section, we treat, for the beginning, communication processes exactly as ordinary processes. Buses are similar to programmable processors in the sense that only one communication can take place on a bus at a given moment.
An edge e_ij ∈ E_C is a conditional edge (represented with thick lines in Fig. 1) and has an associated condition value. Transmission on such an edge takes place only if the associated condition value is true and not, like on simple edges, for each activation of the input process P_i. In Fig. 1, two processes have conditional edges at their output. The first of them, for example, communicates alternatively with one pair of processes or with a third process, depending on the value of the condition it computes. The second, if activated (which occurs only if the condition computed by the first has value true), always communicates with one particular process but alternatively with one of two others, depending on the value of a second condition.
Fig. 1. Conditional process graph with execution times and mapping.

We call a node with conditional edges at its output a disjunction node (and the corresponding process a disjunction process). A disjunction process has one associated condition, the value of which it computes. Alternative paths starting from a disjunction node, which correspond to complementary values of the condition, are disjoint, and they meet in a so-called conjunction node (with the corresponding process called conjunction process).²
In Fig. 1, circles representing conjunction and disjunction nodes are depicted with thick borders. The alternative paths starting from a disjunction node, which computes a certain condition, meet in a conjunction node. One of the conjunction nodes in Fig. 1 is the joining point both for the paths corresponding to one condition (starting from one disjunction node) and for the paths corresponding to another condition (starting from a different disjunction node). We assume that conditions are independent and alternatives starting from different processes cannot depend on the same condition.
A process that is not a conjunction process can be activated only after all its inputs have arrived. A conjunction process can be activated after messages coming on one of the alternative paths have arrived. All processes issue their outputs when they terminate. In Fig. 1, for example, one process can be activated only after it receives the messages sent by two of its predecessors, while a conjunction process waits either for the messages arriving on one of its alternative input paths or for those arriving on the other. If we consider the activation time of the source process as a reference, the activation time of the sink process is the delay of the system at a certain execution. This delay has to be, in the worst case, smaller than a certain imposed deadline. Release times of some processes as well as multiple deadlines can be easily modeled by inserting dummy nodes between certain processes and the source or the sink node, respectively. These dummy nodes represent processes with a certain execution time but that are not allocated to any processing element.
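The activation rule stated above can be written down compactly; the sketch below is our own formalization, with the simplification that one arrived predecessor stands for one alternative input path of a conjunction node.

    from typing import Iterable, Set

    def is_ready(inputs: Iterable[str], arrived: Set[str], conjunction: bool) -> bool:
        """Activation rule from the text: an ordinary process needs all of its
        inputs, a conjunction process needs the inputs of one alternative path
        (approximated here by a single arrived predecessor)."""
        preds = list(inputs)
        if conjunction:
            return any(p in arrived for p in preds)
        return all(p in arrived for p in preds)

    # An ordinary process with two predecessors is not yet ready; a conjunction
    # process with the same predecessors already is.
    print(is_ready(["Pa", "Pb"], {"Pa"}, conjunction=False))  # False
    print(is_ready(["Pa", "Pb"], {"Pa"}, conjunction=True))   # True

    # Recording activation times would then give the delay of one execution as
    # activation(sink) - activation(source).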
A Boolean expression X_Pi, called a guard, can be associated to each node P_i in the graph. It represents the necessary conditions for the respective process to be activated. In Fig. 1, for example, the guard of a process situated on an alternative path is the conjunction of the condition values that select that path. X_Pi is not only necessary but also sufficient for process P_i to be activated during a given system execution. Thus, two nodes P_i and P_j, where P_j is not a conjunction node, are connected by an edge e_ij only if X_Pj ⇒ X_Pi (which means that X_Pi is true whenever X_Pj is true). This avoids specifications in which a process is blocked even if its guard is true, because it waits for a message from a process that will not be activated. If P_j is a conjunction node, predecessor nodes can be situated on alternative paths corresponding to a condition.

²If no process is specified on an alternative path, it is modeled by a conditional edge from the disjunction to the corresponding conjunction node (a communication process may be inserted on this edge at mapping).
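With guards written as conjunctions of condition literals, the rule that an edge requires X_Pj ⇒ X_Pi reduces to a simple subset test. The sketch below is our own illustration with hypothetical guards:

    from typing import FrozenSet, Tuple

    Guard = FrozenSet[Tuple[str, bool]]  # conjunction of (condition, value) literals

    def implies(x_pj: Guard, x_pi: Guard) -> bool:
        """X_Pj => X_Pi for conjunctive guards: every literal required by the
        sender's guard X_Pi must already be required by the receiver's guard X_Pj."""
        return x_pi.issubset(x_pj)

    # Hypothetical guards: the receiver runs only when C is true and D is false,
    # while the sender runs whenever C is true, so an edge from sender to
    # receiver is legal.
    x_receiver: Guard = frozenset({("C", True), ("D", False)})
    x_sender: Guard = frozenset({("C", True)})
    print(implies(x_receiver, x_sender))  # True
    print(implies(x_sender, x_receiver))  # False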
The above execution semantics is that of a so-called single
rate system. It assumes that a node is executed at most once for
each activation of the system. If processes with different periods
have to be handled, this can be solved by generating several
instances of the processes and building a CPG that corresponds
to a set of processes as they occur within a time period that is
equal to the least common multiple of the periods of the involved
processes.
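For instance, under these semantics the length of that period and the number of instances of each process follow directly from the least common multiple of the periods (a sketch with made-up periods):

    from math import lcm  # Python 3.9+

    periods = {"P1": 10, "P2": 15, "P3": 30}   # hypothetical process periods

    hyperperiod = lcm(*periods.values())       # least common multiple of the periods
    instances = {p: hyperperiod // T for p, T in periods.items()}

    print(hyperperiod)  # 30
    print(instances)    # {'P1': 3, 'P2': 2, 'P3': 1}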
As mentioned, we consider execution times of processes, as
well as the communication times, to be given. In the case of
hard real-time systems this will, typically, be worst case execu-
tion times, and their estimation has been extensively discussed
in the literature [13], [14]. For many applications, actual execu-
tion times of processes depend on the current data and/or
the internal state of the system. By explicitly capturing the con-
trol flow in our model, we allow for a more fine-tuned modeling
and a tighter (less pessimistic) assignment of worst case execu-
tion times to processes, compared to traditional dataflow-based
approaches.
B. The Schedule Table
For a given execution of the system, that subset of the processes is activated which corresponds to the actual track followed through the CPG. The actual track taken depends on the value of certain conditions. For each individual track there exists an optimal schedule of the processes that produces a minimal delay.

Let us consider the CPG in Fig. 1. If all three conditions are true, the optimal schedule requires one of the processes to be activated at a certain moment on its processor, and requires another processor to be kept idle until that moment, in order to activate a waiting process as soon as possible [see Fig. 2(a)]. However, if two of the conditions are true but the third one is false, the optimal schedule requires starting processes on both processors from the very beginning; the waiting process will be activated in this case later, after one of them has terminated and its processor, thus, becomes free [see Fig. 2(b)].
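This is also why the schedule table keeps activation times for different outcomes of the conditions. The sketch below is our own illustration, with made-up process names, condition names, and times (the actual values of Fig. 2 are not reproduced); each column of the table corresponds to one outcome.

    from typing import Dict, FrozenSet, Tuple

    Outcome = FrozenSet[Tuple[str, bool]]  # one combination of condition values

    # Column per outcome, row per process; all numbers are made up.
    schedule_table: Dict[Outcome, Dict[str, int]] = {
        frozenset({("K", True)}):  {"P1": 0, "P2": 4, "P4": 9},
        frozenset({("K", False)}): {"P1": 0, "P3": 0, "P4": 7},
    }

    def start_time(process: str, outcome: Outcome) -> int:
        """Activation time of a process under a given outcome of the conditions."""
        return schedule_table[outcome][process]

    print(start_time("P4", frozenset({("K", True)})))   # 9
    print(start_time("P4", frozenset({("K", False)})))  # 7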
The example in Fig. 2 reveals one of the difficulties when generating a schedule for a system like that in Fig. 1. As the values of the conditions are unpredictable, the decision on which process to ac-

Citations
Proceedings ArticleDOI

Energy-aware communication and task scheduling for network-on-chip architectures under real-time constraints

TL;DR: A novel energy-aware scheduling (EAS) algorithm which statically schedules both communication transactions and computation tasks onto heterogeneous network-on-chip (NoC) architectures under real-time constraints and proposes an efficient heuristic to solve it.
Journal ArticleDOI

Timing analysis of the FlexRay communication protocol

TL;DR: Techniques for determining the timing properties of messages transmitted in both the static and the dynamic segments of a FlexRay communication cycle are proposed and three optimisation algorithms are presented that can be used to improve the schedulability of a system that uses FlexRay.
Proceedings ArticleDOI

Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems

TL;DR: The design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied.
Proceedings ArticleDOI

Holistic scheduling and analysis of mixed time/event-triggered distributed embedded systems

TL;DR: Experimental results prove the efficiency of the holistic timing analysis and scheduling approach developed for mixed static/dynamic bus protocols, which communicate over bus protocols consisting of both static and dynamic phases.
References
Book

Scheduling algorithms for multiprogramming in a hard real-time environment

TL;DR: In this paper, the problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service, and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets.
Book

Synthesis and optimization of digital circuits

TL;DR: This book covers techniques for synthesis and optimization of digital circuits at the architectural and logic levels, i.e., the generation of performance-and-or area-optimal circuits representations from models in hardware description languages.
Book

Real-Time Systems: Design Principles for Distributed Embedded Applications

TL;DR: Real-Time Systems offers a splendid example for the balanced, integrated treatment of systems and software engineering, helping readers tackle the hardest problems of advanced real-time system design, such as determinism, compositionality, timing and fault management.
Journal ArticleDOI

NP-complete scheduling problems

TL;DR: It is shown that the problem of finding an optimal schedule for a set of jobs is NP-complete even in two restricted cases, which is tantamount to showing that the scheduling problems mentioned are intractable.
Journal ArticleDOI

Dynamic critical-path scheduling: an effective technique for allocating task graphs to multiprocessors

TL;DR: A static scheduling algorithm for allocating task graphs to fully connected multiprocessors which has admissible time complexity, is economical in terms of the number of processors used and is suitable for a wide range of graph structures.
Frequently Asked Questions (10)
Q1. What are the contributions in "Scheduling with bus access optimization for distributed embedded systems" ?

In this paper, the authors concentrate on aspects related to the synthesis of distributed embedded systems consisting of programmable processors and application-specific hardware components. The authors have further investigated the impact of particular communication infrastructures and protocols on the overall performance and, especially, how the requirements of such an infrastructure have to be considered for process and communication scheduling.

Considering a TTP-based system architecture, the authors have shown that the general scheduling algorithm for conditional process graphs can be successfully applied if the strategy for message planning is adapted to the requirements of the TDMA protocol. There the authors also considered the possibility of messages being split over several successive frames. The authors have shown that important performance gains can be obtained, without any additional cost, by optimizing the bus access scheme. The authors do not insist here on the relatively simple procedure for postprocessing of the schedule table, during which the table can be simplified for certain situations in which identical activation times are scheduled for a given process on different columns.

One of the very important applications of their scheduling algorithm is for performance estimation during design space exploration. 

By applying a clock synchronization algorithm, TTP provides a global time-base of known precision, without any overhead on the communication. 

For broadcasting of condition values, only buses are considered to which all processors are connected, and the authors assume that at least one such bus exists. 

List scheduling heuristics [29] are based on ordered lists from which processes are extracted to be scheduled at certain moments. 

TTP is also perfectly suited for systems implemented with static nonpreemptive scheduling, and thus represents an ideal target architecture for the scheduling approach presented in the previous sections. 

The algorithm presented in Fig. 4 is able to schedule, based on a certain priority function, process graphs without conditional control dependencies.