Proceedings ArticleDOI

An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems

02 Dec 1992-pp 110-123
TL;DR: A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented and is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods.
Abstract: A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented. This algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. Simulation studies show that it offers substantial performance improvements over current approaches, including the sporadic server algorithm. Moreover, standard queuing formulas can be used to predict aperiodic response times over a wide range of conditions. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.

Summary

1 Introduction

  • A reclaimer can cooperate with a slack stealer by making available for aperiodic service any processing time unused by the periodic tasks when they require less than their worst-case execution times.
  • The slack stealing algorithm requires a relatively large amount of calculation.
  • A direct implementation may not be practical.
  • It does, however, provide a lower bound on aperiodic response which is attainable and a basis for finding nearly optimal implementable algorithms.

Tk = min{ t : ε(t) = ρ1 + ... + ρk },

  • It is important to note that if such an upper envelope can be found, it will lead to the minimum response time for every aperiodic task, not just the average response time.
  • The authors will determine the upper envelope and the associated optimal aperiodic scheduling algorithm in the next section, under the following assumptions: A1: All overhead for context swapping, task scheduling, etc., is assumed to be zero.
  • Tasks are ready at the start of their period and do not suspend themselves or synchronize with any other task.
  • The processor will be busy with level i or higher priority work from 0 until the completion time of τij.

Ai(t)

  • Ai(t) gives the largest amount of aperiodic processing in [0, t] at priority level i or higher possible such that the processor is constantly busy with priority level i or higher activity but all jobs of τi meet their deadline.
  • These are step functions, with jump points corresponding to the completion times of the jobs of Ti, or the Cij's, and jump heights corresponding to the Aij values computed by Equation (7).
  • Thus, every job of τ1 has zero slack in the interval of time between its arrival and completion.
  • As a result, their exact completion times depend on the amount of higher priority aperiodic processing done at run-time.
  • The authors next determine bounds on the aperiodic processing at each level for which all deadlines of all periodic tasks can still be met.

3.2 Algorithm Description

  • If A*(s, t) < W, then aperiodic work can be done during [s, s+A*(s, t)] at the highest priority level, but no further work can be done until additional slack becomes available.
  • Note that more aperiodic processing capacity can become available only when a periodic job is completed, because these are the only points in time in which the Ai(s, t) functions step up to their next values.
  • Thus, the evaluation of A*(s, t) should be done only when aperiodic work arrives to an empty aperiodic queue or when there is aperiodic work ready and a periodic task completes.
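The bullets above can be sketched as an event-driven decision rule; `A_star` and the `Job` record below are hypothetical stand-ins for the slack function A*(s, t) and the aperiodic queue entries, not notation from the paper:

```python
from collections import namedtuple

# Hypothetical job record; `remaining` is the job's unfinished computation.
Job = namedtuple("Job", "remaining")

def on_event(now, aperiodic_queue, A_star):
    """Event-driven sketch of the Section 3.2 rule. Called only when
    aperiodic work arrives to an empty queue or a periodic job completes --
    the only instants at which A*(s, t) can step up. `A_star(now)` is an
    assumed callback returning the currently available slack."""
    if not aperiodic_queue:
        return ("run_periodics", 0.0)
    W = sum(job.remaining for job in aperiodic_queue)  # pending aperiodic work
    slack = A_star(now)
    if slack <= 0.0:
        # No slack yet: defer aperiodics until more capacity becomes available.
        return ("run_periodics", 0.0)
    # Serve aperiodics at the highest priority for min(W, slack) units.
    return ("run_aperiodics", min(W, slack))
```

The key design point mirrored here is that the expensive slack evaluation is triggered by only two event types, rather than being polled continuously.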

4.2 Servicing Hard Deadline Aperiodics

  • The problem of scheduling hard deadline aperiodic tasks where the periodic tasks are scheduled using the Earliest Deadline algorithm was studied by Chetto and Chetto [14].
  • A similar approach can be used to solve this scheduling problem for the case in which the periodic tasks are scheduled according to a fixed priority algorithm.
  • If so, that aperiodic task's deadline could be guaranteed.
  • This is the basis for an approach to guaranteeing hard-deadline aperiodic tasks.
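One hedged reading of that guarantee test can be sketched as follows; `slack_before` is a hypothetical oracle standing in for the slack function A*(s, t) over the interval up to the aperiodic task's deadline, and is not notation from the paper:

```python
def admit_hard_aperiodic(demand, deadline, now, slack_before):
    """Sketch of the Section 4.2 idea: a hard-deadline aperiodic task can be
    guaranteed only if the slack available before its deadline covers its
    demand. `slack_before(now, deadline)` is an assumed A*(s, t) oracle."""
    return slack_before(now, deadline) >= demand

# Toy oracle: 0.3 units of slack per unit of time, purely illustrative.
toy_slack = lambda s, t: 0.3 * (t - s)

print(admit_hard_aperiodic(2.0, 10.0, 0.0, toy_slack))  # True: 3.0 >= 2.0
print(admit_hard_aperiodic(4.0, 10.0, 0.0, toy_slack))  # False: 3.0 < 4.0
```

An admitted task would then consume slack at the highest priority, exactly as soft aperiodics do.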

4.3 Managing Aperiodic Capacity: The Allocation Problem

  • It is important to point out that SRPT would reduce the average aperiodic response times; however, if the slack stealer were to process aperiodic tasks using SRPT instead of FIFO ordering, it would no longer possess the strong optimality property of minimizing every aperiodic response time.
  • Thus an aperiodic scheduling policy different from the slack stealer using SRPT will have a longer average aperiodic response time, but the response times of some of the aperiodic tasks may be shorter.

4.4 Finding An Optimal Fixed-Priority Assignment for Joint Scheduling

  • This optimality property is relative to a given fixed-priority order for the periodic tasks.
  • To see that changes in the fixed priority order can alter the aperiodic response times, consider the following example:

Example 2

  • All periodic deadlines are met and the response time is 13.
  • The authors also wish to identify the operating conditions in which the performance of the slack stealing algorithm approaches and/or deviates significantly from the lower bound derived from the queueing model.
  • There are four important parameters which play a major role in determining the aperiodic task response times and the accuracy of simple queueing formulas for approximating those response times.
  • The parameters are: (1) the periodic load, expressed as a utilization factor, Up, obtained by summing the task utilizations.

What are the operating conditions for a real-time system under which the slack stealer significantly outperforms state-of-the-art server algorithms? How much do their performances differ under such conditions?

  • The answers to these questions can provide valuable insights for assessing the potential value of implementing the slack stealing algorithm as an alternative to current server algorithms.
  • Surprisingly, it is sufficient to compute the aperiodic response times that would be attained if the aperiodic tasks were to have sole access to the processor.
  • Standard results from queueing theory indicate that job response times will increase with the traffic intensity parameter, ρaper = Uaper, and also with the mean computation requirement of the aperiodic jobs.
  • These effects will carry over to a processor which handles both aperiodic tasks and hard deadline periodic tasks.
  • Moreover, the difference between Up and UBD is an additional measure of the slack that is typically available.
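The M/M/1 bound invoked in these bullets has a standard closed form; the sketch below is textbook queueing theory, not a formula quoted from the paper:

```python
def mm1_mean_response(mean_service, arrival_rate):
    """Mean response time E[T] = E[S] / (1 - rho) for an M/M/1 queue,
    where rho = arrival_rate * mean_service is the traffic intensity
    (the aperiodic utilization Uaper in the paper's setting)."""
    rho = arrival_rate * mean_service
    if rho >= 1.0:
        raise ValueError("unstable queue: rho must be < 1")
    return mean_service / (1.0 - rho)

# Response times grow sharply as rho approaches 1:
print(mm1_mean_response(1.0, 0.5))  # 2.0
print(mm1_mean_response(1.0, 0.9))  # ~10.0
```

This is the "sole access to the processor" bound: any scheduler that must also honor periodic deadlines can only do worse, which is why the paper uses it as an ideal reference curve.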

An interesting observation

  • For a fixed demand-capacity ratio and a given aperiodic load, the mean aperiodic execution times for THBD are larger than those for TLBD.
  • Thus, one would expect that larger job sizes would tend to mask out performance differences due to a larger server execution time.
  • Yet, the optimal algorithm outperforms the sporadic server as the aperiodic load is increased, even under these ideal conditions.
  • The performance gains of the optimal algorithm, on the other hand, illustrate the superiority of allocating time for the aperiodics on demand, without the constraints of a periodic server task.
  • Second, the authors observe that the optimal curve departs from the ideal M/M/l bound at lower aperiodic loads than those observed for the 40% load.

5.2.2 Evaluation of the INS Task Set

  • The sporadic server deviates from the optimal at a very low aperiodic load, which reaffirms the higher susceptibility of the sporadic server to the periodic load relative to the optimal algorithm.
  • In fact, the optimal algorithm is capable of maintaining aperiodic response times equal to those of the M/M/1 bound for the entire range of aperiodic loads and both demand-capacity ratios shown in Figure 8.
  • The performance results for the INS task set confirm that a periodic workload with a high breakdown utilization is very favorable to aperiodic responsiveness, as concluded in the random task set study.
  • It seems even more surprising that such a performance is maintained for all reasonable aperiodic loading levels, even for a moderate demand-capacity ratio of 50%.
  • This is a significant result because it shows a promising direction for finding some solutions to the analytical prediction of aperiodic response times, which is an open research issue.

5.3 Performance Summary

  • The performance of the optimal algorithm approaches the ideal M/M/l bound within a 10% error margin for a relatively large range of total loads.
  • In most cases, optimal performance tends to deviate significantly from the ideal when the aperiodic load is high relative to the periodic load.
  • Under these conditions, preemption delays due to the periodic tasks tend to dominate the aperiodic response times.
  • Since this situation is not likely to occur at high periodic loads, the performance of the optimal algorithm remains close to the M/M/l bound for a wide range of operating conditions.


An Optimal Algorithm for Scheduling
Soft-Aperiodic Tasks
in Fixed-Priority Preemptive Systems
John P. Lehoczky† and Sandra Ramos-Thuel‡
†Department of Statistics
‡Department of Electrical and Computer Engineering
Carnegie Mellon University,
Pittsburgh, PA 15213
Abstract
This paper presents a new algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm. The new algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. Simulation studies show that it offers substantial performance improvements over current approaches including the sporadic server algorithm. Moreover, standard queueing formulas can be used to predict aperiodic response times over a wide range of conditions. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.¹²
1 Introduction
In 1973, Liu and Layland [1] presented an anal-
ysis of the rate monotonic algorithm for scheduling
periodic tasks with hard deadlines. Recently, this
algorithm has gained popularity as an approach to
designing predictable real-time systems. Moreover,
the algorithm has been modified to allow for the
¹The authors wish to thank Stephen W. Gdyas for his participation in this research effort and Jay K. Strosnider and his research group for their insightful comments and suggestions.
²This research is supported in part by a grant from the Office of Naval Research under contracts N00014-84-K-0734 and N00014-91-J-1304N173, by the Naval Ocean Systems Center under contract N66001-87-C-01155, by the Federal Systems Division of IBM Corporation under University Agreement Y-278067, and by AT&T Bell Laboratories under the Cooperative Research Fellowship Program.
solution of many practical problems which arise in
actual real-time systems including task synchroniza-
tion, transient overload, and simultaneous scheduling
of both periodic and aperiodic tasks, among others.
The mixed task scheduling problem is important, be-
cause many real-time systems have substantial ape-
riodic task workloads. Moreover, the aperiodic tasks
may themselves have a variety of timing requirements,
ranging from hard deadlines to soft deadlines. For ex-
ample, recovery from transient failures may create an
aperiodic stream of hard deadline periodic tasks which
must be reexecuted (see Ramos-Thuel [2]).
In this paper, we reconsider the problem of jointly
scheduling hard deadline periodic tasks and aperiodic
tasks. Although the methods presented in this pa-
per apply both to hard deadline and soft deadline
aperiodic tasks, we limit our attention to the case of
scheduling soft deadline aperiodic tasks. That is, we
seek to schedule a mixture of periodic and aperiodic
tasks in such a way that all periodic task deadlines are
met and the response times for the aperiodic tasks are
as small as possible.
There are two standard approaches to this problem.
The least effective approach is to service the aperi-
odic tasks in the background of the periodic tasks (i.e.,
when the processor is idle). A better approach is to
create a periodic polling task with as large a capacity
as possible. The polling task will be run periodically,
and its capacity will be used to service aperiodic tasks.
While the polling server is far superior to background,
the periodic polling task is not necessarily coordinated
with the aperiodic arrival process, so some aperiodic
arrivals must wait for the return of the polling task
before they can be executed. This waiting may create
unnecessarily long task response times. In addition,
1052-8725/92 $3.00 © 1992 IEEE

the polling task may be ready but have no tasks ready
for execution, a situation that wastes the high priority
capacity of the polling task.
Recently, new approaches to the joint scheduling
problem have been developed including the sporadic
server algorithm by Sprunt [3, 4] and the deferrable
server algorithm by Strosnider [5]. Although similar
in spirit to the polling server, these algorithms allow
their capacity to be used throughout the server’s pe-
riod rather than only at the beginning. The two algo-
rithms differ in the way their capacity is replenished,
and each has its own individual schedulability analy-
sis; however, in certain circumstances, both can offer
up to an order of magnitude improvement in aperiodic
response time over the polling approach.
This paper develops a new approach to aperiodic
service, and shows that this method can offer sub-
stantial improvements over the deferrable and spo-
radic server algorithms. The new approach, called the
slack stealing algorithm, does not create a periodic
server for aperiodic task service. Rather it creates a
passive task, referred to as the slack stealer, which
when prompted for service attempts to make time for
servicing aperiodic tasks by “stealing” all the process-
ing time it can from the periodic tasks without caus-
ing their deadlines to be missed. This is equivalent
to “stealing slack” from the periodic tasks. Note the
similarity of this approach to cycle stealing techniques
used in memory systems [6].
The slack stealer relies on the exact schedulability
conditions given by Lehoczky, Sha and Ding [7] and
Lehoczky [8] to provide the maximum possible capac-
ity for aperiodic service at the time it is needed. Sub-
stantial improvements in aperiodic task response times
will be demonstrated with the slack stealer. It will
also be shown to be optimal for the particular fixed
priority assignment chosen for the periodic tasks. In
addition, the slack stealer can be generalized to han-
dle hard deadline aperiodic tasks, and its functionality
can be efficiently augmented by a reclaimer. A re-
claimer can cooperate with a slack stealer by making
available for aperiodic service any processing time un-
used by the periodic tasks when they require less than
their worst-case execution times. The slack stealing
algorithm requires a relatively large amount of calcula-
tion. Consequently, a direct implementation may not
be practical. It does, however, provide a lower bound
on aperiodic response which is attainable and a basis
for finding nearly optimal implementable algorithms.
2 Framework and Assumptions
Consider a real-time system with n periodic tasks, τ1, ..., τn. Each task, τi, has a worst-case computation requirement Ci, a period Ti, an initiation time φi ≥ 0 or offset relative to some time origin, and a deadline Di, assumed to satisfy Di ≤ Ti. The parameters Ci, Ti, φi, and Di are known deterministic quantities. We require that these tasks be scheduled according to a fixed priority algorithm, such as the deadline monotonic algorithm, in which tasks with small values of Di are given relatively high priority [9]. We assume that the periodic tasks are indexed in priority order with τ1 having highest priority and τn having lowest priority. For simplicity, we refer to those levels as 1, ..., n with 1 indicating highest priority and n the lowest. The aperiodic tasks can be assigned any priority, and we even permit them to be executed dynamically at different priority levels. We assume that if an aperiodic task executes at priority level k, then it has lower priority than any periodic task with priority 1, ..., k − 1 and higher priority than any periodic task with priority k, k + 1, ..., n. Aperiodic task execution at priority level n + 1 is equivalent to background execution.

A periodic task, say τi, gives rise to an infinite sequence of jobs. The kth such job is ready at time φi + (k − 1)Ti and its Ci units of required execution must be completed by time φi + (k − 1)Ti + Di or else a periodic task timing fault will occur.
We next introduce the aperiodic tasks, {Jk, k ≥ 1}. Each aperiodic job, Jk, has an associated arrival time αk and a processing requirement ρk. The tasks are indexed such that 0 ≤ αk ≤ αk+1, k ≥ 1. It is useful to define the cumulative aperiodic workload process,

WA(t) = Σ{k : αk ≤ t} ρk,    (1)

which accumulates all the aperiodic work that arrives in the interval [0, t]. Any algorithm for scheduling both periodic and aperiodic loads will, for any periodic task set and aperiodic task stream {Jk, k ≥ 1}, create a cumulative aperiodic execution process, ε(t), giving the cumulative time during [0, t] that aperiodic tasks were executed. ε(t) is a continuous function which must necessarily satisfy ε(t) ≤ WA(t), t ≥ 0, and we require that the associated algorithm must meet all periodic deadlines.
We assume aperiodic tasks are processed in FIFO order³. The completion time of Jk, denoted Tk, is given by

Tk = min{ t : ε(t) = ρ1 + ... + ρk },    (2)

and the response time of Jk, denoted Rk, is given by

Rk = Tk − αk.    (3)

We seek a scheduling algorithm that will minimize Rk, which is equivalent to minimizing Tk. Thus, we need to find a scheduling algorithm whose associated ε(t) is the supremum or upper envelope of all possible aperiodic execution functions that are associated with algorithms which meet all periodic deadlines. It is important to note that if such an upper envelope can be found, it will lead to the minimum response time for every aperiodic task, not just the average response time. We will determine the upper envelope and the associated optimal aperiodic scheduling algorithm in the next section, under the following assumptions:

• A1: All overhead for context swapping, task scheduling, etc., is assumed to be zero.

• A2: Tasks are ready at the start of their period and do not suspend themselves or synchronize with any other task.

• A3: Any task can be instantly preempted.

• A4: There is unlimited buffer space for the aperiodic tasks.

³In Section 4 we consider the shortest remaining processing time queue discipline, which will result in lower average aperiodic task response times.

3 The Slack Stealing Algorithm

3.1 Formulation

To determine the upper envelope on aperiodic processing, we focus on the maximum amount of processing possible such that all periodic deadlines are met. Consider, for example, the jth job of τi, or τij, which is ready at time Rij = φi + (j − 1)Ti and must be finished by Rij + Di = Dij. During [0, Dij] the processor may execute tasks at a priority level equal to or greater than i, tasks at a priority level below i, or may be idle. Under our fixed-priority system any tasks executed at a priority level lower than i are equivalent to being idle or inactive relative to level i; thus we refer to level-i inactivity as processor time spent on activities with priority lower than i. Since level-i inactivity cannot influence the schedulability of any τi job, we seek to find the amount of aperiodic work that can be executed at priority level i or higher during [0, t] and still have τij finish by Dij. Since we seek the largest amount of aperiodic processing possible, and are only concerned with τij's deadline, the processor will be busy with level i or higher priority work from 0 until the completion time of τij.

We now follow the methods developed by Lehoczky, Sha, and Ding [7] and Lehoczky [8] to determine the necessary and sufficient conditions for τij to be schedulable. Suppose ai is the aperiodic processing at level i or higher during [0, t], 0 ≤ t ≤ Dij, resulting from some algorithm. The job τij will finish by Dij, thus meeting its deadline, if and only if there is a time t ∈ [Rij, Dij] at which all ai units of aperiodic processing and all periodic jobs of priority i or higher ready before t, including the j jobs of τi, are completed. Let Pi(t) be the periodic ready work in [0, t], where Pi(t) = Σ{k < i} Ck ⌈max(0, t − φk)/Tk⌉ + jCi. The total ready work in [0, t] is then defined by

Wi(t) = ai + Pi(t) + Ii(t),    (4)

where Ii(t) is the cumulative level-i inactivity in [0, t]. Thus τij will meet its deadline if and only if there exists t ∈ [Rij, Dij] such that Wi(t) = t or, equivalently, Wi(t)/t = 1. This condition for the feasibility of ai can be alternatively expressed as

min{Rij ≤ t ≤ Dij} { Wi(t)/t } ≤ 1.    (5)

If we assume that the aperiodic workload is sufficiently large so that WA(t) > ε(t) for any feasible ε(t), then we can increase ai by Ii(t) and the processor will be continually busy with level i or higher priority work up to the completion time of τij. Equation (5) can now be rewritten as

min{0 ≤ t ≤ Dij} { Wi(t)/t } ≤ 1.    (6)

Given that we want to increase the aperiodic processing time as much as possible, we define Aij to be the largest amount of aperiodic processing possible at level i or higher during [0, Cij], such that Cij ≤ Dij (Cij refers to the completion time of τij). Thus Aij is the largest value such that

min{0 ≤ t ≤ Dij} { (Aij + Pi(t))/t } = 1.    (7)

Aij is well defined because the periodic task set is assumed to be schedulable and the function being minimized is piecewise continuous and decreasing. Aij is increased until a minimum of 1 is exactly achieved. The completion time of τij, or Cij, is the smallest value of t for which equality holds in Equation (7).

Aperiodic processing at level i or higher given by Aij during [0, t] will cause the processor to be constantly busy, but τij will meet its deadline. We now need to guarantee that all jobs of τi meet their deadline. To ensure the schedulability of τi, we define

Ai(t) = Aij,  Ci,j−1 ≤ t < Cij,  j ≥ 1,    (8)

where Ci0 = 0. The non-decreasing step function Ai(t) gives the largest amount of aperiodic processing in [0, t] at priority level i or higher possible such that the processor is constantly busy with priority level i or higher activity but all jobs of τi meet their deadline.
To illustrate, let us consider a task set with two tasks, τ1 and τ2, with C1 = 1, T1 = 4, D1 = 1, φ1 = 0, and C2 = 3, T2 = 6, D2 = 6, φ2 = 0. Note that the tasks follow a deadline monotonic priority order. We restrict our attention to an interval of time [0, H], where H is the hyperperiod of the task set, or the time at which the distribution of periodic arrivals repeats itself. The hyperperiod of a periodic task set is equivalent to the least common multiple of the task periods, which is 12 for this example. Figure 1.a shows the processor schedule if no aperiodic work is processed. The non-decreasing functions A1(t) and A2(t) are shown in Figure 1.b. These are step functions, with jump points corresponding to the completion times of the jobs of τi, or the Cij's, and jump heights corresponding to the Aij values computed by Equation (7). Note that in this example, all jump points for τ1 are known a priori because C1j = D1j for all j ≥ 1. Thus, every job of τ1 has zero slack in the interval of time between its arrival and completion. On the contrary, the jobs of τ2 have non-zero slack, so their execution can be delayed by the processing of aperiodic tasks. As a result, their exact completion times depend on the amount of higher priority aperiodic processing done at run-time. Although the exact completion times for each job of τ2 cannot be determined a priori, their best- and worst-case values are known. For instance, job τ21 in Figure 1.a will complete no earlier than time 4 and no later than time 6, so its jump point is guaranteed to lie somewhere in the time interval [4, 6]. For the particular case in which aperiodics consume all aperiodic processing time possible, the jump point for τ21 is 6, as shown in Figure 1.b for A2(t).
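Equation (7) can be computed directly for this example: Aij is the maximum of t − Pi(t) over t in (0, Dij], attained either at Dij itself or at a higher-priority arrival instant. The sketch below uses this reformulation with hypothetical helper names (`P`, `A`, the `tasks` tuple layout are not the paper's notation) and reproduces the jump heights described above under assumptions A1-A4:

```python
import math

def P(i, j, t, tasks):
    """Periodic ready work Pi(t) at level i in [0, t], counting j jobs of
    task i. tasks: list of (C, T, D, phi) in priority order (0 = highest)."""
    work = j * tasks[i][0]
    for k in range(i):  # strictly higher-priority tasks
        C, T, D, phi = tasks[k]
        work += C * math.ceil(max(0, t - phi) / T)
    return work

def A(i, j, tasks):
    """Largest level-i aperiodic processing Aij such that job j of task i
    still meets its deadline Dij: max over candidate t of t - Pi(t)."""
    C, T, D, phi = tasks[i]
    D_ij = phi + (j - 1) * T + D
    # Candidate maximizers: higher-priority arrival instants and Dij itself.
    candidates = {D_ij}
    for k in range(i):
        Ck, Tk, Dk, phik = tasks[k]
        a = phik
        while a <= D_ij:
            if a > 0:
                candidates.add(a)
            a += Tk
    return max(t - P(i, j, t, tasks) for t in candidates)

# Example 1 from the paper: tau1 = (1, 4, 1, 0), tau2 = (3, 6, 6, 0).
tasks = [(1, 4, 1, 0), (3, 6, 6, 0)]
print(A(0, 1, tasks))  # 0: tau1's first job has zero slack
print(A(0, 2, tasks))  # 3
print(A(1, 1, tasks))  # 1
print(A(1, 2, tasks))  # 3
```

The computed values match the figure's description: A1(t) steps by 3 at each τ1 completion, while A2(t) steps from 1 to 3 at τ21's completion.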
We next determine bounds on the aperiodic processing at each level for which all deadlines of all periodic tasks can still be met. Let Li(t) denote the total amount of aperiodic processing in [0, t] at priority level i, 1 ≤ i ≤ n. For τk to meet all deadlines, it is necessary that

L1(t) + ... + Lk(t) ≤ Ak(t),  1 ≤ k ≤ n.    (9)
Figure 1: Example 1. Illustrating Slack Stealer operation: (a) Processor schedule in absence of aperiodics; (b) Functions used by the Slack Stealer
Let A*(t) = min{1 ≤ k ≤ n} Ak(t) and k*(t) be the index of the highest priority level satisfying Ak*(t)(t) = A*(t). Thus L1(t) + ... + Lk*(t)(t) can be no larger than A*(t). If this sum does assume its maximum value, then all n inequalities in (9) will hold. Hence, all periodic tasks at all levels with deadlines no later than t will meet their deadlines and the processor will be continuously busy throughout [0, t] executing only tasks of priority k*(t) or higher. Hence, all periodic task deadlines before t will be met if and only if

L1(t) + ... + Lk*(t)(t) ≤ A*(t).    (10)

Figure 1.b illustrates the function A*(t) for our task set example. Note that priority level 1 places the tightest constraint on aperiodic processing time available in the interval [0, 1), whereas in the interval [1, 12), priority level 2 places the tightest constraint. Therefore, k*(t) = 1 for 0 ≤ t < 1, and k*(t) = 2 for 1 ≤ t < 12.
We next address the question of the priority level at which the aperiodic tasks can be executed. If L1(t) + ... + Lk*(t)(t) < A*(t), then one can modify L1(t), ..., Lk*(t)(t) to L′1(t) = L1(t) + ... + Lk*(t)(t), L′2(t) = ... = L′k*(t)(t) = 0 and still be feasible. In other words, one can carry out all aperiodic processing at the highest priority level without any reduction in aperiodic capacity. Since elevating the priority level of aperiodic processing reduces their response times, it is optimal to service aperiodic tasks at the highest priority level, and the total aperiodic processing time cannot exceed A*(t), t ≥ 0. It follows, for the case in which WA(t) > ε(t) for all feasible scheduling algorithms, that the upper envelope on aperiodic processing time is given by A*(t), t ≥ 0.
The previous analysis assumed that there was always a sufficiently large amount of aperiodic work to be processed such that aperiodic processing would always use all available slack at all levels. This is, however, not the general case. There may often be times at which aperiodic processing could be done but none is ready. We must modify our analysis to accommodate this case and define the upper envelope. Define

A(t) = cumulative aperiodic processing consumed at any priority level during [0, t],

Ii(t) = level-i inactivity during [0, t], for 1 ≤ i ≤ n and t ≥ 0.

Here, level-i inactivity refers to the cumulative amount of time spent processing periodic tasks of priority i + 1 or lower or any time the processor is idle during [0, t].
Suppose we now start at time s rather than at 0 and wish to determine the maximum amount of aperiodic processing possible during [s, t], t ≥ s. The analysis is the same as before. For example, Aij gives the largest possible amount of aperiodic processing at priority level i or higher that can be carried out in [0, t] and still meet the deadline of τij. However, during [0, s], A(s) units of processing have already been used for aperiodic processing and Ii(s) units of level-i inactive time have taken place, time which was available for level i aperiodic processing but was not used for that purpose. Thus the amount of time available for additional aperiodic processing at time t, Aij, must be reduced by A(s) + Ii(s). Generalizing Equation (8) to an arbitrary time origin s, we define, for t ≥ s,

Ai(s, t) = Aij − A(s) − Ii(s),  Ci,j−1 ≤ t < Cij.    (11)

This quantity gives the maximum amount of aperiodic processing time possible during [s, t] at level i or higher with all τi deadlines still being met. The analysis is now the same as the earlier analysis with s = 0. Specifically, define

A*(s, t) = min{1 ≤ i ≤ n} Ai(s, t)    (12)

and Ak*(t)(s, t) = A*(s, t), where k*(t) is the highest priority level achieving this minimum. As before, the total aperiodic processing during [s, t] cannot exceed A*(s, t) and all should be executed at the highest priority level. The function Ak*(t)(s, t) thus gives the upper envelope on aperiodic processing over any interval [s, t] during which the aperiodic workload does not vanish.
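Plugging the Example 1 values at s = 5.5 into Equations (11) and (12) gives the slack the stealer finds there; the per-level slack values (A13 = 6, A22 = 3) are read off Figure 1.b, and the function name below is illustrative:

```python
def remaining_slack(A_ij, consumed, inactivity):
    """Equation (11): Ai(s, t) = Aij - A(s) - Ii(s)."""
    return A_ij - consumed - inactivity

# Example 1 at s = 5.5, with A(5.5) = 0:
A1 = remaining_slack(6, 0.0, 3.5)  # level 1: A13 = 6, I1(5.5) = 3.5
A2 = remaining_slack(3, 0.0, 0.5)  # level 2: A22 = 3, I2(5.5) = 0.5
A_star = min(A1, A2)               # Equation (12)
print(A_star)  # 2.5
```

Both levels yield 2.5 units, so A*(5.5, t) = 2.5, matching the capacity the slack stealer finds at time 5.5 in the example that follows.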
Referring back to our previous example, suppose that no aperiodic work was ready during [0, 5.5] and an aperiodic task, τap, requiring 2 units of computation arrives at 5.5. We now use 5.5 as the new time origin and note that A(5.5) = 0, I1(5.5) = 3.5 (corresponding to the level-1 inactivity during [1, 4] and [5, 5.5]) and I2(5.5) = 0.5 (corresponding to the level-2 inactivity during [5, 5.5]). The level-i inactivity values can be visualized in Figure 1.a. Since we have changed the time origin to 5.5, the curves from Figure 1.b must be adjusted for t ≥ 5.5, to reflect the fact that some aperiodic processing time has been lost due to inactivity. Thus, the functions A1(5.5, t), A2(5.5, t), and A*(5.5, t) are obtained according to Equations (11) and (12). These are depicted in Figure 2. Given these functions, the slack stealer finds 2.5 units of processing capacity at time 5.5 and immediately allocates 2 units to service τap. Consequently, τap finishes at time 7.5 and leaves 0.5 units of aperiodic processing available for any aperiodic tasks that may arrive during [5.5, 7.5]. If no other aperiodic tasks arrive, the processor will spend [7.5, 8] on τ2, [8, 9] on τ1, [9, 11.5] on τ2, and then idle during [11.5, 12).

Figure 2: Example 1. Illustrating Slack Stealer operation with a change in the time origin
We next define the slack stealing algorithm, an algorithm which achieves the maximum possible amount of aperiodic service time subject to the constraint of meeting all the periodic deadlines. Later we will prove its optimality property using the upper envelope derived in this section.
3.2 Algorithm Description

The slack stealing algorithm uses the functions Ai(t) for each task τi, 1 ≤ i ≤ n, in determining the capacity that can be allocated to aperiodic service. Because the arrival pattern of the periodic workload repeats itself every H time units, or the task set hyperperiod, it is sufficient to compute all Ai(t) functions for 0 ≤ t < H. Given this, we compute the jump heights associated with each job τij of each task τi, according to Equation (7), for 0 ≤ t < H. These jump heights are then stored as pairs of points (i, j), where i is the task priority, 1 ≤ i ≤ n, and j is the job number
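A minimal sketch of the precomputation step just described, under a simple periodic task model (the function names and structure are illustrative, not the paper's implementation):

```python
from functools import reduce
from math import lcm

def hyperperiod(periods):
    # The periodic arrival pattern repeats every H = lcm(T_1, ..., T_n),
    # so the A_i(t) tables need only cover 0 <= t < H.
    return reduce(lcm, periods)

def jobs_in_hyperperiod(periods):
    # Enumerate the (i, j) pairs under which jump heights would be stored:
    # i is the task priority, j the job number within the hyperperiod.
    H = hyperperiod(periods)
    return [(i, j)
            for i, T in enumerate(periods, start=1)
            for j in range(1, H // T + 1)]

# Two tasks with periods 14 and 10 (the timing values of Example 2):
print(hyperperiod([14, 10]))             # 70
print(len(jobs_in_hyperperiod([14, 10])))  # 5 + 7 = 12 jobs
```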

Citations
Journal ArticleDOI
TL;DR: This 25th year anniversary paper for the IEEE Real Time Systems Symposium reviews the key results in real-time scheduling theory and the historical events that led to the establishment of the current real- time computing infrastructure.
Abstract: In this 25th year anniversary paper for the IEEE Real Time Systems Symposium, we review the key results in real-time scheduling theory and the historical events that led to the establishment of the current real-time computing infrastructure. We conclude this paper by looking at the challenges ahead of us.

636 citations


Cites background from "An optimal algorithm for scheduling..."

  • ...The Slack Stealing algorithm overcomes these disadvantages [125, 183, 184]....


01 Jan 2015
TL;DR: This review covers research on the topic of mixed criticality systems that has been published since Vestal’s 2007 paper and covers the period up to and including December 2015.
Abstract: This review covers research on the topic of mixed criticality systems that has been published since Vestal’s 2007 paper. It covers the period up to and including December 2015. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, and systems issues. An appendix lists funded projects in the area of mixed criticality.

471 citations


Cites methods from "An optimal algorithm for scheduling..."

  • ...The ability to run soft tasks in the slack provided by the hard tasks is also supported by the Slack Stealing schemes [82, 89, 166, 212] which have similar properties to servers....


Journal ArticleDOI
Neil Audsley, Alan Burns, Robert I. Davis, Ken Tindell, Andy Wellings
TL;DR: An historical perspective on the development of fixed priority pre-emptive scheduling is provided for the implementation of real-time systems.
Abstract: From its roots in job-shop scheduling, research into fixed priority pre-emptive scheduling theory has progressed from the artificial constraints and simplistic assumptions used in early work to a sufficient level of maturity that it is being increasingly used in the implementation of real-time systems. It is therefore appropriate that within this special issue we provide an historical perspective on the development of fixed priority pre-emptive scheduling.

402 citations


Cites background from "An optimal algorithm for scheduling..."

  • ...The Slack Stealing algorithm suffers from none of these disadvantages (Lehoczky and Ramos-Thuel 1992)....


Journal ArticleDOI
TL;DR: Five new on-line algorithms for servicing soft aperiodic requests in real-time systems, where a set of hard periodic tasks is scheduled using the Earliest Deadline First (EDF) algorithm, can achieve full processor utilization and enhance aperiodic responsiveness.
Abstract: In this paper we present five new on-line algorithms for servicing soft aperiodic requests in realtime systems, where a set of hard periodic tasks is scheduled using the Earliest Deadline First (EDF) algorithm. All the proposed solutions can achieve full processor utilization and enhance aperiodic responsiveness, still guaranteeing the execution of the periodic tasks. Operation of the algorithms, performance, schedulability analysis, and implementation complexity are discussed and compared with classical alternative solutions, such as background and polling service. Extensive simulations show that algorithms with contained run-time overhead present nearly optimal responsiveness. A valuable contribution of this work is to provide the real-time system designer with a wide range of practical solutions which allow to balance efficiency against implementation complexity.

385 citations

Journal ArticleDOI
TL;DR: This paper compares RM against EDF under several aspects, using existing theoretical results, specific simulation experiments, or simple counterexamples to show that many common beliefs are either false or only restricted to specific situations.
Abstract: Since the first results published in 1973 by Liu and Layland on the Rate Monotonic (RM) and Earliest Deadline First (EDF) algorithms, a lot of progress has been made in the schedulability analysis of periodic task sets. Unfortunately, many misconceptions still exist about the properties of these two scheduling methods, which usually tend to favor RM more than EDF. Typical wrong statements often heard in technical conferences and even in research papers claim that RM is easier to analyze than EDF, it introduces less runtime overhead, it is more predictable in overload conditions, and causes less jitter in task execution.Since the above statements are either wrong, or not precise, it is time to clarify these issues in a systematic fashion, because the use of EDF allows a better exploitation of the available resources and significantly improves system's performance.This paper compares RM against EDF under several aspects, using existing theoretical results, specific simulation experiments, or simple counterexamples to show that many common beliefs are either false or only restricted to specific situations.

350 citations

References
Journal ArticleDOI
TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.

7,067 citations

Proceedings ArticleDOI
05 Dec 1989
TL;DR: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set and a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets are represented.
Abstract: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is represented. In addition, a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets is presented. It is shown that as the task set size increases, the task computation times become of little importance, and the breakdown utilization converges to a constant determined by the task periods. For uniformly distributed tasks, a breakdown utilization of 88% is a reasonable characterization. A case is shown in which the average-case breakdown utilization reaches the worst-case lower bound of C.L. Liu and J.W. Layland (1973). >

1,582 citations

Journal ArticleDOI
TL;DR: It is shown that the problem is NP-hard in all but one special case and the complexity of optimal fixed-priority scheduling algorithm is discussed.

1,230 citations

Journal ArticleDOI
TL;DR: A new algorithm is presented, the Sporadic Server algorithm, which greatly improves response times for soft deadline aperiodic tasks and can guarantee hard deadlines for both periodic and aperiodic tasks.
Abstract: This thesis develops the Sporadic Server (SS) algorithm for scheduling aperiodic tasks in real-time systems. The SS algorithm is an extension of the rate monotonic algorithm which was designed to schedule periodic tasks. This thesis demonstrates that the SS algorithm is able to guarantee deadlines for hard-deadline aperiodic tasks and provide good responsiveness for soft-deadline aperiodic tasks while avoiding the schedulability penalty and implementation complexity of previous aperiodic service algorithms. It is also proven that the aperiodic servers created by the SS algorithm can be treated as equivalently-sized periodic tasks when assessing schedulability. This allows all the scheduling theories developed for the rate monotonic algorithm to be used to schedule aperiodic tasks. For scheduling aperiodic and periodic tasks that share data, this thesis defines the interactions and schedulability impact of using the SS algorithm with the priority inheritance protocols. For scheduling hard-deadline tasks with short deadlines, an extension of the rate monotonic algorithm and analysis is developed. To predict performance of the SS algorithm, this thesis develops models and equations that allow the use of standard queueing theory models to predict the average response time of soft-deadline aperiodic tasks serviced with a high-priority sporadic server. Implementation methods are also developed to support the SS algorithm in Ada and on the Futurebus+.

947 citations

Proceedings ArticleDOI
05 Dec 1990
TL;DR: A general criterion for the schedulability of fixed priority scheduling of periodic tasks with arbitrary deadlines is given, and the results are shown to provide a basis for developing predictable distributed real-time systems.
Abstract: Consideration is given to the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the C.L. Liu and J.W. Layland (1973) bound. The results are shown to provide a basis for developing predictable distributed real-time systems. >

867 citations

Frequently Asked Questions (16)
Q1. What contributions have the authors mentioned in the paper "An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems" ?

This paper presents a new algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm. The new algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.

An M/M/1 queueing model can be used to compute the ideal response time bound for such an aperiodic workload, using the M/M/1 formula previously stated.
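The M/M/1 bound referred to here is the standard queueing result W = 1/(μ − λ) for the mean time in system; a quick sketch (the parameter values below are illustrative, not taken from the paper's experiments):

```python
def mm1_response_time(arrival_rate, service_rate):
    # Mean time in system (queueing + service) for an M/M/1 queue:
    # W = 1 / (mu - lambda); requires utilization lambda/mu < 1.
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# A 25% aperiodic load with mean service time 0.5 time units:
# lambda = 0.5 arrivals/unit, mu = 1/0.5 = 2.0 services/unit.
print(round(mm1_response_time(0.5, 2.0), 3))  # 0.667
```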

The longer the aperiodic mean service times, the more the periodics will be forced to interrupt aperiodic processing, thus increasing their response times.

The slack stealer maximizes the time available for aperiodic processing during any interval of time among all algorithms that use fixed priority for the periodic tasks and meet all periodic deadlines. 

At a 70% periodic load, the preemption delays experienced by the aperiodics have a significant impact on their response times.

The performance of the optimal algorithm approaches the ideal M/M/1 bound within a 10% error margin for a relatively large range of total loads.

In this region, the optimal algorithm is capable of “masking out” the presence of periodic tasks so efficiently that the responsiveness of aperiodic tasks is almost equal to that attainable on a dedicated processor.

In conclusion, results indicate that TLBD has a particular operating region under which the M/M/1 queueing model can be used to predict average aperiodic response times within a reasonably small error margin.

In fact, the optimal algorithm is capable of maintaining aperiodic response times equal to those of the M/M/1 bound for the entire range of aperiodic loads and both demand-capacity ratios shown in Figure 8.

The distribution of service opportunities is optimal, so aperiodic response times are guaranteed to degrade as gracefully as possible.

With a 5% demand-capacity ratio, the performance of the sporadic server is approximately optimal for aperiodic loads up to 25%, or a combined load of 65%.

The optimal algorithm circumvents this shortcoming by allocating time for aperiodic service in the most aggressive way possible, subject only to schedulability constraints.

The authors should find that aperiodic response times are an increasing function of 1/μ and U, and a decreasing function of U_BD − U_p, the proximity of the periodic load to the task set's breakdown utilization.

To see that changes in the fixed priority order can alter the aperiodic response times, consider the following example. Example 2: Consider two periodic tasks τ_a and τ_b with the following timing requirements: C_a = C_b = 1, T_a = 14, T_b = 10, D_a = 14, D_b = 10, I_a = I_b = 0.

The performance curves for background service are not shown because they have degraded by an order of magnitude relative to all other algorithms.

The sporadic server deviates from the optimal at a very low aperiodic load (less than 3% in Figure 8.a), which reaffirms the higher susceptibility of the sporadic server to the periodic load relative to the optimal algorithm.