Fast and Effective Multiframe Task Parameter Assignment Via Concave Approximations of Demand
ECRTS
Bo Peng, Department of Computer Science, Wayne State University, Detroit, MI, USA
Thidapat Chantem, Department of Electrical and Computer Engineering, Virginia Tech, Arlington, VA, USA
Nathan Fisher, Department of Computer Science, Wayne State University, Detroit, MI, USA
Task parameters in traditional models, e.g., the generalized multiframe (GMF) model, are fixed after task specification time. For tasks whose parameters can be assigned within a range, such as the frame parameters in self-suspending tasks and end-to-end tasks, the optimal offline assignment of such parameters towards schedulability becomes important. The GMF-PA (GMF with parameter adaptation) model proposed in recent work allows frame parameters to be flexibly chosen (offline) in arbitrary-deadline systems. Based on the GMF-PA model, a mixed-integer linear programming (MILP)-based schedulability test was previously given under EDF scheduling for a given assignment of frame parameters in uniprocessor systems. Due to the NP-hardness of the MILP, we present a pseudo-polynomial linear programming (LP)-based heuristic algorithm, guided by a concave approximation algorithm, that achieves a feasible parameter assignment at a fraction of the time overhead of the MILP-based approach. The concave programming approximation algorithm closely approximates the MILP algorithm, and we prove that its speedup factor is (1 + ρ)², where ρ > 0 can be arbitrarily small, with respect to the exact schedulability test of GMF-PA tasks under EDF. Extensive experiments involving self-suspending tasks (an application of the GMF-PA model) reveal that the schedulability ratio is significantly improved compared to other previously proposed polynomial-time approaches in medium and moderately highly loaded systems.

2012 ACM Subject Classification Computer systems organization → Real-time systems

Acknowledgements We are grateful to the anonymous reviewers whose comments helped to significantly improve our paper. This research has been supported in part by the US National Science Foundation (Grant Nos. CNS-1618185, IIS-1724227, and CSR-1618979).
Keywords and phrases generalized multiframe task model (GMF); generalized multiframe task model with parameter adaptation (GMF-PA); self-suspending tasks; uniprocessor scheduling; mixed-integer linear programming; concave approximation; linear programming
1
Introduction

A generalized multiframe (GMF) task, whose model [3] generalizes the multiframe (MF) task model [16] and the sporadic task model, consists of a number of ordered frames where each frame has its own execution time, relative deadline, and frame separation time (the minimum interval between two frames' release times). The GMF model generalizes the sporadic model by using a set of ordered frames to represent an instance of a sporadic task. Instead of setting an identical implicit frame deadline and minimum separation time for each frame as in the MF model, the GMF model assigns each frame an individual deadline and a minimum frame separation time.
The multiframe models (GMF/MF) have many applications. For example, Andersson [1] presented the schedulability analysis of flows in multi-hop networks comprising software-implemented Ethernet switches according to the GMF model. Ding et al. [11] scheduled a set of tasks with an I/O blocking property under the MF model. Self-suspending tasks [18] can be represented using the GMF model, but the problem size can be very large, e.g., in automotive systems. In his ECRTS 2012 keynote [7], Buttle showed many scheduling challenges as the number of ECUs in vehicles increases rapidly each year; there are more than 100 ECUs nowadays, and each task can easily have 50-300 functions. In such complex systems, there are several self-suspending tasks (each consisting of multiple functions) whose end-to-end latencies need to be maintained in distributed settings.
The GMF model increases flexibility compared to the sporadic and MF task models, but all parameters in the GMF model are typically immutable after task specification time. However, frame parameters can be adjusted (within the constraints of the task parameters) to improve schedulability in applications such as self-suspending tasks [20] and end-to-end flows [19]. Frame parameters are mainly used to maintain execution order in such applications (e.g., frame priorities in FP scheduling and frame deadlines in EDF scheduling [22]). In order to optimally assign parameters to improve schedulability, Peng and Fisher [18] extended the GMF model and presented the GMF with parameter adaptation (GMF-PA) model. In the GMF-PA model, frame deadlines and separations can be selected under a set of constraints. In this flexible model, frame parameters are optimally assigned (towards schedulability) offline for each frame by the MILP algorithms [18].
Although the GMF-PA model is more flexible, it has been shown that both the feasibility and the parameter selection problems are very hard to solve. On the feasibility side, Ekberg and Yi [12] proved that the feasibility of sporadic task systems remains coNP-complete even under bounded utilization. On the parameter selection side, the priority assignment of subtasks in end-to-end task systems (originally the classical job-shop scheduling problem) has been shown to be NP-hard [13]. The scheduling of self-suspending tasks (even for self-suspending tasks with at most two frames) is NP-hard in the strong sense [21].
In order to address the feasibility test and parameter selection problem, Peng and Fisher [18] gave an exact schedulability test of GMF-PA tasks when frame parameters are integers. The test is based on mixed-integer linear programming (MILP) under EDF scheduling in uniprocessor systems. A sufficient, MILP-based approximate schedulability test was also developed. Although this sufficient approximation algorithm [18] is quite efficient, it is still MILP-based and thus may require exponential time to solve in general. The goal and contribution of this paper is an efficient linear-programming-based algorithm that can determine the feasibility and select the frame parameters of GMF-PA tasks.
The MILP-based algorithm contains a set of integer variables which form a set of staircase functions/constraints (detailed in Section 5). To transform the MILP-based algorithm into an LP-based algorithm, our idea is to use a set of linear functions to approximate all staircase functions. As such, the selection of the slope values of the linear functions is directly related to the schedulability of a system; if the slope values are not properly set, the linear functions can grossly over-approximate the demand, resulting in a low schedulability ratio (the number of successfully scheduled systems over the total tested).
In order to obtain a close approximation, we first use a set of concave functions that very closely track the demand staircase functions, so as to incur only a very small speedup factor compared to the MILP algorithm. Since there exist no known efficient methods to solve concave programming problems, we use the concave functions to guide the slope assignment of the linear functions in our iterative LP-based algorithm. That is, the LP algorithm runs multiple times, during which it adjusts the slopes of the linear functions based on the concave functions. According to experiments, after a small number of iterations, the LP-based algorithm can approach (or reach) a local optimum¹. We apply the LP-based algorithms to schedule self-suspending tasks under EDF scheduling in uniprocessor systems as a test case.
Our Contributions:
We give a concave approximation algorithm based on the MILP algorithm and prove that the speedup factor of the algorithm is (1 + ρ)² with respect to the exact schedulability test of GMF-PA tasks under EDF scheduling on uniprocessors. The positive constant ρ is a user-defined constant which can be made arbitrarily close to zero.
Since there is no known tractable way to solve a concave programming problem, we develop an LP-based heuristic algorithm based on the concave approximation algorithm for GMF-PA tasks. The LP-based algorithm is an efficient schedulability test and can select frame parameters at the same time.
We apply the LP-based algorithm to schedule multiple-suspension tasks. To exploit the unique property of one-suspension tasks, as opposed to multiple-suspension tasks, we present an improved heuristic algorithm for GMF-PA tasks.
We conduct extensive experiments and show that the LP-based algorithms with fixed numbers of iterations outperform previous work in terms of schedulability and average running time. The fixed numbers of iterations make the LP-based algorithms pseudo-polynomial (the input size depends on the maximum interval length [3]), which is more efficient than the MILP-based approach.
Section 2 surveys the related work. We review the GMF-PA model in Section 3, and we formally state the goal of this paper in Section 4. Section 5 reviews our parameter-adaptation method, which uses mixed-integer linear programming (MILP) to obtain a schedulability test under EDF scheduling. The concave approximation algorithm based on the MILP algorithm is presented in Section 6. Since the concave programming algorithm does not scale well, two iterative LP-based algorithms are presented in Section 7. After applying the LP-based algorithms to self-suspending tasks, Section 8 provides extensive experimental results compared to state-of-the-art results. Finally, Section 9 concludes this work and proposes future work.
2
Related Work
In this section, we introduce the related work on the GMF-PA model in Section 2.1, and survey one of its applications, self-suspending tasks, in Section 2.2.
2.1
The Generalized Multiframe Model
The generalized multiframe (GMF) model was presented by Baruah et al. [3] to extend the sporadic task model and the multiframe (MF) task model [16]. The recurring real-time task (RRT) model [2] generalizes the GMF model to handle conditional code. The digraph model [23] further generalizes the RRT model to allow arbitrary directed graphs (with loops), and it was shown that the feasibility problem on preemptive uniprocessor systems remains tractable (pseudo-polynomial complexity with bounded system utilization). A complete review is given by Stigge and Yi [24].

¹ The local optimum of the iterative LP-based algorithm is reached when all variables converge.
The GMF model has great advantages and has been applied to multiple areas, as described earlier. However, current related models typically assume that parameters are fixed at task specification time. In the GMF-PA model [18], which extends the GMF model, frame parameters are flexible and can be chosen by the MILP-based approach in uniprocessor systems. The dGMF-PA model [19] extends the GMF-PA model to represent end-to-end flows in distributed systems. Similar flexible models, such as the parameter-adaptation model [8] and the elastic model [6], are also used in many applications.
2.2
The SelfSuspending Task Model
A typical self-suspension task model [15] contains two computational frames separated by a self-suspending frame. After the first computational frame finishes, the job suspends execution of the other computational frame until an external operation completes. The order of the frames is required, and a task suspends itself to communicate with external devices, perform I/O operations, offload computation, etc. We call such tasks one-suspension self-suspending tasks.

For one-suspension self-suspending tasks, Ridouard et al. [21] proved that scheduling such periodic self-suspending tasks on a uniprocessor is NP-hard in the strong sense. Due to the hardness of such scheduling problems, Chen and Liu [9] gave a fixed-relative-deadline (FRD) scheduling algorithm to improve the schedulability of sporadic self-suspending tasks on uniprocessor systems. The FRD algorithm assigns frame relative deadlines and schedules the ordering of frames of tasks under EDF scheduling.
The multiple-segment suspending task model [14], which allows multiple suspending frames, explicitly considers the execution sequence of frames in a task. Peng and Fisher [18] utilize MILP to select frame parameters of multiple-segment self-suspending tasks. The MILP algorithm [18] is an optimal FRD algorithm which extends the work by Chen and Liu [9]. A recent review on scheduling self-suspending tasks (mostly one-suspension tasks) can be found in the work by Chen et al. [10].
3
Model
We review the generalized multiframe (GMF) model [3] and the generalized multiframe
model with parameter adaptation (GMF-PA) in this section.
A GMF task τ_i consists of a set of ordered frames, and each frame F_i^j has its own execution time E_i^j, relative deadline D_i^j, and frame separation time P_i^j. All frames of a task τ_i can be represented by the tuple of three vectors (E⃗_i, D⃗_i, P⃗_i) where E⃗_i = [E_i^0, E_i^1, ..., E_i^{N_i−1}], D⃗_i = [D_i^0, D_i^1, ..., D_i^{N_i−1}], and P⃗_i = [P_i^0, P_i^1, ..., P_i^{N_i−1}]. The ℓ'th frame of task τ_i arrives at time a_i^ℓ, has its deadline at a_i^ℓ + d_i^ℓ, and has worst-case execution time e_i^ℓ. Since frames arrive in sequence, the ℓ'th frame corresponds to frame F_i^{ℓ mod N_i}, and we have:
1. a_i^{ℓ+1} ≥ a_i^ℓ + P_i^{ℓ mod N_i}
2. d_i^ℓ = D_i^{ℓ mod N_i}
3. e_i^ℓ = E_i^{ℓ mod N_i}
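To make the frame-indexing rules above concrete, the following Python sketch (the container and function names such as `GMFTask` and `frame_params` are ours, not part of the model) maps the ℓ'th released frame of a task to its cyclic frame parameters and generates the earliest-possible arrival times:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GMFTask:
    """A GMF task as an ordered cycle of frames (hypothetical container)."""
    E: List[int]  # frame execution times E_i^j
    D: List[int]  # frame relative deadlines D_i^j
    P: List[int]  # frame minimum separations P_i^j

def frame_params(task: GMFTask, ell: int):
    """Parameters of the ell'th released frame: rules 2 and 3 above say
    the ell'th frame corresponds to frame (ell mod N_i) of the cycle."""
    j = ell % len(task.E)
    return task.E[j], task.D[j], task.P[j]

def earliest_arrivals(task: GMFTask, count: int):
    """Earliest-possible arrivals per rule 1: a^{ell+1} = a^ell + P^{ell mod N_i}."""
    arrivals = [0]
    for ell in range(count - 1):
        arrivals.append(arrivals[-1] + frame_params(task, ell)[2])
    return arrivals

task = GMFTask(E=[1, 2, 1], D=[3, 4, 3], P=[5, 6, 5])
print(frame_params(task, 4))      # frame 4 maps to frame 1: (2, 4, 6)
print(earliest_arrivals(task, 4)) # [0, 5, 11, 16]
```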
Based on the GMF model, the GMF-PA model [18] is derived to allow frame parameters to be assigned instead of being fixed at task specification time. Let T = {τ_0, τ_1, ..., τ_{n−1}} be a task system of n GMF-PA tasks executing on one processor. The task τ_i = [F_i^0, F_i^1, F_i^2, ..., F_i^{N_i−1}] consists of N_i frames where F_i^j = (E_i^j, D̲_i^j, D̄_i^j, P̲_i^j, P̄_i^j). The j'th frame execution time of the i'th task is E_i^j, and the i'th task-wise execution time is E_i = Σ_{j=0}^{N_i−1} E_i^j.

The lower bound of the relative deadline D_i^j (respectively, of the minimum inter-arrival time between consecutive j-frames, P_i^j) is D̲_i^j (respectively, P̲_i^j), and the upper bound of D_i^j (respectively, P_i^j) is D̄_i^j (respectively, P̄_i^j). The frame parameters D_i^j and P_i^j can be flexibly assigned in the ranges [D̲_i^j, D̄_i^j] and [P̲_i^j, P̄_i^j], respectively. The frame distance

    D_i^{j,k} = D_i^k + Σ_{p=0}^{(k−j−1) mod N_i} P_i^{(j+p) mod N_i}

represents the relative time between the release of the j'th frame and the deadline D_i^k of the k'th frame. For example, D_i^{2,4} = P_i^2 + P_i^3 + D_i^4. The task deadline D_i is the upper bound of D_i^{N_i−1} + Σ_{j=0}^{N_i−2} P_i^j, and the task minimum inter-arrival time P_i is the upper bound of Σ_{j=0}^{N_i−1} P_i^j. The utilization of task τ_i is U_i = E_i/P_i, and the utilization of a task system is U_cap = Σ_{i=0}^{n−1} U_i.
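The frame distance and utilization definitions translate directly into code; this small Python sketch (the function names are ours) mirrors the formulas, including the worked example D_i^{2,4} = P_i^2 + P_i^3 + D_i^4:

```python
def frame_distance(D, P, j, k):
    """D_i^{j,k}: time from the release of the j'th frame to the deadline
    of the k'th frame, i.e. D^k plus the separations P^{(j+p) mod N}
    for p = 0 .. (k - j - 1) mod N."""
    N = len(D)
    dist = D[k % N]
    for p in range(((k - j - 1) % N) + 1):
        dist += P[(j + p) % N]
    return dist

def task_utilization(E, P_task):
    """U_i = E_i / P_i, where E_i is the sum of the frame execution times."""
    return sum(E) / P_task

D, P = [3, 4, 3, 5, 6], [5, 6, 5, 7, 8]
print(frame_distance(D, P, 2, 4))       # P[2] + P[3] + D[4] = 5 + 7 + 6 = 18
print(task_utilization([1, 2, 1], 20))  # 4/20 = 0.2
```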
Frame parameters (D_i^j and P_i^j) must satisfy the localized Monotonic Absolute Deadlines (lMAD) property [3] to maintain frame execution order. That is, the absolute deadline of the j'th frame must be no later than that of the (j+1)'th frame (D_i^j ≤ P_i^j + D_i^{(j+1) mod N_i}, ∀i, j). Figure 1 shows an example of the GMF model with the lMAD property. The lMAD property is widely used in systems which use first-in first-out (FIFO) scheduling for a shared resource. E.g., a network can be seen as a shared resource, and packets sent from a computational node to a network node follow FIFO scheduling.
[Figure 1: An example of the GMF model with the lMAD property.]
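A minimal check of the lMAD property, as a Python sketch (the function name is ours):

```python
def satisfies_lmad(D, P):
    """Check localized Monotonic Absolute Deadlines: the absolute deadline
    of the j'th frame is no later than that of the (j+1)'th frame,
    i.e. D^j <= P^j + D^{(j+1) mod N} for every j."""
    N = len(D)
    return all(D[j] <= P[j] + D[(j + 1) % N] for j in range(N))

print(satisfies_lmad([3, 4, 3], [5, 6, 5]))   # True: 3<=5+4, 4<=6+3, 3<=5+3
print(satisfies_lmad([10, 2, 3], [1, 6, 5]))  # False: 10 > 1+2
```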
Let dbf_i(t, F⃗_i) be the task demand bound function of a GMF-PA task τ_i within the interval length t. Let F⃗_i = [D_i^0, P_i^0, D_i^1, P_i^1, ..., D_i^{N_i−1}, P_i^{N_i−1}] represent an assignment of values for all the task parameters (frame deadlines and separations) of task τ_i. The task demand bound function dbf_i(t, F⃗_i) accounts for task τ_i's accumulated execution time of frames which have both release times and deadlines inside the interval of length t. We use the notation dbf_i(t, D_i^{j,k}) to represent the demand for the k'th frame when the first frame to arrive in the interval of length t is the j'th frame. The relationship between the frame demand and the task demand will be presented in Section 5. In a uniprocessor system, the sufficient and necessary condition for schedulability of a task set T is shown in Equation 1:

    Σ_{τ_i∈T} dbf_i(t, F⃗_i) ≤ t, ∀t.    (1)
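Equation 1 can be checked mechanically once each task's demand bound function is available. The sketch below is a hypothetical driver (our naming), with a textbook sporadic-task dbf standing in for dbf_i(t, F⃗_i); it tests the condition at integer interval lengths up to a bounded horizon:

```python
def edf_feasible(dbfs, horizon):
    """Equation 1: the task set is schedulable iff the summed demand never
    exceeds the interval length. `dbfs` is a list of callables t -> dbf_i(t)
    for an already-fixed parameter assignment; we check integer t only."""
    return all(sum(dbf(t) for dbf in dbfs) <= t for t in range(1, horizon + 1))

def sporadic_dbf(C, D, T):
    """Classic sporadic-task demand bound: dbf(t) = max(0, floor((t-D)/T)+1)*C.
    Used here as a stand-in demand function for illustration."""
    return lambda t: max(0, (t - D) // T + 1) * C

print(edf_feasible([sporadic_dbf(1, 2, 4), sporadic_dbf(2, 5, 5)], 40))  # True
print(edf_feasible([sporadic_dbf(3, 1, 4)], 10))                         # False
```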
4
Problem Statement

▶ Problem Definition. Given the above model, our goal is to find an optimal and valid assignment F⃗_i of the frame parameters of all tasks so that the worst-case demand Σ_{τ_i∈T} dbf_i(t, F⃗_i) over all time intervals of length t is minimized.
5
The MILP Algorithm
We review the MILP algorithm [18], which solves the problem defined in Section 4 under EDF scheduling in uniprocessor systems, since the proposed concave programming and LP-based algorithms are closely related to it.
Figure 2 shows the MILP algorithm. Notations in bold font are constants and the other
notations are variables. Lines 3 and 5 are the requirements that a feasible system must obey.
Line 4 shows the lMAD property. Line 6 shows the calculation of the demand for every possible sequence of frames of task τ_i over any interval of length t. To calculate all possible frame demands, we use the notation² y_i^{j,k}(t) to denote the demand of the k'th frame of task τ_i starting from the j'th frame over a t-length interval. To calculate the worst-case demand under EDF scheduling, the starting j'th frame arrives exactly at the start of the interval and subsequent frames arrive as soon as possible (e.g., see [3] for GMF schedulability).
The inequality (t − t_b)/P_i ≤ x_i^{j,k}(t) − realmin/P_i is the constraint that decides the value of x_i^{j,k}(t), which in turn decides the value of y_i^{j,k}(t). The length t_b is the summation of the previous periods ⌊t/P_i⌋·P_i and the frame distance D_i^{j,k} from the starting j'th frame to the k'th frame, and the constant realmin is the smallest representable positive number for the MILP solver. For example, the length t_b = D_i^{1,3} + ⌊t/P_i⌋·P_i if we consider the interval starting with an arrival of the first frame and ending at the deadline of the third frame. When t ≥ t_b, the integer variable x_i^{j,k}(t) ∈ {0, 1} must be one for the inequality in Line 6 to be feasible, and the demand E_i^k contributes to y_i^{j,k}(t). When t < t_b, x_i^{j,k}(t) can be either zero or one. However, the MILP tends to choose zero for x_i^{j,k}(t) to obtain a smaller demand (shown in Lemma 1). We calculate the demand y_i^{j,k}(t) for all possible combinations of i, j, k, and t in Line 6. For simplicity, we use "∀" to represent the ranges of variables. The task index i ranges from zero to n − 1. The superscripts j and k range from zero to N_i − 1. The maximum integer interval length [3] is H = ⌈(U_cap/(1 − U_cap))·max_{τ_i∈T}(P_i)⌉.
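The role of x_i^{j,k}(t) and t_b can be seen in a direct (non-MILP) computation of the worst-case frame demand; this sketch (our function name) sets x to one exactly when t ≥ t_b:

```python
def frame_demand(t, E_k, D_jk, P_i):
    """Worst-case demand of the k'th frame over a t-length interval that
    starts with the j'th frame, mirroring Line 6 of the MILP:
    y = x*E_k + floor(t/P_i)*E_k with x = 1 iff t >= t_b,
    where t_b = D^{j,k} + floor(t/P_i)*P_i."""
    full_periods = t // P_i
    t_b = D_jk + full_periods * P_i
    x = 1 if t >= t_b else 0
    return x * E_k + full_periods * E_k

print(frame_demand(4, 2, 5, 10))   # t < t_b: 0
print(frame_demand(7, 2, 5, 10))   # t >= t_b: E_k = 2
print(frame_demand(17, 2, 5, 10))  # one full period plus current frame: 4
```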
▶ Lemma 1 (from [18]). The value of y_i^{j,k}(t) in the MILP is the exact worst-case demand of frame F_i^k over a t-length interval when the first frame to arrive in the interval is F_i^j (with respect to the frame parameters assigned to each frame of τ_i by the MILP).
Line 7 calculates task τ_i's demand y_i^j(t) whose starting frame in the t-length interval is the j'th frame. In Line 8, the demand y_i(t) is the maximum demand for τ_i over all y_i^j(t). Finally, the demand of all tasks, Σ_{i=0}^{n−1} y_i(t), is set to be no larger than L·t, as shown in Equation 1.

² The term dbf_i(t, D_i^{j,k}) represents the frame demand, and the term y_i^{j,k}(t) is a free variable in the mathematical programming formulation that is used to calculate the demand dbf_i(t, D_i^{j,k}).
Parameter Selection and Exact Feasibility Test.
1   minimize: L
2   subject to:
3   E_i^k ≤ D̲_i^k ≤ D_i^k ≤ D̄_i^k, ∀i, k.
    E_i^k ≤ P̲_i^k ≤ P_i^k ≤ P̄_i^k, ∀i, k.
4   D_i^k ≤ P_i^k + D_i^{(k+1) mod N_i}, ∀i, k.
5   Σ_{k=0}^{N_i−1} P_i^k ≤ P_i,   D_i^{N_i−1} + Σ_{j=0}^{N_i−2} P_i^j ≤ D_i, ∀i.
6   y_i^{j,k}(t) = x_i^{j,k}(t)·E_i^k + ⌊t/P_i⌋·E_i^k, ∀i, j, k, t.
    (t − t_b)/P_i ≤ x_i^{j,k}(t) − realmin/P_i, ∀i, j, k, t.
    t_b = D_i^{j,k} + ⌊t/P_i⌋·P_i
7   y_i^j(t) = Σ_{k=0}^{N_i−1} y_i^{j,k}(t), ∀i, j, t.
8   y_i(t) ≥ y_i^j(t), ∀i, j.
9   Σ_{i=0}^{n−1} y_i(t) ≤ L·t, ∀t.
10  and: D_i^k, P_i^k, y_i^{j,k}(t), y_i(t), L ∈ R⁺; x_i^{j,k}(t) ∈ {0, 1}.
If the system is schedulable, L ≤ 1. We minimize L in the MILP, which also minimizes the summation of all task demands over all interval lengths³ t. The MILP algorithm's necessity and sufficiency for feasibility are proved in Theorem 2.

▶ Theorem 2 (from [18]). For arbitrary, real-valued parameters, our MILP is a necessary feasibility test when L ≤ 1. When frame parameters are restricted to be integers (i.e., D_i^k, P_i^k ∈ N, ∀i, k), the MILP is an exact feasibility test when L ≤ 1.
6
The Concave Approximation Algorithm
We reviewed our previously proposed MILP in the last section. In this section, we give a concave approximation algorithm for the MILP algorithm and prove that the speedup factor of the concave approximation algorithm (with respect to the optimal FRD/MILP algorithm) can approach one. Although there is no known efficient way to solve a concave programming problem, our concave approximation algorithm plays a key role in the LP-based algorithms presented in the next section.
6.1
The Concave Functions
We first use the concave function in Equation 2 (illustrated by the blue dashed curve of Figure 3) to approximate the exact frame demand determined by the MILP in Line 6 of Figure 2.

³ We take integer-valued t since we cannot check all real-valued t. We also use integer constants t in the concave programming and LP-based algorithms later.
    dbf_i^concave(t, D_i^{j,k}) = max{0, E_i^k·(1+ρ) − E_i^k·ρ·e^{α·(D_i^{j,k} + ⌊t/P_i⌋·P_i − t)}} + ⌊t/P_i⌋·E_i^k    (2)
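Equation 2 translates directly into code. The sketch below (our naming) sets α = (1/ρ)·ln(1 + 1/ρ), as in Lemma 3 later in this section, and checks numerically that the concave curve never drops below the exact staircase demand:

```python
import math

def frame_demand(t, E_k, D_jk, P_i):
    """Exact staircase frame demand (the solid curve in Figure 3)."""
    full = t // P_i
    return (E_k if t >= D_jk + full * P_i else 0) + full * E_k

def dbf_concave(t, E_k, D_jk, P_i, rho):
    """Equation 2 with alpha = (1/rho) * ln(1 + 1/rho)."""
    alpha = (1.0 / rho) * math.log(1.0 + 1.0 / rho)
    t_b = D_jk + (t // P_i) * P_i
    curve = E_k * (1 + rho) - E_k * rho * math.exp(alpha * (t_b - t))
    return max(0.0, curve) + (t // P_i) * E_k

# The concave curve over-approximates the staircase at every integer t.
rho = 0.1
ok = all(dbf_concave(t, 2, 5, 10, rho) >= frame_demand(t, 2, 5, 10) - 1e-9
         for t in range(1, 100))
print(ok)  # True
```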
The concave programming algorithm is constructed by replacing all staircase functions in Line 6 of Figure 2 with y_i^{j,k}(t) = dbf_i^concave(t, D_i^{j,k}) and removing all integer variables. The other lines in Figure 2 remain the same.
[Figure 3: Frame demand of the k'th frame over a t-length interval: the solid staircase is the exact demand, the blue dashed curve the concave approximation, and the red dotted line the staircase approximation; marked points include (0, E_i^k·(1+ρ)), (0, y0), (0, E_i^k), (t, E_i^k), and (t, 0).]
Equation 2 shows our proposed concave approximation function dbf_i^concave(t, D_i^{j,k}) (e.g., the blue dashed curve in Figure 3) for the k'th frame demand of task τ_i during a t-length interval in which the starting frame is the j'th frame. We define the system-wide maximum error rate⁴ ρ. The rate ρ must be larger than zero to ensure that the demand of any approximation function is larger than that of the staircase function for any given deadline. We set ρ as a designer-defined constant in the system, and set the constant α = (1/ρ)·ln(1 + 1/ρ) as shown in Lemma 3. In Lemma 3, we prove that the maximum error rate of the concave function is smaller than the system maximum error rate ρ, and that the concave function approaches the staircase function as ρ decreases.
▶ Lemma 3. The demand of the concave function in Equation 2 over-approximates the corresponding demand in the MILP algorithm, and the error rate of the concave function is smaller than the system error constant ρ when we set α in Equation 2 as follows:

    α = (1/ρ)·ln(1 + 1/ρ).    (3)

Proof. Let ε_y and ε_d be the worst-case error rates of the concave functions in the demand (y-axis) and deadline (x-axis) directions, respectively. Let t_b = D_i^{j,k} + ⌊t/P_i⌋·P_i. The worst rates happen when, in Figure 3 for example, E_i^k·(1+ε_y) = y0 and t·(1+ε_d) = t0. We will prove that ρ ≥ ε_y and ρ ≥ ε_d.

When 0 ≤ t_b ≤ t, the largest demand of the concave function happens at t_b = 0. By substituting E_i^k·(1+ε_y) (respectively, 0) for y_i^{j,k}(t) (respectively, t_b), the concave function becomes E_i^k·(1+ε_y) = E_i^k·(1+ρ) − E_i^k·ρ·e^{α·(0−t)}. After simplification, we get ε_y = ρ − ρ·e^{−α·t}. Thus ρ > ε_y, and ρ is an upper bound of ε_y. Since the concave function is a decreasing function (in t_b) and it passes through the points (0, E_i^k·(1+ε_y)) and (t, E_i^k), the concave function over-approximates the corresponding demand in the MILP when 0 ≤ t_b ≤ t.

When t_b > t, the maximum error in the deadline direction happens at t_b = t·(1+ε_d). By substituting 0 (respectively, t·(1+ε_d)) for y_i^{j,k}(t) (respectively, t_b), we have 0 = E_i^k·(1+ρ) − E_i^k·ρ·e^{α·(t·(1+ε_d)−t)}. After simplification, we have ε_d = (1/(t·α))·ln(1 + 1/ρ). Setting α = (1/ρ)·ln(1 + 1/ρ) gives ε_d = ρ/t. Since t ≥ 1, ρ ≥ ε_d. ◀

⁴ The error rate (with respect to the exact frame demand function) of an approximation function is its percentage increase in the y-axis direction for t ≤ D_i^{j,k}, or its percentage increase in the x-axis direction if t > D_i^{j,k}. The maximum error rate is the largest error rate over all t > 0. E.g., the error rate on the x-axis of the point (t·(1+ρ), 0) in Figure 3 is ρ. The maximum error rate of any approximation function must be smaller than the system-wide maximum error rate ρ.
A speedup factor is a value that quantifies the quality of an approximation algorithm with respect to the optimal scheduling algorithm. A speedup factor S > 1 [4] means that an approximation algorithm can schedule a task system on a speed-S processor if an optimal algorithm can schedule the system on a speed-one processor.

Let L_MILP be the value of the objective function returned by the MILP algorithm and L_concave be the value returned by the concave programming algorithm. We will prove that L_MILP < L_concave < L_MILP·(1+ρ)². L_MILP < L_concave indicates that a task system will be deemed schedulable by the MILP algorithm if the system is schedulable by the concave programming algorithm (which means L_MILP < L_concave ≤ 1). By the definition of the speedup factor, L_concave < L_MILP·(1+ρ)² indicates that the speedup factor of our concave programming algorithm is (1+ρ)² with respect to the MILP algorithm. In other words, L_concave/(1+ρ)² < L_MILP indicates that a task system can be scheduled by the concave programming algorithm on a (1+ρ)²-speed processor if the system can be scheduled by the MILP algorithm on the corresponding speed-one processor.

We prove L_MILP < L_concave in Lemma 4, and L_concave < L_MILP·(1+ρ)² from Lemma 5 to Lemma 8. By Lemmas 4 and 8, we prove in Theorem 9 that the speedup factor of our concave programming algorithm is (1+ρ)² with respect to the MILP algorithm.
▶ Lemma 4. Let L_MILP and L_concave be the values returned by the MILP and concave programming algorithms (assume they exist), respectively. We have:

    L_MILP < L_concave.    (4)

Proof. Let L′_MILP be the value calculated as follows. Assume there exists a solver that can solve the concave programming algorithm and return L_concave along with frame deadlines and separations. We assign the frame parameters returned by the concave programming algorithm to the formulation of the MILP algorithm and obtain the value L′_MILP.

Under the same values of the frame parameters, any frame demand of the concave programming algorithm is larger than its corresponding demand in the MILP algorithm, as shown in Lemma 3. The task demands of the concave programming algorithm with the preassigned frame parameters are thus also larger than the ones from the MILP approach. When we sum task demands over any interval length, L′_MILP is therefore always less than L_concave. Since L′_MILP is calculated with preassigned frame parameters, L′_MILP must not be smaller than L_MILP. If the frame parameters returned by the MILP and concave programming algorithms are identical, L_MILP = L′_MILP. In all, L_MILP ≤ L′_MILP < L_concave, and this lemma is proved. ◀
In order to prove L_concave < L_MILP·(1+ρ)², we first define L′_concave. Let the MILP algorithm return L_MILP, frame deadlines, and separations. If we fix the deadline and separation variables of the concave programming formulation to the values returned by the MILP, we obtain the value L′_concave. We will prove L_concave ≤ L′_concave < L_MILP·(1+ρ)². L_concave ≤ L′_concave is proved in Lemma 5. Based on the demand bound functions defined in Equations 8 and 9, we prove L′_concave < L_MILP·(1+ρ)² in Lemma 8.
▶ Lemma 5. Let L_concave be the optimal value returned by the concave programming algorithm, and L′_concave be the value calculated by the concave programming algorithm using the frame parameters returned by the MILP. We have:

    L_concave ≤ L′_concave.    (5)

Proof. Since the concave programming algorithm minimizes L_concave, L_concave must be the smallest value over all feasibly assigned/preassigned frame parameters, so L_concave < L′_concave when the parameters differ. If the frame parameters returned by the MILP and concave programming algorithms are the same, L_concave = L′_concave. In all, L_concave ≤ L′_concave. ◀
For ease of proof, we consider a staircase approximation function dbf_i^a(t, D_i^{j,k}), illustrated by the red dotted line in Figure 3, for task τ_i over the t-length interval; the solid line shows an example of the staircase demand dbf_i(t, D_i^{j,k}).

Equation 6 shows dbf_i(t, D_i^{j,k}), the k'th frame-demand function of task τ_i over the t-length interval that starts with the j'th frame. The corresponding task demand dbf_i(t, F⃗_i) is shown in Equation 8, and the reasoning is the same as for the relationship between y_i(t) and y_i^{j,k}(t) in the MILP algorithm, i.e., we take the maximum demand over all sequences as the task demand. The approximate frame demand dbf_i^a(t, D_i^{j,k}) and task demand dbf_i^a(t, F⃗_i) (for dbf_i(t, D_i^{j,k}) and dbf_i(t, F⃗_i), respectively) are defined in Equations 7 and 9, respectively. We prove that the approximation demand over-approximates the concave demand in Lemma 6.

    dbf_i(t, D_i^{j,k}) = { 0,                                                   0 ≤ t < D_i^{j,k}
                          { E_i^k,                                               D_i^{j,k} ≤ t ≤ P_i
                          { E_i^k·⌊t/P_i⌋ + dbf_i(t − P_i·⌊t/P_i⌋, D_i^{j,k}),   t > P_i          (6)

    dbf_i^a(t, D_i^{j,k}) = { 0,                                                   0 ≤ t < D_i^{j,k}/(1+ρ)
                            { (1+ρ)·E_i^k,                                         D_i^{j,k}/(1+ρ) ≤ t ≤ P_i
                            { E_i^k·⌊t/P_i⌋ + dbf_i^a(t − P_i·⌊t/P_i⌋, D_i^{j,k}), t > P_i          (7)

    dbf_i(t, F⃗_i) = max_{j=0..N_i−1} { Σ_{k=0}^{N_i−1} dbf_i(t, D_i^{j,k}) }    (8)

    dbf_i^a(t, F⃗_i) = max_{j=0..N_i−1} { Σ_{k=0}^{N_i−1} dbf_i^a(t, D_i^{j,k}) }    (9)
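Equations 6-9 can be implemented recursively. In this Python sketch (our naming), `dbf_frame` follows Equation 6, `dbf_frame_approx` follows Equation 7, `dbf_task` follows Equation 8, and a quick sweep confirms that the approximation dominates the exact demand:

```python
def dbf_frame(t, E_k, D_jk, P_i):
    """Equation 6: exact frame demand."""
    if t > P_i:
        full = t // P_i
        return E_k * full + dbf_frame(t - P_i * full, E_k, D_jk, P_i)
    return E_k if t >= D_jk else 0

def dbf_frame_approx(t, E_k, D_jk, P_i, rho):
    """Equation 7: staircase approximation that releases the inflated
    demand (1+rho)*E_k earlier, at deadline D_jk/(1+rho)."""
    if t > P_i:
        full = t // P_i
        return E_k * full + dbf_frame_approx(t - P_i * full, E_k, D_jk, P_i, rho)
    return (1 + rho) * E_k if t >= D_jk / (1 + rho) else 0

def dbf_task(t, E, D_dist, P_i):
    """Equation 8: max over starting frames j of the summed frame demands;
    D_dist[j][k] holds the frame distances D_i^{j,k}."""
    N = len(E)
    return max(sum(dbf_frame(t, E[k], D_dist[j][k], P_i) for k in range(N))
               for j in range(N))

rho = 0.25
assert all(dbf_frame_approx(t, 2, 5, 10, rho) >= dbf_frame(t, 2, 5, 10)
           for t in range(0, 60))
print(dbf_frame(23, 2, 5, 10), dbf_frame_approx(4, 2, 5, 10, rho))  # 4 2.5
```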
▶ Lemma 6. The demand of task τ_i over any interval length t in Equation 9 is an upper bound of its corresponding concave approximation demand.

Proof. In Lemma 3, we proved that ε_d ≤ ρ. Let t̂ = t − P_i·⌊t/P_i⌋. From Equation 2 and the definition of ε_d, the concave demand with any value assigned for D_i^{j,k} ∈ [0, t̂·(1+ε_d)] is smaller than E_i^k·(1+ρ), and the demand is zero when D_i^{j,k} > t̂·(1+ε_d). Since dbf_i^a(t, D_i^{j,k}) = E_i^k·(1+ρ) when D_i^{j,k} ≤ t̂·(1+ρ) and ε_d ≤ ρ, the demand function dbf_i^a(t, D_i^{j,k}) over-approximates the concave demand. For the task-wise demand dbf_i^a(t, F⃗_i), we take the summation of all frame demands dbf_i^a(t, D_i^{j,k}) of task τ_i for each sequence (sequences differ in the starting j'th frame of the t-length interval), and take the maximum demand over all sequences as the task demand. The task demand dbf_i^a(t, F⃗_i) thus also over-approximates the corresponding concave demand. In all, we have proved this lemma. ◀
With the demand bound functions shown in Equations 8 and 9, we prove L′_concave < L_MILP·(1+ρ)² in Lemmas 7 and 8.
▶ Lemma 7. For the task τ_i's demand dbf_i(t, F⃗_i) and its approximation demand dbf_i^a(t, F⃗_i) in the t-length time interval, we have: dbf_i((1+ρ)·t, F⃗_i)·(1+ρ) ≥ dbf_i^a(t, F⃗_i).

Proof. We first prove dbf_i((1+ρ)·t, D_i^{j,k})·(1+ρ) ≥ dbf_i^a(t, D_i^{j,k}); then dbf_i((1+ρ)·t, F⃗_i)·(1+ρ) ≥ dbf_i^a(t, F⃗_i) can be extended by Equations 8 and 9. We classify all interval lengths t into three sets:
T1: 0 ≤ t < D_i^{j,k}/(1+ρ),
T2: D_i^{j,k}/(1+ρ) ≤ t ≤ P_i,
T3: otherwise.
When t ∈ T1, dbf_i(t, D_i^{j,k}) = dbf_i^a(t, D_i^{j,k}) = 0. Since demand bound functions are monotonically increasing functions, dbf_i((1+ρ)·t, D_i^{j,k})·(1+ρ) ≥ dbf_i(t, D_i^{j,k}) = dbf_i^a(t, D_i^{j,k}).
When t ∈ T2, we know that dbf_i(t′, D_i^{j,k}) = E_i^k for D_i^{j,k} ≤ t′ ≤ P_i from Equation 6. Letting t′ = t·(1+ρ), we have dbf_i(t·(1+ρ), D_i^{j,k}) = E_i^k for D_i^{j,k}/(1+ρ) ≤ t ≤ P_i. From Equations 6 and 7, we then know that dbf_i(t·(1+ρ), D_i^{j,k})·(1+ρ) = dbf_i^a(t, D_i^{j,k}) for D_i^{j,k}/(1+ρ) ≤ t ≤ P_i.
When t ∈ T3, it is trivial to see that dbf_i((1+ρ)·t, D_i^{j,k})·(1+ρ) ≥ dbf_i^a(t, D_i^{j,k}), since the demand is iteratively calculated from the demand for t ∈ T1 ∪ T2. ◀
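Lemma 7 can be sanity-checked numerically. This self-contained sketch re-implements Equations 6 and 7 (our naming; real-valued t, a hypothetical spot check rather than a proof) and verifies dbf_i((1+ρ)·t)·(1+ρ) ≥ dbf_i^a(t) on a grid of interval lengths:

```python
def dbf_frame(t, E_k, D_jk, P_i):
    # Equation 6 (exact frame demand), accepting real-valued t.
    if t > P_i:
        full = t // P_i
        return E_k * full + dbf_frame(t - P_i * full, E_k, D_jk, P_i)
    return E_k if t >= D_jk else 0

def dbf_frame_approx(t, E_k, D_jk, P_i, rho):
    # Equation 7 (staircase approximation).
    if t > P_i:
        full = t // P_i
        return E_k * full + dbf_frame_approx(t - P_i * full, E_k, D_jk, P_i, rho)
    return (1 + rho) * E_k if t >= D_jk / (1 + rho) else 0

rho = 0.25
holds = all(
    dbf_frame((1 + rho) * t, E, D, P) * (1 + rho)
    >= dbf_frame_approx(t, E, D, P, rho) - 1e-9
    for (E, D, P) in [(2, 5, 10), (2, 9, 10), (3, 4, 7)]
    for t in [0.5 * i for i in range(1, 120)]
)
print(holds)  # True
```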
▶ Lemma 8. Let L_MILP be the optimal value returned by the MILP algorithm, and L′_concave be the value calculated with the frame parameters returned by the MILP. We have:

    L′_concave < L_MILP·(1+ρ)².    (10)

Proof. Line 9 of Figure 2 shows that L is the largest value of Σ_{i=0}^{n−1} y_i(t)/t over all values of t in the MILP algorithm (this can be derived from Lemma 1). We also require this line in the concave programming algorithm. From Lemma 7, we know that dbf_i((1+ρ)·t, F⃗_i)·(1+ρ) ≥ dbf_i^a(t, F⃗_i) for any task τ_i over any t-length interval. Letting t = (1+ρ)·t′, we have:

    L_MILP = max_{t>0} Σ_{τ_i∈T} dbf_i(t, F⃗_i) / t                      (By Lemma 1)
           = Σ_{τ_i∈T} dbf_i((1+ρ)·t′, F⃗_i) / ((1+ρ)·t′)
           = Σ_{τ_i∈T} dbf_i((1+ρ)·t′, F⃗_i)·(1+ρ) / ((1+ρ)²·t′)
           ≥ Σ_{τ_i∈T} dbf_i^a(t′, F⃗_i) / ((1+ρ)²·t′)                   (By Lemma 7)
           ≥ L′_concave / (1+ρ)²                                         (By Lemma 6)    (11)

◀
▶ Theorem 9. When the concave programming algorithm returns integer frame deadlines and separation times, the speedup factor of our concave programming algorithm with respect to the MILP algorithm is (1+ε)².
Proof. In Lemmas 4, 5, and 8, we have proved that L_MILP ≤ L_concave ≤ L_MILP·(1+ε)².
L_MILP ≤ L_concave indicates that a task system is deemed schedulable (with integer frame
parameters) by the MILP if the task system is deemed schedulable (with integer parameters)
Figure 4 The LP-based Algorithm for GMF-PA Tasks:
1  Initialize D as D_i^k ← (E_i^k/E_i)·P_i, L_last ← ∞, and L_cur ← ∞
2  repeat
3    L_last ← L_cur
4    S ← computeSlope(D)
5    [D, L_cur] ← HeuristicLP(D, S)
6  until L_last − L_cur < threshold
7  Round the frame deadlines D to integers.
8  [L_cur] ← HeuristicLP_fixedDeadline(D, S)
9  if L_cur ≤ 1
10   then return schedulable
11   else return unschedulable
by the concave programming algorithm. L_MILP ≤ L_concave shows that our concave programming algorithm is an approximation algorithm for the MILP.
We divide both sides of the inequality L_concave ≤ L_MILP·(1+ε)² by (1+ε)² to get L_concave/(1+ε)² ≤ L_MILP. The quantity L_concave/(1+ε)² corresponds to changing the processor speed from one to (1+ε)². Thus, a task system must be schedulable by the concave programming algorithm on a (1+ε)²-speed processor if the task system is schedulable by the MILP on a unit-speed processor. From the definition of the speedup factor, we have proved that the speedup factor of our concave programming algorithm with respect to the MILP is (1+ε)². ◀
7 The Linear Programming-Based Heuristic Algorithm and its Application to One-Suspension Self-Suspending Tasks
Until now, we have constructed the concave programming approximation algorithm for the MILP-based algorithm. Due to the difficulty of solving concave programming (or non-convex optimization) problems in general, we use a heuristic LP-based scheme to efficiently select the frame parameters of GMF-PA tasks, and apply it to self-suspending tasks. For ease of presentation, we let D_i^k ≥ E_i^k, Σ_k D_i^k = P_i, and P_i^k = D_i^k. In this case, frame deadlines are constrained by the frame execution times and the l-MAD property. We present the LP-based heuristic algorithm in Section 7.1, and further optimize the LP-based algorithm to schedule one-suspension self-suspending tasks in Section 7.2.
7.1 The Linear Programming-Based Heuristic Algorithm
The general routine of the LP-based scheme for GMF-PA tasks is: 1) We initialize the frame parameters of the GMF-PA tasks. 2) Given the frame parameters, we recalculate a set of linear functions, guided by the concave programming algorithm, which approximate the staircase functions for the frame demands in the MILP. 3) We run the LP algorithm (shown later) based on the assigned linear functions, and receive frame parameters as outputs. If the difference in L values between the current and the last iterations is no smaller than some threshold, the program goes back to Step 2. 4) We round the frame parameters to integers and run the LP algorithm with the fixed integer-valued parameters to get the final assignment.
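The control flow of this routine can be sketched as follows. This is only a stand-in to show the iteration structure: `compute_slope` and `heuristic_lp` are hypothetical stubs (the real steps rebuild the linearized demand functions and solve an LP); the stub merely halves the gap between L and its lower bound Σ U_i, mimicking the monotone decrease of L argued later in Theorem 10.

```python
# Sketch of the outer loop of the LP-based scheme (Figure 4), with the LP
# solve stubbed out. compute_slope and heuristic_lp are hypothetical
# placeholders, not the paper's implementation.

def compute_slope(deadlines):
    # Placeholder for Figure 5: would return the slope matrix S.
    return None

def heuristic_lp(deadlines, slopes, l_last, l_lower_bound):
    # Placeholder for the LP solve: halve the gap to the lower bound,
    # imitating the monotone decrease of L.
    return deadlines, l_lower_bound + (l_last - l_lower_bound) / 2.0

def lp_based_scheme(exec_times, period, util_sum, initial_l, threshold=0.01):
    # Step 1: proportional deadline assignment D_i^k = (E_i^k / E_i) * P_i.
    total = float(sum(exec_times))
    deadlines = [e / total * period for e in exec_times]
    l_cur, iterations = initial_l, 0
    while True:  # repeat ... until L_last - L_cur < threshold
        l_last = l_cur
        slopes = compute_slope(deadlines)            # Step 2: relinearize
        deadlines, l_cur = heuristic_lp(deadlines, slopes, l_last, util_sum)
        iterations += 1
        if l_last - l_cur < threshold:
            break
    # Step 4 (rounding to integer deadlines) is omitted in this sketch.
    return l_cur, iterations
```

With the halving stub, the loop terminates once the per-iteration improvement drops below the threshold, which is exactly the stopping rule of Figure 4.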
Figure 5 computeSlope(D):
1  Calculate all D_i^{j,k'} from D
2  t0 ← t − ⌊t/P_i⌋·P_i
3  y_i^{j,k'}(t0) ← E_i^k·(1+ε) − E_i^k·ε·e^{−λ·(t0 − D_i^{j,k'})}
4  if D_i^{j,k'} > t0
5    then s_i^{j,k}(t) ← (0 − E_i^k)/((λ⁻¹·ln(1 + 1/ε) + t0) − t0)
6  elseif D_i^{j,k'} == t0
7    then s_i^{j,k}(t) ← ∂/∂D_i^{j,k} dbf_i^concave(t0, D_i^{j,k})
8  else s_i^{j,k}(t) ← (y_i^{j,k'}(t0) − E_i^k)/(D_i^{j,k'} − t0)
9  return S
S is the matrix that stores all slopes s_i^{j,k}(t).
In The LP-based Algorithm for GMF-PA Tasks (Figure 4), we initialize the frame deadlines by proportional deadline assignment (PDA [15]) to D_i^k = (E_i^k/E_i)·P_i. Given the deadline matrix D, which stores all D_i^k, we calculate all slopes and store them in the matrix S. We replace Line 6 of Figure 2 with Equation 12 to transform the algorithm into an LP algorithm HeuristicLP(D, S) (Line 5 of Figure 4). The slope element s_i^{j,k}(t) of S, which corresponds to y_i^{j,k}(t), is calculated by the algorithm shown in Figure 5, and all lines pass through the point (t0, E_i^k). The linear functions are illustrated by the red lines in Figures 6–8. If the deadline D_i^{j,k'} (generated in the previous iteration) is smaller than t0 = t − ⌊t/P_i⌋·P_i, we calculate the demand y_i^{j,k'}(t0) of the concave function at D_i^{j,k'}. The slope of the line is determined by the two points (D_i^{j,k'}, y_i^{j,k'}(t0)) and (t0, E_i^k), as illustrated in Figure 6. If the deadline D_i^{j,k'} equals t0, we calculate the slope by taking the tangent of the concave function at the point (t0, E_i^k), as shown in Figure 7. If the deadline is larger than t0, we use the two points (t0, E_i^k) and (λ⁻¹·ln(1 + 1/ε) + t0, 0), the crossing point of the x-axis and the concave function, to calculate the slope; the resulting line is shown in Figure 8. The slope matrix S is adjusted in each iteration of the loop in Figure 4.
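The exact form of the concave over-approximation (Equation 7) is not reproduced in this section; the sketch below is an assumed reconstruction from the constants given in the text: λ = (1/ε)·ln(1 + 1/ε), the curve passing through (t0, E_i^k), and the x-axis crossing at λ⁻¹·ln(1 + 1/ε) + t0 (which simplifies to t0 + ε). The function and variable names are ours.

```python
import math

# Assumed per-frame concave over-approximation of demand, as a function of
# the frame deadline d for a fixed residual interval length t0:
#   y(d) = E*(1+eps) - E*eps*exp(-lam*(t0 - d)),  lam = (1/eps)*ln(1 + 1/eps)
# It passes through (t0, E) and crosses the x-axis at t0 + eps.

def concave_demand(d, t0, exec_time, eps=0.1):
    lam = (1.0 / eps) * math.log(1.0 + 1.0 / eps)
    return exec_time * (1 + eps) - exec_time * eps * math.exp(-lam * (t0 - d))

def slope(d_prev, t0, exec_time, eps=0.1):
    """Slope of the approximating line through (t0, E), per Figure 5's cases."""
    lam = (1.0 / eps) * math.log(1.0 + 1.0 / eps)
    cross = t0 + math.log(1.0 + 1.0 / eps) / lam   # x-axis crossing = t0 + eps
    if d_prev > t0:
        # Figure 8: line through (t0, E) and the crossing point (cross, 0).
        return (0.0 - exec_time) / (cross - t0)
    elif d_prev == t0:
        # Figure 7: tangent of the concave curve at (t0, E): dy/dD = -E*eps*lam.
        return -exec_time * eps * lam
    else:
        # Figure 6: line through (d_prev, y(d_prev)) and (t0, E).
        y = concave_demand(d_prev, t0, exec_time, eps)
        return (y - exec_time) / (d_prev - t0)
```

All three cases yield decreasing lines in the deadline variable, consistent with demand shrinking as a frame's deadline grows.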
[Figures 6–8: the staircase frame demand and the approximating lines through the point (t0, E_i^k), for the three cases D_i^{j,k'} < t0, D_i^{j,k'} = t0, and D_i^{j,k'} > t0.]
The loop in Figure 4 keeps calling the function HeuristicLP(D, S) until the difference of the L values in two consecutive iterations is smaller than the positive threshold. L_last and L_cur represent the L values of the last and current iterations, respectively. The HeuristicLP_fixedDeadline(D, S) algorithm (Line 8 of Figure 4) uses the integer deadlines to maintain sufficiency for schedulability, which is proved in Theorem 11. We first round the frame deadlines up to integers. For each task, we then keep reducing the largest frame deadline by one until their summation equals the task deadline/period. We assign the deadline variables to these integer values in Line 7 of Figure 4; the other parts are the same as in the HeuristicLP(D, S) algorithm. The system is schedulable if L ≤ 1.
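The rounding step described above (round up, then repeatedly decrement the largest deadline until the sum matches the task deadline/period) can be sketched directly; the function name is ours, and we assume the fractional deadlines already sum to the period, so rounding up can only overshoot.

```python
import math

def round_deadlines(deadlines, period):
    """Round frame deadlines up to integers, then keep reducing the largest
    deadline by one until their sum equals the task deadline/period, as in
    the preparation step for HeuristicLP_fixedDeadline."""
    rounded = [math.ceil(d) for d in deadlines]
    while sum(rounded) > period:
        i = rounded.index(max(rounded))  # pick (one of) the largest deadlines
        rounded[i] -= 1
    return rounded
```

For example, deadlines [2.3, 3.7, 4.1] with a period of 10 round up to [3, 4, 5] (sum 12) and are then decremented back down until the sum is 10 again.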
We prove in Theorem 10 that the loop of the function The LP-based Algorithm for GMF-PA Tasks stops after a finite number of iterations. The sufficiency of the LP-based algorithm for schedulability is proved in Theorem 11.
▶ Theorem 10. The loop of the function The LP-based Algorithm for GMF-PA Tasks stops in a finite number of iterations.
Proof. We first prove that L decreases from one iteration to the next. Before each iteration of the algorithm HeuristicLP(D, S), we use the deadline assignment D' from the last iteration to calculate the slopes S of the frame functions in the current iteration. Let L_last be the value of L in the last iteration. In the current iteration, assume that we use the same set of deadlines D' to calculate the value L_cur.
In the first and third cases, shown in Figures 6 and 8, the frame demand is either smaller than (if the last iteration is the first iteration) or equal to the one in the last iteration. In the second case, the frame demand is the same as in the last iteration. In all cases, the same set of deadlines yields L_cur ≤ L_last. Since we minimize L in the algorithm, the deadlines returned by HeuristicLP(D, S) must generate a value of L that is no larger than L_cur. Thus, we have proved that L decreases from one iteration to the next. We also set a threshold on the difference of the L values in two consecutive iterations, and we know that the lower bound of L equals Σ_{i=1}^{n} U_i. In either case, the loop of the function The LP-based Algorithm for GMF-PA Tasks stops in a finite number of iterations. ◀
▶ Theorem 11. The LP-based algorithm is a sufficient schedulability test when L ≤ 1.
Proof. This proof is similar to that of Theorem 2. The sufficiency of any approximation/heuristic algorithm (w.r.t. the MILP algorithm) for schedulability requires two conditions: 1) the demand of the algorithm over any t-length interval is no smaller than that in the MILP; and 2) the frame parameters must take integer values. The first condition ensures that the demand is over-approximated at every t, and the second condition ensures that the demand only changes at integer values. We require the second condition since all interval lengths (represented by t) can only be integers in the MILP algorithm. The LP-based algorithm over-approximates the system demand for all t, and the algorithm adjusts the frame deadlines to be integers in the last iteration. ◀
7.2 The Application of the LP-Based Algorithm to One-Suspension Self-Suspending Tasks
The LP-based scheme can be applied to multiple-segment self-suspending tasks directly. In this section, we further optimize the algorithm for one-suspension self-suspending tasks by reducing the number of free variables and equations. Given that n is the number of tasks and H is the maximum interval length, the algorithm uses 8·n·H + n fewer variables and 15·n·H + n fewer constraints than the standard LP-based scheme. For each task τ_i, we use the variables D_i^1 and P_i − S_i − D_i^1 (instead of D_i^1 and D_i^2) to denote the frame deadlines to reduce the number of variables and constraints, where S_i is the suspension length of task τ_i. In this case, the demand bound function relies only on D_i^1 and t, and F~i = [D_i^1, D_i^1, P_i − S_i − D_i^1, P_i − S_i − D_i^1] since we let P_i^k = D_i^k. A task's demand falls into four cases, which are shown and proved in Theorem 12.
▶ Theorem 12. The demand bound function of a task τ_i lies in one of the following four cases:

dbf_i(t, F~i) =
  dbf_i^1(t, F~i) = { E_i^1,              0 < D_i^1 ≤ t
                      0,                  t < D_i^1 < P_i − S_i − t
                      E_i^2,              P_i − S_i − t ≤ D_i^1 ≤ P_i − S_i },
                                          when 0 < t < (P_i − S_i)/2
  dbf_i^2(t, F~i) = { E_i^1,              0 < D_i^1 < P_i − S_i − t
                      max{E_i^1, E_i^2},  P_i − S_i − t ≤ D_i^1 ≤ t
                      E_i^2,              t < D_i^1 < P_i − S_i },
                                          when (P_i − S_i)/2 ≤ t < P_i − S_i
  dbf_i^3(t, F~i) = E_i^1 + E_i^2,        when P_i − S_i ≤ t ≤ P_i
  dbf_i^4(t, F~i) = ⌊t/P_i⌋·(E_i^1 + E_i^2) + dbf_i(t − ⌊t/P_i⌋·P_i, D_i^1),
                                          when t > P_i
                                                                          (13)
Proof. Figures 9 and 10 show an example of the staircase demands dbf_i^1(t, F~i) and dbf_i^2(t, F~i) with black solid lines, respectively. Roughly, the two staircase/concave demand curves move toward each other as t increases. The first two cases differ in whether the two staircase functions meet as t increases. The demand dbf_i^3(t, F~i) considers the total task demand, and dbf_i^4(t, F~i) iterates over the first three cases.
For the demand dbf_i^1(t, F~i) in the first case, when 0 < t < (P_i − S_i)/2, we know that t < P_i − S_i − t by a simple mathematical transformation. In this case, we have two separate staircase functions, as shown in Figure 9. When D_i^1 ≤ t, the demand of the first frame is E_i^1, and the demand of the second frame is zero because D_i^1 ≤ t < P_i − S_i − t; D_i^1 < P_i − S_i − t means t < P_i − S_i − D_i^1, which indicates that the deadline of the second frame is larger than t. Thus, dbf_i^1(t, F~i) = E_i^1 when D_i^1 ≤ t. When t < D_i^1 < P_i − S_i − t, dbf_i^1(t, F~i) = 0 because t < D_i^1 and t < P_i − S_i − D_i^1. When D_i^1 ≥ P_i − S_i − t, i.e., t ≥ P_i − S_i − D_i^1, the demand dbf_i^1(t, F~i) equals E_i^2. Thus, we have proved that the demand of task τ_i is in this case when 0 < t < (P_i − S_i)/2.
For the demand dbf_i^2(t, F~i), the proof is similar to that of the demand dbf_i^1(t, F~i). We know P_i − S_i − t ≤ t since (P_i − S_i)/2 ≤ t. By comparing the deadline and the length t, dbf_i^2(t, F~i) = E_i^1 when 0 < D_i^1 < P_i − S_i − t, and dbf_i^2(t, F~i) = E_i^2 when t < D_i^1 < P_i − S_i. When P_i − S_i − t ≤ D_i^1 ≤ t, either frame can contribute to the demand. However, the two frames cannot contribute together since t < P_i − S_i; in other words, the interval length t cannot fit both frames. Thus, we take the maximum execution time of the two frames as the demand when P_i − S_i − t ≤ D_i^1 ≤ t.
It is easy to see that dbf_i^3(t, F~i) = E_i^1 + E_i^2 when P_i − S_i ≤ t ≤ P_i, and the fourth case iterates over the first three cases. In all, we have proved this theorem. ◀
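The four cases of Theorem 12 translate directly into code; a minimal sketch (the function name and argument order are ours):

```python
def dbf_one_suspension(t, d1, e1, e2, period, susp):
    """Demand bound function of a one-suspension task per Theorem 12;
    the two frame deadlines are D_i^1 and P_i - S_i - D_i^1."""
    if t <= 0:
        return 0
    if t > period:
        # Case 4: full periods contribute E1 + E2 each; recurse on remainder.
        k = int(t // period)
        return k * (e1 + e2) + dbf_one_suspension(t - k * period, d1, e1, e2,
                                                  period, susp)
    gap = period - susp                      # P_i - S_i
    if t >= gap:
        return e1 + e2                       # Case 3: both frames fit
    if t < gap / 2.0:                        # Case 1 (dbf_i^1)
        if d1 <= t:
            return e1
        if d1 < gap - t:
            return 0
        return e2
    # Case 2 (dbf_i^2), (P_i - S_i)/2 <= t < P_i - S_i
    if d1 < gap - t:
        return e1
    if d1 <= t:
        return max(e1, e2)
    return e2
```

For instance, with E_i^1 = 2, E_i^2 = 3, P_i = 20, S_i = 4, and D_i^1 = 5, the demand is 2 at t = 6, jumps to max{2, 3} = 3 once both deadlines fall inside the interval, and reaches 5 for P_i − S_i ≤ t ≤ P_i.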
The LP-based algorithm for one-suspension tasks is based on approximating the exact demand in Theorem 12 and on the algorithm The LP-based Algorithm for GMF-PA Tasks in Figure 4. We replace Lines 6–8 in the MILP algorithm with the linear functions shown in Equation 16 to get the LP algorithm HeuristicLP(D, S) in Line 5 of Figure 4. The linear functions shown in Equations 14 and 15 approximate the two concave portions of the task demand for dbf_i^1(t, F~i) and dbf_i^2(t, F~i), respectively, illustrated by the red dotted lines in Figures 9 and 10.
[Figure 9: the staircase demand dbf_i^1(t, F~i) with the points (0, E_i^1), (t, E_i^1), and (P_i − S_i − t, E_i^2), together with the approximating lines.]
dbf_i^linear(t, F~i) =
  dbf_i^{1,linear}(t, F~i),                 when 0 < t < (P_i − S_i)/2
  dbf_i^{2,linear}(t, F~i),                 when (P_i − S_i)/2 ≤ t < P_i − S_i
  dbf_i^{3,linear}(t, F~i) = E_i^1 + E_i^2, when P_i − S_i ≤ t ≤ P_i
  dbf_i^{4,linear}(t, F~i) = ⌊t/P_i⌋·(E_i^1 + E_i^2) + dbf_i(t − ⌊t/P_i⌋·P_i, D_i^1),
                                            when t > P_i
                                                                          (16)
The approximate demand dbf_i^linear(t, F~i) is calculated based on the t-length interval. Equation 14 shows how the task demand is approximated when 0 < t < (P_i − S_i)/2. This case is illustrated by the red dashed lines shown in Figure 9. The functions are also based on the LP-based iterative process, and the initial deadline D_i^1 is assigned by PDA as (P_i − S_i)·E_i^1/(E_i^1 + E_i^2). The slope of the linear function depends on the frame deadline D_i^{1'} from the last iteration. If the deadline D_i^{1'} lies in the region (t, P_i − S_i − t), we use the two red dotted lines shown in Figure 9 to approximate the staircase demand. The first line passes through the points (t, E_i^1) and (D_i^{1'}, 0), and the second line passes through the points (D_i^{1'}, 0) and (P_i − S_i − t, E_i^2). When the frame deadline D_i^{1'} lies in the region (0, t] or [P_i − S_i − t, P_i − S_i), we reuse the linear function dbf_i^linear(t, D_i^1) shown in Equation 12 to calculate the slopes.
Equation 15 gives the task demand when (P_i − S_i)/2 ≤ t < P_i − S_i; the demand functions differ by the values of E_i^1 and E_i^2. In the case of the demand dbf_i^2(t, F~i), the first line equals min{E_i^1, E_i^2}, and the second line uses the previous method computeSlope(D) to adjust the slope of the linear function, as shown in Figure 10. Figure 10 shows the approximating lines when E_i^1 < E_i^2; the case is similar when E_i^1 ≥ E_i^2. When t ≥ P_i − S_i, the demands dbf_i^{3,linear}(t, F~i) and dbf_i^{4,linear}(t, F~i) are identical to dbf_i^3(t, F~i) and dbf_i^4(t, F~i) of Equation 13, respectively. Thus, we have created the LP-based algorithm for one-suspension tasks.
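The two approximating lines for the first case (deadline D_i^{1'} inside (t, P_i − S_i − t)) can be written out as a small sketch; the function name and the (slope, intercept) representation are ours:

```python
def case1_lines(t, d1_prev, e1, e2, period, susp):
    """The two red dotted lines of Figure 9 approximating dbf_i^1 when the
    previous deadline D_i^1' lies in (t, P_i - S_i - t): the first passes
    through (t, E1) and (D1', 0); the second through (D1', 0) and
    (P_i - S_i - t, E2). Returns each as a (slope, intercept) pair in the
    deadline variable D."""
    gap_pt = period - susp - t
    assert t < d1_prev < gap_pt, "D1' must lie strictly inside (t, P-S-t)"
    s1 = (0.0 - e1) / (d1_prev - t)          # first frame: demand E1 -> 0
    b1 = e1 - s1 * t
    s2 = (e2 - 0.0) / (gap_pt - d1_prev)     # second frame: demand 0 -> E2
    b2 = 0.0 - s2 * d1_prev
    return (s1, b1), (s2, b2)
```

Both lines meet at (D_i^{1'}, 0), so the piecewise-linear approximation is continuous at the previous deadline.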
8 Experiments
We implement our LP-based algorithms using the commercial solver GUROBI [17] in MATLAB on a machine with a 2 GHz Intel Core i5 processor and 8 GB of memory. We compare our LP-based algorithm with the MILP algorithm [18] and its application to self-suspending tasks [9, 14] on uniprocessor systems. The algorithm LP_ε is the LP-based schedulability test given the maximum error ε of the concave programming algorithm. The algorithm niter-LP_ε limits the number of iterations to niter. Note that we set ε = 0.1, as the constant λ = (1/ε)·ln(1 + 1/ε) (e.g., the exponential constants in Equation 2) will be out of range if ε is too small.
The MILP algorithm is introduced in Section 5. The algorithm EDA (equal deadline assignment [9, 3]) assigns each frame the same deadline (D_i^k = (P_i − Σ_{k=0}^{N_i−1} S_i^k)/N_i), and the algorithm PDA [15, 3] assigns frame deadlines proportionally to the frame execution times (D_i^k = (P_i − Σ_{k=0}^{N_i−1} S_i^k)·E_i^k/E_i). Note that we use the schedulability test in the GMF model [3] with the EDA and PDA deadline assignments, since the upper bound of the maximum interval length is bounded [3]. The details of the application from GMF-PA to self-suspending tasks can be found in a previous paper [18]. Comparative results on tasks with one suspension and multiple suspensions are shown in Sections 8.1 and 8.2, respectively.
8.1 The Experiments for One-Suspension Self-Suspending Tasks
For one-suspension self-suspending tasks, we compare the schedulability ratio and the total running time of the algorithms in Figures 11a and 11b, respectively. Since the MILP algorithm does not scale well with an increasing number of tasks (Figure 12) and increasing task periods, we test multiple-suspension self-suspending tasks in Figures 14a and 15a without the MILP algorithm. The schedulability ratio is the number of feasible systems over the total number of systems. The total running time consists of the matrix building time and the solver running time.
In the task systems, task periods P_i are randomly generated in the range [P_low, P_high], where P_low and P_high are the lower and upper bounds of the task periods. The UUniFast algorithm [5] divides the utilizations U_i of the n tasks under the system utilization U_cap. The total execution time is E_i = P_i·U_i, and the suspension delay is generated from [S_low·(1−U_i)·P_i, S_high·(1−U_i)·P_i], where S_low and S_high in the suspension range [S_low, S_high] are the low and high suspension index bounds, respectively. The UUniFast algorithm also divides the total execution time into frame execution times. The threshold in the LP-based algorithm shown in Figure 4 is set to 0.01. Since all algorithms perform well under small system utilizations U_cap, we focus on experiments whose system utilization U_cap ≥ 0.5.
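The task-set generation described above can be sketched as follows; UUniFast follows Bini and Buttazzo [5], while the parameter names and the per-task helper are ours:

```python
import random

def uunifast(n, u_cap, rng=None):
    """UUniFast [5]: draw n task utilizations that sum exactly to u_cap."""
    rng = rng or random.Random(42)
    utils, remaining = [], u_cap
    for i in range(1, n):
        nxt = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def generate_task(u_i, rng=None, p_low=10, p_high=100, s_low=0.3, s_high=0.6):
    """One task per the experimental setup: random period in [P_low, P_high],
    E_i = P_i * U_i, and a suspension delay drawn from
    [S_low*(1-U_i)*P_i, S_high*(1-U_i)*P_i]."""
    rng = rng or random.Random(42)
    p = rng.uniform(p_low, p_high)
    e = p * u_i
    s = rng.uniform(s_low * (1 - u_i) * p, s_high * (1 - u_i) * p)
    return p, e, s
```

The frame execution times would then be obtained by running UUniFast once more over E_i, as the section describes.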
In Figures 11a and 11b, the x-axes represent the system utilization U_cap ∈ [0.5, 0.9] with a step size of 0.05. Each task system contains five tasks. The task configuration parameters are P_low = 10, P_high = 100, S_low = 0.3, and S_high = 0.6. The y-axes represent the schedulability ratio and the total running time in Figures 11a and 11b, respectively. The data are averaged over 500 runs at each U_cap. Figure 11a shows that our LP_ε is better than the PDA and EDA algorithms in terms of schedulability ratio. The iteration counts of all tested LP_ε runs are smaller than five. The multiple runs of the LP algorithm make the LP_ε algorithm take slightly longer than the MILP algorithm, as shown in Figure 11b. The MILP can be relatively efficient for small enough task systems; however, as the number of tasks/frames increases, the MILP running time increases exponentially. Note that in Figure 11, we focus on a small system where we can gauge the effectiveness of the LP in comparison with the MILP and the other algorithms. With U_cap = 0.5, Figure 12 shows that the execution time of the MILP algorithm increases dramatically when the number of tasks increases. Multiple input dimensions affect the execution time of the MILP algorithm, e.g., the task periods. Task periods directly affect the number of integer variables of the MILP algorithm, and the running time is longer with larger task periods even when the number of tasks in the system is small. The running time of the LP-based algorithm scales relatively well.
Since we use the concave programming algorithm to guide the LP-based algorithm and have not proved a speedup factor for the LP-based algorithm itself, we perform experiments on the L value and the maximum error (√(L/L_MILP) − 1, by the transformation of Theorem 9). L shows how close the value of the heuristic algorithm is to that of the MILP algorithm; it indicates the minimized maximum demand over all tested intervals. For example, assume there exist two heuristic algorithms that generate L = 0.2 and 0.9, respectively. Both algorithms will yield successful schedules in the schedulability ratio test, but the one with L = 0.2 provides a tighter schedule than the other. If L > 1, the system is not schedulable. We also compare the maximum error of the LP_ε algorithm, since the error can be larger than ε.
[Figure 12: the total running time of LP0.1, PDA, EDA, and MILP as the number of tasks per system grows from 20 to 80.]
Figure 13a shows the average L value of the algorithms over all system utilization points. The LP0.1 algorithm returns the closest values to the MILP algorithm. The maximum error values shown in Figure 13b take the maximum over the 500 runs at each utilization point. Our LP-based algorithm returns the smallest error across all algorithms.
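The reported error metric is obtained from the L values by inverting the (1+ε)² speedup relation of Theorem 9, i.e., solving L = L_MILP·(1+err)² for err; a one-line sketch (the function name is ours):

```python
import math

def max_error(l_heuristic, l_milp):
    """Error implied by the (1+eps)^2 relation of Theorem 9:
    solve L = L_MILP * (1 + err)^2 for err."""
    return math.sqrt(l_heuristic / l_milp) - 1.0
```

For example, a heuristic L of 1.21 against L_MILP = 1.0 corresponds to an error of 0.1.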
8.2 The Experiments for Multiple-Suspension Self-Suspending Tasks
Among the shown experiments on self-suspending tasks with one suspension frame, the average number of iterations of the LP-based algorithm is smaller than five across all system utilizations U_cap. Since we believe that the algorithms can approach a local optimum within a small number of iterations, we fix the number of iterations to five and test on multiple-suspension tasks. In Figures 14 and 15, the data for each system utilization point are based on 100 runs. Each run of the system contains 30 tasks, and each task contains six execution frames separated by five suspension frames (11 frames in total); P_low = 10 and P_high = 100. Since the MILP-based approach in this setting takes much longer than the LP-based algorithm, we do not include the MILP-based approach in this experiment: it takes more than 1.5×10³ (respectively, 3.0×10³) seconds with an optimality gap (the gap between the lower and upper objective bounds) larger than 10% (respectively, 5%).
In Figure 14a, the system utilization U_cap ∈ [0.8, 0.96] with a step size of 0.02 is shown on the x-axis, and the suspension range is given by S_low = 0.1 and S_high = 0.3. In Figure 15a, the system utilization U_cap ∈ [0.5, 0.9] with a step size of 0.05 is shown on the x-axis, and the suspension range is given by S_low = 0.3 and S_high = 0.6. Figures 14a and 15a show that our LP_ε is the best among all polynomial-time algorithms in terms of schedulability ratio. The running times in Figures 14b and 15b reveal that LP_ε also scales well. The improvements for the low suspension range [0.1, 0.3] are better than those for the long range [0.3, 0.6]. The reason is that when the system specification has more slack time (small frame execution times and short suspension lengths), the LP-based algorithms can be "trained" to get near-optimal parameters during the five iterations. In other words, the frame deadlines would equal their corresponding execution times if there were no slack for any task, and all algorithms would then return identical frame deadlines.
Our LP-based algorithm always yields a higher schedulability ratio than the other polynomial-time algorithms. The average running time is competitive overall, even when compared with non-mathematical-programming-based algorithms such as EDA/PDA.
9 Conclusions
In this paper, we propose a concave programming approximation algorithm and prove its speedup factor (which can approach one) with respect to the optimal MILP algorithm. Under the guidance of the tunable small speedup factor, we present a general LP-based scheme to schedule GMF-PA tasks. We further optimize the LP-based algorithm and apply it to schedule one-suspension tasks. Extensive experiments show that our algorithms improve the schedulability ratio and have competitive running times compared to previous results.
B. Andersson. Schedulability analysis of generalized multiframe traffic on multi-hop networks comprising software-implemented Ethernet switches. In Proceedings of the IEEE International Symposium on Parallel and Distributed Processing, pages 1–8, April 2008.
S. Baruah. Dynamic- and Static-priority Scheduling of Recurring Real-time Tasks. Real-Time Systems, 24(1):93–128, January 2003.
S. Baruah, D. Chen, S. Gorinsky, and A. Mok. Generalized Multiframe Tasks. Real-Time Systems, pages 5–22, 1999.
In Proceedings of the 26th Real-Time Systems Symposium, pages 321–329, 2005.
E. Bini and G. C. Buttazzo. Measuring the Performance of Schedulability Tests. Real-Time Systems, pages 129–154, 2005.
G. C. Buttazzo, G. Lipari, M. Caccamo, and L. Abeni. Elastic Scheduling for Flexible Workload Management. IEEE Transactions on Computers, pages 289–302, March 2002.
D. Buttle. Real-Time in the Prime-Time. In Proceedings of the 24th Euromicro Conference on Real-Time Systems, pages xii–xiii, July 2012. doi:10.1109/ECRTS.2012.7.
T. Chantem, X. Wang, M. D. Lemmon, and X. S. Hu. Period and Deadline Selection for Schedulability in Real-Time Systems. In Proceedings of the Euromicro Conference on Real-Time Systems (ECRTS), pages 168–177, July 2008.
J. J. Chen and C. Liu. Fixed-Relative-Deadline Scheduling of Hard Real-Time Tasks with Self-Suspensions. In Proceedings of the Real-Time Systems Symposium (RTSS), December 2014.
J. J. Chen, G. von der Brüggen, W. H. Huang, and C. Liu. State of the Art for Scheduling and Analyzing Self-Suspending Sporadic Real-Time Tasks. In Proceedings of Embedded and Real-Time Computing Systems and Applications (RTCSA), 2017.
S. Ding, H. Tomiyama, and H. Takada. Scheduling Algorithms for I/O Blockings with a Multiframe Task Model. In Proceedings of the 13th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, August 2007.
P. Ekberg and W. Yi. Uniprocessor Feasibility of Sporadic Tasks Remains coNP-complete Under Bounded Utilization. In Proceedings of the 36th IEEE Real-Time Systems Symposium (RTSS), 2015.
M. R. Garey, D. S. Johnson, and Ravi Sethi. The Complexity of Flowshop and Jobshop Scheduling. Mathematics of Operations Research, 1(2):117–129, May 1976. doi:10.1287/moor.1.2.117.
W. H. Huang and J. J. Chen. Self-Suspension Real-Time Tasks under Fixed-Relative-Deadline Fixed-Priority Scheduling. In Proceedings of Design, Automation, and Test in Europe (DATE), March 2016.
J. Liu. Real-Time Systems. Prentice Hall, 2000.
A. K. Mok and D. Chen. A multiframe model for real-time tasks. In Proceedings of the 17th IEEE Real-Time Systems Symposium, pages 22–29, December 1996.
B. Peng and N. Fisher. Parameter Adaptation for Generalized Multiframe Tasks and Applications to Self-Suspending Tasks. In Proceedings of the 22nd Embedded and Real-Time Computing Systems and Applications (RTCSA), August 2016.
B. Peng, N. Fisher, and T. Chantem. MILP-based deadline assignment for end-to-end flows in distributed real-time systems. In Proceedings of the 24th International Conference on Real-Time Networks and Systems, RTNS '16, pages 13–22, New York, NY, USA, 2016. ACM. doi:10.1145/2997465.2997498.
B. Peng and N. Fisher. Parameter adaptation for generalized multiframe tasks: schedulability analysis, case study, and applications to self-suspending tasks. Real-Time Systems, 2017.
F. Ridouard, P. Richard, and F. Cottet. Negative results for scheduling independent hard real-time tasks with self-suspensions. In Proceedings of the 25th Real-Time Systems Symposium, pages 47–56, December 2004. doi:10.1109/REAL.2004.35.
J. M. Rivas, J. J. Gutiérrez, J. C. Palencia, and M. G. Harbour. Schedulability Analysis and Optimization of Heterogeneous EDF and FP Distributed Real-Time Systems. In Proceedings of the 23rd Euromicro Conference on Real-Time Systems (ECRTS), pages 195–204, July 2011. doi:10.1109/ECRTS.2011.26.
M. Stigge, P. Ekberg, N. Guan, and W. Yi. The Digraph Real-Time Task Model. In Proceedings of the 17th IEEE Real-Time and Embedded Technology and Applications Symposium, pages 71–80, April 2011. doi:10.1109/RTAS.2011.15.
M. Stigge and W. Yi. Graph-based models for real-time workload: a survey. Real-Time Systems, 2015.