
Schedulability Analysis

Real time scheduling of periodic tasks (such as AMB controllers) often uses the rate-monotonic algorithm [LL73], which has been shown to be optimal among fixed-priority schedulers for uniprocessor systems and periodic tasks. The most important feature of this algorithm is that it makes it possible to determine, a priori, whether or not a given set of tasks is schedulable - that is, whether or not all tasks will meet all their deadlines.

Liu and Layland showed that for $n$ periodic tasks - all of which are released simultaneously at some point in time, whose execution times do not exceed their deadlines, and which, most importantly, are completely independent of one another - all tasks are schedulable if the following holds:


\begin{displaymath}
\sum_{i=1}^{n} \frac{C_i}{P_i} \leq n(2^{1/n} -1 )
\end{displaymath} (1.1)

where $C_i$ is the worst case execution time of task $i$, $P_i$ is its period, and the left hand side of (1.1) is called the total CPU utilization. Of most use is the fact that:
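As an illustration, test (1.1) is only a few lines of code. The function name and the representation of a task as a $(C, P)$ pair below are our own, not from the original paper:

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test (1.1) for rate-monotonic scheduling.

    tasks: list of (C, P) pairs, where C is the worst case execution
    time and P the period of a task, in the same time unit.  Returns
    True if the set is guaranteed schedulable; a False result is
    inconclusive, since the test is sufficient but not necessary.
    """
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

# Three tasks with total utilization 0.1 + 0.1 + 0.1 = 0.3, well
# under the three-task bound 3*(2^(1/3) - 1), about 0.780:
print(rm_schedulable([(0.1, 1.0), (0.2, 2.0), (0.3, 3.0)]))  # True
```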


\begin{displaymath}
\lim_{n \rightarrow \infty } n(2^{1/n} -1 ) = \ln 2 \approx 69.3\%
\end{displaymath} (1.2)

or equivalently, if total CPU utilization is below $\ln 2 \approx 69.3\%$, all tasks will meet their deadlines regardless of the number of tasks. This relation describes a worst case sufficient condition for the rate-monotonic algorithm and is thus pessimistic. An exact characterization was obtained by Lehoczky, Sha, and Ding [LSD89] but is not quoted here due to its somewhat more complex nature.
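The convergence of the bound toward $\ln 2$ is easy to check numerically. A quick sketch (the helper name is ours):

```python
import math

def ll_bound(n):
    # Liu-Layland utilization bound n*(2^(1/n) - 1) for n tasks
    return n * (2 ** (1.0 / n) - 1)

for n in (1, 2, 5, 10, 100):
    print(n, ll_bound(n))
# The bound starts at 1.0 for a single task and decreases toward
# ln 2 (about 0.6931) from above as n grows.
print(math.log(2))
```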

Equation (1.1) provides conditions under which a set of periodic tasks can be scheduled, taking into account only the effect of a task being preempted by higher priority tasks. However, (1.1) can be extended to include blocking of higher priority tasks by lower priority tasks [SRL90]. That is, assuming that the priority ceiling protocol is used to bound priority inversion, (1.1) becomes:


\begin{displaymath}
\sum_{i=1}^{n} \frac{C_i}{P_i} + \max\left( \frac{B_1}{P_1},\ldots,\frac{B_{n-1}}{P_{n-1}} \right)
\leq n(2^{1/n} -1 )
\end{displaymath} (1.3)

where $B_j$ denotes the worst case time for which task $j$ may be blocked by any of the lower priority tasks (increasing $j$ denotes decreasing priority).
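Test (1.3) is an equally small computation. In the sketch below (our own representation, not from [SRL90]), tasks are sorted by decreasing priority, and a blocking term is supplied for every task except the lowest priority one, which cannot be blocked:

```python
def rm_schedulable_with_blocking(tasks, blocking):
    """Sufficient test (1.3): rate-monotonic scheduling with blocking.

    tasks: list of (C, P) pairs sorted by decreasing priority (i.e.
    increasing period).  blocking: list of B_j for j = 1..n-1, the
    worst case time each of the first n-1 tasks may be blocked by a
    lower priority task under the priority ceiling protocol.
    """
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    # zip pairs B_j with the period P_j of the task it delays;
    # the lowest priority task contributes no blocking term.
    worst_blocking = max(b / p for b, (_, p) in zip(blocking, tasks))
    return utilization + worst_blocking <= n * (2 ** (1.0 / n) - 1)

# Utilization 0.3 plus a worst blocking ratio max(1/10, 1/20) = 0.1
# stays under the three-task bound of about 0.780:
print(rm_schedulable_with_blocking([(1, 10), (2, 20), (3, 30)], [1, 1]))
```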

The most important point to note about these scheduling tests is that we now have an effective method of selecting a CPU for a given application (or of determining that no CPU will handle the given controller at the given sampling rate). That is, if we know the target execution period of each control task (for example, a controller may repeat every 100 $\mu s$) and its worst case execution time on the candidate CPU (for example, 80 $\mu s$ for that controller), then by use of these schedulability tests we can select a computer for our application in a principled way. Stated differently, if the left hand side of a test is much smaller than the right hand side, then the target CPU is much faster than our application requires and we can select a lower cost system. Alternatively, if the left hand side exceeds the right, then we should either upgrade to a faster CPU or redesign the controller, since the given CPU cannot compute it at the given rate. From our example and (1.1), we find that $80/100 = 0.8 \leq 1.0$, therefore this single task is schedulable on this CPU and will meet all of its deadlines.
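The worked numbers from this example can be checked directly. The values below are those of the example above, not measurements:

```python
# Single controller task from the example: period 100 us,
# worst case execution time 80 us.
C, P = 80.0, 100.0
n = 1
utilization = C / P               # 0.8
bound = n * (2 ** (1.0 / n) - 1)  # equals 1.0 for a single task
print(utilization <= bound)       # True: the deadline is met
```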

The real time systems literature is far too vast to be summarized in this paper. Interested readers are encouraged to pursue further reading in the many excellent books and papers on the subject.


Michael Barabanov 2001-06-19