\section{Development considerations}

Under fixed priority preemptive scheduling, a task can be preempted at arbitrary points, and each preemption may change the cache state in the middle of the task's execution. This makes it difficult to determine the effect of cache reload overheads on execution time: with arbitrary preemptions, the problem of determining cache hit rates becomes virtually intractable, making the system unpredictable. Fixed priority non-preemptive scheduling avoids this problem, but at the cost of reduced schedulability. \\

Another solution is Fixed Priority Scheduling with Deferred Preemption (FPDS), which allows preemptions only at pre-specified points within a task's execution. This has two main benefits. First, it allows the Worst Case Execution Time (WCET) to be determined more accurately: since preemptions can occur only at known points, their effect on schedulability can be predicted more precisely. Second, and more importantly, it reduces the cost of preemption itself: by placing the preemption points where fewer cache reloads are required, the cache-related preemption overhead is kept low. Special care therefore has to be taken in selecting these preemption points. \\
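To make the idea of deferred preemption concrete, the following is a minimal discrete-time simulation sketch (not from [1]; the function and data-structure names are illustrative). Each task is split into subjobs separated by its preemption points; the scheduler always resumes the highest-priority released task, but only at subjob boundaries, so a higher-priority arrival waits until the running subjob completes.

```python
def fpds_schedule(tasks):
    """Simulate fixed priority scheduling with deferred preemption.

    `tasks` maps a priority (lower number = higher priority) to a pair
    (release_time, [subjob_lengths]); the subjobs are the regions between
    consecutive preemption points.  Returns the order in which subjobs
    execute, as (priority, subjob_index) pairs."""
    time = 0
    progress = {p: 0 for p in tasks}               # next subjob per task
    unfinished = lambda p: progress[p] < len(tasks[p][1])
    order = []
    while any(unfinished(p) for p in tasks):
        ready = [p for p in tasks if unfinished(p) and tasks[p][0] <= time]
        if not ready:                              # idle until next release
            time = min(tasks[p][0] for p in tasks if unfinished(p))
            continue
        p = min(ready)                             # highest-priority ready task
        order.append((p, progress[p]))
        time += tasks[p][1][progress[p]]           # subjob runs non-preemptively;
        progress[p] += 1                           # preemption deferred to its end
    return order
```

For example, with a low-priority task released at time 0 (subjobs of length 3 and 3) and a high-priority task released at time 1 (one subjob of length 2), the high-priority task arrives mid-subjob but only runs at the preemption point at time 3: the resulting order is `[(2, 0), (1, 0), (2, 1)]`.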

Let us first define two terms. An \emph{active cache line} is defined in [1] as a cache line that contains a block of data that will be referenced again before it is replaced. In other words, it is a cache line whose next reference would be a hit, had the task been allowed to run to completion.
A \emph{preferred preemption point} for a given task interval $ t_i $ to $ t_j $ is defined in [1] as the instant within that interval having the minimum number of active cache lines. \\
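These two definitions can be illustrated with a small sketch. The code below assumes a tiny direct-mapped cache and a known reference trace (both simplifications; a real analysis works on the program's memory accesses): a line is counted as active at instant $t$ if the block it holds is referenced again before a conflicting block evicts it, and the preferred preemption point in an interval is the instant with the fewest active lines.

```python
NUM_LINES = 4  # assumed tiny direct-mapped cache; illustrative only

def active_line_counts(trace):
    """Return, for each instant t of the reference trace, the number of
    active cache lines: lines whose cached block is re-referenced later
    in the trace before another block maps onto the same line."""
    counts = []
    cache = {}                                # line index -> cached block
    for t, block in enumerate(trace):
        cache[block % NUM_LINES] = block      # the reference loads the block
        active = 0
        for line, cached in cache.items():
            for future in trace[t + 1:]:      # scan the remaining trace
                if future == cached:
                    active += 1               # next reference is a hit
                    break
                if future % NUM_LINES == line:
                    break                     # evicted first -> not active
        counts.append(active)
    return counts

def preferred_preemption_point(trace, i, j):
    """Instant within [i, j] with the minimum number of active lines."""
    counts = active_line_counts(trace)
    return min(range(i, j + 1), key=counts.__getitem__)
```

For the trace `[0, 1, 2, 0, 1, 2]` the counts are `[1, 2, 3, 2, 1, 0]`, so within the interval from instant 1 to instant 4 the preferred preemption point is instant 4, where only one line would have to be reloaded after a preemption.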

Bearing these definitions in mind, the task at hand is to search for the preferred preemption points. These points can be determined in two main ways: using a compiler, or manually. \\

\subsection{Using a compiler}

The compiler analyzes the worst-case execution path and determines preferred preemption points along it. With the compiler's knowledge of cache-line activity, the number of active cache lines at any point on the path can be computed. Once the worst-case path has been analyzed, the next-worst path is analyzed using the preemption points that have already been found. If the execution time of this path is lower than the WCET, the analysis can stop; otherwise it is repeated for the next path. A compiler is needed because the analysis requires extensive processing and monitoring; moreover, with a compiler the effects of caching remain transparent to the user. \\
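The iteration over paths can be sketched as follows. This is a hedged simplification: `paths` maps each execution path to its execution time without preemption overhead, and `overhead` stands in for the cache-reload cost a real compiler analysis would compute once preemption points are placed on that path; the stopping rule assumes the remaining paths, having smaller base times, cannot become the worst case.

```python
def iterative_wcet(paths, overhead):
    """Analyze paths from worst to best until the WCET is stable.

    `paths`: {path_name: execution time without preemption overhead}.
    `overhead(name)`: estimated cache-reload cost added by the
    preemption points placed on that path (a stand-in for the real
    compiler analysis)."""
    analyzed = {}
    remaining = dict(paths)
    wcet = 0
    while remaining:
        name = max(remaining, key=remaining.get)  # next-worst path
        t = remaining.pop(name) + overhead(name)  # time with preemption points
        analyzed[name] = t
        if t <= wcet:
            break        # lower than the WCET found so far: analysis can stop
        wcet = t
    return wcet, analyzed
```

With three paths of base times 10, 8 and 3 and a flat overhead of 2, the worst path yields a WCET of 12; the next-worst path comes to 10, which is below the WCET, so the third path is never analyzed.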

\subsection{Manually}

The preferred preemption points can also be determined by the programmer, by inserting traps in the source code that monitor when tasks of higher priority are ready to run. The programmer will often have an intuition as to where the number of active cache lines is minimized, and can place the traps at those points. \\

Comparing the two methods, using the compiler is clearly the more systematic and transparent way of finding preemption points. In addition, preemption points found using a compiler will generally perform better, because the analysis is more extensive. However, the manual analysis described above remains an option if a compiler cannot be used.