-rw-r--r--  report/chapter1.tex         |  12
-rw-r--r--  report/chapter3.tex         |   2
-rw-r--r--  report/chapter5.tex         |  17
-rw-r--r--  report/figs/feasible.pdf    | bin 10958 -> 11265 bytes
-rw-r--r--  report/figs/infeasible.pdf  | bin 10956 -> 11276 bytes
5 files changed, 24 insertions, 7 deletions
diff --git a/report/chapter1.tex b/report/chapter1.tex
index 9102563..f76a87e 100644
--- a/report/chapter1.tex
+++ b/report/chapter1.tex
@@ -2,12 +2,16 @@
Our world is increasingly surrounded by electronic devices. Only a few years ago our surroundings were limited to radios and televisions. Soon after came microwaves and washing machines, not to mention the boom in mobile phones. All these devices require operating systems, and nearly all of these operating systems require real-time computations. \\
-To define a real-time operating system is beyond the scope of this article, what is not however is that all (real-time) operating systems use some sort of scheduling algorithm. \\
+%To define a real-time operating system is beyond the scope of this article, what is not however is that all (real-time) operating systems use some sort of scheduling algorithm. \\
-We have done a literature study concerning one of these algorithms, called Fixed Priority Scheduling with Deferred Preemtion (FPDS) and looked at various aspects of it. In chapter 2 we discuss development considerations while the architectural considerations are dealt with in chapter 3. The next chapter, chapter 4, is all about the application domains of FPDS. We conclude our report with some small results and notes on the discussion held after our presentation on this subject.
+We have done a literature study on Fixed Priority Scheduling with Deferred Preemption (FPDS) in which we looked at various design and implementation aspects of this type of scheduling. In chapter 2 we discuss development considerations, while architectural considerations are dealt with in chapter 3. Chapter 4 covers the application domains of FPDS. We conclude our report with our findings and notes on the discussion held after our presentation on this subject.
\section{Motivation}
-Fixed priority scheduling with preemption (FPPS) is already widely used. In the case of FPPS with the use of cash brings great unpredictability. Caches are great for performance improvements, if you can use them. Audio/Video almost always work a lot faster with caches. So if you like to use caches in a real-time system, there is a scheduling algorithm that allows exactly this, namely is FPDS. FPDS allows the uses of caches, and still behaves like a real-time system. \\
+Fixed priority scheduling with \emph{arbitrary} preemptions (FPPS) is extensively documented in the literature. This class of schedulers preempts a lower priority task as soon as a higher priority task becomes runnable. Upon preemption of a task a context switch is performed, in which the CPU postpones the execution of the current task and prepares itself to execute a different task. The operations associated with context switches take time; how much time depends on the system architecture and the state of the computation. The costs of context switches are generally neglected in the FPPS literature because it is hard to obtain a good estimate or bound on these costs. \\
-Resource control can get complex very fast, due to things like Interrupt Service Routines (ISR) and buffers to actually access certain resources. With FPPS this is very complex task and introduces a lot of overhead. One of the design goals of FPDS was to simply this and thus also reducing the overhead.
+As a compromise between \emph{no} preemptions and \emph{arbitrary} preemptions, FPDS was introduced. A scheduler that employs deferred preemptive scheduling preempts tasks only at certain points defined by the system designer. By controlling the points at which a task may be preempted, the system designer knows something about the state of the system and of the computation at the moment a task is preempted, and at the same time can limit the number of preemptions that occur. This allows for tighter estimates of the cost of context switches and hence a more accurate analysis of the (real world) schedulability of task sets.
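+
+As a minimal sketch of this idea (the names \texttt{process\_next\_chunk} and \texttt{fpds\_preemption\_point} are hypothetical, not taken from a particular kernel), a task under FPDS could be written as a series of non-preemptible subjobs separated by explicit preemption points:
+\begin{verbatim}
+extern void process_next_chunk(void);    /* one non-preemptible subjob  */
+extern void fpds_preemption_point(void); /* hypothetical scheduler call */
+
+/* The scheduler may only switch to a higher priority task at the
+ * explicit preemption points between subjobs.                     */
+void fpds_task_body(void)
+{
+    for (;;) {
+        process_next_chunk();
+        fpds_preemption_point();
+    }
+}
+\end{verbatim}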
+
+% In the case of FPPS with the use of cash brings great unpredictability. Caches are great for performance improvements, if you can use them. Audio/Video almost always work a lot faster with caches. So if you like to use caches in a real-time system, there is a scheduling algorithm that allows exactly this, namely is FPDS. FPDS allows the uses of caches, and still behaves like a real-time system. \\
+
+%Resource control can get complex very fast, due to things like Interrupt Service Routines (ISR) and buffers to actually access certain resources. With FPPS this is very complex task and introduces a lot of overhead. One of the design goals of FPDS was to simply this and thus also reducing the overhead.
diff --git a/report/chapter3.tex b/report/chapter3.tex
index a24b9f3..477053c 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -49,7 +49,7 @@ In conclusion, we find that using the segmented interrupt architectures have the
\subsection{Multi-processor systems}
-Gai et. al. \cite{gab-mdssa-02} Describe scheduling of tasks in asymmetric multiprocessor systems consisting of a general purpose CPU and DSP acting as a co-processor to the GPP master. The DSP is designed to execute algorithms on a set of data without interruption, hence the schedule for the DSP is non-preemptive. Gai et. al. treat the DSP scheduling as a special case of scheduling with shared resources in a multiprocessor distributed system, using a variant of the \emph{cooperative scheduling} method presented in \cite{sr-csmr-99} by Seawong and Rajkumar. Cooperative scheduling is appropriate in situations where a task can be decomposed into multiple phases, such that each phase requires a different resource. The basic idea of cooperative scheduling as described by Seawong and Rajkumar is to associate suitable deadlines to each phase of a job in order to meet the job deadline.
+Gai et al. \cite{gab-mdssa-02} describe scheduling of tasks in asymmetric multiprocessor systems consisting of a general purpose CPU and a DSP acting as a co-processor to the master CPU. The DSP is designed to execute algorithms on a set of data without interruption, hence the schedule for the DSP is non-preemptive. Gai et al. treat the DSP scheduling as a special case of scheduling with shared resources in a multiprocessor distributed system, using a variant of the \emph{cooperative scheduling} method presented in \cite{sr-csmr-99} by Saewong and Rajkumar. Cooperative scheduling is appropriate in situations where a task can be decomposed into multiple phases, such that each phase requires a different resource. The basic idea of cooperative scheduling as described by Saewong and Rajkumar is to associate suitable deadlines with each phase of a job in order to meet the job deadline.
In order to apply this to the GPP + DSP multiprocessor architecture, Gai et al. define their real-time model to consist of periodic and sporadic tasks subdivided into regular tasks and DSP tasks. The regular tasks (application tasks, interrupt service routines, ...) are executed entirely on the master CPU for $ C_{i} $ units of time. The DSP tasks execute for $ C_{i} $ units of time on the master CPU and an additional $ C^{DSP}_{i} $ units of time on the DSP. It is assumed that each DSP job performs at most one DSP request after $ C^{pre}_{i} $ units of time, and then executes for another $ C^{post}_{i} $ units of time, such that $ C_{i} = C^{pre}_{i} + C^{post}_{i} $, as depicted in Figure \ref{fig:dsptaskstruct}.
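+
+For illustration, consider a purely hypothetical DSP task (the numbers are ours, not from \cite{gab-mdssa-02}) with $ C^{pre}_{i} = 2 $, $ C^{DSP}_{i} = 5 $ and $ C^{post}_{i} = 1 $: it loads the master CPU for $ C_{i} = C^{pre}_{i} + C^{post}_{i} = 3 $ units of time, while the DSP itself is occupied non-preemptively for $ 5 $ units between the two phases on the master CPU.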
diff --git a/report/chapter5.tex b/report/chapter5.tex
index 53a7e9e..8a5f5b2 100644
--- a/report/chapter5.tex
+++ b/report/chapter5.tex
@@ -1,5 +1,18 @@
\section{Discussion and conclusion}
-If an application requires caches, like Video decoding does, then FPDS is a very interesting option. This is because FPDS allows the use of caches, yet allowing the system to act in a real-time manner. This however only, and only if, very occasional misses deadlines are acceptable of high priority tasks, since FPDS allows a lower priority task to block a high priority task. \\
+The correct use of preferred preemption points in FPDS allows a system designer to put a bound on context switching costs, even when using features such as caches and instruction pipelining. These relatively cheap hardware acceleration techniques can benefit real-time systems as they gain performance and/or cut hardware costs, while still allowing rigorous schedulability analysis for task sets consisting of periodic and sporadic hard real-time tasks. One drawback, however, is that the response times of high priority tasks may be higher than under arbitrary preemptive scheduling, because a higher priority task that is released during the execution of a lower priority task has to wait for the lower priority task to reach a preemption point. \\
+
+Note that the tasks and algorithms used in a real-time system have to be suited to the mentioned acceleration techniques for these techniques to be of any benefit. In the case of memory caches, for example, an application that accesses memory in a random manner, such that the cache controller is not able to reliably predict the next memory access, will not benefit from the cache. \\
+
+However, using preferred preemption points alone will not improve the predictability much when the real-time system receives (a lot of) interrupts. The segmented interrupt architecture for interrupt handling described in this report can help a great deal in improving the predictability. The interrupt service routine still preempts the running task, but because the ISR is implemented as a very simple function that only schedules a job (LSR) to handle the interrupt request, the overhead it introduces can be bounded. These LSRs can be modeled as periodic or sporadic tasks and be taken into account in the schedulability analysis. \\
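+
+A minimal sketch of this split (the function names are hypothetical, not taken from a particular kernel): the ISR performs only a small, constant amount of work, while the actual handling is done by the LSR, which runs as an ordinary task at its own priority:
+\begin{verbatim}
+extern void make_lsr_runnable(int irq);  /* hypothetical kernel call      */
+extern void handle_device(int irq);      /* the actual interrupt handling */
+
+/* First-level handler (ISR): bounded, constant amount of work. */
+void isr(int irq)
+{
+    make_lsr_runnable(irq);   /* defer the real work to the LSR */
+}
+
+/* Second-level handler (LSR): scheduled like any other (sporadic) task,
+ * so it can be accounted for in the schedulability analysis.           */
+void lsr_task(int irq)
+{
+    handle_device(irq);
+}
+\end{verbatim}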
+
+In conclusion, we believe that fixed priority scheduling with deferred preemption, when used correctly and in combination with the segmented interrupt architecture, greatly improves the accuracy of the schedulability analysis for real-world real-time systems.
+
+%
+% the claim that tasks miss their deadline is a bit bogus, because that is what the schedulability analysis is for.
+%
+
+%If an application requires caches, like Video decoding does, then FPDS is a very interesting option. This is because FPDS allows the use of caches, yet allowing the system to act in a real-time manner. This however only, and only if, very occasional misses deadlines are acceptable of high priority tasks, since FPDS allows a lower priority task to block a high priority task. \\
+
+%What however, if there are a lot of cache misses? If this is the case, then you could argue that either, the system was designed wrong, either by using a cache that shouldn't have been used in the first place. Or the cache would not be sufficient. On the other hand, it can be argued that there are several algorithms that prove a system is always schedulable. What these algorithms do not account for when the input data is variable. Like for example a Video-data stream from a satellite. This video data stream is unpredictable and can be out of specification. If this is the case, it may still happen that the scheduler misses it's deadline if the video decoding all of a sudden requires more CPU power.
-What however, if there are a lot of cache misses? If this is the case, then you could argue that either, the system was designed wrong, either by using a cache that shouldn't have been used in the first place. Or the cache would not be sufficient. On the other hand, it can be argued that there are several algorithms that prove a system is always schedulable. What these algorithms do not account for when the input data is variable. Like for example a Video-data stream from a satellite. This video data stream is unpredictable and can be out of specification. If this is the case, it may still happen that the scheduler misses it's deadline if the video decoding all of a sudden requires more CPU power.
diff --git a/report/figs/feasible.pdf b/report/figs/feasible.pdf
index 0ba4459..d1a7a7d 100644
--- a/report/figs/feasible.pdf
+++ b/report/figs/feasible.pdf
Binary files differ
diff --git a/report/figs/infeasible.pdf b/report/figs/infeasible.pdf
index c28ab96..9e3faf4 100644
--- a/report/figs/infeasible.pdf
+++ b/report/figs/infeasible.pdf
Binary files differ