author	Oliver Schinagl <oliver@schinagl.nl>	2008-05-25 18:53:00 (GMT)
committer	Oliver Schinagl <oliver@schinagl.nl>	2008-05-25 18:53:00 (GMT)
commit	3149e013a585899c9488d508a6ab491e45ecfbd6 (patch)
tree	3380dc85c1e97062b53992fd926c4ba6922118df
parent	037c73cb6bbbe0ad5031ebb51cce73d3f286a0f6 (diff)
updated chapters 1 and 5
-rw-r--r--	report/chapter1.tex	| 10
-rw-r--r--	report/chapter5.tex	| 5
2 files changed, 7 insertions, 8 deletions
diff --git a/report/chapter1.tex b/report/chapter1.tex
index f76a87e..ff1a41c 100644
--- a/report/chapter1.tex
+++ b/report/chapter1.tex
@@ -2,16 +2,14 @@
Our world is getting more and more surrounded by electronic devices. Only a few years ago our surroundings were limited to radios and televisions. Soon after we got our microwaves and washing machines, not to mention the boom in the mobile phone sector. All these devices require operating systems, and nearly all of these operating systems require real-time computation. \\
-%To define a real-time operating system is beyond the scope of this article, what is not however is that all (real-time) operating systems use some sort of scheduling algorithm. \\
-
-We have done a literature study on Fixed Priority Scheduling with Deferred Preemption (FPDS) in which we looked at various design and implementation aspects of this type of scheduling. In chapter 2 we discuss development considerations while the architectural considerations are dealt with in chapter 3. The next chapter, chapter 4, is all about the application domains of FPDS. We conclude our report with some small results and notes on the discussion held after our presentation on this subject.
+We have done a literature study on Fixed Priority Scheduling with Deferred Preemption (FPDS) in which we looked at various design and implementation aspects of this type of scheduling. In chapter 2 we discuss development considerations, while the architectural considerations are dealt with in chapter 3. The next chapter, chapter 4, covers the application domains of FPDS. We conclude our report with some small results and notes on the discussion held after our presentation on this subject. \\
\section{Motivation}
Fixed priority scheduling with \emph{arbitrary} preemptions (FPPS) is extensively documented in literature. This class of schedulers preempts a lower priority task as soon as a higher priority task becomes runnable. Upon preemption of a task a context switch is performed, in which the CPU postpones the execution of the current task and prepares itself to execute a different task. The operations associated with context switches take time; how much time usually depends on the system architecture and the state of the computation. The costs of context switches are generally neglected in FPPS literature because it is very hard to get a good estimate or bound on these costs. \\
-As a compromise between \emph{no} preemptions and \emph{arbitrary} preemptions, FPDS was introduced. A scheduler that employs deferred preemptive scheduling, preempts tasks only at certain points that are defined by the system designer. By controlling the points at which a task may be preempted, a system designer has some knowledge about the state of the system and computation at which the task is preempted and at the same time may limit the number of preemptions that occur. This allows for more educated guesses of the costs of context switches and hence more accurate analysis of the (real world) schedulability of task sets.
+As a compromise between \emph{no} preemptions and \emph{arbitrary} preemptions, FPDS was introduced. A scheduler that employs deferred preemptive scheduling preempts tasks only at certain points that are defined by the system designer. By controlling the points at which a task may be preempted, a system designer has some knowledge about the state of the system and computation at which the task is preempted, and at the same time may limit the number of preemptions that occur. This allows for more educated guesses of the costs of context switches and hence more accurate analysis of the (real world) schedulability of task sets. \\
-% In the case of FPPS with the use of cash brings great unpredictability. Caches are great for performance improvements, if you can use them. Audio/Video almost always work a lot faster with caches. So if you like to use caches in a real-time system, there is a scheduling algorithm that allows exactly this, namely is FPDS. FPDS allows the uses of caches, and still behaves like a real-time system. \\
+An essential goal of OS resource management for real-time and multimedia systems is to provide timely, guaranteed and protected access to system resources. Although extensive research has been devoted to processor scheduling or network scheduling alone, disk scheduling, co-processor scheduling et cetera have been studied to a smaller extent, not to mention using those resources \emph{simultaneously} within a single node. A video application may access high-volume data from a disk, process the data and transmit it across the network, and all these stages must be completed by a deadline. Obtaining simultaneous and timely access to multiple resources is known to be an NP-complete problem, making it very complex to handle properly. \\
-%Resource control can get complex very fast, due to things like Interrupt Service Routines (ISR) and buffers to actually access certain resources. With FPPS this is very complex task and introduces a lot of overhead. One of the design goals of FPDS was to simply this and thus also reducing the overhead.
+% One of the suggested solutions to this problem would be to decouple the resources where possible, if the resources are independent of one another of course, and then have a server for each resource. This leads to more problems however, as for example with the disk and processor pair: to access the disk one needs the CPU.
diff --git a/report/chapter5.tex b/report/chapter5.tex
index 8a5f5b2..7505f7b 100644
--- a/report/chapter5.tex
+++ b/report/chapter5.tex
@@ -1,12 +1,12 @@
\section{Discussion and conclusion}
-The correct use of preferred preemption points in FPDS allows as system designer to put a bound on context switching costs, even when using features such as cashes and instruction pipelining. These relatively cheap hardware acceleration techniques can benefit real-time systems as they gain performance and/or cut the hardware costs, while still allowing rigorous schedulability analysis for tasks sets consisting of periodic and sporadic hard real-time tasks. One drawback however, is that the response times of high priority tasks may be higher as opposed to arbitrary preemptive scheduling, because a higher priority tasks that is released during the execution of a lower priority task has to wait for lower priority task to reach a preemption point. \\
+The correct use of preferred preemption points in FPDS allows a system designer to put a bound on context switching costs, even when using features such as caches and instruction pipelining. These relatively cheap hardware acceleration techniques can benefit real-time systems as they gain performance and/or cut the hardware costs, while still allowing rigorous schedulability analysis for task sets consisting of periodic and sporadic hard real-time tasks. One drawback however, is that the response times of high priority tasks may be higher as opposed to arbitrary preemptive scheduling, because a higher priority task that is released during the execution of a lower priority task has to wait for the lower priority task to reach a preemption point. \\
Note that the tasks/algorithms used in a real-time system have to be suited to the mentioned acceleration techniques for these to benefit the real-time system. In the case of memory caches for example, an application that accesses memory in a random manner, such that the cache controller is not able to reliably predict the next memory access, won't benefit from caches. \\
However, using preferred preemption points alone won't improve the predictability much when the real-time system receives (a lot of) interrupts. The segmented interrupt architecture for interrupt handling as described in this article can help a great deal in improving the predictability. The interrupt service routine still preempts the running task, but because the ISR is implemented as a very simple function that only schedules a job (LSR) to handle the interrupt request, the overhead introduced can be bounded. These LSRs can be modeled as periodic or sporadic tasks and be taken into account in the schedulability analysis. \\
-In conclusion we believe that fixed priority scheduling with deferred preemption, when used correctly, in combination with the segmented interrupt architecture, greatly improves the accuracy of the schedulability analysis for actual real-world real-time systems.
+In conclusion we believe that fixed priority scheduling with deferred preemption, when used correctly, in combination with the segmented interrupt architecture, greatly improves the accuracy of the schedulability analysis for actual real-world real-time systems. \\
%
% the claim that tasks miss their deadline is a bit bogus, because that is what the schedulability analysis is for.
@@ -16,3 +16,4 @@ In conclusion we believe that fixed priority scheduling with deferred preemption
%What however, if there are a lot of cache misses? If this is the case, then you could argue that the system was designed wrong: either a cache was used that shouldn't have been used in the first place, or the cache is not sufficient. On the other hand, it can be argued that there are several algorithms that prove a system is always schedulable. What these algorithms do not account for is variable input data. Like for example a video-data stream from a satellite. This video data stream is unpredictable and can be out of specification. If this is the case, it may still happen that the scheduler misses its deadline if the video decoding all of a sudden requires more CPU power.
+