Diffstat (limited to 'report/chapter5.tex')
-rw-r--r--  report/chapter5.tex  5
1 files changed, 3 insertions, 2 deletions
diff --git a/report/chapter5.tex b/report/chapter5.tex
index 8a5f5b2..7505f7b 100644
--- a/report/chapter5.tex
+++ b/report/chapter5.tex
@@ -1,12 +1,12 @@
\section{Discussion and conclusion}
-The correct use of preferred preemption points in FPDS allows as system designer to put a bound on context switching costs, even when using features such as cashes and instruction pipelining. These relatively cheap hardware acceleration techniques can benefit real-time systems as they gain performance and/or cut the hardware costs, while still allowing rigorous schedulability analysis for tasks sets consisting of periodic and sporadic hard real-time tasks. One drawback however, is that the response times of high priority tasks may be higher as opposed to arbitrary preemptive scheduling, because a higher priority tasks that is released during the execution of a lower priority task has to wait for lower priority task to reach a preemption point. \\
+The correct use of preferred preemption points in FPDS allows a system designer to put a bound on context-switching costs, even when using features such as caches and instruction pipelining. These relatively cheap hardware acceleration techniques can benefit real-time systems, as they improve performance and/or cut hardware costs while still allowing rigorous schedulability analysis for task sets consisting of periodic and sporadic hard real-time tasks. One drawback, however, is that the response times of high-priority tasks may be higher than under arbitrary preemptive scheduling, because a higher-priority task that is released during the execution of a lower-priority task has to wait for that lower-priority task to reach a preemption point. \\
Note that the tasks and algorithms used in a real-time system have to be suitable for the mentioned acceleration techniques to be beneficial. In the case of memory caches, for example, an application that accesses memory in a random manner, such that the cache controller is not able to reliably predict the next memory access, will not benefit from caches. \\
However, using preferred preemption points alone won't improve predictability much when the real-time system receives (a lot of) interrupts. The segmented interrupt architecture for interrupt handling described in this article can help a great deal in improving predictability. The interrupt service routine (ISR) still preempts the running task, but because the ISR is implemented as a very simple function that only schedules a job (an LSR) to handle the interrupt request, the overhead it introduces can be bounded. These LSRs can be modeled as periodic or sporadic tasks and taken into account in the schedulability analysis. \\
-In conclusion we believe that fixed priority scheduling with deferred preemption, when used correctly, in combination with the segmented interrupt architecture, greatly improves the accuracy of the schedulability analysis for actual real-world real-time systems.
+In conclusion, we believe that fixed-priority scheduling with deferred preemption, when used correctly and in combination with the segmented interrupt architecture, greatly improves the accuracy of the schedulability analysis for actual real-world real-time systems. \\
%
% the claim that tasks miss their deadline is a bit bogus, because that is what the schedulability analysis is for.
@@ -16,3 +16,4 @@ In conclusion we believe that fixed priority scheduling with deferred preemption
%What if, however, there are a lot of cache misses? In that case one could argue that the system was designed wrongly: either a cache was used that should not have been used in the first place, or the cache is simply not sufficient. On the other hand, it can be argued that there are several algorithms that prove a system is always schedulable. What these algorithms do not account for is variable input data, for example a video-data stream from a satellite. Such a stream is unpredictable and can be out of specification; if that happens, a task may still miss its deadline because decoding the video suddenly requires more CPU time.
+
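The ISR/LSR split that the chapter argues for can be illustrated with a minimal sketch in C. This is a hypothetical illustration, not code from any real RTOS: the names (`isr_handler`, `lsr_run_all`) and the fixed-size pending queue are invented here. The point it demonstrates is that the ISR does only constant-time, bounded work (recording that an LSR job must run), while the actual handling happens later in the LSR, which can then be modeled as a sporadic task in the schedulability analysis.

```c
#include <assert.h>

/* Hypothetical sketch of a segmented interrupt architecture.
 * All names and the queue layout are illustrative, not a real RTOS API. */

#define MAX_PENDING 8

static int pending_lsr[MAX_PENDING];  /* ids of interrupts awaiting their LSR */
static int pending_count = 0;

/* ISR: constant-time, bounded work -- it only records that an LSR job
 * must run, then returns so the preempted task can continue. */
void isr_handler(int irq)
{
    if (pending_count < MAX_PENDING)
        pending_lsr[pending_count++] = irq;
}

/* LSR: performs the actual interrupt handling later, as an ordinary
 * schedulable job, so its cost shows up in the task-set analysis
 * instead of as unbounded ISR overhead. */
int lsr_run_all(void)
{
    int handled = 0;
    while (pending_count > 0) {
        pending_count--;
        handled++;  /* real handling of pending_lsr[pending_count] goes here */
    }
    return handled;
}
```

Because `isr_handler` touches only a counter and one array slot, its worst-case cost is a small constant, which is exactly the property that lets the interrupt overhead be bounded in the analysis.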