Diffstat (limited to 'report/chapter3.tex')
-rw-r--r--  report/chapter3.tex  |  8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/report/chapter3.tex b/report/chapter3.tex
index ee1650e..a24b9f3 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -4,7 +4,7 @@ When using fixed priority scheduling with deferred preemption, one must take som
\subsection{Processors}
-In this section, two types of processors are being compared. These two are pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors on the other hand are mainly used to achieve a descent average response time (although most architectures nowadays contain a pipeline feature). \\
+In this section, two types of processors are compared: pipelined processors and general-purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions.} processors, works much like an assembly line. Because the processor works on different stages of several instructions at the same time, more instructions can be executed in a shorter period of time. General-purpose processors, on the other hand, are mainly designed to achieve a decent average response time (although most architectures nowadays include a pipeline). \\
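
As a rough illustration of the assembly-line analogy (an idealized textbook model, not part of the analysis in \cite{hwh-itapic-95}), consider a $k$-stage pipeline with stage time $\tau$ executing $n$ independent instructions (no hazards). The pipelined and non-pipelined completion times are
\[
T_{\mathrm{pipe}}(n) = (k + n - 1)\,\tau
\qquad\text{versus}\qquad
T_{\mathrm{seq}}(n) = n\,k\,\tau ,
\]
so the speedup $T_{\mathrm{seq}}/T_{\mathrm{pipe}}$ approaches $k$ for long instruction sequences: with $k = 5$ stages and $n = 100$ instructions, $500\tau$ drops to $104\tau$. \\
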
The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et al. proposed a method to determine the worst-case execution times of pipelined processors with cache in \cite{hwh-itapic-95}. This approach reads information about each type of instruction from a machine-dependent data file, so that the performance of a sequence of instructions in a pipeline can be analyzed. \\
@@ -18,7 +18,7 @@ It's clear that the use of pipelined processors has a huge advantage over gener
When cache memory is used in real-time systems, special attention must be paid, since it introduces unpredictability into the system. This unpredictable behavior is due to cache reloads upon preemption of a task. When preemptions are frequent, the sum of these cache reload delays takes up a significant portion of the task execution time. Nevertheless, cache reload costs due to preemptions have largely been ignored in real-time scheduling. \\
-Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache or with cache disable. Although this would obviously get rid of the problem, it is not desirable. Another approach is to allows preemptions only at predetermined execution points with small cache reloading costs. A scheduling scheme was given in \cite{scmsc-lpsecmrts-98} and is called Limited Preemptible Scheduling (LPS). The scheduling scheme only allows preemptions only at predetermined execution points with small cache reloading costs. This means that the method can be applied to FPDS. \\
+Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache, or with the cache disabled. Although this would obviously eliminate the problem, it is not desirable, because the use of cache can significantly improve system performance. Another approach is to allow preemptions only at predetermined execution points with small cache reloading costs. Such a scheduling scheme, called Limited Preemptible Scheduling (LPS), was given in \cite{scmsc-lpsecmrts-98}. Since LPS restricts preemptions to these points, the method can be applied to FPDS. \\
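
To see in equation form why restricting preemption points helps, consider a simplified response-time sketch (our illustration, not the exact analysis of \cite{scmsc-lpsecmrts-98}): if every allowed preemption point has a cache reload cost of at most $\gamma$, the classical recurrence for the response time $R_i$ of task $\tau_i$ becomes
\[
R_i = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil \bigl( C_j + \gamma \bigr) ,
\]
where $C_i$ is the worst-case execution time, $B_i$ the blocking time, $T_j$ the period of higher priority task $\tau_j$, and $hp(i)$ the set of tasks with a higher priority than $\tau_i$. Placing preemption points where little cached state is live keeps $\gamma$ small, which is exactly what LPS, and FPDS with well-chosen preemption points, exploit. \\
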
In \cite{hwh-itapic-95} a method to determine the worst-case execution times of pipelined processors with cache was given. In that paper a cache simulation is used to categorize each cache operation. Using the outcome of such a simulation, the blocking time can be determined very precisely. It was already stated in paragraph 3.1 that this approach is applicable to FPDS. \\
@@ -31,7 +31,9 @@ In any multiprocessing system cooperating processes share data via shared data o
Figure 1. Shared memory multiprocessor system structure
\end{center}
-However, the literature doesn't state much about the use of local and global memory when applying FPDS. Therefore it is difficult to make a statement about this subject. Most likely the use of local memory shouldn't have much impact on the schedulability of the system under FPDS. Global memory on the other hand can indeed bring nondeterminism. When guarding a critical section with semaphores for instance, a lower priority job can block a higher priority job. This can be overcome with various protocols like PIP, PCP and SRP (although the implementations of PIP and PCP recently have been proven flawed).
+However, the literature does not state much about the use of local and global memory when using FPDS, so it is difficult to make a statement about this subject. Most likely the use of local memory should not have much impact on the schedulability of the system under FPDS. Global memory, on the other hand, can indeed introduce nondeterminism when caching is used. FPDS is therefore more predictable, because cache misses can be determined more accurately. \\
+
+Shared variables in a critical section can also introduce nondeterminism. When a critical section is guarded with semaphores, a lower-priority job can block a higher-priority job. This can be overcome with protocols such as the Priority Inheritance Protocol (PIP), the Priority Ceiling Protocol (PCP) and the Stack Resource Policy (SRP) (although implementations of PIP and PCP have recently been shown to be flawed).
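
As a minimal sketch of how such a protocol is enabled in practice, the standard POSIX threads API can configure a mutex with priority inheritance (the surrounding task code is hypothetical; only the \texttt{pthread} calls are standard):

\begin{verbatim}
#include <pthread.h>

static pthread_mutex_t lock;          /* guards the shared data */

static int init_shared_lock(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* PIP: a job holding this mutex temporarily inherits the
     * priority of the highest-priority job blocked on it, which
     * bounds the time a lower-priority job can block a higher one. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(&lock, &attr);
}

static void critical_section(void)
{
    pthread_mutex_lock(&lock);
    /* ... access the shared (global) data here ... */
    pthread_mutex_unlock(&lock);
}
\end{verbatim}

A ceiling-based protocol in the spirit of PCP is available through the same API via \texttt{PTHREAD\_PRIO\_PROTECT} and \texttt{pthread\_mutexattr\_setprioceiling}. \\
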
\subsection{Interrupt handling}