 report/2in25-report.tex |  1 +
 report/chapter2.tex     |  2 +-
 report/chapter3.tex     | 29 ++++++++++++++++++++++++-----
 report/chapter6.tex     | 14 ++++++++++++--
 4 files changed, 38 insertions(+), 8 deletions(-)
diff --git a/report/2in25-report.tex b/report/2in25-report.tex
index d0b2ec2..eb2d81e 100644
--- a/report/2in25-report.tex
+++ b/report/2in25-report.tex
@@ -1,4 +1,5 @@
\documentclass[10pt,a4paper]{article}
+\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{moreverb}
diff --git a/report/chapter2.tex b/report/chapter2.tex
index 2d1959b..3a8dd52 100644
--- a/report/chapter2.tex
+++ b/report/chapter2.tex
@@ -19,4 +19,4 @@ The compiler is used to analyze the worst-case execution path and preferred pree
The preferred preemption points can also be determined by the programmer, by inserting traps in the source code to monitor when tasks of higher priority are ready to run. The programmer will typically already have an intuition as to where the number of live cache lines is minimized, so preferred preemption points can be placed there. \\
-Considering the two methods, it is clear that using the compiler is a more systematic and transparent way of finding preemption points. In addition, preemption points found using a compiler will generally perform better because the analysis is more extensive. However, this analysis can also be performed manually as described above is a compiler cannot be used. \\
+Considering the two methods, it is clear that using the compiler is a more systematic and transparent way of finding preemption points. In addition, preemption points found using a compiler will generally perform better because the analysis is more extensive. However, this analysis can also be performed manually, as described above, if a compiler cannot be used.
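+
+As an illustration, such a manually inserted trap could take the following form (a minimal sketch in C; \texttt{preemption\_pending}, \texttt{yield()} and the surrounding routines are hypothetical, not taken from the literature):
+\begin{verbatim}
+/* Hypothetical kernel primitives: the kernel sets
+   preemption_pending when a higher-priority task becomes
+   ready; yield() hands over the processor. */
+extern volatile int preemption_pending;
+extern void yield(void);
+extern void compute_phase_one(void *d);   /* many live lines */
+extern void write_back_results(void *d);  /* live data flushed */
+extern void compute_phase_two(void *d);
+
+void process_block(void *data)
+{
+    compute_phase_one(data);
+    write_back_results(data);
+    if (preemption_pending)  /* preferred point: few live lines */
+        yield();
+    compute_phase_two(data);
+}
+\end{verbatim}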
diff --git a/report/chapter3.tex b/report/chapter3.tex
index 417aaca..cf4589c 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -1,19 +1,38 @@
\section{Architectural considerations}
-When using fixed priority scheduling with deferred preemption, one must take some extra architectural considerations into account. This chapter deals with a number of aspects such as the choice of processor, use of the memory and interrupt handling. \\
+When using fixed priority scheduling with deferred preemption, one must take some extra architectural considerations into account. This section deals with a number of aspects, such as the choice of processor, the use of memory, and interrupt handling.
\subsection{Processors}
-In this section, two types of processors are being compared. These two are pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors on the other hand are mainly used to achieve a descent average response time. \\
+In this section, two types of processors are compared: pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors, on the other hand, mainly aim at a decent average response time (although most architectures nowadays include a pipeline as well). \\
The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et al. proposed a method to determine the worst case execution times of pipelined processors with cache in [2]. This approach reads information about each type of instruction from a machine-dependent data file, so that the performance of a sequence of instructions in a pipeline can be analyzed. \\
-In the paper of Healy et. al. they assume a system with a non-preemptive scheduling paradigm. In the case of deferred preemption scheduling, a task is split up in one or more sub-tasks which are all non-preemptive, thus making the approach applicable to FPDS. \\
+Healy et al. assume a system with a non-preemptive scheduling paradigm. Under deferred preemption scheduling, a task is split up into one or more sub-tasks which are all non-preemptive, thus making the approach applicable to FPDS. This means the worst case execution times can be calculated, and it can thus be analyzed whether the system is schedulable under FPDS. \\
-It's clear that the use of pipelined processors have a huge advantage over general purpose processors when it comes down to performance. Yet, determining the worst case response times is much more elaborate and complex. Both aspects need to be taken into account when choosing the right architecture for a specific real-time application. \\
+It is clear that pipelined processors have a huge performance advantage over general purpose processors. Yet, determining the worst case execution times is much more elaborate and complex. Both aspects need to be taken into account when choosing the right architecture for a specific real-time application.
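+
+As an illustration of such a schedulability analysis, the standard response-time recurrence for FPDS (a textbook formulation, not the specific analysis of [2]) reads
+\begin{equation*}
+R_i = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j,
+\end{equation*}
+where $C_i$ is the worst case execution time obtained from the pipeline analysis, $B_i$ is the longest non-preemptive sub-task of any lower-priority task, and $hp(i)$ is the set of tasks with a higher priority than $\tau_i$; the system is schedulable if the recurrence converges with $R_i$ at most the deadline of $\tau_i$ for every task.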
\subsection{Memory}
+\subsubsection{Cache}
+
+When cache memory is used in real-time systems, special attention must be paid, since cache memory introduces unpredictability into the system. This unpredictable behavior is due to cache reloads upon preemption of a task. When preemptions are frequent, the sum of these cache reloading delays can take up a significant portion of the task execution time. Nevertheless, the cache reloading costs due to preemptions have largely been ignored in real-time scheduling. \\
+
+Buttazzo states in [3] that it would be more efficient to have processors without cache or with the cache disabled. Although this would obviously get rid of the problem, it is not desirable. Another approach is to allow preemptions only at predetermined execution points with small cache reloading costs. Such a scheduling scheme, called Limited Preemptible Scheduling (LPS), was given in [4]. Since it restricts preemptions to predetermined points, the method can be applied to FPDS. \\
+
+In [2] a method was given to determine the worst case execution times for pipelined processors with cache. In that paper a cache simulation is used to categorize each cache operation. Using the outcome of such a simulation, the blocking time can be determined very precisely. As already stated in Section 3.1, this approach is applicable to FPDS. \\
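+
+To make this cost concrete: if $m$ cache lines are live at a preemption point and reloading a single line costs $\delta$, a preemption at that point adds at most $m \cdot \delta$ to the execution time. With illustrative (assumed) numbers $m = 10$ and $\delta = 100\,\mathrm{ns}$, each preemption costs $10 \times 100\,\mathrm{ns} = 1\,\mu\mathrm{s}$, which must be charged to the worst case execution time of the preempted sub-task. \\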
+
+\subsubsection{Local and global memory}
+
+In any multiprocessing system, cooperating processes share data via shared data objects. A typical abstraction of a shared memory multiprocessor real-time system configuration is depicted in Figure 1. Each node of the system contains a processor together with its local memory. All nodes are connected to the shared memory via an interconnection network.
+
+\begin{center}
+	\includegraphics[width=80mm]{shared_memory_multiprocessor.png} \\
+	Figure 1. Shared memory multiprocessor system structure
+\end{center}
+
+However, the literature does not state much about the use of local and global memory when applying FPDS, so it is difficult to make a definitive statement about this subject. Most likely the use of local memory does not have much impact on the schedulability of the system under FPDS. Global memory, on the other hand, can indeed introduce nondeterminism: when a critical section is guarded with semaphores, for instance, a lower-priority job can block a higher-priority job. This can be overcome with protocols such as PIP, PCP and SRP (although some implementations of PIP and PCP have recently been shown to be flawed).
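+
+As an illustration, on a POSIX system a critical section in global memory could be guarded as follows (a minimal sketch assuming POSIX threads; the lock and function names are ours):
+\begin{verbatim}
+#include <pthread.h>
+
+static pthread_mutex_t shared_lock;
+
+int init_shared_lock(void)
+{
+    pthread_mutexattr_t attr;
+    pthread_mutexattr_init(&attr);
+    /* PTHREAD_PRIO_INHERIT selects priority inheritance (PIP):
+       a lower-priority job holding the lock inherits the
+       priority of any higher-priority job it blocks.
+       PTHREAD_PRIO_PROTECT would select the ceiling protocol. */
+    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
+    return pthread_mutex_init(&shared_lock, &attr);
+}
+\end{verbatim}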
+
\subsection{Interrupt handling}
Real-time systems usually have at least two groups of tasks: application tasks and interrupts. Both classes of tasks are fired repeatedly by certain events. The difference between the two classes is that application tasks are usually periodic and start executing due to internally generated events. In contrast, interrupts execute in response to external events. \\
@@ -24,6 +43,6 @@ In using the Unified interrupt architecture, interrupts are served immediately a
In contrast, when using the Segmented Interrupt architecture, ISRs cannot call kernel services. Instead, each ISR invokes a Link Service Routine (LSR), which is then scheduled by an LSR scheduler to run at a later time. LSRs can run only after all ISRs have completed. They then call kernel services, which schedule the LSR with respect to all other tasks, so that it only starts running if and when the appropriate resources are available. This means that it incurs a lower task switching overhead. This method of serving interrupts also helps to smooth peak interrupt overloads: when a burst of ISRs is invoked in rapid succession, the LSR scheduler helps to ensure that temporal integrity is maintained and allows the interrupts to run in the order in which they were invoked. Additionally, LSRs run with interrupts fully enabled, which prevents interrupts from being missed during the execution of LSRs. \\
-In conclusion, we find that using the segmented interrupt architectures have the benefit of lower task switching overhead, smoothing peak interrupts overloads and prevent the missing of interrupts that occur while LSRs are being served. Thus, Segmented Interrupt architecture is superior compared to the Unified interrupt architecture. \\
+In conclusion, we find that the Segmented Interrupt architecture has the benefits of lower task switching overhead, smoothing of peak interrupt overloads, and prevention of missed interrupts while LSRs are being served. Thus, the Segmented Interrupt architecture is superior to the Unified Interrupt architecture.
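+
+As an illustration, the division of labor between an ISR and its LSR could look as follows (a minimal sketch; the device, queue and kernel hooks are hypothetical, not taken from a particular RTOS):
+\begin{verbatim}
+typedef void (*lsr_t)(void);
+
+extern void acknowledge_device(void);
+extern void enable_interrupts(void);
+extern void device_lsr(void);      /* calls kernel services */
+extern void lsr_enqueue(lsr_t l);
+extern lsr_t lsr_dequeue(void);    /* returns 0 when empty */
+
+void device_isr(void)              /* interrupts disabled */
+{
+    acknowledge_device();          /* only the urgent part */
+    lsr_enqueue(device_lsr);       /* defer the rest */
+}
+
+void lsr_scheduler(void)           /* after all ISRs complete */
+{
+    lsr_t lsr;
+    enable_interrupts();           /* LSRs fully interruptible */
+    while ((lsr = lsr_dequeue()) != 0)
+        lsr();                     /* FIFO keeps invocation order */
+}
+\end{verbatim}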
\subsection{Multi-processor systems}
diff --git a/report/chapter6.tex b/report/chapter6.tex
index 52b9773..dd6a434 100644
--- a/report/chapter6.tex
+++ b/report/chapter6.tex
@@ -4,7 +4,7 @@
\begin{tabbing}
-\textbf{[2]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
+\textbf{[1]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
\\
@@ -14,7 +14,17 @@
\\
-\textbf{[3]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
+\textbf{[3]} \= Giorgio C. Buttazzo. Hard Real-Time Computing Systems, 2nd Revised edition \\
+
+\\
+
+\textbf{[4]} \= Sheayun Lee, Chang-Gun Lee, Minsuk Lee, Sang Lyul Min, Chong Sang Kim. Limited \\
+\> preemptible scheduling to embrace cache memory in real-time systems. Dept. of Computer \\
+\> Engineering, Seoul National University and Hansung University Seoul, Korea 1998 \\
+
+\\
+
+\textbf{[5]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
\\