-rw-r--r--  report/2in25-report.tex  |  6
-rw-r--r--  report/chapter1.tex      |  4
-rw-r--r--  report/chapter2.tex      | 20
-rw-r--r--  report/chapter3.tex      | 27
-rw-r--r--  report/chapter5.tex      |  2
-rw-r--r--  report/chapter6.tex      | 22
6 files changed, 76 insertions, 5 deletions
diff --git a/report/2in25-report.tex b/report/2in25-report.tex
index 09b6335..d0b2ec2 100644
--- a/report/2in25-report.tex
+++ b/report/2in25-report.tex
@@ -6,8 +6,8 @@
\begin{document}
% Title page
- \title{Real-Time Architecture (2IN20)\\Exercise A}
- \author{Dennis Peeten (0571361)\\Oliver Schinagl (0580852)\\Joshua (???????)\\Wilrik De Loose (0601583)}
+ \title{Real-Time Architecture (2IN25)\\Assignment 4}
+ \author{Dennis Peeten (0571361)\\Oliver Schinagl (0580852)\\Wilrik De Loose (0601583)\\Tan Zhi Ming Joshua (0645373)}
%\date{7 February 2006}
\maketitle
@@ -19,5 +19,7 @@
\include{chapter2}
\include{chapter3}
\include{chapter4}
+\include{chapter5}
+\include{chapter6}
\end{document}
diff --git a/report/chapter1.tex b/report/chapter1.tex
index 5fba525..563e5ea 100644
--- a/report/chapter1.tex
+++ b/report/chapter1.tex
@@ -1,3 +1 @@
-\section{Background}
-
-
+\section{Background}
\ No newline at end of file
diff --git a/report/chapter2.tex b/report/chapter2.tex
index b6494ef..2d1959b 100644
--- a/report/chapter2.tex
+++ b/report/chapter2.tex
@@ -1,2 +1,22 @@
\section{Development considerations}
+Arbitrary preemptions of tasks, as used in fixed priority preemptive scheduling, often result in cache state changes during the execution of a program. This makes it difficult to determine the effect of cache reload overheads on execution times. These preemptions make the problem of determining cache hit rates virtually intractable, rendering the system unpredictable. Although fixed priority non-preemptive scheduling can solve this problem, it comes at the cost of reduced schedulability. \\
+
+Another solution to this problem is Fixed Priority Scheduling with Deferred Preemption (FPDS). FPDS allows preemptions only at pre-specified points within a task's execution. This has two main benefits. First, it allows for greater accuracy in determining the Worst Case Execution Time (WCET): by allowing preemptions at specified points only, we can more accurately predict the effect these preemptions have on schedulability. Second, and more importantly, it reduces the cost of preemption through the selection of appropriate preemption points. By limiting preemptions to points where fewer cache reloads are required, the cost of each preemption is reduced. Special care therefore has to be taken in selecting these preemption points. \\
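+
+As an illustration of what such a task could look like, the sketch below splits a task body into non-preemptive subjobs separated by explicit preemption points. The primitive \texttt{preemption\_point()}, the ready flag it polls and the subjob functions are our own hypothetical names, not the interface of any particular kernel. \\
+
+\begin{verbatim}
+/* Hypothetical FPDS sketch: the kernel is assumed to set this flag
+ * when a higher-priority task becomes ready. */
+extern volatile int higher_priority_ready;
+extern void yield(void);          /* hand the CPU back to the scheduler */
+extern void subjob_a(void), subjob_b(void), subjob_c(void);
+
+static void preemption_point(void)
+{
+    if (higher_priority_ready)
+        yield();                  /* preemption is allowed only here */
+}
+
+void task_body(void)
+{
+    subjob_a();                   /* non-preemptive subjob */
+    preemption_point();           /* placed where few cache lines are active */
+    subjob_b();
+    preemption_point();
+    subjob_c();
+}
+\end{verbatim}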
+
+Let us first define two terms. An active cache line is defined in [1] as a cache line that contains a block of data that will be referenced again before it is replaced. In other words, it is a cache line whose next reference would be a hit, had the task been allowed to run to completion.
+Secondly, a preferred preemption point for a given task interval $ t_i $ to $ t_j $ is defined in [1] as the instant within the interval having the minimum number of active cache lines. \\
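+
+Stated more formally (in our own notation, not taken literally from [1]): let $ a(t) $ denote the number of active cache lines at instant $ t $ during the execution of the task. The preferred preemption point $ p $ in the interval from $ t_i $ to $ t_j $ is then
+\begin{displaymath}
+p = \arg \min_{t_i \leq t \leq t_j} a(t) ,
+\end{displaymath}
+so that preempting at $ p $ invalidates as few active cache lines as possible and thereby minimizes the cache-related preemption cost. \\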
+
+% ??????
+Bearing these definitions in mind, the task at hand is to search for the preferred preemption points. There are two main ways in which these points can be determined: using a compiler or manually. \\
+% ??????
+
+\subsection{Using a compiler}
+
+The compiler is used to analyze the worst-case execution path, and preferred preemption points are determined along this path. The number of active cache lines at any point can be determined by analyzing the cache line activity with the help of the compiler. After the worst-case execution path has been analyzed, the next worst path is analyzed using the preemption points that have already been found. If the execution time of this path is lower than the WCET, the analysis can stop; otherwise, it has to be repeated. A compiler is needed because the analysis requires a large amount of processing and monitoring. Moreover, the effects of caching become transparent to the user when a compiler is used. \\
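+
+The iterative scheme described above can be summarized by the following sketch; the path type and all helper functions are hypothetical placeholders for the analysis steps, not an actual compiler interface. \\
+
+\begin{verbatim}
+#include <stddef.h>
+
+typedef struct path path_t;
+
+extern path_t *worst_case_path(void);
+extern path_t *next_worst_path(void);        /* NULL when no paths remain */
+extern long    exec_time_with_points(const path_t *p);
+extern void    add_preferred_points(path_t *p);  /* minima of active lines */
+
+void find_preemption_points(long wcet)
+{
+    add_preferred_points(worst_case_path());
+
+    for (path_t *p = next_worst_path(); p != NULL; p = next_worst_path()) {
+        if (exec_time_with_points(p) < wcet)
+            break;                   /* remaining paths stay below the WCET */
+        add_preferred_points(p);     /* otherwise repeat the analysis */
+    }
+}
+\end{verbatim}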
+
+\subsection{Manually}
+
+The preferred preemption points can also be determined by the programmer, by inserting traps in the source code that monitor when tasks of higher priority are ready to run. The programmer often already has an intuition as to where the number of active cache lines is minimal, so preferred preemption points can be determined this way as well. \\
+
+Considering the two methods, it is clear that using the compiler is a more systematic and transparent way of finding preemption points. In addition, preemption points found using a compiler will generally perform better because the analysis is more extensive. However, the analysis can also be performed manually, as described above, if a compiler cannot be used. \\
diff --git a/report/chapter3.tex b/report/chapter3.tex
index 0b5591a..417aaca 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -1,2 +1,29 @@
\section{Architectural considerations}
+When using fixed priority scheduling with deferred preemption, some extra architectural considerations must be taken into account. This chapter deals with a number of aspects, such as the choice of processor, the use of memory and interrupt handling. \\
+
+\subsection{Processors}
+
+In this section, two types of processors are compared: pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, works much like an assembly line. Because the processor works on different steps of several instructions at the same time, more instructions can be executed in a shorter period of time. General purpose processors, on the other hand, are mainly designed to achieve a decent average response time. \\
+
+The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et al. proposed a method to determine the worst-case execution times on pipelined processors with caches in [2]. Their approach reads information about each type of instruction from a machine-dependent data file, so that the performance of a sequence of instructions in a pipeline can be analyzed. \\
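+
+As a rough illustration of this kind of analysis (greatly simplified with respect to [2]), the sketch below adds up the cycles of an instruction sequence using per-instruction figures that would come from such a machine-dependent data file, charging extra stall cycles when an instruction depends on its predecessor. The data structure and its fields are our own simplification. \\
+
+\begin{verbatim}
+struct instr_info {
+    int base_cycles;      /* cycles when fully overlapped in the pipeline */
+    int stall_cycles;     /* extra cycles when a hazard delays this instruction */
+    int depends_on_prev;  /* 1 if it uses the previous instruction's result */
+};
+
+long worst_case_cycles(const struct instr_info *seq, int n)
+{
+    long total = 0;
+    for (int i = 0; i < n; i++) {
+        total += seq[i].base_cycles;
+        if (seq[i].depends_on_prev)
+            total += seq[i].stall_cycles;   /* pipeline hazard */
+    }
+    return total;
+}
+\end{verbatim}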
+
+In their paper, Healy et al. assume a system with a non-preemptive scheduling paradigm. Under deferred preemption scheduling, a task is split up into one or more subtasks which are all non-preemptive, thus making their approach applicable to FPDS as well. \\
+
+It is clear that pipelined processors have a large performance advantage over general purpose processors. Yet, determining the worst-case response times is much more elaborate and complex. Both aspects need to be taken into account when choosing the right architecture for a specific real-time application. \\
+
+\subsection{Memory}
+
+\subsection{Interrupt handling}
+
+Real-time systems usually have at least two groups of tasks: application tasks and interrupts. Both classes of tasks are fired repeatedly in response to certain events. The difference between the two classes is that application tasks are usually periodic and start executing due to internally generated events, whereas interrupts execute in response to external events. \\
+
+Interrupts generally preempt a task in the same way a higher priority task would preempt a lower priority one. There are two main ways an interrupt can be handled: using a Unified Interrupt Architecture, where system services can be accessed from Interrupt Service Routines (ISRs), or using a Segmented Interrupt Architecture, where system services may not be accessed from ISRs. \\
+
+In the Unified Interrupt Architecture, interrupts are served immediately after they are invoked. All interrupts must be disabled while the initial interrupt is being served, because the ISR can access system services directly and there is no way to ascertain which ISRs make which kernel calls. Hence, interrupts may be disabled for too long and an interrupt may be missed. \\
+
+In contrast, when using the Segmented Interrupt Architecture, ISRs cannot call kernel services. Instead, an ISR invokes a Link Service Routine (LSR), which is scheduled by an LSR scheduler to run at a later time. LSRs can only run after all ISRs have completed; they then call the kernel services, which schedule the LSR with respect to all other tasks, so that it only starts running if and when the appropriate resources are available. This means that a lower task switching overhead is incurred. Serving interrupts this way also helps to smooth peak interrupt overloads: when a burst of ISRs is invoked in rapid succession, the LSR scheduler ensures that temporal integrity is maintained and runs the interrupts in the order in which they were invoked. Additionally, LSRs run with interrupts fully enabled, which prevents interrupts from being missed during the execution of LSRs. \\
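+
+The division of work between ISRs and LSRs can be sketched as follows; the queue, the \texttt{lsr\_t} type and the function names are our own illustration rather than the API of a particular kernel. \\
+
+\begin{verbatim}
+typedef void (*lsr_t)(void);       /* a deferred piece of interrupt work */
+
+#define LSR_QUEUE_SIZE 32
+static lsr_t lsr_queue[LSR_QUEUE_SIZE];
+static int   lsr_head, lsr_tail;
+
+extern void acknowledge_device(void);   /* touches only the hardware */
+extern void handle_device_event(void);  /* may call kernel services */
+
+void device_isr(void)              /* short, runs with interrupts disabled */
+{
+    acknowledge_device();
+    lsr_queue[lsr_tail] = handle_device_event;
+    lsr_tail = (lsr_tail + 1) % LSR_QUEUE_SIZE;
+}
+
+void lsr_scheduler(void)           /* runs after all ISRs, interrupts enabled */
+{
+    while (lsr_head != lsr_tail) {
+        lsr_queue[lsr_head]();     /* FIFO: preserves invocation order */
+        lsr_head = (lsr_head + 1) % LSR_QUEUE_SIZE;
+    }
+}
+\end{verbatim}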
+
+In conclusion, we find that the Segmented Interrupt Architecture has the benefits of lower task switching overhead, smoothing of peak interrupt overloads and prevention of missed interrupts while LSRs are being served. Thus, the Segmented Interrupt Architecture is superior to the Unified Interrupt Architecture. \\
+
+\subsection{Multi-processor systems}
diff --git a/report/chapter5.tex b/report/chapter5.tex
new file mode 100644
index 0000000..033182e
--- /dev/null
+++ b/report/chapter5.tex
@@ -0,0 +1,2 @@
+\section{Discussion and conclusion}
+
diff --git a/report/chapter6.tex b/report/chapter6.tex
new file mode 100644
index 0000000..52b9773
--- /dev/null
+++ b/report/chapter6.tex
@@ -0,0 +1,22 @@
+\section{Literature}
+
+\small
+
+\begin{tabbing}
+
+\textbf{[1]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
+\> real-time systems. IEEE Computer Society, Washington, DC, USA, 1995 \\
+
+\\
+
+\textbf{[2]} \= Healy, C.A., Whalley, D.B., Harmon, M.G. Integrating the Timing Analysis of Pipelining \\
+\> and Instruction Caching. Dept. of Comput. Sci., Florida State Univ., Tallahassee, FL 1995 \\
+
+\\
+
+\end{tabbing}