\section{Architectural considerations}

When using fixed priority scheduling with deferred preemption (FPDS), some extra architectural considerations must be taken into account. This section deals with a number of aspects, such as the choice of processor, the use of memory and interrupt handling. \\

\subsection{Processors}

In this section, two types of processors are compared: pipelined processors and general purpose processors. Pipelining, a standard feature of RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions.} processors, works much like an assembly line: because the processor works on different stages of several instructions at the same time, more instructions can be completed per unit of time. General purpose processors, on the other hand, are mainly designed to achieve a decent average response time. \\
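
To make the gain concrete, a common first-order model (which ignores hazards and stalls) compares an ideal $k$-stage pipeline with a purely sequential processor executing $n$ instructions:

\[
T_{\mathrm{pipelined}} = (k + n - 1)\, t_{\mathrm{cycle}}
\qquad \mbox{versus} \qquad
T_{\mathrm{sequential}} = k \, n \, t_{\mathrm{cycle}},
\]

so for long instruction streams the throughput gain approaches a factor $k$. \\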

The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that delay the completion of instructions. Christopher A. Healy et al. propose a method in [2] to determine the worst-case execution times of programs on pipelined processors with cache. Their approach reads timing information about each type of instruction from a machine-dependent data file, so that the performance of a sequence of instructions in the pipeline can be analyzed. \\
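
As a deliberately naive illustration of this idea (and not the actual analysis of [2]), the sketch below derives a worst-case cycle count for a straight-line instruction sequence from a small, hypothetical per-instruction timing table; a real machine-dependent data file and the accompanying cache and pipeline analysis are far more detailed.

\begin{verbatim}
#include <stdio.h>

typedef struct {
    const char *mnemonic;   /* instruction class                        */
    int latency;            /* worst-case cycles in isolation           */
    int hazard_penalty;     /* extra stall cycles when the instruction
                               depends on the result of its predecessor */
} instr_timing;

/* Hypothetical stand-in for a machine-dependent data file. */
static const instr_timing timing_table[] = {
    { "load",  3, 2 },
    { "add",   1, 1 },
    { "store", 2, 0 },
};

/* Worst case of one straight-line path: sum the per-instruction worst
 * cases and add a stall penalty for every data dependency.            */
static int worst_case_cycles(const int *path, const int *depends, int n)
{
    int cycles = 0;
    for (int i = 0; i < n; i++) {
        cycles += timing_table[path[i]].latency;
        if (depends[i])
            cycles += timing_table[path[i]].hazard_penalty;
    }
    return cycles;
}

int main(void)
{
    int path[]    = { 0, 1, 2 };   /* load; add; store                 */
    int depends[] = { 0, 1, 1 };   /* add and store use the previous result */
    printf("worst-case estimate: %d cycles\n",
           worst_case_cycles(path, depends, 3));
    return 0;
}
\end{verbatim}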

Healy et al. assume a system with a non-preemptive scheduling paradigm. Under deferred preemption scheduling, a task is split up into one or more sub-tasks, each of which is non-preemptive, which makes the approach applicable to FPDS as well. \\
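
To see where these worst-case execution times end up in the schedulability analysis, a commonly used (simplified) response-time recurrence for fixed priority scheduling with deferred preemption has the form

\[
R_i = B_i + C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j ,
\qquad
B_i = \max_{k \in lp(i)} \; \max_{s} \; C_{k,s} ,
\]

where $C_i$ and $T_i$ denote the worst-case execution time and period of task $\tau_i$, $hp(i)$ and $lp(i)$ the sets of tasks with higher and lower priority than $\tau_i$, and $C_{k,s}$ the worst-case execution time of the $s$-th non-preemptive sub-task of $\tau_k$. The recurrence is solved by fixed-point iteration, and the per-sub-task execution times $C_{k,s}$ are exactly the quantities that a pipeline-aware analysis such as [2] has to supply. Exact FPDS analyses are more refined than this, but the structure is the same. \\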

It is clear that pipelined processors have a large performance advantage over general purpose processors. Determining worst-case response times for them, however, is considerably more elaborate and complex. Both aspects need to be taken into account when choosing the architecture for a specific real-time application. \\

\subsection{Memory}

\subsection{Interrupt handling}

Real-time systems usually have at least two classes of tasks: application tasks and interrupts. Both classes are triggered repeatedly by events. The difference between them is that application tasks are usually periodic and start executing in response to internally generated events, whereas interrupts execute in response to external events. \\
	
Interrupts generally preempt a task in the same way a higher-priority task preempts a lower-priority one. There are two main ways in which an interrupt can be handled: a unified interrupt architecture, in which system services can be accessed from Interrupt Service Routines (ISRs), and a segmented interrupt architecture, in which system services may not be accessed from ISRs. \\

In the unified interrupt architecture, an interrupt is served immediately after it is raised, and all interrupts must remain disabled while it is being served: the ISR can access system services directly, and there is no way to ascertain which ISRs make which kernel calls. Consequently, interrupts may be disabled for too long, and an interrupt may be missed. \\
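
The control flow can be sketched as follows; all names in the fragment (\texttt{kernel\_post}, \texttt{timer\_isr}, the simulated interrupt in \texttt{main}) are hypothetical and only illustrate that the kernel call happens inside the ISR, with interrupts disabled for its full duration.

\begin{verbatim}
#include <stdio.h>

static int interrupts_enabled = 1;

static void disable_interrupts(void) { interrupts_enabled = 0; }
static void enable_interrupts(void)  { interrupts_enabled = 1; }

/* A kernel service that, in a unified architecture, may be called
 * directly from an ISR.                                           */
static void kernel_post(const char *event)
{
    printf("kernel call '%s' (interrupts %s)\n",
           event, interrupts_enabled ? "enabled" : "disabled");
}

/* The ISR does all of its work, including the kernel call, while
 * interrupts are disabled; any interrupt raised in the meantime must
 * wait and may be missed if the hardware cannot buffer it.          */
static void timer_isr(void)
{
    disable_interrupts();        /* usually done by hardware on entry  */
    kernel_post("timer tick");   /* direct kernel call from the ISR    */
    enable_interrupts();         /* re-enabled only when the ISR exits */
}

int main(void)
{
    timer_isr();                 /* simulate one hardware interrupt */
    return 0;
}
\end{verbatim}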

In the segmented interrupt architecture, by contrast, ISRs cannot call kernel services. Instead, an ISR invokes a Link Service Routine (LSR), which is scheduled by an LSR scheduler to run at a later time. LSRs can run only after all ISRs have completed; they then call the kernel services, which schedule the LSR with respect to all other tasks so that it only starts running when the appropriate resources are available. This incurs a lower task-switching overhead. Serving interrupts this way also helps to smooth peak interrupt overloads: when a burst of ISRs is invoked in rapid succession, the LSR scheduler maintains temporal integrity and lets the interrupts run in the order in which they were invoked. Additionally, LSRs run with interrupts fully enabled, which prevents interrupts from being missed while LSRs execute. \\
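
A minimal sketch of this two-level structure is given below; the names (\texttt{lsr\_enqueue}, \texttt{lsr\_scheduler\_run} and so on) are again hypothetical and only illustrate the ISR-to-LSR-to-kernel control flow, not the API of any particular kernel.

\begin{verbatim}
#include <stdio.h>

#define LSR_QUEUE_SIZE 8

typedef void (*lsr_t)(void);

static lsr_t lsr_queue[LSR_QUEUE_SIZE];
static int   lsr_head, lsr_tail;

/* Called from an ISR: only records which LSR must run later. The FIFO
 * order means bursts of interrupts are served in invocation order.    */
static void lsr_enqueue(lsr_t lsr)
{
    lsr_queue[lsr_tail] = lsr;
    lsr_tail = (lsr_tail + 1) % LSR_QUEUE_SIZE;
}

/* A kernel service; in the segmented architecture it is reached only
 * from LSRs, never directly from an ISR.                              */
static void kernel_post(const char *event)
{
    printf("kernel call '%s' from an LSR, interrupts enabled\n", event);
}

/* The deferred part of the timer interrupt. */
static void timer_lsr(void) { kernel_post("timer tick"); }

/* The ISR itself stays short: it merely schedules the LSR and returns,
 * so interrupts can be re-enabled quickly.                             */
static void timer_isr(void)
{
    lsr_enqueue(timer_lsr);
}

/* Runs once all pending ISRs have completed, with interrupts enabled. */
static void lsr_scheduler_run(void)
{
    while (lsr_head != lsr_tail) {
        lsr_queue[lsr_head]();
        lsr_head = (lsr_head + 1) % LSR_QUEUE_SIZE;
    }
}

int main(void)
{
    timer_isr();             /* simulate a burst of two interrupts */
    timer_isr();
    lsr_scheduler_run();     /* the LSRs are drained afterwards, in order */
    return 0;
}
\end{verbatim}

In this sketch the ISR does nothing but record which LSR has to run; the queue preserves the invocation order, and the LSRs are drained only after all pending ISRs have returned, with interrupts enabled. \\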

In conclusion, the segmented interrupt architecture offers lower task-switching overhead, smooths peak interrupt overloads, and prevents the loss of interrupts that occur while LSRs are being served. In this respect, the segmented interrupt architecture is superior to the unified interrupt architecture. \\

\subsection{Multi-processor systems}