authorDennis Peeten <dpeeten@onsneteindhoven.nl>2008-05-18 13:12:14 (GMT)
committerDennis Peeten <dpeeten@onsneteindhoven.nl>2008-05-18 13:12:14 (GMT)
commit44d2a61beea25720530e49191ea693d9e5e111d3 (patch)
treecbed40e900f713e65cd197410955e377270dfb25
parente4999f5859edc905f7db59b2865fe6b150974cbc (diff)
download2in25-44d2a61beea25720530e49191ea693d9e5e111d3.zip
2in25-44d2a61beea25720530e49191ea693d9e5e111d3.tar.gz
2in25-44d2a61beea25720530e49191ea693d9e5e111d3.tar.bz2
Added multiprocessor stuff, converted references to BibTex
-rw-r--r--report/chapter2.tex4
-rw-r--r--report/chapter3.tex46
-rw-r--r--report/chapter6.tex64
-rw-r--r--report/figs/dspscheduler.pdfbin0 -> 12820 bytes
-rw-r--r--report/figs/dsptaskstruct.pdfbin0 -> 16082 bytes
-rw-r--r--report/figs/feasible.pdfbin0 -> 10958 bytes
-rw-r--r--report/figs/infeasible.pdfbin0 -> 10956 bytes
7 files changed, 78 insertions, 36 deletions
diff --git a/report/chapter2.tex b/report/chapter2.tex
index 3a8dd52..754a33e 100644
--- a/report/chapter2.tex
+++ b/report/chapter2.tex
@@ -4,8 +4,8 @@ Arbitrary preemptions of tasks which are used in fixed priority preemptive sched
Another solution to this problem is to use Fixed Priority Scheduling with Deferred Preemption (FPDS). FPDS allows preemptions only at pre-specified points within a task's execution. This has two main benefits. The first is that it allows for greater accuracy in determining the Worst Case Execution Time (WCET): by allowing preemptions only at specified points, we can more accurately predict the effect these preemptions will have on schedulability. The second, more important benefit is that it reduces the cost of preemption through the selection of appropriate preemption points. By limiting preemptions to points where fewer cache reloads are required, the cost of preemption is reduced. Therefore, special care has to be taken in selecting these preemption points. \\
-Let us first define two terms now. First, an active cache line is defined in [1] as a cache line that that contains a block of data that will be referenced in the future prior to its replacement. In other words, it is a cache line in which the next reference is a hit, had the task been allowed to run to completion.
-Secondly, a preferred preemption point for a given task interval $ t_i $ to $ t_j $ is defined in [1] as the instant within the interval having the minimum number of live cache lines. \\
+Let us first define two terms. First, an active cache line is defined in \cite{sp-pppcbrts-95} as a cache line that contains a block of data that will be referenced in the future prior to its replacement. In other words, it is a cache line whose next reference would be a hit, had the task been allowed to run to completion.
+Secondly, a preferred preemption point for a given task interval $ t_i $ to $ t_j $ is defined in \cite{sp-pppcbrts-95} as the instant within the interval having the minimum number of active cache lines. \\
% ??????
Bearing these definitions in mind, the task at hand is to search for the preferred preemption points. There are two main ways in which these points can be determined: using a compiler, or manually. \\
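To make the definition above concrete, here is a minimal sketch of what a compiler-side search for a preferred preemption point could look like, assuming the per-instant counts of active cache lines are already available from some cache analysis. The function name and input format are illustrative, not taken from the cited paper.

```python
# Hypothetical sketch: pick the preferred preemption point as the instant
# with the fewest active cache lines in the interval [t_i, t_j].
# active_lines[t] is assumed to come from a prior cache analysis.

def preferred_preemption_point(active_lines, t_i, t_j):
    """Return the instant in [t_i, t_j] with the minimum number of
    active cache lines (the earliest instant wins on ties)."""
    window = active_lines[t_i:t_j + 1]
    best_offset = min(range(len(window)), key=window.__getitem__)
    return t_i + best_offset

# Example: active cache line counts at instants 0..5
counts = [4, 2, 5, 1, 3, 1]
print(preferred_preemption_point(counts, 1, 4))  # prints 3
```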
diff --git a/report/chapter3.tex b/report/chapter3.tex
index cf4589c..22d0dd9 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -6,7 +6,7 @@ When using fixed priority scheduling with deferred preemption, one must take som
In this section, two types of processors are compared: pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of an instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors, on the other hand, are mainly designed to achieve a decent average response time (although most architectures nowadays contain a pipeline as well). \\
-The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et. al. posed a method to determine the worst case execution times of pipelined processors with cache in [2]. This approach reads all kinds of information about each type of instruction from a machine-dependent data file. This way the performance of a sequence of instructions in a pipeline can be analyzed. \\
+The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et al. proposed a method to determine the worst case execution times of pipelined processors with cache in \cite{hwh-itapic-95}. This approach reads information about each type of instruction from a machine-dependent data file, so that the performance of a sequence of instructions in a pipeline can be analyzed. \\
In their paper, Healy et al. assume a system with a non-preemptive scheduling paradigm. Under deferred preemption scheduling, a task is split up into one or more sub-tasks which are all non-preemptive, which makes the approach applicable to FPDS. This means the worst case execution times can be calculated, and thus it can be determined whether the system is schedulable under FPDS. \\
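As a rough illustration of the table-driven idea (not Healy et al.'s actual analysis), one can think of the machine-dependent data file as a table of per-instruction worst-case cycle counts plus stall penalties for hazardous instruction pairs. All instruction types and numbers below are invented for illustration.

```python
# Toy sketch of table-driven pipeline timing: worst-case cycles per
# instruction type plus a stall penalty when a dependent pair occurs.
# CYCLES and HAZARD_PENALTY stand in for the machine-dependent data file.

CYCLES = {"load": 3, "store": 2, "alu": 1, "branch": 2}  # invented numbers
HAZARD_PENALTY = {("load", "alu"): 1}  # stall when an alu op follows a load

def worst_case_cycles(instrs):
    """Sum worst-case cycles over an instruction sequence,
    adding a penalty for each hazardous adjacent pair."""
    total = 0
    for prev, cur in zip([None] + instrs, instrs):
        total += CYCLES[cur] + HAZARD_PENALTY.get((prev, cur), 0)
    return total

print(worst_case_cycles(["load", "alu", "store"]))  # prints 7 (3 + 1+1 + 2)
```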
@@ -18,16 +18,16 @@ It's clear that the use of pipelined processors have a huge advantage over gener
When cache memory is to be used in real-time systems, special attention must be paid since cache memory introduces unpredictability to the system. This unpredictable behavior is due to cache reloads upon preemption of a task. When preemptions are frequent, the sum of such cache reloading delays takes a significant portion of task execution time. The cache reloading costs due to preemptions have largely been ignored in real-time scheduling. \\
-Buttazzo states in [3] that it would be more efficient to have processors without cache or with cache disable. Although this would obviously get rid of the problem, it is not desirable. Another approach is to allows preemptions only at predetermined execution points with small cache reloading costs. A scheduling scheme was given in [4] and is called Limited Preemptible Scheduling (LPS). The scheduling scheme only allows preemptions only at predetermined execution points with small cache reloading costs. This means that the method can be applied to FPDS. \\
+Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache, or with the cache disabled. Although this would obviously get rid of the problem, it is not desirable. Another approach is to allow preemptions only at predetermined execution points with small cache reloading costs. A scheduling scheme based on this idea, called Limited Preemptible Scheduling (LPS), was given in \cite{scmsc-lpsecmrts-98}. Since it allows preemptions only at predetermined execution points, the method can be applied to FPDS. \\
-In [2] a method to determine the worst case execution times was given for multiprocessors with cache. In that paper a cache simulation is used to categorize a cache operation. Using the outcome of such a simulation the blocking time can be determined very precisely. It was already stated that this approach is applicable to FPDS in paragraph 3.1. \\
+In \cite{hwh-itapic-95} a method to determine the worst case execution times was given for pipelined processors with cache. In that paper a cache simulation is used to categorize each cache operation. Using the outcome of such a simulation, the blocking time can be determined very precisely. As already stated in paragraph 3.1, this approach is applicable to FPDS. \\
\subsubsection{Local and global memory}
In any multiprocessing system cooperating processes share data via shared data objects. A typical abstraction of a shared memory multiprocessor real-time system configuration is depicted in figure 1. Each node of the system contains a processor together with its local memory. All nodes are connected to the shared memory via an interconnection network.
\begin {center}
- \includegraphics[width=80mm]{shared_memory_multiprocessor.png} \\
+ %\includegraphics[width=80mm]{shared_memory_multiprocessor.png} \\
Figure 1. Shared memory multiprocessor system structure
\end {center}
@@ -46,3 +46,41 @@ In contrast, when using the Segmented Interrupt architecture, ISRs cannot call k
In conclusion, we find that the Segmented Interrupt architecture has the benefits of lower task switching overhead, smoothing peak interrupt overloads, and preventing the loss of interrupts that occur while LSRs are being served. Thus, the Segmented Interrupt architecture is superior to the Unified Interrupt architecture.
\subsection{Multi-processor systems}
+
+Gai et al. \cite{gab-mdssa-02} describe the scheduling of tasks in asymmetric multiprocessor systems consisting of a general purpose processor (GPP) and a DSP acting as a co-processor to the GPP master. The DSP is designed to execute algorithms on a set of data without interruption, hence the schedule for the DSP is non-preemptive. Gai et al. treat the DSP scheduling as a special case of scheduling with shared resources in a multiprocessor distributed system, using a variant of the \emph{cooperative scheduling} method presented in \cite{sr-csmr-99} by Saewong and Rajkumar. Cooperative scheduling is appropriate in situations where a task can be decomposed into multiple phases, such that each phase requires a different resource. The basic idea of cooperative scheduling as described by Saewong and Rajkumar is to associate suitable deadlines with each phase of a job in order to meet the job deadline.
+
+In order to apply this to the GPP + DSP multiprocessor architecture, Gai et al. define their real-time model to consist of periodic and sporadic tasks, subdivided into regular tasks and DSP tasks. The regular tasks (application tasks, interrupt service routines, ...) are executed entirely on the master CPU for $ C_{i} $ units of time. The DSP tasks execute for $ C_{i} $ units of time on the master CPU and an additional $ C^{DSP}_{i} $ units of time on the DSP. It is assumed that each DSP job performs at most one DSP request after $ C^{pre}_{i} $ units of time, and then executes for another $ C^{post}_{i} $ units of time, such that $ C_{i} = C^{pre}_{i} + C^{post}_{i} $, as depicted in Figure \ref{fig:dsptaskstruct}.
+
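The task model above can be sketched minimally as follows; the class name and the "hole" framing are my own, not the paper's notation.

```python
# Minimal sketch of the DSP task model: C_i = C^pre_i + C^post_i on the
# master CPU, plus C^DSP_i on the DSP, during which the master CPU is free.

from dataclasses import dataclass

@dataclass
class DSPTask:
    c_pre: int   # master-CPU time before the DSP request
    c_dsp: int   # execution time on the DSP
    c_post: int  # master-CPU time after the DSP request completes

    @property
    def c(self):
        # C_i = C^pre_i + C^post_i: total execution time on the master CPU
        return self.c_pre + self.c_post

    @property
    def hole(self):
        # While the DSP runs, a "hole" of this length opens on the master CPU
        return self.c_dsp

t = DSPTask(c_pre=2, c_dsp=5, c_post=3)
print(t.c, t.hole)  # prints: 5 5
```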
+\begin{figure}[ht]
+ \includegraphics{figs/dsptaskstruct.pdf}
+ \caption{Structure of a DSP task}
+ \label{fig:dsptaskstruct}
+\end{figure}
+
+When executing a DSP task, a hole is generated within each job in the schedule of the master processor. Gai et al. show that the earliest deadline first (EDF) and rate monotonic (RM) scheduling algorithms do not always yield a feasible schedule, even when one exists. Figure \ref{fig:infeasible} shows a task set that is infeasible when scheduled by RM or EDF. Figure \ref{fig:feasible} shows the same task set scheduled using a fixed priority assignment where $ \tau_2 \prec \tau_1 $, such that $ \tau_2 $ executes in the holes of the master CPU's schedule.
+
+\begin{figure}[ht]
+ \includegraphics{figs/infeasible.pdf}
+ \caption{A task set that cannot be scheduled by RM or EDF ($ \tau_2 $ misses all its deadlines).}
+ \label{fig:infeasible}
+\end{figure}
+
+\begin{figure}[ht]
+ \includegraphics{figs/feasible.pdf}
+ \caption{A feasible schedule with fixed priority assignment $ \tau_2 \prec \tau_1 $.}
+ \label{fig:feasible}
+\end{figure}
+
+The main idea is to modify the scheduler to exploit these holes to schedule some other task on the master processor to improve the schedulability bound of the system. This is achieved by modeling the DSP request of a task $ \tau_i $ as a remote procedure call that blocks $ \tau_i $ for $ B_i $ units of time, waiting for its completion. The scheduling algorithm uses a fixed priority assignment. In order to determine the next task to be executed, the scheduler enqueues regular tasks and DSP tasks in two separate queues that are ordered by priority, as shown in Figure \ref{fig:dspscheduler}.
+
+\begin{figure}[ht]
+ \includegraphics{figs/dspscheduler.pdf}
+ \caption{Structure of the DSP scheduler}
+ \label{fig:dspscheduler}
+\end{figure}
+
+When the DSP is idle, the scheduler selects the task with the highest priority among those at the heads of the two queues. When the DSP is active, the scheduler selects the highest priority task from the head of the regular queue only. In this way, a task using the DSP blocks all other tasks requiring the DSP, but not the regular tasks, which can freely execute on the master processor in the holes created by DSP activity.
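The two-queue dispatch rule just described can be sketched as follows, using binary heaps as the priority queues (lower number means higher priority). The function and variable names are my own; only the rule itself comes from the paper.

```python
# Sketch of the dispatch rule: with the DSP active, only regular tasks
# compete for the master CPU; with the DSP idle, the higher-priority head
# of the two queues wins.

import heapq

def pick_next(regular_q, dsp_q, dsp_busy):
    """Return the (priority, task) pair to run next on the master CPU,
    or None if nothing is runnable."""
    if dsp_busy or not dsp_q:
        # DSP active (or no DSP task pending): only regular tasks compete
        return heapq.heappop(regular_q) if regular_q else None
    if not regular_q:
        return heapq.heappop(dsp_q)
    # DSP idle: highest priority among the two queue heads wins
    if regular_q[0] <= dsp_q[0]:
        return heapq.heappop(regular_q)
    return heapq.heappop(dsp_q)

regular = [(2, "tau_r")]
dsp = [(1, "tau_d")]
heapq.heapify(regular)
heapq.heapify(dsp)
print(pick_next(regular, dsp, dsp_busy=False))  # prints (1, 'tau_d')
```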
+
+Because DSP tasks executing on the DSP cannot be preempted, it can happen that a lower priority DSP task $ \tau_i $ blocks a higher priority task that was released during the execution of $ \tau_i $ on the DSP. This blocking of higher priority DSP tasks by lower priority DSP tasks has to be accounted for in the schedulability test. This test is presented in \cite{gab-mdssa-02} using the hyperbolic bound.
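For reference, the plain hyperbolic bound of Bini and Buttazzo states that a fixed-priority task set is schedulable if the product of $(U_i + 1)$ over all tasks is at most 2. The sketch below checks only this basic form; the extension with the DSP blocking terms $B_i$ used by Gai et al. is not reproduced here.

```python
# Sketch of the basic hyperbolic bound: schedulable if prod(U_i + 1) <= 2,
# where U_i = C_i / T_i. The blocking-aware variant is not shown.

def hyperbolic_bound_ok(tasks):
    """tasks: list of (C_i, T_i) pairs; True if the hyperbolic bound holds."""
    product = 1.0
    for c, t in tasks:
        product *= c / t + 1.0
    return product <= 2.0

print(hyperbolic_bound_ok([(1, 4), (2, 8)]))  # 1.25 * 1.25 = 1.5625 <= 2: True
print(hyperbolic_bound_ok([(1, 2), (1, 2)]))  # 1.5 * 1.5 = 2.25 > 2: False
```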
+
+The article furthermore presents results of a simulation of the described algorithm and compares them to schedules generated by the distributed priority ceiling protocol (DPCP). However, the authors do not mention the method of priority assignment used in their simulation.
diff --git a/report/chapter6.tex b/report/chapter6.tex
index dd6a434..e2c6040 100644
--- a/report/chapter6.tex
+++ b/report/chapter6.tex
@@ -1,32 +1,36 @@
\section{Literature}
-\small
-
-\begin{tabbing}
-
-\textbf{[1]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
-\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
-
-\\
-
-\textbf{[2]} \= Healy, C.A., Whalley, D.B., Harmon, M.G. Integrating the Timing Analysis of Pipelining \\
-\> and Instruction Caching. Dept. of Comput. Sci., Florida State Univ., Tallahassee, FL 1995 \\
-
-\\
-
-\textbf{[3]} \= Giorgio C. Buttazzo. Hard Real-Time Computing Systems, 2nd Revised edition \\
-
-\\
-
-\textbf{[4]} \= Sheayun Lee, Chang-Gun Lee, Minsuk Lee, Sang Lyul Min, Chong Sang Kim. Limited \\
-\> preemptible scheduling to embrace cache memory in real-time systems. Dept. of Computer \\
-\> Engineering, Seoul National University and Hansung University Seoul, Korea 1998 \\
-
-\\
-
-\textbf{[5]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
-\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
-
-\\
-
-\end{tabbing}
+%\small
+%
+%\begin{tabbing}
+%
+%\textbf{[1]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
+%\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
+%
+%\\
+%
+%\textbf{[2]} \= Healy, C.A., Whalley, D.B., Harmon, M.G. Integrating the Timing Analysis of Pipelining \\
+%\> and Instruction Caching. Dept. of Comput. Sci., Florida State Univ., Tallahassee, FL 1995 \\
+%
+%\\
+%
+%\textbf{[3]} \= Giorgio C. Buttazzo. Hard Real-Time Computing Systems, 2nd Revised edition \\
+%
+%\\
+%
+%\textbf{[4]} \= Sheayun Lee, Chang-Gun Lee, Minsuk Lee, Sang Lyul Min, Chong Sang Kim. Limited \\
+%\> preemptible scheduling to embrace cache memory in real-time systems. Dept. of Computer \\
+%\> Engineering, Seoul National University and Hansung University Seoul, Korea 1998 \\
+%
+%\\
+%
+%\textbf{[5]} \= Jonathan Simonson, Janak H. Patel. Use of preferred preemption points in cache-based \\
+%\> real-time systems. IEEE Computer Society Washington, DC, USA 1995 \\
+%
+%\\
+%
+%
+%\end{tabbing}
+
+\bibliographystyle{alpha}
+\bibliography{references} \ No newline at end of file
diff --git a/report/figs/dspscheduler.pdf b/report/figs/dspscheduler.pdf
new file mode 100644
index 0000000..fb2c32e
--- /dev/null
+++ b/report/figs/dspscheduler.pdf
Binary files differ
diff --git a/report/figs/dsptaskstruct.pdf b/report/figs/dsptaskstruct.pdf
new file mode 100644
index 0000000..7ae25b2
--- /dev/null
+++ b/report/figs/dsptaskstruct.pdf
Binary files differ
diff --git a/report/figs/feasible.pdf b/report/figs/feasible.pdf
new file mode 100644
index 0000000..0ba4459
--- /dev/null
+++ b/report/figs/feasible.pdf
Binary files differ
diff --git a/report/figs/infeasible.pdf b/report/figs/infeasible.pdf
new file mode 100644
index 0000000..c28ab96
--- /dev/null
+++ b/report/figs/infeasible.pdf
Binary files differ