author    Wilrik de Loose <wilrik@wilrik.nl>  2008-05-25 19:05:54 (GMT)
committer Wilrik de Loose <wilrik@wilrik.nl>  2008-05-25 19:05:54 (GMT)
commit    f4863f5ce325d0a7c79ded9968d9bf9ce27a8332 (patch)
tree      1e9278c99e7b5053fb157f8bc6dc39c434cb99d7
parent    694107ae8cacc98b15aa871504d855917daf494e (diff)
download  2in25-f4863f5ce325d0a7c79ded9968d9bf9ce27a8332.zip
          2in25-f4863f5ce325d0a7c79ded9968d9bf9ce27a8332.tar.gz
          2in25-f4863f5ce325d0a7c79ded9968d9bf9ce27a8332.tar.bz2
final report (HEAD, master)
-rw-r--r--  report/2in25-report.pdf  bin 155559 -> 0 bytes
-rw-r--r--  report/chapter1.tex      4
-rw-r--r--  report/chapter3.tex      59
3 files changed, 37 insertions, 26 deletions
diff --git a/report/2in25-report.pdf b/report/2in25-report.pdf
deleted file mode 100644
index 6a85bbc..0000000
--- a/report/2in25-report.pdf
+++ /dev/null
Binary files differ
diff --git a/report/chapter1.tex b/report/chapter1.tex
index ff1a41c..6fbb87d 100644
--- a/report/chapter1.tex
+++ b/report/chapter1.tex
@@ -10,6 +10,6 @@ Fixed priority scheduling with \emph{arbitrary} preemptions (FPPS) is extensivel
As a compromise between \emph{no} preemptions and \emph{arbitrary} preemptions, FPDS was introduced. A scheduler that employs deferred preemptive scheduling preempts tasks only at certain points that are defined by the system designer. By controlling the points at which a task may be preempted, a system designer has some knowledge about the state of the system and the computation at the moment the task is preempted, and at the same time may limit the number of preemptions that occur. This allows for more educated guesses of the costs of context switches and hence more accurate analysis of the (real-world) schedulability of task sets. \\
-An essential goal of OS resource managemnt for real-time and multimeda systems is to provide timely, guaranteed and protected access to system recources. Allthough extensive research has been commited on processor scheduling, or network scheduling alone, disk scheduling, co-processor scheduling et cetra have been studied to a smaller extend. Not to mention using those resources \emph{simultaneously} wihtin a single node. Video applications may access high volume data from a disk, processes the data and transmit it accross the network. All these stages must be completed by a deadline. Obtaining simultaneous and timely access to multiple resources is known to be an NP-complete problem, making it a very complex to handle properly. \\
+An essential goal of OS resource management for real-time and multimedia systems is to provide timely, guaranteed and protected access to system resources. Although extensive research has been conducted on processor scheduling or network scheduling alone, disk scheduling, co-processor scheduling et cetera have been studied to a smaller extent, let alone the use of those resources \emph{simultaneously} within a single node. Video applications may access high-volume data from a disk, process the data and transmit it across the network, and all these stages must be completed by a deadline. Obtaining simultaneous and timely access to multiple resources is known to be an NP-complete problem, making it very complex to handle properly.
-% One of the suggested solutions for this problem would be to decouple the resources where possible, if resources are independent of one a nother of course, and then have servers for each resources. This leads to more problems however, when for example the disk and processor pair. To access the disk one needs the CPU.
+% One of the suggested solutions for this problem would be to decouple the resources where possible, if the resources are independent of one another of course, and then have a server for each resource. This leads to more problems, however, for example with the disk and processor pair: to access the disk one needs the CPU.
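The deferred-preemption idea in the FPDS paragraph above can be made concrete with a small sketch: a higher-priority arrival must wait until the running task reaches its next preemption point, so the blocking it suffers is bounded by the longest non-preemptible subjob. All names here are ours, purely for illustration.

```python
def preemption_delay(subjobs, arrival):
    """Worst-case wait of a higher-priority task that arrives at time
    `arrival` while a lower-priority task executes its non-preemptible
    subjobs (lengths in `subjobs`, run back to back from time 0).
    Under FPDS, preemption is only possible at subjob boundaries."""
    t = 0
    for length in subjobs:
        if arrival < t + length:         # arrival falls inside this subjob
            return t + length - arrival  # wait until the next boundary
        t += length
    return 0  # the task had already finished; no blocking
```

For instance, with subjobs of lengths 3, 2 and 4, a task arriving at time 4 waits 1 time unit for the boundary at time 5; the worst case over all arrivals equals the longest subjob.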
diff --git a/report/chapter3.tex b/report/chapter3.tex
index 477053c..975350e 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -20,19 +20,22 @@ When cache memory is to be used in real-time systems, special attention must be
Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache or with the cache disabled. Although this would obviously get rid of the problem, it is not desirable because the use of cache can greatly improve system performance. Another approach is to allow preemptions only at predetermined execution points with small cache reloading costs. Such a scheduling scheme, called Limited Preemptible Scheduling (LPS), was given in \cite{scmsc-lpsecmrts-98}. Since it restricts preemptions to predetermined execution points, the method can be applied to FPDS. \\
-In \cite{hwh-itapic-95} a method to determine the worst case execution times was given for multiprocessors with cache. In that paper a cache simulation is used to categorize a cache operation. Using the outcome of such a simulation the blocking time can be determined very precisely. It was already stated that this approach is applicable to FPDS in paragraph 3.1. \\
+In \cite{hwh-itapic-95} a method to determine the worst-case execution times for multiprocessors with cache was given. In that paper a cache simulation is used to categorize cache operations. Using the outcome of such a simulation, the blocking time can be determined very precisely. As already stated in paragraph 3.1, this approach is applicable to FPDS.
\subsubsection{Local and global memory}
-In any multiprocessing system cooperating processes share data via shared data objects. A typical abstraction of a shared memory multiprocessor real-time system configuration is depicted in figure 1. Each node of the system contains a processor together with its local memory. All nodes are connected to the shared memory via an interconnection network.
+In any multiprocessing system cooperating processes share data via shared data objects. A typical abstraction of a shared memory multiprocessor real-time system configuration is depicted in figure \ref{fig:shared_memory_multiprocessor}. Each node of the system contains a processor together with its local memory. All nodes are connected to the shared memory via an interconnection network.
-\begin {center}
- \includegraphics[width=80mm]{figs/shared_memory_multiprocessor.png} \\
- Figure 1. Shared memory multiprocessor system structure
-\end {center}
+\begin{figure}[ht]
+  \begin{center}
+    \includegraphics[width=80mm]{figs/shared_memory_multiprocessor.png}
+    \caption{Shared memory multiprocessor system structure.}
+    \label{fig:shared_memory_multiprocessor}
+  \end{center}
+\end{figure}
However, the literature does not state much about the use of local and global memory under FPDS, which makes it difficult to make a statement about this subject. Most likely the use of local memory does not have much impact on the schedulability of the system under FPDS. Global memory, on the other hand, can indeed introduce nondeterminism when caches are used. FPDS is therefore more predictable, because the cache misses can be determined more accurately. \\
-
+
Shared variables in a critical section can also bring nondeterminism. When guarding a critical section with semaphores, a lower priority job can block a higher priority job. This can be overcome with various protocols like PIP, PCP and SRP (although the implementations of PIP and PCP recently have been proven flawed).
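The priority inversion mentioned above, and how priority inheritance bounds it, can be sketched in a few lines. All class and field names are ours, for illustration only (this is not one of the flawed implementations the text alludes to); lower numbers mean higher priority.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority  # statically assigned priority
        self.priority = priority       # current (possibly inherited) priority

class PIPMutex:
    """Minimal sketch of a lock with the priority inheritance protocol (PIP)."""
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        """Try to enter the critical section; returns True on success."""
        if self.owner is None:
            self.owner = task
            return True
        # Blocked by a lower-priority owner: the owner inherits the
        # blocked task's (higher) priority, bounding the blocking time.
        if task.priority < self.owner.priority:
            self.owner.priority = task.priority
        return False

    def release(self, task):
        task.priority = task.base_priority  # drop any inherited priority
        self.owner = None
```

Without inheritance, a medium-priority job could preempt the low-priority lock holder and delay the high-priority job indefinitely; with PIP the holder temporarily runs at the blocked job's priority until it releases the lock.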
\subsection{Interrupt handling}
@@ -49,40 +52,48 @@ In conclusion, we find that using the segmented interrupt architectures have the
\subsection{Multi-processor systems}
-Gai et. al. \cite{gab-mdssa-02} Describe scheduling of tasks in asymmetric multiprocessor systems consisting of a general purpose CPU and DSP acting as a co-processor to the master CPU. The DSP is designed to execute algorithms on a set of data without interruption, hence the schedule for the DSP is non-preemptive. Gai et. al. treat the DSP scheduling as a special case of scheduling with shared resources in a multiprocessor distributed system, using a variant of the \emph{cooperative scheduling} method presented in \cite{sr-csmr-99} by Seawong and Rajkumar. Cooperative scheduling is appropriate in situations where a task can be decomposed into multiple phases, such that each phase requires a different resource. The basic idea of cooperative scheduling as described by Seawong and Rajkumar is to associate suitable deadlines to each phase of a job in order to meet the job deadline.
+Gai et al. \cite{gab-mdssa-02} describe scheduling of tasks in asymmetric multiprocessor systems consisting of a general-purpose CPU and a DSP acting as a co-processor to the master CPU. The DSP is designed to execute algorithms on a set of data without interruption, hence the schedule for the DSP is non-preemptive. Gai et al. treat the DSP scheduling as a special case of scheduling with shared resources in a multiprocessor distributed system, using a variant of the \emph{cooperative scheduling} method presented in \cite{sr-csmr-99} by Saewong and Rajkumar. Cooperative scheduling is appropriate in situations where a task can be decomposed into multiple phases, such that each phase requires a different resource. The basic idea of cooperative scheduling as described by Saewong and Rajkumar is to associate suitable deadlines with each phase of a job in order to meet the job deadline. \\
-In order to apply this to the GPP + DSP multiprocessor architecture, Gai et. al. define their real-time model to consist of periodic and sporadic tasks subdivided into regular tasks and DSP tasks. The regular tasks (application tasks, interrupt service routines, ...) are executed entirely on the master CPU for $ C_{i} $ units of time. The DSP tasks execute for $ C_{i} $ units of time on the master CPU and an additional $ C^{DSP}_{i} $ units of time on the DSP. It is assumed that each DSP job performs at most one DSP request after $ C^{pre}_{i} $ units of time, and then executes for another $ C^{post}_{i} $ units of time, such that $ C_{i} = C^{pre}_{i} + C^{post}_{i} $ as depicted in Figure \ref{fig:dsptaskstruct}.
+In order to apply this to the GPP + DSP multiprocessor architecture, Gai et al. define their real-time model to consist of periodic and sporadic tasks subdivided into regular tasks and DSP tasks. The regular tasks (application tasks, interrupt service routines, ...) are executed entirely on the master CPU for $ C_{i} $ units of time. The DSP tasks execute for $ C_{i} $ units of time on the master CPU and an additional $ C^{DSP}_{i} $ units of time on the DSP. It is assumed that each DSP job performs at most one DSP request after $ C^{pre}_{i} $ units of time, and then executes for another $ C^{post}_{i} $ units of time, such that $ C_{i} = C^{pre}_{i} + C^{post}_{i} $, as depicted in figure \ref{fig:dsptaskstruct}.
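The task model above can be captured in a few lines; field names are ours and purely illustrative of the $ C^{pre}_{i} $ / $ C^{DSP}_{i} $ / $ C^{post}_{i} $ decomposition.

```python
from dataclasses import dataclass

@dataclass
class DSPTask:
    """A DSP task runs c_pre on the master CPU, issues one DSP request
    served for c_dsp on the DSP, then runs c_post on the master CPU."""
    period: float
    c_pre: float
    c_dsp: float
    c_post: float

    @property
    def c(self):
        # total master-CPU demand: C_i = C^pre_i + C^post_i
        return self.c_pre + self.c_post

    @property
    def hole(self):
        # master-CPU idle "hole" while the DSP serves the request
        return self.c_dsp
```

A regular task is simply the special case with `c_dsp == 0`, so no hole appears in the master CPU's schedule.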
\begin{figure}[ht]
- \includegraphics{figs/dsptaskstruct.pdf}
- \label{fig:dsptaskstruct}
- \caption{Structure of a DSP task}
+  \begin{center}
+    \includegraphics[width=80mm]{figs/dsptaskstruct.pdf}
+    \caption{Structure of a DSP task.}
+    \label{fig:dsptaskstruct}
+  \end{center}
\end{figure}
When executing a DSP task, a hole is generated within each job in the schedule of the master processor. Gai et al. show that the earliest deadline first (EDF) and rate monotonic (RM) scheduling algorithms do not always yield a feasible schedule even when one exists. Figure \ref{fig:infeasible} shows an infeasible task set when scheduled by RM or EDF. Figure \ref{fig:feasible} shows the same task set scheduled using a fixed priority assignment where $ \tau_2 \prec \tau_1 $, such that $ \tau_2 $ executes in the holes of the master CPU's schedule.
\begin{figure}[ht]
- \includegraphics{figs/infeasible.pdf}
- \label{fig:infeasible}
- \caption{A task set cannot be scheduled by RM and EDF ( $\tau_2$ misses all deadlines).}
+  \begin{center}
+    \includegraphics[width=80mm]{figs/infeasible.pdf}
+    \caption{A task set that cannot be scheduled by RM or EDF ($\tau_2$ misses all deadlines).}
+    \label{fig:infeasible}
+  \end{center}
\end{figure}
\begin{figure}[ht]
- \includegraphics{figs/feasible.pdf}
- \label{fig:feasible}
- \caption{A feasible schedule with fixed priority assignment $ \tau_2 \prec \tau_1 $.}
+  \begin{center}
+    \includegraphics[width=80mm]{figs/feasible.pdf}
+    \caption{A feasible schedule with fixed priority assignment $ \tau_2 \prec \tau_1 $.}
+    \label{fig:feasible}
+  \end{center}
\end{figure}
-The main idea is to modify the scheduler to exploit these holes to schedule some other task on the master processor to improve the schedulability bound of the system. This is achieved by modeling the DSP request of a task $ \tau_i $ as a remote procedure call that blocks $ \tau_i $ for $ B_i $ units of time, waiting for its completion. The scheduling algorithm uses a fixed priority assignment. In order to determine the next task to be executed, the scheduler enqueues regular tasks and DSP tasks in two separate queues that are ordered by priority, as shown in Figure \ref{fig:dspscheduler}.
+The main idea is to modify the scheduler to exploit these holes to schedule some other task on the master processor to improve the schedulability bound of the system. This is achieved by modeling the DSP request of a task $ \tau_i $ as a remote procedure call that blocks $ \tau_i $ for $ B_i $ units of time, waiting for its completion. The scheduling algorithm uses a fixed priority assignment. In order to determine the next task to be executed, the scheduler enqueues regular tasks and DSP tasks in two separate queues that are ordered by priority, as shown in figure \ref{fig:dspscheduler}.
\begin{figure}[ht]
- \includegraphics{figs/dspscheduler.pdf}
- \label{fig:dspscheduler}
- \caption{Structure of a DSP task}
+  \begin{center}
+    \includegraphics[width=80mm]{figs/dspscheduler.pdf}
+    \caption{Structure of the DSP scheduler.}
+    \label{fig:dspscheduler}
+  \end{center}
\end{figure}
-When the DSP is idle the scheduler selects the task with the highest priority between those at the head of the two queues. When the DSP is active, the scheduler select the highest priority task from the head of the regular queue only. In this way, a task using the DSP blocks all other tasks requiring the DSP, but not the regular tasks, which can freely execute on the master processor in the holes created by DSP activities.
+When the DSP is idle, the scheduler selects the highest priority task among those at the head of the two queues. When the DSP is active, the scheduler selects the highest priority task from the head of the regular queue only. In this way, a task using the DSP blocks all other tasks requiring the DSP, but not the regular tasks, which can freely execute on the master processor in the holes created by DSP activities. \\
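The two-queue selection rule described above can be sketched as follows; function and parameter names are ours. Each queue is a list of `(priority, task)` pairs kept sorted so that index 0 is the head, with lower numbers meaning higher priority.

```python
def pick_next(regular_q, dsp_q, dsp_busy):
    """Select the next task for the master CPU under the two-queue scheme."""
    if dsp_busy or not dsp_q:
        # While the DSP is active, DSP tasks stay blocked: only the
        # regular queue is eligible (tasks run in the DSP's "holes").
        return regular_q[0] if regular_q else None
    if not regular_q:
        return dsp_q[0]
    # DSP idle: take the higher-priority of the two queue heads.
    return min(regular_q[0], dsp_q[0], key=lambda pt: pt[0])
```

Note how a DSP request blocks only other DSP tasks: when `dsp_busy` is set, the DSP queue is simply ignored rather than stalling the whole processor.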
-Because the DSP tasks executing on the DSP cannot be preempted, it can happen that a lower priority DSP task $ \tau_i $ is blocking a higher priority task that was released during the execution of $ \tau_i $ on the DSP. The blocking of high priority DSP tasks by lower priority DSP task has to be accounted for in the schedulability test. This test is presented \cite{gab-mdssa-02} using the hyperbolic bound.
+Because the DSP tasks executing on the DSP cannot be preempted, it can happen that a lower priority DSP task $ \tau_i $ blocks a higher priority task that was released during the execution of $ \tau_i $ on the DSP. The blocking of high priority DSP tasks by lower priority DSP tasks has to be accounted for in the schedulability test. This test is presented in \cite{gab-mdssa-02} using the hyperbolic bound. \\
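To give the flavour of a hyperbolic-bound test with blocking, the sketch below checks, per task, the product condition of Bini and Buttazzo's bound extended with a blocking term. This is only an illustration under our own simplifications; the actual test in \cite{gab-mdssa-02} models the DSP phases in more detail.

```python
def hyperbolic_feasible(tasks):
    """Sufficient fixed-priority test, hyperbolic-bound style.
    tasks: list of (C, T, B) in decreasing priority order, where B is
    the worst-case blocking the task can suffer, e.g. from the
    non-preemptible DSP request of a lower-priority task."""
    for i, (c, t, b) in enumerate(tasks):
        prod = (c + b) / t + 1.0           # own utilisation plus blocking
        for (cj, tj, _bj) in tasks[:i]:    # higher-priority interference
            prod *= cj / tj + 1.0
        if prod > 2.0:
            return False
    return True
```

For example, two tasks each with $C/T = 0.5$ and no blocking fail the test ($1.5 \cdot 1.5 = 2.25 > 2$), while a single task with $C/T = 0.25$ passes even with one period-quarter of blocking.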
The article furthermore presents results of a simulation of the described algorithm and compares them to schedules generated by the distributed priority ceiling protocol (DPCP). However, they do not mention the method of priority assignment used in their simulation.