From b682464751091887cb0c011a5d5ed996716e43b9 Mon Sep 17 00:00:00 2001
From: Wilrik de Loose
Date: Tue, 20 May 2008 12:47:04 +0000
Subject: + application domains

---
 report/2in25-report.tex |  3 ---
 report/chapter2.tex     |  2 --
 report/chapter3.tex     |  8 +++++---
 report/chapter4.tex     | 13 +++++++++++++
 report/chapter6.tex     |  4 ++--
 5 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/report/2in25-report.tex b/report/2in25-report.tex
index eb2d81e..f74c702 100644
--- a/report/2in25-report.tex
+++ b/report/2in25-report.tex
@@ -1,8 +1,5 @@
 \documentclass[10pt,a4paper]{article}
 \usepackage{graphicx}
-\usepackage{amsmath}
-\usepackage{amsfonts}
-\usepackage{moreverb}
 
 \begin{document}
 
diff --git a/report/chapter2.tex b/report/chapter2.tex
index 754a33e..f7d5ce0 100644
--- a/report/chapter2.tex
+++ b/report/chapter2.tex
@@ -7,9 +7,7 @@ Another solution to this problem is to use Fixed Priority Scheduling with Deferr
 Let us first define two terms. First, an active cache line is defined in \cite{sp-pppcbrts-95} as a cache line that contains a block of data that will be referenced in the future prior to its replacement. In other words, it is a cache line in which the next reference is a hit, had the task been allowed to run to completion. Secondly, a preferred preemption point for a given task interval $ t_i $ to $ t_j $ is defined in \cite{sp-pppcbrts-95} as the instant within the interval having the minimum number of live cache lines. \\
 
-% ??????
 Bearing these in mind, the task we have at hand is to search for the preferred preemption points. There are two main ways that these points can be determined: using a compiler or manually. \\
-% ??????
 
 \subsection{Using a compiler}
 
diff --git a/report/chapter3.tex b/report/chapter3.tex
index ee1650e..a24b9f3 100644
--- a/report/chapter3.tex
+++ b/report/chapter3.tex
@@ -4,7 +4,7 @@ When using fixed priority scheduling with deferred preemption, one must take som
 
 \subsection{Processors}
 
-In this section, two types of processors are being compared. These two are pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors on the other hand are mainly used to achieve a descent average response time (although most architectures nowadays contain a pipeline feature). \\
+In this section, two types of processors are compared: pipelined processors and general purpose processors. Pipelining, a standard feature in RISC\footnote[1]{RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions.} processors, is much like an assembly line. Because the processor works on different steps of the instruction at the same time, more instructions can be executed in a shorter period of time. General purpose processors, on the other hand, are mainly used to achieve a decent average response time (although most architectures nowadays contain a pipeline feature). \\
 The use of a pipelined processor can result in significant performance improvements. However, dependencies between instructions can cause pipeline hazards that may delay the completion of instructions. Christopher A. Healy et al. posed a method to determine the worst case execution times of pipelined processors with cache in \cite{hwh-itapic-95}. This approach reads all kinds of information about each type of instruction from a machine-dependent data file. This way the performance of a sequence of instructions in a pipeline can be analyzed. \\
 
@@ -18,7 +18,7 @@ It's clear that the use of pipelined processors have a huge advantage over gener
 
 When cache memory is to be used in real-time systems, special attention must be paid since cache memory introduces unpredictability to the system. This unpredictable behavior is due to cache reloads upon preemption of a task. When preemptions are frequent, the sum of such cache reloading delays takes a significant portion of task execution time. The cache reloading costs due to preemptions have largely been ignored in real-time scheduling. \\
 
-Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache or with cache disable. Although this would obviously get rid of the problem, it is not desirable. Another approach is to allows preemptions only at predetermined execution points with small cache reloading costs. A scheduling scheme was given in \cite{scmsc-lpsecmrts-98} and is called Limited Preemptible Scheduling (LPS). The scheduling scheme only allows preemptions only at predetermined execution points with small cache reloading costs. This means that the method can be applied to FPDS. \\
+Buttazzo states in \cite{but-hrtcs-05} that it would be more efficient to have processors without cache or with the cache disabled. Although this would obviously get rid of the problem, it is not desirable, because the use of cache can significantly improve system performance. Another approach is to allow preemptions only at predetermined execution points with small cache reloading costs. Such a scheduling scheme was given in \cite{scmsc-lpsecmrts-98} and is called Limited Preemptible Scheduling (LPS). Since LPS restricts preemptions to predetermined execution points with small cache reloading costs, the method can be applied to FPDS. \\
 
 In \cite{hwh-itapic-95} a method to determine the worst case execution times was given for multiprocessors with cache. In that paper a cache simulation is used to categorize a cache operation. Using the outcome of such a simulation the blocking time can be determined very precisely. It was already stated in paragraph 3.1 that this approach is applicable to FPDS. \\
 
@@ -31,7 +31,9 @@ In any multiprocessing system cooperating processes share data via shared data o
 Figure 1. Shared memory multiprocessor system structure
 \end {center}
 
-However, the literature doesn't state much about the use of local and global memory when applying FPDS. Therefore it is difficult to make a statement about this subject. Most likely the use of local memory shouldn't have much impact on the schedulability of the system under FPDS. Global memory on the other hand can indeed bring nondeterminism. When guarding a critical section with semaphores for instance, a lower priority job can block a higher priority job. This can be overcome with various protocols like PIP, PCP and SRP (although the implementations of PIP and PCP recently have been proven flawed).
+However, the literature doesn't state much about the use of local and global memory when using FPDS. Therefore it is difficult to make a statement about this subject. Most likely the use of local memory shouldn't have much impact on the schedulability of the system under FPDS. Global memory, on the other hand, can indeed introduce nondeterminism when caching is used. FPDS is therefore more predictable, because the cache misses can be determined more accurately. \\
+
+Shared variables in a critical section can also introduce nondeterminism. When guarding a critical section with semaphores, a lower priority job can block a higher priority job. This can be overcome with various protocols like PIP, PCP and SRP (although the implementations of PIP and PCP have recently been proven flawed).
 
 \subsection{Interrupt handling}
 
diff --git a/report/chapter4.tex b/report/chapter4.tex
index ddc86a1..4eb27d2 100644
--- a/report/chapter4.tex
+++ b/report/chapter4.tex
@@ -1,2 +1,15 @@
 \section{Application domains}
 
+Having discussed the development and architectural considerations, we can now explore the application domains for FPDS. We have found that FPDS allows for better predictability of the WCET than scheduling with arbitrary preemption, because with FPDS the number of preemption points is limited. However, when deadline misses do occur under FPDS, there is a higher chance that the task missing its deadline is a higher priority task, again due to the limited number of preemption points compared to arbitrary preemption. \\
+
+We will now divide the application domains into two main groups, control systems and HQ-video. For each of these groups, we will discuss whether using FPDS is suitable. \\
+
+\subsection{Control systems}
+
+We find that the area of control systems can once again be divided into two main groups. In the first group we have control applications where the tasks that miss their deadline should preferably have a lower priority. An example would be a fuel injection system in a car. We would rather miss the deadlines of tasks that control the amount of fuel injected into the engine than those of tasks that control whether fuel is injected into the engine at all. If the amount of fuel injected into the engine is wrong, the car can still function even though the ride may be bumpy. However, if no fuel is provided to the engine, the car will not be able to function. In scenarios that pertain to the first group, scheduling with arbitrary preemptions should be used in favor of FPDS because it reduces the chance that a higher priority task misses its deadline. \\
+
+In the second group, we have applications where missing any deadline would be equally disastrous. An example would be a space shuttle where any deadline miss may cause the shuttle to crash. In these cases, it is better to use FPDS with limited preemption points because FPDS allows us to predict the WCET more accurately.
+
+\subsection{HQ-Video}
+
+For high quality video, sound has a higher priority than the image. A deadline miss for the audio would greatly decrease the perceived quality, whereas a deadline miss for images would not be as noticeable. In this case, we should choose scheduling with arbitrary preemption points so that the chance of missing an audio deadline is reduced.
\ No newline at end of file
diff --git a/report/chapter6.tex b/report/chapter6.tex
index e2c6040..5921ea0 100644
--- a/report/chapter6.tex
+++ b/report/chapter6.tex
@@ -1,4 +1,4 @@
-\section{Literature}
+%\section{Literature}
 
 %\small
 %
@@ -33,4 +33,4 @@
 %\end{tabbing}
 
 \bibliographystyle{alpha}
-\bibliography{references}
\ No newline at end of file
+\bibliography{references}
\ No newline at end of file
-- 
cgit v0.12
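
To make the preemption-point idea from chapters 2 and 3 of the patched report concrete, the following is a minimal sketch in C of a task structured for FPDS/LPS-style scheduling. The preemption_point() primitive and the subjob function are hypothetical names introduced only for this illustration; they are not taken from the report or from the cited papers. The intent is that the scheduler may switch to a pending higher-priority task only at the marked points, which would be chosen where few cache lines are live.

    /*
     * Illustrative sketch only: a task body structured for deferred preemption.
     * preemption_point() and subjob() are hypothetical placeholders; in a real
     * system the kernel would honour pending higher-priority tasks only at
     * these points.
     */
    #include <stdio.h>

    static void preemption_point(void)
    {
        /* Placeholder: a real implementation would check for a pending
           higher-priority task here and allow a context switch. */
    }

    static void subjob(const char *name)
    {
        /* Placeholder for a non-preemptible chunk of work. */
        printf("running %s\n", name);
    }

    int main(void)
    {
        subjob("subjob 1");
        preemption_point();   /* preferred preemption point: low cache reload cost */

        subjob("subjob 2");
        preemption_point();

        subjob("subjob 3");   /* last subjob runs to completion */
        return 0;
    }

Between two preemption points each subjob effectively runs non-preemptively, which is what makes the number of cache reloads, and hence the WCET, easier to bound.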
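The remark in chapter 3 about guarding critical sections with semaphores can likewise be illustrated with a small sketch, assuming a POSIX system that supports the priority-inheritance mutex protocol (one implementation of PIP). The lock, the shared counter and the function names are made up for the example, and error handling is omitted.

    /*
     * Sketch: bounding priority inversion on shared data with a
     * priority-inheritance mutex (PIP). Names are illustrative only.
     */
    #include <pthread.h>

    static pthread_mutex_t lock;
    static long shared_counter;          /* stand-in for data in global memory */

    static void init_lock(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* A low-priority task holding the lock temporarily inherits the
           priority of any higher-priority task blocked on it. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    static void update_shared_data(void)
    {
        pthread_mutex_lock(&lock);
        shared_counter++;                /* short, bounded critical section */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        init_lock();
        update_shared_data();
        return 0;
    }

With priority inheritance, a lower priority job that holds the lock runs at the priority of the highest priority job blocked on it, so the blocking term that enters the schedulability analysis stays bounded by the length of the critical section.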