An Operating Systems Vade Mecum

the user may be annoyed. A response ratio greater than 1 doesn't make any sense. Similarly, the penalty ratio P ranges from 1 (which is a perfect value) upward.

If we are discussing a class of processes with similar requirements, like short processes or long processes, we extend this notation as follows.

    T(t): average response time for processes needing t time
    M(t): T(t) − t
    P(t): T(t) / t
    R(t): t / T(t)

If the average response measures turn out to be independent of t, we will just write T(), M(), P(), and R().

We will also refer on occasion to kernel time and idle time. Kernel time is the time spent by the kernel in making policy decisions and carrying them out. This figure includes context-switch and process-switch time. A well-tuned operating system tries to keep kernel time between 10 and 30 percent. Idle time is spent when the ready list is empty and no fruitful work can be accomplished.

One surprising theoretical result sheds light on the tradeoff between providing good service to short and to long processes. It turns out that no matter what scheduling method you use, if you ignore context- and process-switching costs, you can't help one class of jobs without hurting the other class. In fact, a minor improvement for short processes causes a disproportionate degradation for long processes. We will therefore be especially interested in comparing various policies with respect to how well they treat processes with different time requirements.

The values we will get for the service measures under different policies will depend on how many processes there are, how fast they arrive, and how long they need to run. A fairly simple set of assumptions will suffice for our purposes. First, we will assume that processes arrive (into the view of the short-term scheduler) in a pattern described by the exponential distribution. One way to describe this pattern is to say that no matter how recently the previous process arrived, the next one will arrive within time t with probability 1 − e^(−αt). As t goes to infinity, this probability goes to 1 − e^(−∞) = 1. The average time until the next arrival is 1/α. Another way to describe this pattern is to say that the probability that k processes will arrive within one time unit is e^(−α) α^k / k!. The reason we pick this particular distribution is that even though it looks forbidding, the exponential distribution turns out to be the easiest to deal with mathematically while still mirroring the way processes actually arrive in practice. The symbol α ("alpha") is a parameter of the distribution, which means that we adjust α to form a distribution with the particular behavior we want. We call α the arrival rate, since as α increases, arrivals happen more frequently. Figure 2.2 shows the exponential distribution for various values of α. The exponential distribution is memoryless: the expected time to the next arrival is always 1/α, no matter how long it has been since the previous arrival. Observations on real operating systems have shown that the exponential arrival rate assumption is reasonable.

Our second assumption is that the service time required by processes also follows the exponential distribution, this time with parameter β ("beta"):

    Probability(k processes serviced in one time unit) = e^(−β) β^k / k!

The memoryless property here implies that the expected amount of time still needed by the current process is always 1/β, no matter how long the process has been running so far.
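As a concrete illustration of these definitions, here is a minimal C sketch (not from the text) that computes the four service measures for a single finished process, draws exponentially distributed inter-arrival and service times with rates α and β by inverse-transform sampling, and evaluates the probability of k arrivals in one time unit. The function names and the sample values of alpha and beta are illustrative assumptions, not part of the book's notation.

```c
/* Sketch of the service measures and the exponential arrival/service model.
 * Assumed names: service_measures, exponential_sample, arrivals_in_unit_time. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Service measures for one process that needed t units of computation
 * and spent `response` units between arrival and completion. */
static void service_measures(double t, double response)
{
    double T = response;   /* response time */
    double M = T - t;      /* missed time: time spent waiting */
    double P = T / t;      /* penalty ratio, always >= 1 */
    double R = t / T;      /* response ratio, always <= 1 */
    printf("T=%.2f  M=%.2f  P=%.2f  R=%.2f\n", T, M, P, R);
}

/* Draw a sample from the exponential distribution with the given rate
 * by inverse-transform sampling; the samples average 1/rate. */
static double exponential_sample(double rate)
{
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);  /* uniform in (0,1) */
    return -log(u) / rate;
}

/* Probability that exactly k processes arrive in one time unit when
 * inter-arrival times are exponential with rate alpha:
 * e^(-alpha) * alpha^k / k!  */
static double arrivals_in_unit_time(double alpha, int k)
{
    double p = exp(-alpha);
    for (int i = 1; i <= k; i++)
        p *= alpha / i;
    return p;
}

int main(void)
{
    double alpha = 0.5;  /* arrival rate: one arrival every 2 units on average */
    double beta = 1.0;   /* service rate: 1 unit of service needed on average */

    /* A process that needed 2 units of service but took 5 units to finish:
     * T = 5, M = 3, P = 2.5, R = 0.4. */
    service_measures(2.0, 5.0);

    printf("next inter-arrival time: %.2f\n", exponential_sample(alpha));
    printf("service time of next process: %.2f\n", exponential_sample(beta));
    printf("P(3 arrivals in one unit) = %.4f\n", arrivals_in_unit_time(alpha, 3));
    return 0;
}
```

Averaging many calls to exponential_sample would reproduce the means 1/α and 1/β mentioned above, regardless of how long it has been since the last arrival or how long the current process has already run, which is exactly the memoryless property.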
