<!-- Source: http://www.cs.wisc.edu/~cs537-1/scheduling.html -->

<center><table align=center width="50%" compact border=0>
<tr align="center"> <th>Burst <th>Arrival Time <th>Burst Length
<tr align="center"> <td>A <td>0 <td>3
<tr align="center"> <td>B <td>1 <td>5
<tr align="center"> <td>C <td>3 <td>2
<tr align="center"> <td>D <td>9 <td>5
<tr align="center"> <td>E <td>12 <td>5
</table></center>
(All times are in milliseconds.)
The following <em>Gantt chart</em> shows the schedule that results from FCFS scheduling.
<center><img align=top src="http://www.cs.wisc.edu/~cs537-1/fcfs.gif"></center>
<p>The main advantage of FCFS is that it is easy to write and understand, but it has some severe problems.
If one process gets into an infinite loop, it will run forever and shut out all the others.
Even if we assume that processes don't have infinite loops (or take special precautions to catch such processes), FCFS tends to excessively favor long bursts.
Let's compute the waiting times and penalty ratios for these jobs.
<center><table align=center width="50%" compact border=0>
<tr align="center"> <th>Burst <th>Start Time <th>Finish Time <th>Waiting Time <th>Penalty Ratio
<tr align="center"> <td>A <td>0 <td>3 <td>0 <td>1.0
<tr align="center"> <td>B <td>3 <td>8 <td>2 <td>1.4
<tr align="center"> <td>C <td>8 <td>10 <td>5 <td>3.5
<tr align="center"> <td>D <td>10 <td>15 <td>1 <td>1.2
<tr align="center"> <td>E <td>15 <td>20 <td>3 <td>1.6
<tr align="center"> <td>Average <td> <td> <td>2.2 <td>1.74
</table></center>
As you can see, the shortest burst (C) has the worst penalty ratio.
The situation can be much worse if a short burst arrives after a very long one.
For example, suppose a burst of length 100 arrives at time 0 and a burst of length 1 arrives immediately after it, at time 1.
The first burst doesn't have to wait at all, so its penalty ratio is 1.0 (perfect), but the second burst waits 99 milliseconds, for a penalty ratio of 100.
<p>Favoring long bursts means favoring <em>CPU-bound</em> processes (which have very long CPU bursts between I/O operations).
In general, we would like to favor I/O-bound processes, since if we give the CPU to an I/O-bound process, it will quickly finish its burst, start doing some I/O, and get out of the ready list.
Consider what happens if we have one CPU-bound process and several I/O-bound processes.
Suppose we start out on the right foot and run the I/O-bound processes first.
They will all quickly finish their bursts and go start their I/O operations, leaving us to run the CPU-bound job.
After a while, they will finish their I/O and queue up behind the CPU-bound job, leaving all the I/O devices idle.
When the CPU-bound job finishes its burst, it will start an I/O operation, allowing us to run the other jobs.
As before, they will quickly finish their bursts and start to do I/O.
Now we have the CPU sitting idle, while all the processes are doing I/O.
Since the CPU hog started its I/O first, it will likely finish first, grabbing the CPU and making all the other processes wait.
The system will continue this way, alternating between periods when the CPU is busy and all the I/O devices are idle and periods when the CPU is idle and all the processes are doing I/O.
We have destroyed one of the main motivations for having processes in the first place: to allow computation to overlap with I/O.
This phenomenon is called the <em>convoy effect</em>.
<p>In summary, although FCFS is simple, it performs poorly in terms of global performance measures, such as CPU utilization and throughput.
It also gives lousy response to interactive jobs (which tend to be I/O bound).
The one good thing about FCFS is that there is no starvation: every burst does get served, if it waits long enough.
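<p>The numbers in the table above are easy to reproduce mechanically.
Here is a minimal Python sketch (not part of the original notes; the function name <samp>fcfs</samp> and the tuple representation of bursts are just illustrative choices) that simulates non-preemptive FCFS on the example and prints each burst's waiting time and penalty ratio.
<pre>
# A minimal sketch of FCFS scheduling for the example above.
# Bursts are (name, arrival_time, burst_length) tuples; all times are in ms.
bursts = [("A", 0, 3), ("B", 1, 5), ("C", 3, 2), ("D", 9, 5), ("E", 12, 5)]

def fcfs(bursts):
    """Run bursts in arrival order; return (name, start, finish, waiting, penalty) rows."""
    rows, clock = [], 0
    for name, arrival, length in sorted(bursts, key=lambda b: b[1]):
        start = max(clock, arrival)              # the CPU may sit idle until the burst arrives
        finish = start + length
        waiting = start - arrival                # time spent sitting in the ready queue
        penalty = (finish - arrival) / length    # turnaround time divided by burst length
        rows.append((name, start, finish, waiting, penalty))
        clock = finish
    return rows

rows = fcfs(bursts)
for r in rows:
    print("%s: start=%2d finish=%2d wait=%2d penalty=%.2f" % r)
print("average wait = %.1f, average penalty = %.2f"
      % (sum(r[3] for r in rows) / len(rows), sum(r[4] for r in rows) / len(rows)))
</pre>
Running it reproduces the averages of 2.2 and 1.74 shown in the table.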
<h4>Shortest-Job-First</h4>
<p>A much better policy is called <em>shortest-job-first</em> (SJF).
Whenever the CPU has to choose a burst to run, it chooses the shortest one.
(The algorithm really should be called ``shortest burst first,'' but the name SJF is traditional.)
This policy certainly gets around all the problems with FCFS mentioned above.
In fact, we can prove that SJF is <em>optimal</em> with respect to average waiting time.
That is, any other policy whatsoever will have worse average waiting time.
By decreasing average waiting time, we also improve processor utilization and throughput.
<p>Here's the proof that SJF is optimal.
Suppose we have a set of bursts ready to run and we run them in some order other than SJF.
Then there must be some burst that is run before a shorter burst, say <em>b<sub>1</sub></em> is run before <em>b<sub>2</sub></em>, but <em>b<sub>1</sub></em> &gt; <em>b<sub>2</sub></em>.
If we reversed the order, we would increase the waiting time of <em>b<sub>1</sub></em> by <em>b<sub>2</sub></em>, but decrease the waiting time of <em>b<sub>2</sub></em> by <em>b<sub>1</sub></em>.
Since <em>b<sub>1</sub></em> &gt; <em>b<sub>2</sub></em>, we have a net decrease in total, and hence average, waiting time.
Continuing in this manner to move shorter bursts ahead of longer ones, we eventually end up with the bursts sorted in increasing order of size (think of this as a bubble sort!).
<p>Here's our previous example with SJF scheduling.
<center><table align=center width="50%" compact border=0>
<tr align="center"> <th>Burst <th>Start Time <th>Finish Time <th>Waiting Time <th>Penalty Ratio
<tr align="center"> <td>A <td>0 <td>3 <td>0 <td>1.0
<tr align="center"> <td>B <td>5 <td>10 <td>4 <td>1.8
<tr align="center"> <td>C <td>3 <td>5 <td>0 <td>1.0
<tr align="center"> <td>D <td>10 <td>15 <td>1 <td>1.2
<tr align="center"> <td>E <td>15 <td>20 <td>3 <td>1.6
<tr align="center"> <td>Average <td> <td> <td>1.6 <td>1.32
</table></center>
Here's the <em>Gantt chart</em>:
<center><img align=top src="http://www.cs.wisc.edu/~cs537-1/srtf.gif"></center>
<p>As described, SJF is a non-preemptive policy.
There is also a preemptive version of SJF, which is sometimes called <em>shortest-remaining-time-first</em> (SRTF).
Whenever a new job enters the ready queue, the algorithm reconsiders which job to run.
If the new arrival has a burst shorter than the <em>remaining</em> portion of the current burst, the scheduler moves the current job back to the ready queue (to the appropriate position, considering the remaining time in its burst) and runs the new arrival instead.
<p>With SJF or SRTF, starvation is possible.
A very long burst may never get run, because shorter bursts keep arriving in the ready queue.
We will return to this problem later.
<p>There's only one problem with SJF (or SRTF):
We don't know how long a burst is going to be until we run it!
Luckily, we can make a pretty good guess.
Processes tend to be creatures of habit, so if one burst of a process is long, there's a good chance the next burst will be long as well.
Thus we might <em>guess</em> that each burst will be the same length as the previous burst of the same process.
However, that strategy won't work so well if a process has an occasional oddball burst that is unusually long or short.
Not only will we get that burst wrong, we will also guess wrong on the next burst, which is more typical for the process.
A better idea is to make each guess the <em>average</em> of the length of the immediately preceding burst and the guess we used before that burst:
<samp><font color="#0f0fff">guess = (guess + previous_burst)/2</font></samp>.
This strategy takes into account the entire past history of a process in guessing the next burst length, but it quickly adapts to changes in the behavior of the process, since the ``weight'' of each burst in computing the guess drops off exponentially with the time since that burst.
If we call the most recent burst length <samp><font color="#0f0fff">b<sub>1</sub></font></samp>, the one before that <samp><font color="#0f0fff">b<sub>2</sub></font></samp>, etc., then the next guess is <samp><font color="#0f0fff">b<sub>1</sub>/2 + b<sub>2</sub>/4 + b<sub>3</sub>/8 + b<sub>4</sub>/16 + ...</font></samp>.
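<p>Here is a small Python sketch of that guessing rule (not from the original notes; the initial estimate of 5 ms and the helper name <samp>next_guess</samp> are arbitrary, illustrative choices).
Each call folds the just-completed burst into the running estimate, so older bursts fade out with exponentially decreasing weight.
<pre>
# A sketch of exponential-average burst prediction: guess = (guess + previous_burst) / 2.
def next_guess(guess, previous_burst):
    """Fold the just-completed burst length into the estimate for the next burst."""
    return (guess + previous_burst) / 2

# Example: a process whose bursts are mostly short, with one long outlier.
bursts = [4, 4, 20, 4, 4, 4]
guess = 5.0                      # arbitrary initial estimate (ms)
for b in bursts:
    print("guessed %.1f ms, actual burst was %d ms" % (guess, b))
    guess = next_guess(guess, b)
</pre>
After the 20 ms outlier the estimate jumps, but within a few ordinary bursts it decays back toward 4 ms, which is exactly the exponential forgetting described above.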
<h4>Round-Robin and Processor Sharing</h4>
<p>Another scheme for preventing long bursts from getting too much priority is a preemptive strategy called <em>round-robin</em> (RR).
RR keeps all the bursts in a queue and runs the first one, like FCFS.
But after a length of time <em>q</em> (called a <em>quantum</em>), if the current burst hasn't completed, it is moved to the tail of the queue and the next burst is started.
Here are Gantt charts of our example with round-robin and quantum sizes of 4 and 1.
<center><img align=top src="http://www.cs.wisc.edu/~cs537-1/rr.gif"></center>
With <em>q = 4</em>, we get an average waiting time of 3.2 and an average penalty ratio of 1.88 (work it out yourself!).
With <em>q = 1</em>, the averages rise to 3.6 and 1.98, respectively.
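<p>If you would rather not grind through the arithmetic by hand, here is a minimal Python sketch of a round-robin simulator (not part of the original notes).
It assumes one common tie-breaking convention: a burst that arrives while another burst is running is queued ahead of the burst preempted at the end of that quantum.
Other conventions give slightly different schedules, so the Gantt chart may differ in detail, but the mechanism is the same.
<pre>
# A sketch of round-robin scheduling with quantum q, applied to the same example.
from collections import deque

def round_robin(bursts, q):
    """bursts: list of (name, arrival, length) tuples; all times in ms."""
    pending = sorted(bursts, key=lambda b: b[1])        # bursts that have not arrived yet
    ready = deque()                                     # each entry: [name, arrival, remaining, length]
    clock, results = 0, {}

    def admit(now):                                     # move arrivals into the ready queue
        while pending and now >= pending[0][1]:
            name, arrival, length = pending.pop(0)
            ready.append([name, arrival, length, length])

    while pending or ready:
        admit(clock)
        if not ready:                                   # CPU idle until the next arrival
            clock = pending[0][1]
            continue
        job = ready.popleft()
        run = min(q, job[2])                            # run one quantum, or less if it finishes
        clock += run
        job[2] -= run
        admit(clock)                                    # arrivals during the quantum queue up first
        if job[2] > 0:
            ready.append(job)                           # preempted: back to the tail of the queue
        else:
            name, arrival, _, length = job
            turnaround = clock - arrival
            results[name] = (turnaround - length, turnaround / length)
    return results                                      # name -> (waiting time, penalty ratio)

bursts = [("A", 0, 3), ("B", 1, 5), ("C", 3, 2), ("D", 9, 5), ("E", 12, 5)]
for q in (4, 1):
    res = round_robin(bursts, q)
    print("q = %d: average wait %.1f, average penalty ratio %.2f" %
          (q, sum(w for w, _ in res.values()) / len(res),
           sum(p for _, p in res.values()) / len(res)))
</pre>
Under the stated convention this reproduces the averages quoted above; other tie-breaking rules shuffle the schedule slightly.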
<p>The limit, as <em>q</em> approaches zero, is called <em>processor sharing</em> (PS).
PS causes the CPU to be shared equally among all the ready processes.
In the steady state of PS, when no bursts enter or leave the ready list, each burst sees a penalty ratio of exactly <em>n</em>, the length of the ready queue.
Of course PS is only of theoretical interest.
There is a substantial overhead in switching from one process to another.
If the quantum is too small, the CPU will spend most of its time switching between processes and practically none of it actually running them!
<h4>Priority Scheduling</h4>
<p>There is a whole family of scheduling algorithms that use <em>priorities</em>.
The basic idea is always to run the highest-priority burst.
Priority algorithms can be preemptive or non-preemptive (if a burst arrives that has higher priority than the currently running burst, do we switch to it immediately, or do we wait until the current burst finishes?).
Priorities can be assigned <em>externally</em> to processes based on their importance.
They can also be assigned (and changed) dynamically.
For example, priorities can be used to prevent starvation: if we raise the priority of a burst the longer it has been in the ready queue, eventually it will have the highest priority of all the ready bursts and be guaranteed a chance to finish.
<p>One interesting use of priority is sometimes called <em>multi-level feedback queues</em> (MLFQ).
We maintain a sequence of FIFO queues, numbered starting at zero.
New bursts are added to the tail of queue 0.
We always run the burst at the head of the lowest-numbered non-empty queue.
If it doesn't complete within a specified time limit, it is moved to the tail of the next higher-numbered queue.
Each queue has its own time limit: one unit in queue 0, two units in queue 1, four units in queue 2, eight units in queue 3, etc.
This scheme combines many of the best features of the other algorithms:
it favors short bursts, since they will be completed while they are still in low-numbered (high-priority) queues, while long bursts, on the other hand, will be run with comparatively few expensive process switches.
<p>This idea can be generalized.
Each queue can have its own scheduling discipline, and you can use any criterion you like to move bursts from queue to queue.
There's no end to the number of algorithms you can dream up.
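<p>A minimal Python sketch of such an MLFQ dispatcher is shown below (not part of the original notes; the class name, the choice of four queues, and the dictionary representation of a burst are all illustrative assumptions).
It shows only the queue-selection and demotion logic, not arrival times or a full simulation.
<pre>
# A sketch of the multi-level feedback queue dispatcher described above.
# Queue i has a time limit of 2**i units; a burst that exceeds its limit is
# demoted to the tail of the next higher-numbered (lower-priority) queue.
from collections import deque

NUM_QUEUES = 4                                    # arbitrary choice for this sketch

class MLFQ:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]

    def add(self, burst):
        self.queues[0].append(burst)              # new bursts enter queue 0

    def dispatch(self):
        """Return the burst at the head of the lowest-numbered non-empty queue."""
        for level, queue in enumerate(self.queues):
            if queue:
                return level, queue.popleft()
        return None, None

    def run_one(self):
        level, burst = self.dispatch()
        if burst is None:
            return False                          # nothing is ready
        limit = 2 ** level                        # 1, 2, 4, 8, ... time units
        used = min(limit, burst["remaining"])
        burst["remaining"] -= used
        print("ran %s at level %d for %d unit(s)" % (burst["name"], level, used))
        if burst["remaining"] > 0:                # used up its limit: demote one level
            self.queues[min(level + 1, NUM_QUEUES - 1)].append(burst)
        return True

# Example: a short burst finishes while still in the high-priority queues;
# a long burst drifts down to the low-priority ones.
scheduler = MLFQ()
scheduler.add({"name": "short", "remaining": 2})
scheduler.add({"name": "long", "remaining": 20})
while scheduler.run_one():
    pass
</pre>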
<h4>Analysis</h4>
<p>It is possible to analyze some of these algorithms mathematically.
There is a whole branch of computer science called ``queuing theory'' concerned with this sort of analysis.
Usually, the analysis uses statistical assumptions.
For example, it is common to assume that the arrival of new bursts is <em>Poisson</em>:
the expected time to wait until the next new burst arrives is independent of how long it has been since the last burst arrived.
In other words, the amount of time that has passed since the last arrival is no clue to how long it will be until the next arrival.
You can show that in this case, the probability of an arrival in the next <em>t</em> milliseconds is
<br><em>1 - e<sup>-at</sup></em>,<br>
where <em>a</em> is a parameter called the <em>arrival rate</em>.
The average time between arrivals is <em>1/a</em>.
Another common assumption is that the burst lengths follow a similar ``exponential'' distribution:
the probability that the length of a burst is less than <em>t</em> is <em>1 - e<sup>-bt</sup></em>, where <em>b</em> is another parameter, the <em>service rate</em>.
The average burst length is <em>1/b</em>.
This kind of system is called an ``M/M/1 queue.''
<p>The ratio <em>p = a/b</em> is of particular interest.<a href="#footnote"><sup>3</sup></a>
If <em>p &gt; 1</em>, bursts are arriving, on the average, faster than they are finishing, so the ready queue grows without bound.
(Of course, that can't happen because there is at most one burst per process, but this is theory!)
If <em>p = 1</em>, arrivals and departures are perfectly balanced.
<p>It can be shown that for FCFS, the average penalty ratio for bursts of length <em>t</em> is
<blockquote><em>P(t) = 1 + p / [ (1 - p) b t ]</em>.</blockquote>
As you can see, as <em>t</em> decreases, the penalty ratio increases, proving that FCFS doesn't like short bursts.
Also note that as <em>p</em> approaches one, the penalty ratio approaches infinity.
<p>For processor sharing, as we noticed above, all processes have a penalty ratio that is the length of the queue.
It can be shown that on the average, that length is <em>1/(1 - p)</em>.
<hr>
<a name="footnote"></a><sup>1</sup>We will see medium-term and long-term scheduling later in the course.
<p><sup>2</sup>A job might be a batch job (such as printing a run of paychecks), an interactive login session, or a command issued by an interactive session.
It might consist of a single process or a group of related processes.
<p><sup>3</sup>Actually, <em>a</em>, <em>b</em>, and <em>p</em> are supposed to be the Greek letters ``alpha,'' ``beta,'' and ``rho,'' but I can't figure out how to make them in HTML.
<hr>
<address><i><a href="mailto:solomon@cs.wisc.edu">solomon@cs.wisc.edu</a><br>Thu Oct 31 15:38:53 CST 1996</i></address>
<br>Copyright &#169; 1996 by Marvin Solomon.  All rights reserved.
</body></html>
