<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0059)http://www.cse.ucsd.edu/classes/fa04/cse120/hw/hw2-sol.html -->
<HTML><HEAD><TITLE>CSE 120 (Fall 2004) -- Homework #2 Solution</TITLE>
<META http-equiv=Content-Type content="text/html; charset=utf-8">
<META content="MSHTML 6.00.2900.2627" name=GENERATOR></HEAD>
<BODY>
<H2>CSE 120 (Fall 2004) -- Homework #2 Solution</H2><FONT color=blue 
size=+1><B>Out: 10/12</B></FONT><BR><FONT color=red size=+1><B>Due: 
10/26</B></FONT> 
<OL>
  <LI>Chapter 4: 4.3 
  <P>4.3&nbsp; A DECSYSTEM-20 computer has multiple register sets. Describe the 
  actions of a context switch if the new context is already loaded into one of 
  the register sets. What else must happen if the new context is in memory 
  rather than in a register set, and all of the register sets are in use? <I>
  <P>If the new context is already in one of the register sets, the kernel simply 
  switches the active register set to it. Otherwise, the kernel applies a 
  replacement policy (much like a cache replacement algorithm) to choose a victim 
  register set: it saves that set's contents to memory and restores the needed 
  context from memory into the freed set. </I>
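  <P>For illustration, a minimal C sketch of this policy. The register banks are 
  modeled in software here, and the table sizes, the victim choice, and the helper 
  arrays are assumptions made only for the example:
  <BLOCKQUOTE><PRE>#include &lt;string.h&gt;

#define NUM_SETS  8                 /* assumed number of hardware register sets */
#define MAX_PROCS 64                /* assumed maximum number of processes      */

struct context { long regs[16]; };  /* simplified register-set contents         */

static struct context sets[NUM_SETS];    /* the register banks                  */
static int owner[NUM_SETS];              /* pid currently loaded in each bank   */
static struct context saved[MAX_PROCS];  /* contexts saved in memory            */
static int current_set;                  /* index of the active bank            */

/* Switch the CPU to process pid. */
void context_switch(int pid) {
    /* Case 1: the new context is already in a register set -- just repoint. */
    for (int i = 0; i &lt; NUM_SETS; i++) {
        if (owner[i] == pid) { current_set = i; return; }
    }
    /* Case 2: all sets are in use -- pick a victim (a real kernel would use
       a replacement policy such as LRU), save it to memory, and load pid's
       saved context into the freed set. */
    int victim = (current_set + 1) % NUM_SETS;
    memcpy(&amp;saved[owner[victim]], &amp;sets[victim], sizeof(struct context));
    memcpy(&amp;sets[victim], &amp;saved[pid], sizeof(struct context));
    owner[victim] = pid;
    current_set = victim;
}
</PRE></BLOCKQUOTE>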
  <P></P>
  <LI>Chapter 5: 5.1, 5.2, 5.3 
  <P>5.1&nbsp; Provide two programming examples of multithreading that improve 
  performance over a single-threaded solution. <I>
  <P>Programs with a high degree of parallelism - a multithreaded kernel on a 
  multiprocessor, parallel scientific computations, etc. 
  <P>Programs that share many resources between different internal entities - 
  web browsers, web servers, database servers, online multi-user games. 
  <P>Programs that are easier to program, structure, or debug in a multithreaded 
  model - network servers, GUI systems. </I>
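  <P>For illustration, a minimal pthreads sketch of the first case: a parallel 
  array sum whose slices run concurrently on a multiprocessor (the array size and 
  thread count are assumptions for the example):
  <BLOCKQUOTE><PRE>#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;

#define N        1000000
#define NTHREADS 4

static double data[N];

struct slice { int lo, hi; double sum; };

/* Each thread sums its own slice of the array independently. */
static void *partial_sum(void *arg) {
    struct slice *s = arg;
    s-&gt;sum = 0.0;
    for (int i = s-&gt;lo; i &lt; s-&gt;hi; i++)
        s-&gt;sum += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    double total = 0.0;

    for (int i = 0; i &lt; N; i++) data[i] = 1.0;

    for (int i = 0; i &lt; NTHREADS; i++) {
        slices[i].lo = i * (N / NTHREADS);
        slices[i].hi = (i + 1) * (N / NTHREADS);
        pthread_create(&amp;tid[i], NULL, partial_sum, &amp;slices[i]);
    }
    for (int i = 0; i &lt; NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += slices[i].sum;       /* combine the per-thread results */
    }
    printf("sum = %.0f\n", total);
    return 0;
}
</PRE></BLOCKQUOTE>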
  <P>5.2&nbsp; Provide two programming examples of multithreading that do not 
  improve performance over a single-threaded solution. <I>
  <P>Programs that require sequential processing - shell programs, printer 
  drivers. 
  <P>Simple programs - "Hello, world", embedded programs running on simple 
  hardware/chips. </I>
  <P>5.3&nbsp; What are the differences between user-level threads and 
  kernel-level threads? Under what circumstances is one type better than the 
  other? 
  <P><I>Lecture 5, Slides 12-18 cover the details. In short: user-level threads 
  are cheap to create and switch and need no kernel support, but one thread's 
  blocking system call blocks the entire process and the threads cannot run in 
  parallel on a multiprocessor; kernel-level threads can block and run in 
  parallel independently, at the cost of more expensive creation and switching. 
  User-level threads are better for many short-lived threads that mostly compute 
  or synchronize with each other; kernel-level threads are better when threads 
  block on I/O or multiple processors are available. </I>
  <P></P>
  <LI>Chapter 6: 6.8, 6.10 
  <P>6.8&nbsp; Many CPU scheduling algorithms are parameterized. For example, 
  the RR algorithm requires a parameter to indicate the time slice. Multilevel 
  feedback queues require parameters to define the number of queues, the 
  scheduling algorithms for each queue, the criteria used to move processes 
  between queues, etc. These algorithms are thus really sets of algorithms 
  (e.g., the set of RR algorithms for all time slices, etc.). One set of 
  algorithms may include another (in lecture, we talked about using Priority 
  Scheduling to implement SJF). What, if any, relation holds between the 
  following pairs of algorithms? <I>
  <P>a. Multilevel feedback queue and RR - RR is implemented, by default, as the 
  general scheduling algorithm for the individual queues of MFQ. Each job in a 
  particular priority queue is dispatched in an RR fashion. 
  <P>b. Multilevel feedback queue and FCFS - The jobs in the individual queues 
  of the MFQ can be scheduled in FCFS fashion by the dispatcher. 
  <P>c. Priority and FCFS - In FCFS, priority is assigned to the incoming jobs 
  based on their individual times of arrival: the older a job is, the higher its 
  priority. Thus, FCFS is a form of Priority scheduling in which the priority is 
  the arrival time. 
  <P>d. RR and SJF - Not related under usual circumstances since RR pays no 
  attention to the total CPU burst of a job while SJF relies solely on this 
  property to schedule the waiting jobs. </I>
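  <P>For illustration, a minimal C sketch of the point in (c): a generic priority 
  scheduler that dispatches the ready job with the smallest key reduces to FCFS 
  when the key is the arrival time, and to SJF when the key is the predicted burst 
  (the job fields and helper names are assumptions for the example):
  <BLOCKQUOTE><PRE>struct job {
    int id;
    int arrival_time;      /* when the job entered the ready queue   */
    int predicted_burst;   /* estimated length of the next CPU burst */
};

/* Generic priority scheduling: dispatch the ready job with the smallest key. */
static int pick_next(const struct job *ready, int n,
                     int (*key)(const struct job *)) {
    int best = 0;
    for (int i = 1; i &lt; n; i++)
        if (key(&amp;ready[i]) &lt; key(&amp;ready[best]))
            best = i;
    return best;                       /* index of the job to run next */
}

static int by_arrival(const struct job *j) { return j-&gt;arrival_time; }    /* FCFS */
static int by_burst(const struct job *j)   { return j-&gt;predicted_burst; } /* SJF  */
</PRE></BLOCKQUOTE>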
  <P>6.10&nbsp; Explain the differences in the degree to which the following 
  scheduling algorithms discriminate in favor of short processes: <I>
  <P>a. FCFS - FCFS does not favor short processes at all; it considers only the 
  process arrival time. 
  <P>b. RR - RR favors short processes in that they finish earlier than long 
  processes that arrive at about the same time. 
  <P>c. Multilevel feedback queues - MFQ moves a process to a lower-priority 
  queue if it consumes too many CPU cycles, so it favors short processes even 
  more strongly than RR does. </I>
  <P></P>
  <LI>Chapter 7: 7.8, 7.9, 7.13 
  <P>7.8&nbsp; <B>The Sleeping-Barber Problem.</B> A barbershop consists of a 
  waiting room with n chairs and the barber room containing the barber chair. If 
  there are no customers to be served, the barber goes to sleep. If a customer 
  enters the barbershop and all chairs are occupied, then the customer leaves 
  the shop. If the barber is busy but chairs are available, then the customer 
  sits in one of the free chairs. If the barber is asleep, the customer wakes up 
  the barber. Write a program to coordinate the barber and the customers. <I>
  <P><A 
  href="http://www.cse.ucsd.edu/classes/fa04/cse120/hw/sleeping_barber.c">sleeping_barber.c</A> 
  </I>
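  <P>The linked file is the official solution; for illustration, here is a minimal 
  POSIX-semaphore sketch of the same coordination (the chair count, function names, 
  and the elided haircut work are assumptions for the sketch):
  <BLOCKQUOTE><PRE>#include &lt;semaphore.h&gt;

#define CHAIRS 5              /* assumed number of waiting-room chairs */

sem_t customers;              /* counts waiting customers; the barber sleeps on it */
sem_t barber_ready;           /* a customer waits on it for a turn in the chair    */
sem_t mutex;                  /* protects the 'waiting' counter                    */
int   waiting;                /* customers currently seated in the waiting room    */

void init_shop(void) {
    sem_init(&amp;customers, 0, 0);
    sem_init(&amp;barber_ready, 0, 0);
    sem_init(&amp;mutex, 0, 1);
    waiting = 0;
}

void *barber(void *arg) {
    for (;;) {
        sem_wait(&amp;customers);        /* sleep until a customer arrives        */
        sem_wait(&amp;mutex);
        waiting--;                   /* take one customer out of the chairs   */
        sem_post(&amp;mutex);
        sem_post(&amp;barber_ready);     /* call the customer to the barber chair */
        /* cut_hair(); */
    }
}

void customer(void) {
    sem_wait(&amp;mutex);
    if (waiting &lt; CHAIRS) {
        waiting++;                   /* take a chair                          */
        sem_post(&amp;customers);        /* wake the barber if he is asleep       */
        sem_post(&amp;mutex);
        sem_wait(&amp;barber_ready);     /* wait for a turn in the barber chair   */
        /* get_haircut(); */
    } else {
        sem_post(&amp;mutex);            /* no free chair: leave the shop         */
    }
}
</PRE></BLOCKQUOTE>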
  <P>7.9&nbsp; <B>The Cigarette-Smokers Problem.</B> Consider a system with 
  three smoker processes and one agent process. Each smoker continuously rolls a 
  cigarette and then smokes it. But to roll and smoke a cigarette, the smoker 
  needs three ingredients: tobacco, paper, and matches. One of the smoker 
  processes has paper, another has tobacco, and the third has matches. The agent 
  has an infinite supply of all three materials. The agent places two of the 
  ingredients on the table. The smoker who has the remaining ingredient then 
  makes and smokes a cigarette, signaling the agent on completion. The agent 
  then puts out another two of the three ingredients, and the cycle repeats. 
  Write a program to synchronize the agent and the smokers. <I>
  <P><A 
  href="http://www.cse.ucsd.edu/classes/fa04/cse120/hw/cigarette_smokers.c">cigarette_smokers.c</A> 
  </I>
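  <P>The linked file is the official solution; for illustration, a minimal 
  POSIX-semaphore sketch in which the agent directly signals the smoker who holds 
  the missing ingredient (the names and the random choice are assumptions for the 
  sketch):
  <BLOCKQUOTE><PRE>#include &lt;semaphore.h&gt;
#include &lt;stdlib.h&gt;

/* smoker[0] owns tobacco, smoker[1] owns paper, smoker[2] owns matches. */
sem_t smoker[3];              /* smoker i waits here for the other two ingredients */
sem_t agent_done;             /* the agent waits here until the smoker finishes    */

void init(void) {
    for (int i = 0; i &lt; 3; i++) sem_init(&amp;smoker[i], 0, 0);
    sem_init(&amp;agent_done, 0, 0);
}

void *agent(void *arg) {
    for (;;) {
        int missing = rand() % 3;      /* the ingredient NOT placed on the table   */
        sem_post(&amp;smoker[missing]);    /* wake the smoker who owns that ingredient */
        sem_wait(&amp;agent_done);         /* wait until the cigarette is finished     */
    }
}

void *smoker_thread(void *arg) {
    int id = *(int *)arg;              /* which ingredient this smoker owns        */
    for (;;) {
        sem_wait(&amp;smoker[id]);         /* wait for the matching pair on the table  */
        /* roll_and_smoke(); */
        sem_post(&amp;agent_done);         /* signal the agent to restock              */
    }
}
</PRE></BLOCKQUOTE>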
  <P>7.13&nbsp; Suppose that the signal statement can appear only as the last 
  statement in a monitor procedure. Suggest how the implementation described in 
  Section 7.7 can be simplified. <I>
  <P>The textbook implements Hoare monitors. When a thread T1 calls signal(), 
  control switches immediately to a thread T2 that is waiting on the condition. 
  In Hoare's implementation, when T2 leaves the monitor or calls another wait(), 
  control should switch back to T1, not to the other threads waiting to enter 
  the monitor. Since T1 is still inside the monitor, although it is inactive, it 
  should be allowed to finish its work before others are admitted. The <FONT 
  face="Courier New" size=2>next</FONT> semaphore and the <FONT 
  face="Courier New" size=2>next_count</FONT> variable keep track of this 
  situation. 
  <P>If <FONT face="Courier New" size=2>signal()</FONT> can appear only as the 
  last statement of a monitor procedure, then a signaling thread leaves the 
  monitor immediately after signaling and never needs to be resumed inside it, 
  so the <FONT face="Courier New" size=2>next</FONT> semaphore and the <FONT 
  face="Courier New" size=2>next_count</FONT> variable are no longer needed. 
  <!-- This is essentially the Hansen/Mesa style of monitor. Each monitor
  procedure F is wrapped as:

    sem_wait(mutex); F(); sem_signal(mutex);

  and a condition variable needs only a count and a semaphore:

    class condvar {
      int count = 0;
      semaphore x_sem;             // initialized to 0

      wait() {
        count++;
        sem_signal(mutex);         // release the monitor while blocked
        sem_wait(x_sem);
        sem_wait(mutex);           // reacquire the monitor before returning
        count--;
      }

      signal() {                   // must be the last statement of the procedure
        if (count > 0)
          sem_signal(x_sem);
      }
    }
  --></I>
  <P></P>
  <LI>Chapter 8: 8.4 
  <P>8.4&nbsp; Consider the traffic deadlock depicted in Figure 8.8 (see book). 
  <I>
  <P>a. Show that the four necessary conditions for deadlock indeed hold in this 
  example.<BR>
  <P>
  <OL>Process - vehicle; Resource - road space 
    <LI>Mutual Exclusion - only one vehicle can occupy a given stretch of the 
    one-lane road at a time. 
    <LI>Hold and wait - a vehicle occupies some road space and waits for the 
    space ahead of it to become free. 
    <LI>No Preemption - a vehicle cannot be forced to give up the road space it 
    occupies. 
    <LI>Circular wait - number the cars: car 0 waits for car 1, car 1 waits for 
    car 2, ..., car N waits for car 0. </LI></OL>
  <P>b. State a simple rule that will avoid deadlocks in this system.<BR>
  <P>Do not block the intersection - do not enter the crossroad unless you can 
  completely clear it before stopping. </I><!-- Common Mistakes: answers that cover only the intersection/crossroad case are incomplete. If two cars could share one stretch of lane, or cars could be removed from the road, the deadlock would be broken. A traffic light is not by itself a way to avoid this deadlock; the deadlock in the figure can arise even with traffic lights in place. (Try putting traffic lights in the figure.) -->
  <P></P>
  <LI>The Intel x86 instruction set architecture provides an atomic instruction 
  called XCHG for implementing synchronization primitives. Semantically, XCHG 
  works as follows (although keep in mind it is executed atomically): 
  <BLOCKQUOTE><PRE>void XCHG(bool *X, bool *Y) {
  bool tmp = *X;
  *X = *Y;
  *Y = tmp;
}
</PRE></BLOCKQUOTE>
  <P>Show how XCHG can be used instead of test-and-set to implement the 
  acquire() and release() functions of the lock data structure described in the 
  "Synchronization" lecture. <I>
  <BLOCKQUOTE><PRE>#include &lt;stdbool.h&gt;

struct lock {
  bool isAvailable;   /* true when no thread holds the lock; initialize to true */
};

void acquire (struct lock *p_lock) {
  bool isAvailable = false;
  /* Atomically swap our local false with the lock's flag.  The loop exits
     only when the swap hands back true, i.e. the lock was free and is now
     marked as held. */
  while (!isAvailable)
    XCHG(&amp;(p_lock-&gt;isAvailable), &amp;isAvailable); 
}

void release (struct lock *p_lock) {
  p_lock-&gt;isAvailable = true;   /* mark the lock free again */
}
</PRE></BLOCKQUOTE></I><!-- Common Mistakes: using two variables inside the lock structure and swapping them to implement <font face="Courier New" size="2">acquire()</font> and <font face="Courier New" size="2">release()</font>; this usually results in dangerous code because a context switch can occur between the XCHG and the while test. -->
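  <P>For illustration, a short usage sketch of this lock (the shared counter and 
  worker thread are assumptions for the example; note the lock field must start 
  out true, i.e. available):
  <BLOCKQUOTE><PRE>struct lock counter_lock = { true };   /* the lock starts out available */
int shared_counter = 0;

void *worker(void *arg) {
  for (int i = 0; i &lt; 100000; i++) {
    acquire(&amp;counter_lock);            /* spin until the lock is free */
    shared_counter++;                  /* critical section            */
    release(&amp;counter_lock);
  }
  return 0;
}
</PRE></BLOCKQUOTE>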
  <LI>A common pattern in parallel scientific programs is to have a set of 
  threads do a computation in a sequence of phases. In each phase <I>i</I>, all 
  threads must finish phase <I>i</I> before any thread starts computing phase 
  <I>i+1</I>. One way to accomplish this is with barrier synchronization. At the 
  end of each phase, each thread executes <I>Barrier::Done(n)</I>, where 
  <I>n</I> is the number of threads in the computation. A call to 
  <I>Barrier::Done</I> blocks until all of the <I>n</I> threads have called 
  <I>Barrier::Done</I>. Then, all threads proceed. <I>
  <P>a. Write a monitor that implements Barrier using Mesa semantics. 
  <BLOCKQUOTE><PRE>monitor Barrier {
  int called = 0;
  condition barrier;

  void Done (int needed) {
    called++;
    if (called == needed) {
      called = 0;
      barrier.broadcast();
    } else {
      barrier.wait();
    }
  }
}
</PRE></BLOCKQUOTE>
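  <P>For illustration, a hypothetical phase loop that would use this barrier, in 
  the same pseudocode style (NPHASES, NTHREADS, and compute_phase() are 
  assumptions for the sketch):
  <BLOCKQUOTE><PRE>void worker() {
  for (int phase = 0; phase &lt; NPHASES; phase++) {
    compute_phase(phase);      // this thread's share of phase i
    Barrier.Done(NTHREADS);    // block until all NTHREADS threads finish phase i
  }
}
</PRE></BLOCKQUOTE>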
  <P>b. Implement Barrier using an explicit mutex and condition variable. The 
  mutex and condition variable have the semantics described at the end of the 
  "Semaphore and Monitor" lecture in the ping_pong example, and as implemented 
  by you in Project 1. 
  <BLOCKQUOTE><PRE>class Barrier {
  int called = 0;
  Lock mutex;
  Condition barrier;

  void Done (int needed) {
    mutex.Acquire();

    called++;
    if (called == needed) {
      called = 0;
      barrier.Broadcast(&amp;mutex);
    } else {
      barrier.Wait(&amp;mutex);
    }

    mutex.Release();
  }
}
</PRE></BLOCKQUOTE></I></LI></OL>
<HR>

<ADDRESS><A href="mailto:voelker@cs.ucsd.edu">voelker@cs.ucsd.edu</A> 
</ADDRESS></BODY></HTML>
