soln9-14.txt
Some systems allow different file operations based on the type of
the file (for instance, an ASCII file can be read as a stream while
a database file can be read via an index to a block). Other systems
leave such interpretation of a file's data to the process and provide
no help in accessing the data. Which method is better depends on
the needs of the processes on the system and the demands the users
place on the operating system. If a system runs mostly database
applications, it may be more efficient for the operating system to
implement a database-type file and provide its operations rather than
making each program implement the same thing (possibly in different
ways). For general-purpose systems it may be better to implement only
basic file types, keeping the operating system smaller and allowing
maximum freedom to the processes on the system.
Question 11.7:
Explain the purpose of the open and close operations.
Answer:
The open operation informs the system that the named file is about
to become active; the system can then bring the file's attributes
into memory and return a handle that later operations use, avoiding
a directory search on every access. The close operation informs the
system that the named file is no longer in active use by the user
who issued the close operation, so the system can flush any buffered
data and reclaim the file's entry in the open-file table.
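A small sketch of that lifecycle, using POSIX-style calls from
Python's os module; the file name is a placeholder, and the file is
created first so the example runs on its own:

    import os

    # Create a placeholder file so the example is self-contained.
    with open("example.txt", "w") as f:
        f.write("hello\n")

    # open() asks the system to locate the file, check permissions,
    # and build an in-memory entry; the returned descriptor is a
    # handle into the system's open-file table.
    fd = os.open("example.txt", os.O_RDONLY)

    # Later operations use the handle, not the name, so the directory
    # search is not repeated on every read.
    data = os.read(fd, 4096)

    # close() declares the handle inactive, letting the system flush
    # buffers and reclaim the table entry.
    os.close(fd)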
Question 11.14:
Consider a file currently consisting of 100 blocks. Assume that the file
control block (and the index block, in the case of indexed allocation)
is already in memory. Calculate how many disk I/O operations are
required for contiguous, linked, and indexed (single-level) allocation
strategies, if, for one block, the following conditions hold. In the
contiguous-allocation case, assume that there is no room to grow at
the beginning but there is room to grow at the end. Assume that the
block information to be added is stored in memory.
a. The block is added at the beginning.
b. The block is added in the middle.
c. The block is added at the end.
d. The block is removed from the beginning.
e. The block is removed from the middle.
f. The block is removed from the end.
Answer:
       Contiguous   Linked   Indexed
a.        201          2        2
b.        101         52        2
c.          1          3        2
d.        198          2        1
e.         98         51        1
f.          0        100        1
For example, in the contiguous case (a), each of the 100 existing
blocks must be read and rewritten one position later (200 I/Os),
after which the new block is written, for a total of 201 operations.
Question 11.17:
Why must the bit map for file allocation be kept on mass storage rather
than in main memory?
Answer:
If the system crashes (a memory failure), the free-space list is not
lost, as it would be if the bit map were stored only in main memory.
Question 11.22:
How do caches help improve performance? Why do systems not use more
or larger caches if they are so useful?
Answer:
Caches allow components of differing speeds to communicate more
efficiently by storing data from the slower device, temporarily,
in a faster device (the cache). Caches are, almost by definition,
more expensive than the device they are caching for, so increasing
the number or size of caches would increase system cost.
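As a small illustration of the idea, the sketch below keeps recent
results of a slow operation in a bounded in-memory table; slow_fetch
and the 128-entry capacity are hypothetical stand-ins for a slower
device and for a cache size limited by cost:

    import time
    from collections import OrderedDict

    def slow_fetch(key):
        """Stand-in for a slow device or computation (hypothetical)."""
        time.sleep(0.1)            # simulate slow-device latency
        return key * 2

    CACHE_CAPACITY = 128           # small, because fast storage is costly
    cache = OrderedDict()

    def cached_fetch(key):
        if key in cache:
            cache.move_to_end(key)     # mark as recently used
            return cache[key]          # fast path: served from the cache
        value = slow_fetch(key)        # slow path: go to the backing device
        cache[key] = value
        if len(cache) > CACHE_CAPACITY:
            cache.popitem(last=False)  # evict the least recently used entry
        return value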
Question 11.24:
Why is it advantageous to the user for an operating system to
dynamically allocate its internal tables? What are the penalties to
the operating system for doing so?
Answer:
Dynamic tables allow more flexibility as system use grows: tables are
never exceeded, avoiding artificial limits on use. Unfortunately,
kernel structures and code are more complicated, so there is more
potential for bugs. The use of one resource can also take away more
system resources (by growing to accommodate the requests) than with
static tables.
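A user-level sketch of the trade-off (kernel tables behave
analogously; the capacities here are arbitrary): the dynamic table
never refuses an entry but can grow to consume more memory, while the
static table is simpler but imposes an artificial limit.

    class DynamicTable:
        """Grow-on-demand table: no artificial limit, more complexity."""

        def __init__(self, capacity=8):
            self.slots = [None] * capacity
            self.count = 0

        def add(self, entry):
            if self.count == len(self.slots):
                # Double the allocation and keep going; the extra code
                # path is where bugs can hide, and one heavy user can
                # draw ever more memory through it.
                self.slots.extend([None] * len(self.slots))
            self.slots[self.count] = entry
            self.count += 1

    class StaticTable:
        """Fixed-size table: simple, but requests beyond the limit fail."""

        def __init__(self, capacity=8):
            self.slots = [None] * capacity
            self.count = 0

        def add(self, entry):
            if self.count == len(self.slots):
                raise RuntimeError("table full")  # artificial use limit
            self.slots[self.count] = entry
            self.count += 1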
Question 12.1:
State three advantages of placing functionality in a device controller
rather than in the kernel. State three disadvantages.
Answer:
Three advantages: Bugs are less likely to cause an operating system
crash. Performance can be improved by utilizing dedicated hardware and
hard-coded algorithms. The kernel is simplified by moving algorithms
out of it.
Three disadvantages: Bugs are harder to fix; a new firmware version
or new hardware is needed. Improving algorithms likewise requires a
hardware update rather than just a kernel or device-driver update.
Embedded algorithms could conflict with the application's use of the
device, causing decreased performance.
Question 12.4:
Describe three circumstances under which blocking I/O should be used.
Describe three circumstances under which nonblocking I/O should be used.
Why not just implement nonblocking I/O and have processes busy-wait
until their device is ready?
Answer:
Generally, blocking I/O is appropriate when the process will only
be waiting for one specific event. Examples include a disk, tape,
or keyboard read by an application program.
Non-blocking I/O is useful when I/O may come from more than one source
and the order of the I/O arrival is not predetermined. Examples
include network daemons listening to more than one network socket,
window managers that accept mouse movement as well as keyboard input,
and I/O-management programs, such as a copy command that copies data
between I/O devices. In the last case, the program could optimize its
performance by buffering the input and output and using non-blocking
I/O to keep both devices fully occupied. Non-blocking I/O is more
complicated for programmers, because of the asynchronous rendezvous that
is needed when an I/O occurs. Also, busy waiting is less efficient than
interrupt-driven I/O so the overall system performance would decrease.
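A sketch of the multiple-source case described above, assuming a
Unix-like system and Python's selectors module; the two port numbers
are hypothetical:

    import selectors
    import socket

    # A daemon watching two listening sockets at once: blocking on
    # either one alone could starve the other, so readiness is
    # multiplexed instead.
    sel = selectors.DefaultSelector()

    for port in (9001, 9002):              # hypothetical ports
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("localhost", port))
        srv.listen()
        srv.setblocking(False)
        sel.register(srv, selectors.EVENT_READ)

    while True:
        # select() blocks until some socket is ready; the daemon then
        # services whichever source produced the event.
        for key, _ in sel.select():
            conn, addr = key.fileobj.accept()
            conn.sendall(b"hello\n")
            conn.close()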
Question 12.5:
Why might a system use interrupt-driven I/O to manage a single serial
port but polling I/O to manage a front-end processor, such as a
terminal concentrator?
Answer:
Polling can be more efficient than interrupt-driven I/O. This is the
case when the I/O is frequent and of short duration. Even though a
single serial port will perform I/O relatively infrequently and should
thus use interrupts, a collection of serial ports such as those in a
terminal concentrator can produce a lot of short I/O operations, and
interrupting for each one could create a heavy load on the system. A
well-timed polling loop could alleviate that load without wasting many
resources through looping with no I/O needed.
Question 12.6:
Polling for an I/O completion can waste a large number of CPU cycles
if the processor iterates a busy-waiting loop many times before
the I/O completes. But if the I/O device is ready for service,
polling can be much more efficient than is catching and dispatching
an interrupt. Describe a hybrid strategy that combines polling,
sleeping, and interrupts for I/O device service. For each of these
three strategies (pure polling, pure interrupts, hybrid), describe a
computing environment in which that strategy is more efficient than
is either of the others.
Answer:
This is a good test question!
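One possible hybrid, sketched with a hypothetical device model (a
threading.Event stands in for both the device's ready flag and its
interrupt): poll in a tight burst first, which is cheapest when the
device is usually ready; then poll with short sleeps to yield the
CPU; and finally block until the "interrupt" fires, so an idle device
costs no cycles at all.

    import threading
    import time

    device_ready = threading.Event()   # hypothetical device model

    def wait_for_io(poll_tries=50, sleep_interval=0.001):
        # Phase 1: pure polling -- wins when completion is imminent.
        for _ in range(poll_tries):
            if device_ready.is_set():
                return "completed during polling"
        # Phase 2: polling with short sleeps -- yields the CPU between
        # checks instead of spinning.
        for _ in range(poll_tries):
            if device_ready.is_set():
                return "completed during sleep-polling"
            time.sleep(sleep_interval)
        # Phase 3: block until the interrupt fires -- no cycles wasted
        # however long the device stays idle.
        device_ready.wait()
        return "completed via interrupt"

    # Demo: a background timer plays the device, signaling after 10 ms.
    threading.Timer(0.01, device_ready.set).start()
    print(wait_for_io())

Roughly, pure polling suits a dedicated controller whose device is
almost always ready, pure interrupts suit rare and unpredictable
events, and a hybrid of this shape suits bursty devices, where a
burst makes polling briefly profitable but idle periods must not spin.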
Question 12.8:
How does DMA increase system concurrency? How does it complicate
hardware design?
Answer:
DMA increases system concurrency by allowing the CPU to perform
tasks while the DMA system transfers data via the system and memory
buses. Hardware design is complicated because the DMA controller
must be integrated into the system, and the system must allow the DMA
controller to be a bus master. Cycle stealing may also be necessary
to allow the CPU and DMA controller to share use of the memory bus.
Question 13.1:
None of the disk-scheduling disciplines, except FCFS, is truly fair
(starvation may occur).
a. Explain why this assertion is true.
b. Describe a way to modify algorithms such as SCAN to ensure fairness.
c. Explain why fairness is an important goal in a time-sharing system.
d. Give three or more examples of circumstances in which it is important
that the operating system be unfair in serving I/O requests.
Answer:
a. New requests for the track over which the head currently resides can
theoretically arrive as quickly as these requests are being serviced.
b. All requests older than some predetermined age could be forced
to the top of the queue, and an associated bit for each could
be set to indicate that no new request could be moved ahead of
these requests. For SSTF, the rest of the queue would have to be
reorganized with respect to the last of these old requests. (A
small sketch of this aging idea follows part d below.)
c. To prevent unusually long response times.
d. Paging and swapping should take priority over user requests. It may
be desirable for other kernel-initiated I/O, such as the writing
of file system metadata, to take precedence over user I/O. If the
kernel supports real-time process priorities, the I/O requests of
those processes should be favored.
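A minimal sketch of part b's aging idea, assuming requests carry an
arrival time and using a hypothetical MAX_AGE threshold: expired
requests are pinned to the front in arrival order, and only the
fresh ones are reordered (SSTF-style here) starting from the last
pinned request's cylinder.

    MAX_AGE = 5.0  # hypothetical age limit, in seconds

    def schedule(queue, head, now):
        """queue: list of (arrival_time, cylinder); returns service order."""
        # Requests past the age limit go first, oldest first; no newer
        # request may be moved ahead of them.
        expired = sorted(r for r in queue if now - r[0] >= MAX_AGE)
        fresh = [r for r in queue if now - r[0] < MAX_AGE]
        # Reorder only the fresh requests, starting from where the head
        # will sit after the pinned requests are served.
        pos = expired[-1][1] if expired else head
        order = []
        while fresh:
            nxt = min(fresh, key=lambda r: abs(r[1] - pos))
            fresh.remove(nxt)
            order.append(nxt)
            pos = nxt[1]
        return expired + order

    # Example: a 9-second-old request jumps ahead of nearer, newer ones.
    print(schedule([(0.0, 4000), (8.0, 100), (8.5, 150)], head=143, now=9.0))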
Question 13.2:
Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The
drive is currently serving a request at cylinder 143, and the previous
request was at cylinder 125. The queue of pending requests, in FIFO
order, is
86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
Starting from the current head position, what is the total distance
(in cylinders) that the disk arm moves to satisfy all the pending
requests, for each of the following disk-scheduling algorithms?
a. FCFS
b. SSTF
c. SCAN
d. LOOK
e. C-SCAN
Answer:
a. The FCFS schedule is
143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
The total seek distance is 7081.
b. The SSTF schedule is
143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774.
The total seek distance is 1745.
c. The SCAN schedule is
143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86.
The total seek distance is 9769.
d. The LOOK schedule is
143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86.
The total seek distance is 3319.
e. The C-SCAN schedule is
143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130.
The total seek distance is 9813.
f. (Bonus.) The C-LOOK schedule is
143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130.
The total seek distance is 3363.
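The totals above can be checked mechanically: each schedule's total
seek distance is the sum of absolute cylinder differences along its
visit order. A short script, with the schedules transcribed from the
answers:

    def total_seek(schedule):
        """Sum of absolute head movements along a visit order."""
        return sum(abs(b - a) for a, b in zip(schedule, schedule[1:]))

    schedules = {
        "FCFS":   [143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130],
        "SSTF":   [143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774],
        "SCAN":   [143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86],
        "LOOK":   [143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86],
        "C-SCAN": [143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130],
        "C-LOOK": [143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130],
    }

    for name, order in schedules.items():
        print(name, total_seek(order))  # 7081, 1745, 9769, 3319, 9813, 3363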
Question 13.4:
Suppose that the disk in Exercise 13.3 rotates at 7200 RPM.
a. What is the average rotational latency of this disk drive?
b. What seek distance can be covered in the time that you found
for part a?
Answer:
a. 7200 rpm gives 120 rotations per second. Thus, a full rotation
takes 8.33 ms, and the average rotational latency (a half rotation)
takes 4.167 ms.
b. Solving t = 0.7561 + 0.2439 √L for t = 4.167 gives L = 195.58,
so a seek of about 195 cylinders can be covered in the time of the
average rotational latency.
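A quick numeric check of both parts (the seek-time coefficients are
taken from the formula quoted above):

    rpm = 7200
    rotation_ms = 60_000 / rpm        # 8.333 ms per full rotation
    avg_latency_ms = rotation_ms / 2  # 4.167 ms average rotational latency

    # Invert t = 0.7561 + 0.2439 * sqrt(L) at t = avg_latency_ms.
    L = ((avg_latency_ms - 0.7561) / 0.2439) ** 2

    print(f"average rotational latency: {avg_latency_ms:.3f} ms")
    print(f"seek distance covered: {L:.2f} cylinders")  # ~195.5
    # (195.58 in the answer above, which rounds t to 4.167 first.)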